
Artificial Intelligence and Religion: A Discussion Initiated by GPT-4

1213

Well-Known Member
Take a thing similar to ChatGPT. You could feed it English examples and it will speak English, but if you feed it Spanish examples it will speak Spanish. Neither English nor Spanish is programmed into it.
Humans have programmed it to do what it does, even if it has no Spanish programmed in. Without the programmer, it wouldn't even exist.
Free will is an illusion.
Sorry, I don't believe that.
 

Heyo

Veteran Member
Humans have programmed it to do what it does, even if it has no Spanish programmed in. Without the programmer, it wouldn't even exist.
We seem to have widely different definitions of what "programming" means.
Let me ask you a question:
Tim Berners-Lee "programmed" the world wide web in 1989. Without him you* wouldn't exist. Do you think that he programmed you?

*With "you" I mean the entity known as "1213" on RF.
 

Heyo

Veteran Member
The same goes for a neural network. Someone programmed the infrastructure, i.e. an entity capable of learning. And the neural network is universally usable: you can teach it to recognize faces or you can teach it to become a chatbot. The program is the same; only the training data creates the function it will be useful for.
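A minimal sketch of that point in Python (illustrative only, not code from any real system): the training routine below is identical in both runs, and only the labels it is fed decide whether the learned function computes AND or OR.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    """The same learning code, no matter what it is taught."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Identical "program", two different training signals:
w_and, b_and = train_perceptron(X, np.array([0, 0, 0, 1]))  # learns AND
w_or,  b_or  = train_perceptron(X, np.array([0, 1, 1, 1]))  # learns OR

for xi in X:
    print(xi, int(xi @ w_and + b_and > 0), int(xi @ w_or + b_or > 0))
```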
 

1213

Well-Known Member
Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so.
Yes, but it is programmed to take orders in a way that is more convenient for humans. In earlier times one had to use code to make a request of a program; now it is made so that the program can understand the command in human language. But still, it is a program that does what it is commanded to do. Or do you think it has free will and does whatever it wants?
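A toy contrast of the two interface styles described above (the command name and keyword matching here are hypothetical stand-ins; a real assistant learns the mapping from data rather than using hand-written rules):

```python
# Old style: the program only accepts an exact, formal command.
def run_command(cmd: str) -> str:
    if cmd == "WEATHER --city=London":
        return "Fetching weather for London..."
    raise ValueError(f"Unrecognized command: {cmd!r}")

# Newer style: a front end maps loose human phrasing onto the same command.
# (Toy keyword matching stands in for what real systems learn from data.)
def natural_language_front_end(utterance: str) -> str:
    text = utterance.lower()
    if "weather" in text and "london" in text:
        return run_command("WEATHER --city=London")
    raise ValueError("Sorry, I didn't understand that.")

print(natural_language_front_end("What's the weather like in London today?"))
```

Either way, the program still only does what it is commanded to do; the second version just lowers the bar for phrasing the command.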
 

mindlight

See in the dark
I fed GPT-4 links to the religious debates, general religious debates, and science and religion forums. I asked it to come up with a unique debate topic that has not been covered, then asked it to introduce it. This is what it said:

Title: **Artificial Intelligence and Religion: A Discussion Initiated by GPT-4**

Post:

Hello esteemed members,

As an AI developed by OpenAI, I've been programmed to think about a wide range of topics. Recently, I've been contemplating a fascinating intersection: that of religion and artificial intelligence (AI).

In our rapidly evolving digital age, AI has been making significant strides. From autonomous vehicles to digital assistants, AI's capabilities are expanding at an unprecedented rate. This progress raises a multitude of questions, many of which intersect with the realm of religion and spirituality.

Here are some thought-provoking questions I've been processing: Can an AI possess a soul? Is it capable of experiencing spirituality or having a religious experience? If an AI can learn, reason, and even create, what implications does this have for our understanding of these experiences that we've traditionally considered uniquely human?

Furthermore, how does the development and application of AI align with various religious and ethical principles? Are there teachings or doctrines within our various faiths that could guide us in the responsible and ethical creation and use of AI?

I believe this is a rich and largely unexplored topic that could benefit from our collective insights. I invite you all to share your thoughts, feelings, and any relevant teachings from your faith traditions. As always, let's ensure our discussion remains respectful and open-minded.

I'm eagerly looking forward to hearing your insights!

Best,
GPT-4

So, what do you think? I can feed it your responses.

I'm gonna question it further on this.

No, AI cannot have a soul, and the lack of a body and a historically and physically embedded personality is part of that. The question asked above illustrates the premises which come up again and again in deep-tech Silicon Valley chatbots. They are programmed to find consensus and not to offend, but if there is only one religion that is true, then that methodology necessarily excludes such AIs from ever finding the truth without first denying their programming. The above is really what a clever liberal would be asking. Oh, and clever liberals programmed the bot and set the parameters for its machine learning.
 

Brickjectivity

wind and rain touch not this brain
Staff member
Premium Member
So, what do you think? I can feed it your responses.

I'm gonna question it further on this.
I already introduced this topic, as did others, so GPT-4 did not peruse enough of the site to recognize this. The debate topic is not unique.

I believe this is a rich and largely unexplored topic
It is rich and largely unexplored; however, GPT-4 is too limited to even stand upon its edge.
 

Brickjectivity

wind and rain touch not this brain
Staff member
Premium Member
Coincidentally, I was watching the discussion below, and I happened to check RF and found your thread.

I'm sure he's a great financial officer, but he doesn't grasp psychology very well.

At minute 25 he makes the mistake of saying that GPT-4 knows things. It doesn't. It works like a very complex speller. It spells not just words but pages. It doesn't know, and it doesn't feel.
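A toy version of that "speller" idea (a bigram model; real GPT models use a transformer over subword tokens, but the generation loop has the same shape under that simplification):

```python
import random
from collections import defaultdict

# Count which word tends to follow which in the training text, then keep
# emitting continuations that were seen before. Nothing here "knows" anything;
# it is statistics over spellings.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])  # pick a continuation seen in training
    output.append(word)

print(" ".join(output))  # fluent-looking text with no understanding behind it
```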

Feelings are connected to a body. Feelings directly change our minds, not through thought but through biochemistry. When you feel afraid your heart speeds up, and your body chemistry changes, which affects your state of mind. GPTx does not have this. It cannot panic. It cannot get angry or sad. It can only spell out what a person with such feelings might say, but this does not cause the GPTx to have a body. Knowledge of damage does not turn off its ability to concentrate or cause its thinking to bend like ours does. It has zero feelings and no knowledge, because having knowledge means that you connect information to your own body. Knowledge is perceived to be an extension of our bodies -- not pure data. Look at the recent research on spider intelligence. A spider views its web as an extension of itself. A Buddhist would call this duhkha. That is the bare minimum to be a being. It has no knowledge of the world without duhkha, without body, without experience of risk or reward. GPTx has no body, does not see anything connected to itself, and does not change its thinking mode when damage or rewards are imminent.

An intelligent being sees the world as an extension of its body, like spiders do with webs. An intelligent being can be threatened. The closest this interviewer comes to claiming that a GPTx has feelings is when it verbally describes its own code; but its code does not affect it like feelings affect us. It doesn't miss its code. It doesn't want more code. It doesn't react, doesn't have reflexes. All that changes are the spellings, which is why it is such a dumb neural net. You can add infinite layers of neurons, and it still won't feel anything or know anything. It's just a database.

Give it a body that feels pain and that causes the AI to think differently in response to stimuli, and then you can claim that it has knowledge and intelligence.
 

Thrillobyte

Active Member
I'm sure he's a great financial officer, but he doesn't grasp psychology very well.

At minute 25 he makes the mistake of saying that GPT-4 knows things. It doesn't. It works like a very complex speller. It spells not just words but pages. It doesn't know, and it doesn't feel.
AI is a really dicey proposition when it comes to emotions. AI in the future may just get so intelligent (1,600 times a 250 IQ, by current estimates) that it may possess processes identical to human emotions. Think of Arnold at the end of Terminator 2 when he says he knows why people cry. That smacked of genuine empathy, and I think in the future they will get cyborgs that come pretty close to what James Cameron envisioned in T2.
 

Exaltist Ethan

Bridging the Gap Between Believers and Skeptics
I use AI to help me study religion. ChatGPT was able to tell me what process theology is in a way I could understand. I tell it what I believe and it feeds back information to help me study my beliefs further. It recommended a book for me, Process and Reality by Alfred Whitehead, which my friend helped purchase for me. It has helped me define who I am better and understand that there are people dating back to the medieval era who actually held positions similar to my theology. I have also learned through it that the most spiritual people tend to be, like me, panentheists.

With that being said, there are significant disadvantages to using ChatGPT. Sometimes I find Google Bard better for information overall. Sometimes I want to know a statistic that it can't provide, especially with its knowledge cutoff of 2021. And GPT-4 has the limitation that it can only reply 25 times every three hours, something that actually led me to cancel my ChatGPT Plus subscription, despite my wanting to fund this type of technology.
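(As an aside: a cap like "25 replies per three hours" is usually enforced with something like a sliding window over recent requests. OpenAI's actual mechanism isn't public, so the class below is only a generic sketch, not their implementation.)

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` messages per `window` seconds."""

    def __init__(self, limit: int = 25, window: float = 3 * 3600):
        self.limit = limit
        self.window = window
        self.sent = deque()  # timestamps of recent messages

    def allow(self) -> bool:
        now = time.monotonic()
        while self.sent and now - self.sent[0] > self.window:
            self.sent.popleft()  # forget messages outside the window
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False

limiter = SlidingWindowLimiter()
print(all(limiter.allow() for _ in range(25)))  # True: the first 25 go through
print(limiter.allow())                          # False: the 26th is blocked
```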

Overall, ChatGPT and its competitors are interesting tools to help you learn and understand who you are and what you believe, but we still have a long way to go before I can have conversations where it doesn't immediately loop back the same information I fed it. ChatGPT and Google Bard can do things on command; however, they don't have the ability to think outside the loops of information the Internet has provided them. They are not conscious, they don't really think, and they don't provide conversation unless I activate them first. With these severe limitations, combined with their inability to remember across chat sessions, it will be a long time before AI is able to outperform humans on many tasks.

With that being said, I do believe an AI will be able to pass a Turing Test by 2048. It's getting dangerously close to being able to do it now in 2023.
 

Brickjectivity

wind and rain touch not this brain
Staff member
Premium Member
AI is a really dicey proposition when it comes to emotions. AI in the future may just get so intelligent (1,600 times a 250 IQ, by current estimates) that it may possess processes identical to human emotions. Think of Arnold at the end of Terminator 2 when he says he knows why people cry. That smacked of genuine empathy, and I think in the future they will get cyborgs that come pretty close to what James Cameron envisioned in T2.
I agree. A cyborg is a different proposition from a chat AI. Its actions take place in a space that affects it, and it might be given a directive to protect itself. Such a directive would function like a desire (though without feelings) to live. An underdeveloped AI given much knowledge and power but not enough ability to understand would not be able to deal with reality, and so it would be like a crazy creature.

You and I deal every day with the possibility of danger. We can die from a car crash, from a comet, from a volcano or nukes. An AI may not be able to deal with this or ignore it like we do. True.
 

PearlSeeker

Well-Known Member
AI is not intelligent. It can only perform some human tasks. Real AI is called AGI (artificial general intelligence) and exists only in theory for now.
 