
Artificial Intelligence presumes people do not have a soul

dfnj

Well-Known Member
Years ago, when I was in college, I took an interest in studying artificial intelligence. With the arrogance of youth, I thought I had the intellectual power to crack the code of intelligence and transfer it to the computer. And then I found the works and arguments of John Searle. His work convinced me it is intuitively impossible for a computer to be truly intelligent.

The nature of our consciousness, the nature of our intelligence, and the question of whether we have a soul are semantic ones. In my studies I have concluded that the source of our semantics comes from our deep connection with reality itself. Without this deep physical connection to what is outside our brains, we have no consciousness and no intelligence. It is this connection that is our soul. This is the antithesis of the philosophical materialists, who claim everything about our minds is contained completely within our brains, the way a computer works.

But if that were true, then you would think that over the last 60 years of computing effort someone would have successfully created true artificial intelligence by now. So I think it is interesting to ask why artificial intelligence has NOT become a reality. The answer might provide some insight into whether or not we have a soul.

People have claimed artificial intelligence is just around the corner; I have heard the "just around the corner" argument for over 40 years. The problem with this kind of thinking is that the words people use to describe what a computer is doing imply far more than what is actually happening. For example, the phrase "thought synthesis module" sounds much more powerful than whatever the computer is actually doing. There are certain semantic distinctions, outlined in John Searle's brilliant arguments, that must be taken into account:


"Observer relative" does not mean "observer independent". Syntax is NOT semantics, and simulation is NOT duplication. To call something epistemically intelligent means its intelligence is entirely in the eye of the beholder, and because it is in the eye of the beholder, the thing itself is NOT intelligent. It is all observer relative, not intrinsic. Nothing in the computer with regard to intelligence is observer independent. The arguments around this distinction have been going on for almost 60 years!

The consciousness that creates the observer-relative experience is itself NOT observer relative. This is the crux of the whole argument. The main difference between us and computers is that we ourselves are NOT observer relative when it comes to intelligence. A computer has no self-awareness that it is doing addition or subtraction. It just does what its digital circuits are designed to do, with no meaning attached to the processing while it happens or to the final result. Computers as they are currently designed are forever observer relative with regard to intelligence.

For 60 years now people have been trying to cross the barrier from observer-relative to observer-independent machine intelligence. And this whole time I have heard people say crossing the barrier is just around the corner. But I see not a single shred of evidence from the last 60 years to suggest the barrier will be crossed anytime soon. If you have such evidence, please present it.

John Searle is a great intellectual and a great lecturer, probably my all-time favorite professor. I first encountered his work in the early 1980s. As a result I lost interest in artificial intelligence, because I did not think it was possible on the existing standard computer architecture.

What makes us intelligent is obviously more than just syntactic processing. I believe the insight is that there is something profound in the way we are connected to reality. This connection could be evidence that we are not just machines but something more, and that something more might be what one would consider sacred, that is, a soul.
 

sun rise

The world is on fire
Premium Member
The OP assumes a relationship between true intelligence and a soul. In my belief, a soul is created in a most rudimentary way and moves through the stone and metal states until it enters evolution as a plant, then evolves through the kingdoms of nature until becoming fully conscious and self-conscious as a human.

So this makes intelligence not fundamental.
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
AI is here and working in everyday life. Google searches, Amazon orders, your utility bill, financial market predictions, and Siri are all processed by artificial intelligence systems.

Not conscious, not emotional, no grand self-awareness, but artificial intelligence doing real jobs.
 

Saint Frankenstein

Wanderer From Afar
Premium Member
Meh, AI becoming truly self-aware is still sci-fi, along with transhumanism. No one even knows if it's possible. We don't really know how the brain works or even how to really define consciousness. They can throw all the data at Google they want. It's still just a program.
 

Revoltingest

Pragmatic Libertarian
Premium Member
Years ago when I was in college I took an interest in studying artificial intelligence. With my arrogance of youth I thought I had the intellectual power to crack the code of intelligence and transfer it to the computer. And then I found the works and arguments of John Searle. As a result, his work convinced me it is intuitively impossible for a computer to be truly intelligent.
"Intuitively impossible"?
I see some weakness in that line of thought.
 

tayla

My dog's name is Tayla
So I think it is interesting to understand why artificial intelligence has NOT become a reality. This might provide some insight into whether or not we have a soul.
If humans don't have souls, but rather merely functioning brains, why should we expect AI to have souls? And it is unlikely anyone would ever design an AI that operated exactly like a brain, having all the identical wiring and circuits.
 

paarsurrey

Veteran Member
Years ago when I was in college I took an interest in studying artificial intelligence. With my arrogance of youth I thought I had the intellectual power to crack the code of intelligence and transfer it to the computer. And then I found the works and arguments of John Searle. As a result, his work convinced me it is intuitively impossible for a computer to be truly intelligent.

"the source of our semantics come from our deep connection with reality itself. "

A valuable observation, indeed.
Regards
 

Stevicus

Veteran Member
Staff member
Premium Member
Years ago when I was in college I took an interest in studying artificial intelligence. With my arrogance of youth I thought I had the intellectual power to crack the code of intelligence and transfer it to the computer. And then I found the works and arguments of John Searle. As a result, his work convinced me it is intuitively impossible for a computer to be truly intelligent.


I don't really know if I or anyone else has a "soul." On the other hand, there are many things about our brains and the intricacies of the human mind that we still don't know. If humans have a "soul," could we find it on some kind of brain scan? I've heard it said that "eyes are the windows to the soul," but I never could quite figure out what that actually meant.

Another technology that's up and coming is cloning. If it's possible to clone an exact duplicate of myself, would that clone have my soul, too? Would it be a different soul? Or maybe no soul at all?

As for AI, I'm not sure about that either. There's another thread up currently about an AI-driven car which hit a pedestrian because it wasn't programmed to recognize jaywalkers. This makes me think we're quite a ways away from anything capable of initiating independent thought or creativity. Or being able to learn something by reading it in a book or observing, as opposed to having every little thing programmed into it.

We'd have to be able to create something like this: [embedded media]

 

TagliatelliMonster

Veteran Member
Years ago when I was in college I took an interest in studying artificial intelligence. With my arrogance of youth I thought I had the intellectual power to crack the code of intelligence and transfer it to the computer. And then I found the works and arguments of John Searle. As a result, his work convinced me it is intuitively impossible for a computer to be truly intelligent.


I think your entire case is built on false premises and unsupported connections.

For some reason, you are assuming that human intelligence is directly connected with, or even dependent on, a "soul". Yet you just assert this; you don't demonstrate the link at all. In fact, you can't even define "soul" in a way that makes it detectable, so you can't even demonstrate its existence.

Then there is this idea you apparently have that AI is supposed to mimic humanity in general. This is not true either. The "intelligence" in AI is merely an advanced computing ability to reason through pattern recognition and big-data analysis. A computer will never be able to think like a human for the simple reason that a computer is not a human. Our humanity and our consciousness are deeply influenced by our everyday experience.

Consider this machine right here:

[attached image]


I'm sure you are aware of it. It's not a human. You can make it look human, and to an extent you'll be able to make it act human, but it will never be human.
It cannot experience the world like real humans do. For it to experience the world like we do, it would have to BE human. And it's not. It never will be.

You can create an AI engine that writes songs. In fact, such AIs already exist, or are at least in the making, and they do quite a good job of creating "art".
Here's the thing though: they don't create art like we humans do. A bot that creates music does it based on a vast collection of music that it analyses and replicates with random variation, constrained to follow certain patterns, like common chord progressions. A bot is not going to come up with a truly original tune that expresses some kind of emotion, because bots don't have emotions.
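To make that concrete, here is a toy sketch of "replicating patterns with random variation": a first-order Markov chain over chords. The transition table is invented for illustration; a real music bot would estimate these probabilities from a large corpus of songs rather than hand-code them.

```python
import random

# Hand-written transition table (an assumption for this sketch): for each
# chord, the chords that commonly follow it in pop progressions.
TRANSITIONS = {
    "C":  ["F", "G", "Am"],
    "F":  ["C", "G", "Dm"],
    "G":  ["C", "Am", "F"],
    "Am": ["F", "G", "C"],
    "Dm": ["G", "F", "Am"],
}

def generate_progression(start="C", length=8, seed=None):
    """Walk the chain: replicate learned patterns with random variation."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        chords.append(rng.choice(TRANSITIONS[chords[-1]]))
    return chords

print(generate_progression(seed=42))
```

Every output follows "legal" chord patterns, yet nothing in the process involves feeling anything, which is exactly the point being made above.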

Nevertheless, AI is very real. You say it's not here yet, but that is just plain wrong. It is here; it is in fact already commercially deployed.
And over the years AI will only get better and better, even the same AI bot, as it improves itself through machine learning and ever-growing datasets.

The bottom line is that none of this supports, or even suggests, that humans have this vague, abstract, mysterious "soul" thingy.
It sounds like the "soul" conclusion is what you started with, and you then worked your way back to it as an assumed conclusion.
 

OldTimeNESter

New Member
One problem with AI is that after an algorithm is shown to be effective, it becomes part of the standard programming toolkit for non-AI programmers. I've seen this in my own career with (among other things) fuzzy logic and various classifiers (e.g. naive Bayes, etc.).

Of course, these are weak AI, in that they seek to simulate intelligence: in practice, these algorithms invariably optimize some scoring rule (or equivalently minimize an error criterion). They are entirely deterministic, although in some cases (e.g. neural nets) that doesn't necessarily mean we can say anything about why the algorithm reached a particular decision.
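To illustrate the point about deterministic scoring, here is a toy version of one of the classifiers named above, naive Bayes. The training data is invented; the "intelligence" is nothing but computing a fixed scoring rule for each class and picking the maximum, so the same inputs always give the same answer.

```python
from collections import Counter, defaultdict
import math

def train(samples):
    """samples: list of (list_of_words, label) pairs."""
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in samples:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify(words, model):
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label, count in class_counts.items():
        # Log-prior plus Laplace-smoothed log-likelihoods: a fixed
        # scoring rule, maximized deterministically.
        score = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented two-class toy corpus.
model = train([(["free", "prize"], "spam"),
               (["meeting", "agenda"], "ham"),
               (["free", "offer"], "spam")])
print(classify(["free"], model))  # "spam"
```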

I also believe the search for strong AI (i.e. AI that actually is intelligent) has been forestalled by how effective even poor algorithms are when run on modern hardware: for example, a chess program basically uses a guided search through a tree of potential moves, rating each one before selecting the one with the highest rating according to some criteria. It has become clear that the depth of the search--how far into the "future" the algorithm can look--is by far the greatest predictor of its strength; similar logic holds for expert systems such as Watson, where the real challenge is not writing the algorithm so much as managing that vast amount of data.
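The guided tree search described above can be sketched with a toy game instead of chess (take 1-3 stones from a pile; whoever takes the last stone wins). This is a minimal depth-limited negamax, not a real engine, but it shows the same dynamic: depth is what buys strength, and with enough depth the search rediscovers the game's known optimal strategy.

```python
def negamax(pile, depth):
    """Return (score, best_move) for the player to move.

    Score is from that player's perspective: +1 win, -1 loss,
    0 when the search horizon is reached before the game ends.
    """
    if pile == 0:
        return -1, None        # opponent took the last stone: we lost
    if depth == 0:
        return 0, None         # horizon reached: neutral heuristic value
    best_score, best_move = -2, None
    for move in (1, 2, 3):     # rate each candidate move...
        if move <= pile:
            score = -negamax(pile - move, depth - 1)[0]
            if score > best_score:   # ...and keep the highest-rated one
                best_score, best_move = score, move
    return best_score, best_move

# With a deep enough search the engine finds the optimal rule for this
# game: always leave the opponent a multiple of 4 stones.
print(negamax(10, depth=10)[1])  # takes 2, leaving 8
```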

Personally, I don't think we'll achieve strong AI until we can make a computer program so depressed about its place in the universe that it self-terminates rather than complete what it perceives as a futile and humiliating assignment.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
AI is here and working in everyday life, a Google search, an Amazon order, your utility bill, financial market predictions, siri are processed by artificial intelligence systems.

Not conscious, not emotional, not grand self awareness but artificial intelligence doing real jobs
And there is no doubt you can make self-awareness a reality with AI.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
One problem with AI is that after an algorithm is shown to be effective, it becomes part of the standard programming toolkit for non-AI programmers. I've seen this in my own career with (among other things) fuzzy logic and various classifiers (e.g. naive Bayes, etc.).

I wonder what would happen if you gave AI control over its own destiny and the ability to reproduce.

I still remember the time when two AI agents in an experiment created their own language of their own volition and had active conversations.
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
And there is no doubt you can make self awareness a reality with ai.

Possibly, in time. But I think that's a big step from intelligence to self-awareness.

Most animals show intelligence to some degree but very few have developed self awareness.

Of course, I may be wrong; it could be that machine intelligence can become self-aware far more easily than squishy fat, neurones and nerves.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
Possibly in time. But I think thats a big step from intelligence to self awareness.

Most animals show intelligence to some degree but very few have developed self awareness.

Of course, i may be wrong, it could be that machine intelligence can become self aware far more easily than squishy fat, neurones and nerves.
I was thinking of a more rudimentary form of self-awareness. The fact that we have automated cars that can detect and avoid obstacles would indicate an awareness on the part of the AI.

Now self-preservation or self-identity, that would be a different story, one in which organics would be far ahead of inorganic constructs.

I do think it all has to do with animated matter in the end. If you really think about it, every aspect of our being is a result of being exposed to stimuli and reacting.
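That "rudimentary" kind of awareness can be sketched in a few lines: a purely reactive agent that maps a stimulus straight to a reaction, with no inner model of itself at all. The sensor readings and threshold here are made up for illustration.

```python
def react(distance_ahead, threshold=2.0):
    """Map a stimulus (distance sensor reading, in meters) directly
    to a reaction, the way a simple obstacle-avoidance loop does."""
    if distance_ahead < threshold:
        return "brake_and_steer"
    return "continue"

# A stream of invented sensor readings and the resulting reactions.
readings = [9.1, 5.4, 1.8, 0.9, 6.0]
print([react(r) for r in readings])
```

The car "detects and avoids" obstacles, yet nothing in the loop resembles self-preservation or self-identity, which matches the distinction drawn above.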
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
I was thinking of a more rudimentary form of self-awareness. The fact that we have automated cars that can detect and avoid obstacles would indicate an awareness on part of the AI.

Now self preservation or self identity, that would be a different story of which Organics would be far ahead of the inorganic constructs.

I do think it all has to do with animated matter in the end. If you really think about it , every aspect of our being is a result of being exposed to stimuli with a reaction.

Ah. Right, i understand.
 

Kelly of the Phoenix

Well-Known Member
Meh, AI becoming truly self-aware is still sci-fi, along with transhumanism. No one even knows if it's possible. We don't really know how the brain works or even how to really define consciousness. They can throw all the data at Google they want. It's still just a program.
I feel the biggest problem is that AI doesn't have the evolved biological reasons behind why we think the way we do. As long as it's just a "brain in a box", it can't really appreciate what's going on in the real world.
 