
Artificial Intelligence

MD

qualiaphile
Human beings will eventually create AI, although it won't happen in any of our lifetimes.
 

apophenia

Well-Known Member
Human beings will eventually create AI, although it won't happen in any of our lifetimes.


We have already created AI.

I think you mean we will not create strong AI - which is an intelligence equal to or greater than human intelligence. Although how we would test for that, I don't know. It seems to me that we could only determine whether an AI equals human intelligence in specific kinds of tasks.

Or perhaps you mean AI with conscious awareness - which at the moment could not be verified, as conscious awareness is not accounted for, described, or testable as science currently stands.

For all we know the air you breathe has awareness. Science cannot test for what it cannot define.

This is why science fundamentalists always try to define awareness in a way which equates to complex behaviour/stimulus response - because that definition gives the false impression that science knows something about self/awareness, which it absolutely does not. That definition also means that we can create 'awareness', which makes scientists seem godlike, but it relies on a dodgy, incomplete definition.
 

MD

qualiaphile
We have already created AI.

I think you mean we will not create strong AI - which is an intelligence equal to or greater than human intelligence. Although how we would test for that, I don't know. It seems to me that we could only determine whether an AI equals human intelligence in specific kinds of tasks.

Or perhaps you mean AI with conscious awareness - which at the moment could not be verified, as conscious awareness is not accounted for, described, or testable as science currently stands.

For all we know the air you breathe has awareness. Science cannot test for what it cannot define.

This is why science fundamentalists always try to define awareness in a way which equates to complex behaviour/stimulus response - because that definition gives the false impression that science knows something about self/awareness, which it absolutely does not. That definition also means that we can create 'awareness', which makes scientists seem godlike, but it relies on a dodgy, incomplete definition.

Yes, I meant to say conscious AI. We already have AI, yes, and I think we should have forms of strong AI within a few decades (although I don't think it will be a Kurzweilian singularity). I actually believe that the universe has a sense of awareness to it, but right now our knowledge about consciousness and awareness is **** poor at best, and it will take a very long time before we have a good understanding of what consciousness is before we can even begin to implement it. Conscious AI could be 500 years away; it could even be 2000 years away. Hard to tell.
 

idav

Being
Premium Member
If a scientist can show me how a machine can be observed to be 'aware of a sense of self', I will be very surprised indeed.

Quite a tall order you have there. It isn't enough that with some animals we can test whether they are aware that they are aware; you want to test whether an animal is aware that it is self-aware. Sheesh, sounds a bit redundant.

I think I see what you're getting at though. You want proof that they can contemplate the meaning of existence - the 'why' question? I'm sure it takes a higher level of sentience to ponder why. Basically we are machines that learned to do this, so any machine with enough potential intelligence will be able to do the same, and it would be able to show it the same way any human does.
 

1137

Here until I storm off again
Premium Member
Actually, science works by making inferences based on facts and evidence. There is no reason not to think that awareness, sense of self, and "consciousness" (which is, by definition, the same as awareness) all come from processes in the brain. Since we don't completely know how - just that it's logical to think that way - people come in saying we don't understand consciousness, cannot define it, and that it is tied to the magical spirit world, tended to by fairies and unicorns.

We pretty much are machines, we are just organic.
 

MD

qualiaphile
Actually, science works by making inferences based on facts and evidence. There is no reason not to think that awareness, sense of self, and "consciousness" (which is, by definition, the same as awareness) all come from processes in the brain. Since we don't completely know how - just that it's logical to think that way - people come in saying we don't understand consciousness, cannot define it, and that it is tied to the magical spirit world, tended to by fairies and unicorns.

We pretty much are machines, we are just organic.

Actually by making such an assumption you're going against the principles of science itself. There is no evidence whatsoever about why we have subjective experience, and what the mechanism of creating qualia is. To assume that it is nothing more than neural signals creating patterns which create our reality is as valid as saying a soul creates reality.

There's an ontological gap between the neural correlates of consciousness and subjective experience. Consciousness is more than just awareness, it has intentionality, qualia and perhaps even free will as well. Just because it is created by the brain, doesn't mean we understand it. As of now there are several competing theories with regards to consciousness, of which very few specify consciousness as an emergent magical property of pure neural firing.
 

1137

Here until I storm off again
Premium Member
Actually by making such an assumption you're going against the principles of science itself. There is no evidence whatsoever about why we have subjective experience, and what the mechanism of creating qualia is. To assume that it is nothing more than neural signals creating patterns which create our reality is as valid as saying a soul creates reality.

There's an ontological gap between the neural correlates of consciousness and subjective experience. Consciousness is more than just awareness, it has intentionality, qualia and perhaps even free will as well. Just because it is created by the brain, doesn't mean we understand it. As of now there are several competing theories with regards to consciousness, of which very few specify consciousness as an emergent magical property of pure neural firing.

There is no reason not to believe consciousness is a product of the brain. It does not create reality; reality is objective. It changes our perception, sure. Our emotions, views - I guess you could call that "our reality" - are subjective. But if everything else comes from the brain, why wouldn't consciousness? The definition of consciousness is, in fact, awareness. Logically, awareness comes from the brain processing information from the five senses. What we intend to do, what we feel, how we perceive, our ideas, our sense of self - all of this is caused by the brain (and DNA). As for free will, there is no such thing. We live in a deterministic and mechanical universe, our minds not excluded. If we think consciousness is separate from the physical, give ONE example of that being so, or explain how physical things like rocks can exist without being conscious. Just because we do not understand does not mean we can throw all evidence and reason out the window and accept a magical, soul-induced consciousness guided by angels.
 

MD

qualiaphile
There is no reason not to believe consciousness is a product of the brain. It does not create reality; reality is objective. It changes our perception, sure. Our emotions, views - I guess you could call that "our reality" - are subjective. But if everything else comes from the brain, why wouldn't consciousness? The definition of consciousness is, in fact, awareness. Logically, awareness comes from the brain processing information from the five senses. What we intend to do, what we feel, how we perceive, our ideas, our sense of self - all of this is caused by the brain (and DNA). As for free will, there is no such thing. We live in a deterministic and mechanical universe, our minds not excluded. If we think consciousness is separate from the physical, give ONE example of that being so, or explain how physical things like rocks can exist without being conscious. Just because we do not understand does not mean we can throw all evidence and reason out the window and accept a magical, soul-induced consciousness guided by angels.

Did you read what I wrote or are you purposely ignoring my post?

Consciousness is intricately tied with brain, but that doesn't mean we know what it is. To call it neural impulses is slightly less fallacious than calling it a soul. Reality is not truly objective, reality is susceptible to what the observer observes. Reality is thus subjective. This is even true in quantum physics.

We've had this debate before and I'm tired of trying to tell you that consciousness in the scientific sense is not only awareness. Awareness is awareness. Consciousness is a synonym for what constitutes our mental universe. The senses interpret the universe and somehow we achieve our own version of reality.

There is very little evidence that the attributes of consciousness are caused by neural impulses. The neurons might simply be channeling some sort of neutral third substance into what we consider a mind. This is called neutral monism, the idea that the universe is made of a third substance and mind as well as matter are created from this third 'stuff', depending on its arrangement. If you really want me to quote authority, most neuroscientists studying consciousness are neutral monists.

To state that science shows that consciousness is caused by neural impulses firing away is an illogical argument, no matter how logical it may seem to you.
 

1137

Here until I storm off again
Premium Member
First, I've never debated you to my knowledge. Second, there is objective reality; it is just perceived subjectively. If you are going to argue that nothing exists external to the mind, it is a waste of my time, and I shall move on.
 

MD

qualiaphile
First, I've never debated you to my knowledge. Second, there is objective reality; it is just perceived subjectively. If you are going to argue that nothing exists external to the mind, it is a waste of my time, and I shall move on.

Yes we have debated many months ago.

Second, there is an objective reality, but the only way we observe this objective reality is through our subjective lens. Thus, at best, we can come to as unbiased a view as we can about the objective reality. But it is still subjective, to a very minute extent.

There is also a subjective reality - the richness of color, the feelings of emotion. These should not exist in an objective universe but do, and before you say 'our brains make it up': there is no evidence that our brains make it up, and it is an illogical statement. If our brains are made of objective matter, subjectivity should not exist.

Our universe is composed of objective and subjective units of information, which are derived from a neutral one. Whether that neutral substance is space-time, energy, platonic realms, FSM, God, etc I don't know. But objective matter cannot and should not result in subjective mind and is thus not the only factor in creating mind.
 

apophenia

Well-Known Member
Quite a tall order you have there. It isn't enough that with some animals we can test whether they are aware that they are aware; you want to test whether an animal is aware that it is self-aware. Sheesh, sounds a bit redundant.

You are misrepresenting the situation, and what I posted.

We have tests that indicate that a creature can register the fact that it has a white spot painted on its face. From such evidence we conclude 'self-awareness'.

But this is no more compelling a proof of self-awareness than a digital camera detecting a smile.

Perhaps you can provide examples of compelling evidence of the capacity to prove self-awareness in the sense of awareness of being, because that particular example is so easy to duplicate with a PC that it hardly amounts to evidence of anything beyond simple computation.



I think I see what you're getting at though. You want proof that they can contemplate the meaning of existence - the 'why' question? I'm sure it takes a higher level of sentience to ponder why. Basically we are machines that learned to do this, so any machine with enough potential intelligence will be able to do the same, and it would be able to show it the same way any human does.
That has nothing at all to do with my observation, although it is another example of something no machine is ever likely to do.

My observation was limited to the fact that there is no scientific method of determining the presence of awareness of being.

So to paraphrase - if you can show me any scientific method to determine the presence of 'awareness of being', I will be very surprised indeed.

It seems to me that this is in no way practically necessary or useful, regardless of whether it is possible. Very useful AI can and will be developed, and many human capacities will be duplicated synthetically. It is not a requirement of any such system that I can imagine that it be aware that it exists.

What I don't get is why the claim even needs to be made that science will be able to produce entities aware of their existence.

My impression is that those who persist in making this claim do so because of a refusal to accept the possibility of any aspect of reality not being explainable by science. That seems to be the crux of the issue. I have been flamed countless times over the years for daring to suggest that science is not (at least eventually) omniscient!

Why is it necessary to believe that science can produce consciousness ? Because, make no mistake, lots of supporters of double-plus-strong AI (self awareness) simply refuse to accept that there is any limit to science.

There is a huge difference between acknowledging the power of the scientific method and claiming not only that science is the only source of valuable knowledge, but also that science is (or will soon be) omniscient. That is not a noble appreciation of science; that is fundamentalism. And that is what emerges whenever this question of strong AI is raised. I get the distinct impression that for many people the belief that everything is scientifically explainable is actually a pathological emotional need, not unlike the need to believe in god.
 

apophenia

Well-Known Member
One point I do want to add to that: to say that there is the possibility that awareness will never be scientifically understood is not to posit anything mystical or supernatural.

I am not, for example, arguing that awareness is 'a property of the soul'. I have no more belief in souls than I do that science will be able to create consciousness in synthetic life.

It is not a question of choosing between two propositions, one secular and one religious or metaphysical. My observations about the lack of scientific knowledge about consciousness do not equate to religiously inspired guerilla ontology, although that is a common interpretation.

I have no religious beliefs to defend, nor is my world view threatened by science.
 

LegionOnomaMoi

Veteran Member
Premium Member
My observation was limited to the fact that there is no scientific method of determining the presence of awareness of being.

So to paraphrase - if you can show me any scientific method to determine the presence of 'awareness of being', I will be very surprised indeed.

The problem isn't so much one of methodology as it is one of definitions, criteria, and agreement concerning these. And this applies to humans as well. The usefulness of something like the Turing test (not his actual "imitation game", but some version of it which allows people to interact with the program/system until they are either satisfied that it is not human, or that it is at least indistinguishable from humans) is that it bypasses this question entirely. It produces other problems, of course, but the lack of empirically developed methods and criteria for testing consciousness or self-awareness applies to humans as well: we can assume we possess it, but there is no widely acknowledged way to scientifically show we actually do. And any informal procedure can be applied to non-humans as easily as it can humans.

It seems to me that this is in no way practically necessary or useful, regardless of whether it is possible. Very useful AI can and will be developed, and many human capacities will be duplicated synthetically. It is not a requirement of any such system that I can imagine that it be aware that it exists.

The term "artificial intelligence" isn't actually used much, especially when referring to programs or machines which are designed to "learn". In part this is due to outdated connotations, assumptions, views, etc., associated with "AI", but it is also because there are programs and algorithms designed to adapt, learn, and solve problems for human application, but which are not attempts to simulate the human "mind", as well as those which are.

However, for both (systems designed purely for application and those designed to simulate human cognition or the "mind"), self-awareness would be extremely useful and probably necessary for anything intended to replicate much of human cognitive capability. The enormous divide between all existing computational intelligence paradigms, soft computing/machine learning algorithms, and related areas, and the brain (not just the human brain) is the brain's ability to process meaning. That is, while a computer can crunch numbers and output extremely useful results, it has no capacity to understand what it is intended to do, what it is doing, or what the results it produces mean. Additionally, the most important concept or notion which brains can produce is one of "self". This is what allows a system to take meaningful input (some arbitrary or semi-arbitrary set of symbols to which it attaches meaning the way we do with language), understand these concepts as external to itself, reflect on them, and intentionally carry out some procedure on this input.

Watson is a useful example here. The reason anybody cares that a computer could solve Jeopardy problems has nothing to do with the computer's "knowledge". Watson had massive databases to access. What was special was that Watson itself had to parse the input (the Jeopardy "question"), access data, and return a response, all without human guidance. Any human with access to Google and a reasonable amount of internet savvy could easily do what Watson did, and better. Why? Because Watson couldn't actually understand "Its largest airport was named for a World War II hero; its second largest, for a World War II battle"; there was no way for Watson to process what these words meant and then go searching through databases. Instead, it relied on specialized databases filled with examples of human speech, and "learned" before the game to take Jeopardy "questions", apply very sophisticated "matching" algorithms to come up with a number of possibilities which it determined were close to the input, weigh these to determine which was most likely the desired "match", and then use the databanks of information to return an answer.

In other words, it relied on massive databases of human speech to match input with, because all it could do was treat these as meaningless symbols (like a calculator does with numbers). With some of the most sophisticated algorithms and machinery in the world, Watson was capable of parsing Jeopardy "questions" with far more difficulty and far less accuracy than humans. However, combined with stored data containing the answers (and the fact that computers are far superior at storing and accessing data), it won.
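To caricature the point (this is a toy sketch of my own, not IBM's actual algorithms, and the database entries are invented examples): meaning-free "matching" amounts to treating a clue as a bag of symbols and scoring it against stored clues by word overlap, with no grasp of what any word refers to.

```python
# Toy caricature of meaning-free "matching": score a clue against stored
# (clue, answer) pairs by Jaccard similarity of their word sets.
def tokens(text):
    # Lowercase, strip simple punctuation, split into a set of "symbols".
    return set(text.lower().replace(",", "").replace(";", "").split())

def best_match(clue, database):
    def score(stored_clue):
        a, b = tokens(clue), tokens(stored_clue)
        return len(a & b) / len(a | b)
    # Return the answer paired with the highest-overlap stored clue.
    return max(database, key=lambda pair: score(pair[0]))[1]

db = [
    ("largest airport named for a World War II hero", "Chicago"),
    ("river that flows through Paris", "the Seine"),
]
print(best_match("Its largest airport was named for a World War II hero", db))
# prints: Chicago
```

The program "answers" correctly by symbol overlap alone; nothing in it represents airports, wars, or heroes, which is exactly the gap being described.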

If something like Watson could be asked, for example, "who was Alexander the Great's teacher?" and could understand the concepts behind this question (what a "teacher" is, who Alexander the Great is, what "teacher" meant in that day, and that although undoubtedly many people taught Alexander the Great, Aristotle being one of them means he's generally regarded as the teacher of Alexander, etc.) and possessed a concept of "self" which allowed it to understand what was asked of it (a system cannot, after all, understand what you want it to do unless it has some sense of itself as an agent capable of "doing"), then the combination of this ability + access to databases would mean incredibly powerful problem solvers. All of the problems which make things like facial recognition, language processing, data mining, etc., so difficult for computers arise because "AI" programs cannot understand themselves as entities intended to do some task involving processing meaningful input.

Basically, as long as we can't create AI systems capable of conceptual self-representation, they will require extremely complex algorithms to do things even dogs (let alone humans) do effortlessly. Self-awareness is extremely important.

What I don't get is why the claim even needs to be made that science will be able to produce entities aware of their existence.

My impression is that those who persist in making this claim do so because of a refusal to accept the possibility of any aspect of reality not being explainable by science. That seems to be the crux of the issue. I have been flamed countless times over the years for daring to suggest that science is not (at least eventually) omniscient!

Perhaps that has more to do with the specific people you are talking to. After all, the "standard" model of physics involves an absolute limit to human knowledge. I think the reason most scientists/researchers believe that we can produce systems which are self-aware has less to do with faith in science and more to do with 1) the fact that since humans do it, it can be done, and 2) the lack of compelling enough evidence to think that the mechanisms humans rely on cannot be simulated (even if this would require something other than a digital computer).

Again, perhaps my experience reflects a particular type of interaction with particular types of individuals (people working in some field related to machine learning, cognitive science, systems complexity, etc.) rather than people in general. But even though most of those I have worked with or talked with in such fields are atheists or at least agnostics who tend towards atheism, I haven't had the experiences you describe.
 

otokage007

Well-Known Member
No, it is not possible. My brother is an engineer who works programming robots; when I asked him for his opinion, he said:

"It is not possible yet, probably until Biology unravels the human brain functioning and we are able to create an artificial brain based on that."

So I don't think it is possible yet, but of course I think it will be possible in a not-very-distant future.
 

idav

Being
Premium Member
Watson is a useful example here. The reason anybody cares that a computer could solve Jeopardy problems has nothing to do with the computer's "knowledge". Watson had massive databases to access. What was special was that Watson itself had to parse the input (the Jeopardy "question"), access data, and return a response, all without human guidance. Any human with access to Google and a reasonable amount of internet savvy could easily do what Watson did, and better. Why? Because Watson couldn't actually understand "Its largest airport was named for a World War II hero; its second largest, for a World War II battle"; there was no way for Watson to process what these words meant and then go searching through databases. Instead, it relied on specialized databases filled with examples of human speech, and "learned" before the game to take Jeopardy "questions", apply very sophisticated "matching" algorithms to come up with a number of possibilities which it determined were close to the input, weigh these to determine which was most likely the desired "match", and then use the databanks of information to return an answer.

In other words, it relied on massive databases of human speech to match input with, because all it could do was treat these as meaningless symbols (like a calculator does with numbers). With some of the most sophisticated algorithms and machinery in the world, Watson was capable of parsing Jeopardy "questions" with far more difficulty and far less accuracy than humans. However, combined with stored data containing the answers (and the fact that computers are far superior at storing and accessing data), it won.
What Watson did was an important step. Just because it had to search out the questions doesn't mean it isn't significant. The machine has to be able to learn; just as it takes us many years to master our language, it would require a machine to have real experience to learn the old-fashioned way. Things found through Google are understandable as you delve into the detail, and understanding what the symbols mean is enough to understand the meaning, which Watson was able to do.
 

idav

Being
Premium Member
One point I do want to add to that: to say that there is the possibility that awareness will never be scientifically understood is not to posit anything mystical or supernatural.
My position is that knowledge is at least possible to obtain for any natural phenomenon otherwise we go into the mystical or supernatural. I don't know how much time we need to understand awareness but I don't think it is a mystery outside the realm of simple physical processes. If the sheer complexity is the issue then it isn't much of an issue. A simple physical process multiplied a trillion times would surely seem extraordinary when we are unfamiliar with every detail.
 

LegionOnomaMoi

Veteran Member
Premium Member
What Watson did was an important step. Just because it had to search out the questions doesn't mean it isn't significant. The machine has to be able to learn; just as it takes us many years to master our language, it would require a machine to have real experience to learn the old-fashioned way. Things found through Google are understandable as you delve into the detail, and understanding what the symbols mean is enough to understand the meaning, which Watson was able to do.

The issue isn't the need "to search". It's a qualitatively different methodology. Watson isn't different from a calculator; it's just faster and has more storage. The important difference is the ability to attach meaning to input. Watson couldn't, which means it couldn't actually understand the "questions". It could use algorithms to search through a massive number of similar "questions" and use adaptive programming and the intelligence of the designers to find the best "match". It's like giving a snail the data storage of the internet without improving its cognitive abilities. It can react, and you can make it react in certain ways, but you need something completely different to get a program/machine which can "understand".
 

idav

Being
Premium Member
The issue isn't the need "to search". It's a qualitatively different methodology. Watson isn't different from a calculator; it's just faster and has more storage. The important difference is the ability to attach meaning to input. Watson couldn't, which means it couldn't actually understand the "questions". It could use algorithms to search through a massive number of similar "questions" and use adaptive programming and the intelligence of the designers to find the best "match". It's like giving a snail the data storage of the internet without improving its cognitive abilities. It can react, and you can make it react in certain ways, but you need something completely different to get a program/machine which can "understand".

If it didn't understand, then it wouldn't be able to answer an interpretive question about a painting. Interpreting the symbols is what gives text meaning, because we can associate the symbols with some picture we saw or some live experience we have. Sure, Watson is no smarter than a child, since it doesn't have experience. The capacity to learn is the most important feature of intelligence, but of course being able to actually process that amount of data becomes important as intelligence increases.
 

Reptillian

Hamburgler Extraordinaire
We don't really need to understand how thinking or intelligence works to reproduce them. It's like producing complex patterns with cellular automata.

Imagine a long row of squares on a piece of paper, with each square either filled in or left blank. Below that row, I draw another identical row of squares, all initially blank. Now I either fill in each square in the new row or leave it blank, depending on whether the square directly above it is filled in, and whether the two squares touching that square are filled in or blank.

For example, if all three squares above the square of interest are filled in, you might leave the new one blank; if the one directly above the square of interest is blank but the ones on the left and right are filled in, you might decide to fill it in. There are 8 different configurations for the three squares (called cells) above the cell of interest, and you leave the new cell blank or fill it in for each configuration. So there are 2 choices for each of the 8 configurations, giving a total of 256 different possible sets of rules for filling in the new row. So you can number them :)
[image: table of elementary cellular automaton rules]


These are some examples of rules.

Now, once your new row is complete, you repeat the process: draw another new row beneath that, use your chosen rule to complete it, and so on. In the end you get a picture that looks like this:

[image: the triangular pattern produced by rule 30]


Not all rules produce such interesting patterns...over a lot of updates you can get some really complex pictures with certain rules



Now what does this have to do with artificial intelligence? (since you might ask)

Well, you could attempt to reproduce the above complex picture on a piece of paper by drawing in each triangle... this would be like trying to completely understand thinking before creating an intelligent machine. But you could do what we did before and use a simple starting state with simple rules, and let the complexity fall out on its own... this is the approach to artificial intelligence that a lot of people are taking today. You build the framework, figure out the basic rules, and sort of let the complexity and thinking "fall out". No complete understanding of the human mind required.
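The whole scheme fits in a few lines of code, by the way (an illustrative sketch of the rule-numbering described above; I've used rule 30 and made the edges wrap around for simplicity):

```python
# One update step of an elementary cellular automaton. The rule number's
# binary digits encode the new cell for each of the 8 possible
# three-cell neighborhoods (neighborhood value = bit position).
def step(row, rule):
    n = len(row)
    return [
        (rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n])) & 1
        for i in range(n)  # edges wrap around for simplicity
    ]

# Start from a single filled cell and let rule 30's complexity "fall out".
width, rule = 31, 30
row = [0] * width
row[width // 2] = 1
for _ in range(15):
    print("".join("#" if cell else "." for cell in row))
    row = step(row, rule)
```

Running it prints the same kind of triangular pattern as the picture above, even though nothing in the code "understands" the pattern; it only applies the 8-case lookup row by row.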
 