Misconceptions

atanu

Member
Premium Member
Turing test - Wikipedia

Many cite the Turing Test criterion as proof that mind is nothing beyond materially controlled computation. People think that, through his discovery of computational universality and his test criterion for intelligence, Turing had proven that there is nothing more to mind, brain, or the physical world than the unfolding of an immense computation.

But is this assessment correct?

In a 1951 radio address, Turing himself brought up Eddington, and the possible limits on the prediction of human brains imposed by the uncertainty principle:

If it is accepted that real brains, as found in animals, and in particular in men, are a sort of machine it will follow that our digital computer suitably programmed, will behave like a brain. [But the argument for this conclusion] involves several assumptions which can quite reasonably be challenged. [It is] necessary that this machine should be of the sort whose behaviour is in principle predictable by calculation. We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.

Furthermore, in 1954 Turing again quoted Eddington in a postcard message to Robin Gandy, which reads, in part:

Messages from the Unseen World
The Universe is the interior of the Light Cone of the Creation. Science is a Differential Equation. Religion is a Boundary Condition

So, it seems that Turing postulated and believed that a Turing machine would be able to fool us into thinking that it was intelligent. But Turing did not necessarily believe that such a programmed machine would possess the same intelligence as ours. In short, the ability to imitate a part of intelligent action is not the same as possessing that intelligence. So, my first question is: in light of what Turing himself said, will a Turing machine that someday passes a Turing verbal imitation test prove that human intelligence is nothing but computation?

The point, however, seems even deeper in light of Turing's reference to "The Universe is the interior of the Light Cone of the Creation. Science is a Differential Equation. Religion is a Boundary Condition". It is, in my opinion, a point related to the operation of will.

Niels Bohr, from a 1932 lecture about the implications of Heisenberg’s uncertainty principle:

[W]e should doubtless kill an animal if we tried to carry the investigation of its organs so far that we could tell the part played by the single atoms in vital functions. In every experiment on living organisms there must remain some uncertainty as regards the physical conditions to which they are subjected, and the idea suggests itself that the minimal freedom we must allow the organism will be just large enough to permit it, so to say, to hide its ultimate secrets from us.

Or this, from the physicist Arthur Compton:

A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur. That he does so is known only to the person himself. From the outside one can see in his act only the working of physical law.

So, my second question is the same as the question put forth by Scott Aaronson in his seminal work "The Ghost in the Quantum Turing Machine": "Does quantum mechanics (specifically, say, the No-Cloning Theorem or the uncertainty principle) put interesting limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?"
.............
Note: The material for this thread is cited from the following:
Scott Aaronson. “The Ghost in the Quantum Turing Machine”.
https://www.scottaaronson.com/papers/giqtm3.pdf

Scott has cited the following papers (pertaining to this thread) in his paper:

S. M. Shieber, editor. The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Bradford Books, 2004.
A. Hodges. Alan Turing: The Enigma. Princeton University Press, 2012. Centenary edition.
N. Bohr. Atomic Physics and Human Knowledge. Dover, 2010. First published 1961.
A. H. Compton. Science and Man’s Freedom. The Atlantic, 200(4):71–74, October 1957.
.............

...
 

atanu

Member
Premium Member
I have gathered the two main questions in this post.

1. In light of what Turing himself said, will a Turing machine that someday passes a Turing verbal imitation test prove that human intelligence is nothing but computation?


2. Does quantum mechanics put limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?

...

Bonus observation:

One who knows the code of an AI machine can predict all situations related to that machine. So, do not be fooled by assertions of Abacus mystics that you will be immortal once your brain is uploaded. No one knows the code that runs our brains.
 

sayak83

Veteran Member
Staff member
Premium Member
[atanu's opening post quoted in full]
What's to stop a machine we create from having an Atman in it as well?
 

Heyo

Veteran Member
I have gathered the two main questions in this post.

1. In light of what Turing himself said, will a Turing machine that someday passes a Turing verbal imitation test prove that human intelligence is nothing but computation?
No.

CS is an extension of mathematics, and in mathematics proof is possible, as with the two most famous results of CS proven by Turing: (1) everything that is computable can be computed with a Turing machine; (2) not everything is computable.
The Turing test, however, is a scientific experiment. Science never proves. It either disproves or confirms a hypothesis.
So much for the philosophy of science.

So, if a Turing machine can successfully play the imitation game, it would confirm the hypothesis that human intelligence is nothing but computation (for well defined values of "human intelligence" and the "Turing Test").
For a very low standard of "Turing Test" this has already been done by ELIZA, a program created by Joseph Weizenbaum in the '60s.
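To make the term "Turing machine" in those two results concrete, here is a minimal simulator sketch in Python (my own toy illustration, not from any of the cited papers). A Turing machine is nothing more than a finite transition table plus an unbounded tape; the max_steps guard is there because, per result (2), you cannot in general decide in advance whether such a machine will ever halt.

```python
# Minimal Turing machine simulator (illustrative sketch only).
# A machine is a dict mapping (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_tm(transitions, tape, state="q0", head=0, max_steps=1000):
    """Run the machine until no rule applies (halt) or max_steps is exceeded."""
    cells = dict(enumerate(tape))              # sparse tape; "_" stands for blank
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in transitions:
            break                              # no applicable rule: the machine halts
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit of a binary string, then halt on the first blank.
flip_bits = {("q0", "0"): ("q0", "1", "R"),
             ("q0", "1"): ("q0", "0", "R")}

print(run_tm(flip_bits, "10110"))              # prints 01001
```

ELIZA, by contrast, needed nothing this general; simple pattern matching was enough to clear a very low bar of the imitation game.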
 

HonestJoe

Well-Known Member
1. In light of what Turing himself said, will a Turing machine that someday passes a Turing verbal imitation test prove that human intelligence is nothing but computation?
No, but I question your opening assertion that “many” say it would anyway.

2. Does quantum mechanics put limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?
I know little to nothing about QM but I gather it puts limits on our ability to observe anything with any confidence. The basics of time and space put some limits on us anyway though. I’m not convinced we can ever know the answers to these questions.

Incidentally, I’m not sure why you posted in “Science and Religion” given there is no good reason to bring religion in to the conversation. :cool:

One who knows the code of an AI machine can predict all situations related to that machine.
That isn’t true. The whole point of true AI is that it wouldn’t be limited to following fixed rules and algorithms but to be able to construct logical structures on the fly to address any problems presented to it, much (though not exactly) like the human brain appears to.
 

ratiocinator

Lightly seared on the reality grill.
1. In light of what Turing himself said, will a Turing machine that someday passes a Turing verbal imitation test prove that human intelligence is nothing but computation?

Prove? No. Provide additional evidence? Yes.

2. Does quantum mechanics put limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?

It puts a limit on our ability to scan any physical object in complete detail. Whether that limit is at all relevant to the workings of the human brain is debatable, but I'd guess not.

You might be interested in this: The four biggest challenges in brain simulation
 

Howard Is

Lucky Mud

One who knows the code of an AI machine can predict all situations related to that machine. So, do not be fooled by assertions of Abacus mystics that you will be immortal once your brain is uploaded. No one knows the code that runs our brains.

That isn’t actually correct.

One of the problems with modern AI is that there is no way to know how it arrived at its conclusions.

Neural nets aren’t code. They are trained, not programmed.

And of course there is no code in our brains.

Each is as impenetrable as the other.
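For what "trained, not programmed" means in practice, here is a deliberately tiny sketch (pure Python, my own toy example): nobody writes an if/else rule for the OR function below; a single artificial neuron finds its own weights from examples, and afterwards the "program" is just three numbers that no one ever typed in.

```python
import random

# Toy "training, not programming": a single neuron learns the OR function.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = random.random(), random.random(), random.random()

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

for _ in range(200):                      # repeated passes over the examples
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)  # classic perceptron update rule
        w1 += 0.1 * error * x1
        w2 += 0.1 * error * x2
        b  += 0.1 * error

print([(x, predict(*x)) for x, _ in examples])
# -> [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

Scale that up to billions of weights and you get the opacity described above: the trained network does something, but no line of code says what.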
 

Howard Is

Lucky Mud
So, if a Turing machine can successfully play the imitation game, it would confirm the hypothesis that human intelligence is nothing but computation (for well defined values of "human intelligence" and the "Turing Test").

No, I don’t think so.

It would confirm that humans can make errors of judgement.

There is also animism involved.

If a human believes that an android is human, it is not because the android has a mind equal to a human. It is because of a projection of ‘life’ onto the android.

This is the basic flaw of the Turing test.
 

Polymath257

Think & Care
Staff member
Premium Member
I have gathered the two main questions in this post.

1. In light of what Turing himself said, will a Turing machine that someday passes a Turing verbal imitation test prove that human intelligence is nothing but computation?

No, that alone would not prove this. It would make it more plausible, but not prove it. An actual proof would involve showing how the brain actually does it.

Does quantum mechanics put limits on an external agent’s ability to scan, copy, and predict human brains and other complicated biological systems, or doesn’t it?

...


No. No copying at the quantum level would be necessary, I would bet. Copying at the level of, say, neurons, is not prohibited by the No-Cloning theorem.

The No-cloning theorem specifically talks about cloning quantum states exactly. This is not required unless you really want to copy fundamental particle by fundamental particle.
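For readers who want the gist of why exact quantum copying is impossible, the standard linearity argument fits in a few lines (my paraphrase of the textbook proof, not taken from Aaronson's paper):

```latex
% Suppose one fixed unitary U could copy every state onto a blank register:
%   U(|psi> tensor |0>) = |psi> tensor |psi>  for all |psi>.
\begin{align*}
U(\lvert\psi\rangle \otimes \lvert 0\rangle) &= \lvert\psi\rangle \otimes \lvert\psi\rangle,
\qquad
U(\lvert\phi\rangle \otimes \lvert 0\rangle) = \lvert\phi\rangle \otimes \lvert\phi\rangle.\\
\intertext{Unitaries preserve inner products, so comparing the two equations gives}
\langle\psi\vert\phi\rangle &= \langle\psi\vert\phi\rangle^{2}
\;\Longrightarrow\; \langle\psi\vert\phi\rangle \in \{0,1\}.
\end{align*}
% Only mutually orthogonal states can be cloned exactly, which is why the theorem
% constrains copying of exact quantum states but says nothing against copying a
% brain at the coarser level of neurons.
```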


Bonus observation:

One who knows the code of an AI machine can predict all situations related to that machine. So, do not be fooled by assertions of Abacus mystics that you will be immortal once your brain is uploaded. No one knows the code that runs our brains.

And I think this observation is mostly false. It would be true if the AI never interacted with the external world. But the external world is unpredictable, so we cannot know the actual data going into the AI, which means we cannot predict its behavior exactly. Even a one-degree difference in the direction of scanning will give different input, which could potentially lead to different behavior.

The other problem, at least for now, is that our brains are massively parallel processors and not the sequential processors of today's computer cores. That alone affects how the information is processed, especially in an interactive system: one piece of information can be processed *simultaneously* by several different areas of the brain. That vastly increases speed of response, and thereby reliability, in a real-world system.

Another aspect here is that it is no longer clear that a Turing Machine is actually the best model of computing in today's world. A TM always has all of its data on its tape before starting the computation. Modern systems almost never satisfy this criterion and, in fact, are expected to interact in real time. That is far beyond the model allowed by a TM (maybe a different formal machine?).
 

Howard Is

Lucky Mud
No. No copying at the quantum level would be necessary, I would bet. Copying at the level of, say, neurons, is not prohibited by the No-Cloning theorem.

At this point, copying a neuron is well beyond our capacity.

Unfortunately some early naive assumptions have stuck, at least in the mind of the general public.
One particularly wrong assumption has made the task of copying and equalling the brain seem straightforward, and that is that a neuron is a simple computing unit. At one stage, comparisons were being made along the lines of ‘when a CPU has as many gates as there are neurons in the brain, some kind of computing parity will have been achieved’.

Wrong.

A single neuron is a very complex biocomputer, and may rely on quantum computing to do what it does.

We want the brain to be simple and replicable.

But it just isn’t.
 

Polymath257

Think & Care
Staff member
Premium Member
At this point, copying a neuron is well beyond our capacity.

Unfortunately some early naive assumptions have stuck, at least in the mind of the general public.
One particularly wrong assumption has made the task of copying and equalling the brain seem straightforward, and that is that a neuron is a simple computing unit. At one stage, comparisons were being made along the lines of ‘when a CPU has as many gates as there are neurons in the brain, some kind of computing parity will have been achieved’.

Wrong.

A single neuron is a very complex biocomputer, and may rely on quantum computing to do what it does.

We want the brain to be simple and replicable.

But it just isn’t.

I agree that a neuron is, itself, quite complex. I doubt that quantum-level effects are important for the working of a neuron, though. The difference in scales is just too great. While modeling orbitals using QM can help in understanding how molecules fit into receptors, this doesn't involve actual entangled systems, which are incredibly fragile (and not likely to stay entangled in the 'hot', dense environment of a body).
 

atanu

Member
Premium Member
No, but I question your opening assertion that “many” say it would anyway.

You are free to question.

I know little to nothing about QM ....

Incidentally, I’m not sure why you posted in “Science and Religion” given there is no good reason to bring religion in to the conversation. :cool:

Do you have a problem with my posting this in 'Science and Religion'? Why? What is your problem?

That isn’t true. The whole point of true AI is that it wouldn’t be limited to following fixed rules and algorithms but to be able to construct logical structures on the fly to address any problems presented to it, much (though not exactly) like the human brain appears to.

How can you say "That isn’t true"?

Do you know coding? An AI is coded, and its code can be known. But for human intelligence that is not true. The following is from the paper of Scott Aaronson that I cited previously.

Scott Aaronson. “The Ghost in the Quantum Turing Machine”.
https://www.scottaaronson.com/papers/giqtm3.pdf
If you know the code of an AI, then regardless of how intelligent the AI seems to be, you can “unmask” it as an automaton, blindly following instructions. To do so, however, you don’t need to trap the AI in a self-referential paradox: it’s enough to verify that the AI’s responses are precisely the ones predicted (or probabilistically predicted) by the code that you possess! Both with the Penrose-Lucas argument and with this simpler argument, it seems to me that the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents.

If your code is knowable by physical agents, then you too are an automaton.
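Concretely, the "unmasking" Aaronson describes is just a replay check. Here is a minimal sketch (my own toy illustration; the function names are made up, and I ignore the probabilistic case he mentions):

```python
# If you possess an agent's code, you can test whether a black box is merely
# running that code by replaying the same inputs and comparing every response.

def unmask(black_box, possessed_code, inputs):
    """Return True if the black box's responses are exactly those predicted
    by the code we possess, i.e. it is 'unmasked' as an automaton."""
    return all(black_box(x) == possessed_code(x) for x in inputs)

# Toy example: the "AI" is a trivial echo bot, and we happen to hold its code.
echo_bot = lambda msg: "You said: " + msg
our_copy = lambda msg: "You said: " + msg

print(unmask(echo_bot, our_copy, ["hello", "are you conscious?"]))   # True
```

The open question in this thread is whether any physical agent could ever possess the analogous "code" for a particular human brain in the first place.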

...
 

atanu

Member
Premium Member
C'mon. A cat also has atman. I don't believe it can realise "I am that". Indeed, most humans don't think such way.

C'mon. A cat is not an AI that will usurp the powers of humans.

I agree with you partially. Most humans will not realise "I am That". But most humans will introspect "Who Am I?". Those who do not introspect in this way are at the level of automatons.
 

atanu

Member
Premium Member
That isn’t actually correct.

One of the problems with modern AI is that there is no way to know how it arrived at its conclusions.

Neural nets aren’t code. They are trained, not programmed.

And of course there is no code in our brains.

Each is as impenetrable as the other.

That is actually correct. Do not get carried away by abacus mystics.

The following is from the paper of Scott Aaronson that I cited previously.

Scott Aaronson. “The Ghost in the Quantum Turing Machine”.
https://www.scottaaronson.com/papers/giqtm3.pdf
If you know the code of an AI, then regardless of how intelligent the AI seems to be, you can “unmask” it as an automaton, blindly following instructions. To do so, however, you don’t need to trap the AI in a self-referential paradox: it’s enough to verify that the AI’s responses are precisely the ones predicted (or probabilistically predicted) by the code that you possess! Both with the Penrose-Lucas argument and with this simpler argument, it seems to me that the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents.

If your brain code is knowable by physical agents, then you too are an automaton.
...
 

Audie

Veteran Member
[atanu's opening post quoted in full]

Huh. I thought you were going to list yours.

Misconception on my part there.
 