
Misconceptions

atanu

Member
Premium Member
I was just looking at Wikipedia (Gödel's incompleteness theorems - Wikipedia) which starts...

Gödel’s interpretation of his incompleteness theorems has nothing to do with physical implementation. It is a disjunctive conclusion: either the mind is infinite and different from a computing machine, or, if it is a computing machine, it is subject to the fundamental limitations of incompleteness.

I am quoting Gödel’s own statement below.

So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified . . . (Gödel 1995: 310).

I watched this lecture given by Hameroff...he was summarizing work done by himself and others:

Thank you. I will watch it.

I do not believe that the ‘quality of experience’ that accompanies our cognition is ever computable, conventional or quantum. That is because first-person experience is not the same as consulting a giant lookup table and saying, “Person ABC’s xyz centre in the brain is lit up, and thus he is in a certain kind of pain.”

But even if we assume that first-person experience can be computed, or that a brain can be cloned, there are fundamental barriers. I think that, from the point of view of physics and maths, Aaronson’s treatment of the subject in the two linked papers is lucid and comprehensive.
 

sealchan

Well-Known Member
Gödel’s interpretation of his incompleteness theorems has nothing to do with physical implementation. It is a disjunctive conclusion: either the mind is infinite and different from a computing machine, or, if it is a computing machine, it is subject to the fundamental limitations of incompleteness.

I am quoting Gödel’s own statement below.





Thank you. I will watch it.

I do not believe that the ‘quality of experience’ that accompanies our cognition is ever computable, conventional or quantum. That is because first-person experience is not the same as consulting a giant lookup table and saying, “Person ABC’s xyz centre in the brain is lit up, and thus he is in a certain kind of pain.”

But even if we assume that first-person experience can be computed, or that a brain can be cloned, there are fundamental barriers. I think that, from the point of view of physics and maths, Aaronson’s treatment of the subject in the two linked papers is lucid and comprehensive.

I will make an effort to read the linked papers.
 

Polymath257

Think & Care
Staff member
Premium Member
The problem is that Godel's Incompleteness theorem is a statement about formal systems. And it isn't even about *all* types of formal systems: only those with a recursively defined axiom system.

Now, why that is relevant to consciousness is far from clear. But let's go to the solution of Hilbert's 10th problem (concerning Diophantine equations). What was shown is that there is no *single* program that will say definitively whether a given type of equation has a solution or not with the same program working for all equations.
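That one-sided behaviour can be sketched in a few lines of Python (an illustrative toy, not the actual proof): a brute-force search certifies solvability whenever an integer root exists, but, by Matiyasevich's theorem, no single program decides solvability for every Diophantine equation.

```python
from itertools import product

def search_root(f, n_vars, max_bound):
    """Brute-force hunt for an integer root of f within a growing bound.

    With the bound removed, a search like this halts exactly when a root
    exists; when none exists it runs forever. That asymmetry is why
    solvability of Diophantine equations is semi-decidable, and by
    Matiyasevich's theorem no single program decides it for all equations.
    """
    for bound in range(max_bound + 1):
        for xs in product(range(-bound, bound + 1), repeat=n_vars):
            if f(*xs) == 0:
                return xs  # a certificate of solvability
    return None  # inconclusive: no root within the bound searched

# x^2 + y^2 - 25 = 0 has integer roots, so the search finds one.
print(search_root(lambda x, y: x * x + y * y - 25, 2, 5))
# x^2 - 2 = 0 has no integer root; the bounded search comes back empty.
print(search_root(lambda x: x * x - 2, 1, 50))  # None
```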

Again, it is far from clear what that has to do with consciousness. For one thing, it isn't clear that Turing machines are the appropriate model for interaction in the real world and in real time. Turing machines, by definition, have all data on their tape before they start processing.

Second, it is clear enough that there are Diophantine equations that can't even be written down in the space available in the observable universe and in the time span of many millions of years. Such equations will be forever unsolvable by humans. So, in Godel's dichotomy, it is clear to me that there are equations where it is impossible for any human to know whether there is a solution.

I fail to see why that is such a hard thing to grasp.
 

Polymath257

Think & Care
Staff member
Premium Member
That is the subject of the thread. We know the code of an AI because we have written it. Any physical agent can be made to run that code. The human mind is not similar, for reasons noted in the first post.


This is clearly false, even for modern code. There are programs whose workings *nobody* fully understands. Take a piece of operating-system software that literally has a million lines of code: no single person fully understands it.

Second, it seems clear to me that consciousness is a 'real time interactive process'. That means it may not be possible to 'run' it on all hardware. Some hardware may just be too slow to get the real time interaction that is required.

Furthermore, it is clear that the human brain is *massively* parallel processing. At this point, we really don't have analogous hardware for that degree of parallel computation, especially with the additional requirement that it function interactively with the real world in real time.
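The gap can be gestured at in code (a toy sketch with invented numbers, not a brain model): even an idealized software "parallel" update runs a handful of workers at a time, whereas a brain updates on the order of 10^11 neurons genuinely concurrently, in real time.

```python
from concurrent.futures import ThreadPoolExecutor

def neuron_step(inputs):
    """Toy 'neuron': fire iff the sum of inputs crosses a threshold."""
    return 1 if sum(inputs) > 1.0 else 0

# Four invented input batches, one per toy neuron.
batches = [[0.5, 0.7], [0.2, 0.1], [1.5, -0.2], [0.9, 0.3]]

# Even this idealized map only time-slices a few workers on shared
# hardware; true massive parallelism is a different regime entirely.
with ThreadPoolExecutor(max_workers=4) as pool:
    states = list(pool.map(neuron_step, batches))
print(states)  # [1, 0, 1, 1]
```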
 

atanu

Member
Premium Member
The problem is that Godel's Incompleteness theorem is a statement about formal systems. And it isn't even about *all* types of formal systems: only those with a recursively defined axiom system.

Now, why that is relevant to consciousness is far from clear. But let's go to the solution of Hilbert's 10th problem (concerning Diophantine equations). What was shown is that there is no *single* program that will say definitively whether a given type of equation has a solution or not with the same program working for all equations.

Again, it is far from clear what that has to do with consciousness. For one thing, it isn't clear that Turing machines are the appropriate model for interaction in the real world and in real time. Turing machines, by definition, have all data on their tape before they start processing.

Second, it is clear enough that there are Diophantine equations that can't even be written down in the space available in the observable universe and in the time span of many millions of years. Such equations will be forever unsolvable by humans. So, in Godel's dichotomy, it is clear to me that there are equations where it is impossible for any human to know whether there is a solution.

I fail to see why that is such a hard thing to grasp.

Thank you for your observations. The points raised, however, have everything to do with two questions: a) does a machine's passing an imitation (Turing) test mean that the machine is conscious? and b) is consciousness computation? I will make a few observations:

1. I agree that Gödel’s theorem is about formal systems. But that is exactly what Gödel reminds us of in his disjunctive statement. And that is what Scott, without invoking Gödel, points out. There are aspects of ‘will’ and ‘unpredictability’ of human brain-consciousness that are not even in the same class as computability.

2. We seem to agree, ‘finally’, that a Turing machine is not an appropriate model of consciousness. And I believe that, unlike AI enthusiasts, Turing acknowledged that from the beginning; his comments noted in the first post of the thread indicate as much. So, even if a Turing machine imitates the whole range of conscious actions, the subjective consciousness will be missing.

3. Even if we assume that consciousness is nothing but the mechanism of the brain, Gödel’s disjunctive statement, and other fundamental barriers noted by authors quoted in this thread, indicate that a brain will not be cloneable.

4. You say “there are Diophantine equations that can't even be written down in the space available in the observable universe and in the time span of many millions of years. Such equations will be forever unsolvable by humans.”

I think ‘unsolvable by humans’ is not what Gödel’s disjunctive statement indicates. He means ‘unsolvable in the formal system’. I honestly believe that you have not intuited what Gödel’s disjunctive statement entails.

...
 

atanu

Member
Premium Member
This is clearly false, even for modern code. There are programs whose workings *nobody* fully understands. Take a piece of operating-system software that literally has a million lines of code: no single person fully understands it.

Second, it seems clear to me that consciousness is a 'real time interactive process'. That means it may not be possible to 'run' it on all hardware. Some hardware may just be too slow to get the real time interaction that is required.

Furthermore, it is clear that the human brain is *massively* parallel processing. At this point, we really don't have analogous hardware for that degree of parallel computation, especially with the additional requirement that it function interactively with the real world in real time.

That is wrong.

Code is copyable, stoppable, and re-runnable by a mechanical agent. That does not apply to the brain.
 

Polymath257

Think & Care
Staff member
Premium Member
Thank you for your observations. The points raised, however, have everything to do with two questions: a) does a machine's passing an imitation (Turing) test mean that the machine is conscious? and b) is consciousness computation? I will make a few observations:

1. I agree that Gödel’s theorem is about formal systems. But that is exactly what Gödel reminds us of in his disjunctive statement. And that is what Scott, without invoking Gödel, points out. There are aspects of ‘will’ and ‘unpredictability’ of human brain-consciousness that are not even in the same class as computability.

This I do disagree with.

2. We seem to agree, ‘finally’, that a Turing machine is not an appropriate model of consciousness. And I believe that, unlike AI enthusiasts, Turing acknowledged that from the beginning; his comments noted in the first post of the thread indicate as much. So, even if a Turing machine imitates the whole range of conscious actions, the subjective consciousness will be missing.

But the main reason I don't think Turing machines are a good model is that they are not interactive systems.

3. Even if we assume that consciousness is nothing but the mechanism of the brain, Gödel’s disjunctive statement, and other fundamental barriers noted by authors quoted in this thread, indicate that a brain will not be cloneable.

The no-cloning theorem only says that a quantum system cannot be cloned in each and every detail. It is very far from clear that this level of detail is required to reproduce a mind.

4. You say “there are Diophantine equations that can't even be written down in the space available in the observable universe and in the time span of many millions of years. Such equations will be forever unsolvable by humans.”

I think ‘unsolvable by humans’ is not what Gödel’s disjunctive statement indicates. He means ‘unsolvable in the formal system’. I honestly believe that you have not intuited what Gödel’s disjunctive statement entails.
...

I'm pretty familiar with Hilbert's 10th problem and its 'solution', with Turing machines, the halting problem, and Gödel's results.

Yes, there are Diophantine equations that cannot be solved within any given set of recursively defined axioms. But that says nothing about consciousness. What Godel was claiming is that the human mind either cannot be modeled by such a system or there are Diophantine problems that cannot be solved by humans. In that dichotomy, the latter is clearly the case.

And yes, we can go further. Godel's proof allows, for any particular axiom system, the generation of a Diophantine equation that cannot be solved by that axiom system. So?
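The unsolvability being appealed to here rests on Turing's diagonal argument, which can be sketched in Python (a toy with invented names, not a formal proof): any purported halting oracle can be fed a "contrarian" program built from it, which the oracle is forced to misjudge.

```python
def make_contrarian(halts):
    """Given a purported halting oracle halts(prog, arg) -> bool,
    build a program the oracle must misjudge when run on itself."""
    def contrarian(prog):
        if halts(prog, prog):
            while True:      # oracle says it halts, so loop forever
                pass
        return "halted"      # oracle says it loops, so halt at once
    return contrarian

# A (deliberately wrong) oracle claiming that nothing ever halts:
def pessimist(prog, arg):
    return False

contrarian = make_contrarian(pessimist)
# pessimist claims contrarian(contrarian) runs forever -- yet it returns:
print(contrarian(contrarian))  # prints "halted", refuting the oracle
```

The same construction defeats any candidate oracle: whichever answer it gives about `contrarian(contrarian)`, the contrarian does the opposite.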
 

atanu

Member
Premium Member
This I do disagree with.

But the main reason I don't think Turing machines are a good model is that they are not interactive systems.

Not only that. But okay.

The no-cloning theorem only says that a quantum system cannot be cloned in each and every detail. It is very far from clear that this level of detail is required to reproduce a mind.

That is begging the question. How do you know to what fine extent cloning is required?

I'm pretty familiar with Hilbert's 10th problem and its 'solution', with Turing machines, the halting problem, and Gödel's results.

Yes, there are Diophantine equations that cannot be solved within any given set of recursively defined axioms. But that says nothing about consciousness.

Self-consciousness is recursive, and also we are free to halt.

What Godel was claiming is that the human mind either cannot be modeled by such a system or there are Diophantine problems that cannot be solved by humans. In that dichotomy, the latter is clearly the case.

And yes, we can go further. Godel's proof allows, for any particular axiom system, the generation of a Diophantine equation that cannot be solved by that axiom system. So?

Yet you are not touching the most significant branch of Gödel's disjunctive statement. :D
 

Polymath257

Think & Care
Staff member
Premium Member
Not only that. But okay.

That's the main issue I see.

That is begging the question. How do you know to what fine extent cloning is required?

Why would we expect that cloning of the quantum state would be necessary? Frankly, I would expect cloning at the level of chemicals would be more than sufficient. And probably cloning at the level of neurons. Both of those are far above the quantum level cloning (which would include nuclear states, for example).
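For what it's worth, the no-cloning obstruction being discussed is a direct consequence of linearity, and can be checked numerically. Below is a toy NumPy sketch: the "cloner" U is an assumed construction built only to copy the two basis states, and linearity then forces the wrong output on a superposition.

```python
import numpy as np

# A linear "cloner" U is assumed (a toy construction, not a physical
# device) that copies the two computational basis states:
#   U|0>|0> = |0>|0>,   U|1>|0> = |1>|1>
# The remaining two columns are filled arbitrarily to make U unitary;
# they play no role in the argument.
U = np.zeros((4, 4))
U[0, 0] = 1.0   # |00> -> |00>
U[3, 2] = 1.0   # |10> -> |11>
U[1, 1] = 1.0   # arbitrary filler column
U[2, 3] = 1.0   # arbitrary filler column

zero = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)

input_state = np.kron(plus, zero)          # |+>|0> fed to the cloner
ideal_clone = np.kron(plus, plus)          # what perfect cloning needs
actual = U @ input_state                   # what linearity delivers

# Linearity yields the entangled state (|00> + |11>)/sqrt(2) instead of
# the product state |+>|+>: no linear map clones arbitrary states.
print(np.allclose(actual, ideal_clone))  # False
```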


Self-consciousness is recursive, and also we are free to halt.

I'm not sure that either of those is true.

Yet you are not touching the most significant branch of Gödel's disjunctive statement. :D

What do you consider that to be? His conclusion was an *or* statement. The obviously true branch is the one that states there are unsolvable problems.
 

atanu

Member
Premium Member
That's the main issue I see.

Why? The language imitation game is itself meant to appear as if conscious interaction has taken place. It is make-believe, after all.

Why would we expect that cloning of the quantum state would be necessary? Frankly, I would expect cloning at the level of chemicals would be more than sufficient. And probably cloning at the level of neurons. Both of those are far above the quantum level cloning (which would include nuclear states, for example).

I have answered this already, but okay, I will answer again. How do you know to what fine level you must clone the brain to give birth to consciousness? The point is that, ultimately, no-clonability will operate.

But even this point is not critical. The main empirical difference between a brain and code is simple: the latter is stoppable, copyable, and restartable by a physical agent. That is not true of the brain.

The human mind is not predictable and is endowed with will. This is not true of AI.

I'm not sure that either of those is true.

Can’t argue against assertions.

What do you consider that to be? His conclusion was an *or* statement. The obviously true branch is the one that states there are unsolvable problems.

You are presuming that one branch of Godel’s disjunctive is the only true branch. That is your issue.

However if I accept your presumption, this branch says that if the brain is ‘mechanism’ then there are unsolvable problems. It is a fundamental issue. So, to whatever level you may go down to model the brain, Incompleteness will apply.

OTOH, if you believe that mind-intellect can overcome Incompleteness and model the brain correctly, then intellect stands as infinitely superior to mechanism.

...
 

Polymath257

Think & Care
Staff member
Premium Member
However if I accept your presumption, this branch says that if the brain is ‘mechanism’ then there are unsolvable problems. It is a fundamental issue. So, to whatever level you may go down to model the brain, Incompleteness will apply.

Yes, there are unsolvable problems. So?

OTOH, if you believe that mind-intellect can overcome Incompleteness and model the brain correctly, then intellect stands as infinitely superior to mechanism.
...

I think it can model the brain well enough to understand how consciousness works at an equivalent level to how we understand that atomic spectra work.
 

Polymath257

Think & Care
Staff member
Premium Member
I have answered this already, but okay, I will answer again. How do you know to what fine level you must clone the brain to give birth to consciousness? The point is that, ultimately, no-clonability will operate.

Well, for example, nuclear states are pretty irrelevant for chemistry and *all* of life seems to work at the level of chemistry. So there would be no reason to require a cloning of nuclear states.

But even this point is not critical. The main empirical difference between a brain and code is simple: the latter is stoppable, copyable, and restartable by a physical agent. That is not true of the brain.

The code is NOT stoppable if it is to keep up with the real world. Depending on how it relates to hardware, it may not be copyable or restartable by an arbitrary physical agent.


The human mind is not predictable and is endowed with will. This is not true of AI.

Assertions, both. And....

Can’t argue against assertions.

You are presuming that one branch of Godel’s disjunctive is the only true branch. That is your issue.

Right. An 'or' statement is true if even one side is established; it is NOT required for both branches to be true. The one side is clearly true, so the statement (even if you agree with Gödel) says nothing about the other branch being true or false.
 

atanu

Member
Premium Member
Well, for example, nuclear states are pretty irrelevant for chemistry and *all* of life seems to work at the level of chemistry. So there would be no reason to require a cloning of nuclear states.

How do you know? That is begging the question.

The code is NOT stoppable if it is to keep up with the real world. Depending on how it relates to hardware, it may not be copyable or restartable by an arbitrary physical agent.

How is that even relevant? AI code is copyable, restartable, and stoppable across millions of installations. Humans are empirically different.

Right. An 'or' statement is true if even one side is established; it is NOT required for both branches to be true. The one side is clearly true, so the statement (even if you agree with Gödel) says nothing about the other branch being true or false.

I agree. For you, and for the purpose of this thread, the clause “there exist unsolvable problems, in case the human mind is ruled by mechanism” is sufficient.

But, fortunately, Gödel did not believe as you do.
 

atanu

Member
Premium Member
Yes, there are unsolvable problems. So?


I think it can model the brain well enough to understand how consciousness works at an equivalent level to how we understand that atomic spectra work.

Are you trying to belittle me?

On one hand, you agree with Gödel that, in case the human mind is mechanism, there are unsolvable problems. Then, in the next sentence, you say that the mind will ‘understand’ how it works?

Either you are being funny, or you do not comprehend that a machine cannot understand itself. Only true intelligence can determine the truth value of propositions.
 

Polymath257

Think & Care
Staff member
Premium Member
Are you trying to belittle me?


Not at all. In fact, I have no idea why you would think that given what I wrote.

On one hand, you agree with Gödel that, in case the human mind is mechanism, there are unsolvable problems. Then, in the next sentence, you say that the mind will ‘understand’ how it works?

Yes. The 'unsolvable problems' are a very specific type of mathematical problem. That is completely irrelevant to understanding how consciousness arises as a brain process. There are unsolvable problems, but those unsolvable problems are not related to understanding consciousness.

Either you are being funny, or you do not comprehend that a machine cannot understand itself. Only true intelligence can determine the truth value of propositions.

I think what you fail to grasp is that we can get approximations that are perfectly good while not being 'perfect'. In fact, that accounts for essentially ALL of science. Nothing prevents approximations that are arbitrarily good.
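The point about arbitrarily good approximations can be made concrete with a toy example (nothing specific to consciousness): Newton's method for √2 never yields the exact irrational value, yet the error can be driven below any tolerance we choose.

```python
def sqrt_newton(x, tol):
    """Newton's method for sqrt(x): never exact, but the residual can be
    pushed below any chosen tolerance -- 'arbitrarily good'."""
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tol:
        guess = (guess + x / guess) / 2  # Newton update for g^2 - x = 0
    return guess

for tol in (1e-2, 1e-6, 1e-12):
    approx = sqrt_newton(2, tol)
    print(tol, abs(approx * approx - 2))  # residual shrinks with tol
```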
 

Howard Is

Lucky Mud

I am afraid that is not true.

MIT researchers can now track AI’s decisions back to single neurons.

Howsoever fuzzy the decision processes may be, in principle the tree can be traced back.

That is a single AI designed by MIT.

It has not enabled that function in all AIs.

Your “I’m afraid that’s not true” is a little too cocky.

Conceivably one day all neural AIs will have traceable logic. At the moment...only that one that I am now aware of.

If you find more, cool, let me know. Without trying to be a smart ***, which is entirely unnecessary.

At least AIs can get to an answer without sticking out their little virtual tongues and going nya nya nya!
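For a sense of what "tracing a decision back to single neurons" could mean in miniature, here is a toy, hand-built network (invented weights, no relation to the MIT work): every output can be decomposed into per-neuron contributions precisely because we have full access to the code and weights.

```python
def relu(v):
    """Rectified linear unit applied elementwise to a list."""
    return [max(0.0, x) for x in v]

def trace_decision(x, w_hidden, w_out):
    """Run a tiny two-layer network and report each hidden neuron's
    additive contribution to the final score."""
    hidden = relu([sum(w * xi for w, xi in zip(row, x)) for row in w_hidden])
    contributions = [w * h for w, h in zip(w_out, hidden)]
    return sum(contributions), contributions

W_HIDDEN = [[1.0, -1.0], [0.5, 0.5]]  # two hidden neurons (invented)
W_OUT = [2.0, -1.0]                   # output weights (invented)

score, contrib = trace_decision([3.0, 1.0], W_HIDDEN, W_OUT)
print(score, contrib)  # 2.0 [4.0, -2.0]: neuron 0 drove the decision
```

Whether anything like this decomposition scales to millions of entangled weights is, of course, the open question in the discussion above.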
 

paarsurrey

Veteran Member
What's to stop a machine we create from having an Atman in it as well?
Humans cannot create an Atman/soul/Spirit. Can they, please?

Regards
________________
[17:86]وَ یَسۡـَٔلُوۡنَکَ عَنِ الرُّوۡحِ ؕ قُلِ الرُّوۡحُ مِنۡ اَمۡرِ رَبِّیۡ وَ مَاۤ اُوۡتِیۡتُمۡ مِّنَ الۡعِلۡمِ اِلَّا قَلِیۡلًا ﴿۸۶﴾
And they ask thee concerning the soul. Say, ‘The soul is by the command of my Lord; and of the knowledge thereof you have been given but a little.’
The Holy Quran - Chapter: 17: Bani Isra'il
 