
Misconceptions

Polymath257

Think & Care
Staff member
Premium Member
That is actually correct. Do not get carried away by abacus mystics.

The following is from the paper of Scott Aaronson that I cited previously.

Scott Aaronson. “The Ghost in the Quantum Turing Machine”.
https://www.scottaaronson.com/papers/giqtm3.pdf
If you know the code of an AI, then regardless of how intelligent the AI seems to be, you can “unmask” it as an automaton, blindly following instructions. To do so, however, you don’t need to trap the AI in a self-referential paradox: it’s enough to verify that the AI’s responses are precisely the ones predicted (or probabilistically predicted) by the code that you possess! Both with the Penrose-Lucas argument and with this simpler argument, it seems to me that the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents.

If your brain code is knowable by physical agents, then you too are an automaton.
...

OK, so? We are all biological machines. I don't expect there to be a 'code' in the sense of computer codes, but I *do* expect everything to be physical at base.
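Aaronson’s “unmasking” test is easy to make concrete. Here is a minimal sketch (Python; the agent and the dialogue are hypothetical stand-ins, not anything from the paper): if you possess a deterministic agent’s exact code, you predict its every response by simply running that code yourself.

```python
# Minimal sketch of Aaronson's "unmasking" idea: an observer who holds the
# agent's exact code can check that every observed response is precisely
# the one the code predicts. All names here are hypothetical.

def agent(prompt: str) -> str:
    """Stand-in for the AI whose code we possess."""
    return "yes" if len(prompt) % 2 == 0 else "no"

def unmask(agent_code, observed_dialogue) -> bool:
    """True if every observed response matches what the code predicts."""
    return all(agent_code(prompt) == response
               for prompt, response in observed_dialogue)

dialogue = [("Are you conscious?", "yes"), ("Do you have a self?", "no")]
print(unmask(agent, dialogue))  # True: the agent is "just" its code
```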
 

Polymath257

Think & Care
Staff member
Premium Member
Why do you guess so?

Well, the no-clone results say that it is impossible to clone a full quantum state, including entanglements, etc.
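For reference, the core of the no-cloning theorem is a short unitarity argument; a sketch in the usual textbook form:

```latex
% Suppose a single unitary U cloned every state:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle.
% Since U preserves inner products, for any two states |\psi\rangle, |\phi\rangle:
\[
\langle\psi|\phi\rangle
  = \bigl(\langle\psi|\otimes\langle 0|\bigr)\,U^{\dagger}U\,\bigl(|\phi\rangle\otimes|0\rangle\bigr)
  = \bigl(\langle\psi|\otimes\langle\psi|\bigr)\bigl(|\phi\rangle\otimes|\phi\rangle\bigr)
  = \langle\psi|\phi\rangle^{2},
\]
% so \(\langle\psi|\phi\rangle\) must be 0 or 1: no single machine can clone
% arbitrary unknown quantum states.
```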

But once you get to the size of even fairly small molecules, the quantum states are not particularly relevant to the chemistry. And it is ultimately the chemistry that determines how the brain functions.

Quantum states are incredibly delicate things, as we are finding out from our studies of quantum computing. The brain, as a high-temperature, complex system, is incredibly good at collapsing quantum states.

So I would highly doubt that cloning at the quantum level is particularly relevant to discussions of consciousness.
 

HonestJoe

Well-Known Member
You are free to question.
I did. Feel free to answer. :cool:

Do you have any problem that I posted it in 'Science and Religion'? Why? What is your problem?
Not a problem as such, just curiosity. I explained why – the topic has absolutely nothing to do with religion.

How do you say "That isn't true"? Do you know coding?
Yes.

AI is coded and it can be known. But for human intelligence that is not true.
You made a specific statement: “One who knows the code of an AI machine can predict all situations related to that machine” (my emphasis). You are quoting Aaronson talking about being able to predict enough of its responses to identify it as the AI you have the code for. These are complex and largely speculative questions in theoretical computing, not least because nobody has developed a true AI (nor is anyone anywhere close). We’re all guessing, even the likes of Aaronson.

If your code is knowable by physical agents, then you too are an automaton.
By your particular definition, quite possibly. Why would that be a problem? We might want to be more than complex machines but that isn’t sufficient reason to say that we must be.
 

Heyo

Veteran Member
If a human believes that an android is human, it is not because the android has a mind equal to a human. It is because of a projection of ‘life’ onto the android.

This is the basic flaw of the Turing test.
First a little nitpick: it is not that 'life' is projected but intelligence.
And intelligence seems difficult to define. But if intelligence exists, we must be able to measure it. You might argue that the imitation game is not the best measure of intelligence, but that only quibbles with the definition of the Turing Test. (And I can somewhat agree. We are conducting Turing Tests on our interlocutors here, and often I don't detect any intelligence in their comments.)
But that doesn't invalidate the general idea that, if we have a test for human intelligence and a Turing Machine passes that test, it must be concluded that intelligence is only computational.
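To make the "measurement" framing concrete, here is a toy sketch of the imitation game (Python; the respondents and the judge are hypothetical and deliberately trivial). The machine passes this particular test exactly when the judge cannot beat chance.

```python
# Toy imitation game: a judge sees two unlabeled answers and must guess
# which respondent is the machine. All names are hypothetical.
import random

def human(question: str) -> str:
    return "I'd have to think about that."

def machine(question: str) -> str:
    return "I'd have to think about that."  # indistinguishable by design

def judge(answer_a: str, answer_b: str) -> str:
    # With identical answers, this judge can only guess.
    return random.choice(["a", "b"])

question = "Who am I?"
guess = judge(human(question), machine(question))
print("judge guessed respondent:", guess)  # ~50% right over many trials = a pass
```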
 

A Vestigial Mote

Well-Known Member
One who knows the code of an AI machine can predict all situations related to that machine.
But wouldn't the goal be to create a machine "mind" that you CAN'T predict the consciousness-related behavior of? As in - emulating the human mind, such that its own growth makes absolute prediction harder and harder the more it learns and grows?
So, do not be fooled by assertions of Abacus mystics that you will be immortal once your brain is uploaded. No one knows the code that runs our brains.
This is a fool's errand with even half a second's thought to the matter anyway. The moment your brain can be "uploaded" and "YOU" can sit there and stare at the machine that is now emulating your brain is the awkward (and probably terrifying) moment when you must surely realize that whatever the machine is, it is not "YOU." Not even close.
 

viole

Ontological Naturalist
Premium Member
No, that alone would not prove this. It would make it more plausible, but not prove it. An actual proof would involve showing how the brain actually does it.



No. No copying at the quantum level would be necessary, I would bet. Copying at the level of, say, neurons, is not prohibited by the No-Cloning theorem.

The No-cloning theorem specifically talks about cloning quantum states exactly. This is not required unless you really want to copy fundamental particle by fundamental particle.




And I think this observation is mostly false. It would be true if the AI never interacted with the external world. But the external world is unpredictable, so we cannot know the actual data going into the AI, which means we cannot predict its behavior exactly. Even a one-degree difference in the direction of scanning will give different input, which could lead to different behavior.
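A toy illustration of that sensitivity (Python; the update rule is a hypothetical stand-in for whatever the machine actually computes with its input):

```python
# The same deterministic "agent", fed scan directions one degree apart,
# ends up in very different states after a few dozen updates.
def agent_state(scan_direction_deg: float, steps: int = 30) -> float:
    x = scan_direction_deg / 360.0    # normalize the angle to [0, 1)
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)       # a simple chaotic update rule
    return x

print(agent_state(45.0))  # scanning at 45 degrees
print(agent_state(46.0))  # one degree off: a very different end state
```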

The other problem, at least for now, is that our brains are massively parallel processors, not the sequential processors of today's computer cores. That alone affects how the information is processed, especially in an interactive system: one piece of information can be processed *simultaneously* by several different areas of the brain. That vastly increases speed of response, and thereby reliability, in a real-world system.
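A minimal sketch of that parallelism (Python; the "areas" are hypothetical): one piece of input is handed to several independent processors at once, rather than queued for a single sequential core.

```python
# One stimulus processed *simultaneously* by several "brain areas".
from concurrent.futures import ThreadPoolExecutor

def visual_area(stimulus):   return f"edges detected in {stimulus!r}"
def auditory_area(stimulus): return f"no sound associated with {stimulus!r}"
def memory_area(stimulus):   return f"{stimulus!r} not seen before"

areas = [visual_area, auditory_area, memory_area]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda area: area("red ball"), areas))
print(results)
```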

Another aspect here is that it is no longer clear that a Turing Machine is actually the best model of computing in today's world. A TM always has all of its data on its tape before starting the computation. Modern systems almost never satisfy this criterion and, in fact, are expected to interact in real time. That is far beyond the model allowed by a TM (maybe a different formal machine?).
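For concreteness, a minimal Turing-machine sketch (Python; the rule table is a toy example). The point to notice is that the entire tape is supplied before the run begins; nothing arrives mid-computation.

```python
# Minimal Turing machine: the whole input tape is fixed up front.
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape, blank = " "
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, " ")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy rule table: flip 0s and 1s, halting at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt",  " ", "R"),
}
print(run_tm(rules, "0110"))  # "1001 " (trailing blank written at the halt step)
```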

I think that everything reduces to a Turing machine, ultimately. I believe not even quantum computers are able to compute what a Turing machine cannot (they merely slash the complexity of the computation). And QCs are parallel in a very fundamental way.

In other words: I don’t think that we have models of computation, in reality or in theory, that are more powerful than universal Turing machines (if we abstract from the performance, of course).
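That is the standard picture: a quantum state is a vector and gates are matrices, so a classical (Turing-equivalent) machine can simulate the computation, only exponentially more slowly as qubits are added. A one-qubit sketch (Python with numpy):

```python
# Simulating a quantum gate on a classical machine.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
state = np.array([1.0, 0.0])                  # qubit prepared in |0>

state = H @ state                             # apply the gate classically
print(np.abs(state) ** 2)                     # [0.5 0.5]: equal superposition
```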

On the contrary, since Universal Turing machines have an infinite tape, the space of computations they can perform exceeds what any physical computer can perform.

Ciao

- viole
 

sayak83

Veteran Member
Staff member
Premium Member
C'mon. A cat is not AI that will usurp the powers of humans.

I agree with you partially. Most humans will not realise "I am That". But most humans will introspect "Who Am I?". Those who do not so introspect are at the level of automatons.
I did not understand your answer. Are you saying that cats and non-realized humans do not have Atman? I always thought that they (cats) all have Atman but lack a sophisticated mind that is able to discover this fact about themselves.

An automaton, on the other hand, lacks self-will and self-generated internal motivations for action. Automatons are like puppets, or like people under hypnosis. Cats are obviously not automatons; they have internal motivations stemming from inner desires and needs. So they certainly have a Self, even if they don't know that they have it.
 

atanu

Member
Premium Member
I did not understand your answer. Are you saying that cats and non-realized humans do not have Atman? I always thought that they (cats) all have Atman but lack a sophisticated mind that is able to discover this fact about themselves.

An automaton, on the other hand, lacks self-will and self-generated internal motivations for action. Automatons are like puppets, or like people under hypnosis. Cats are obviously not automatons; they have internal motivations stemming from inner desires and needs. So they certainly have a Self, even if they don't know that they have it.

I will ask you to go back to your original question “What's to stop a machine we create to have an Atman in it as well?”.

My response was to this question. As per Advaita Vedanta, all beings, although essentially identical with Brahman, are limited, constrained name-forms of the non-dual atman-brahman. But practically we are all automatons, as long as we have not realised our nature. A cat is an automaton, most beings are, and a Turing machine is also.

But a human being has the potential to shrug away the veil of ignorance and realise “I am that”. A cat too (as per Hindu thought), after passing through many incarnation stages, may or may not develop the competence to enquire into its nature.

Will a Turing machine do so? Will it ask “Who Am I?” Or will it seek to consummate an orgasmic bliss? Etc. Etc.
 

Polymath257

Think & Care
Staff member
Premium Member
I think that everything reduces to a Turing machine, ultimately. I believe not even quantum computers are able to compute what a Turing machine cannot (they merely slash the complexity of the computation). And QCs are parallel in a very fundamental way.

In other words: I don’t think that we have models of computation, in reality or in theory, that are more powerful than universal Turing machines (if we abstract from the performance, of course).

Well, we have *models* (super-tasks, etc). To what extent they can be realized in the real world is debatable. I've seen claims that using a black hole for time dilation could give supertasks. I'm not sure I agree.

On the contrary, since Universal Turing machines have an infinite tape, the space of computations they can perform outperforms the one that can be performed by any physical computer.

Most certainly.

My concern in this discussion is that the tape of the Turing Machine is fixed ahead of time. That means it is, essentially, an isolated system. When we move into systems that interact with an outside world that may not be predictable, it seems less obvious to me that the Turing Machine model is enough. In particular, the 'predictability' that seems to be a focus here would be lost unless the inputs are predictable.

Also, in an interactive environment, it may not be the case that the machine will react exactly the same way to a given input the second (or third, etc) time it gets it. So, the predictability is based on the *complete* history of ALL the input into the machine.
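A toy sketch of that history dependence (Python; the behavior is hypothetical): the machine's response to the same input changes with everything it has already seen, so prediction requires the complete input history.

```python
# The same stimulus gets a different response the second time around.
class InteractiveMachine:
    def __init__(self):
        self.history = []

    def respond(self, stimulus: str) -> str:
        self.history.append(stimulus)
        n = self.history.count(stimulus)
        return f"seen {stimulus!r} {n} time(s) now"

m = InteractiveMachine()
print(m.respond("who am I"))  # first encounter
print(m.respond("who am I"))  # same input, different response
```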
 

atanu

Member
Premium Member
OK, so? We are all biological machines. I don't expect there to be a 'code' in the sense of computer codes, but I *do* expect everything to be physical at base.

I think we have exhausted the scope for further discussion. That will not change my respect for you, however. But I do not wish to point out the ‘explanatory gap’ again and again.

I do not think that material ultimates, which are abstractions characterised by properties like mass, momentum, spin, and charge, can explain ‘self’, ‘discernment’, ‘will’, ‘intentionality’, ‘experience’, and many other phenomenal aspects.

I believe that consciousness is the fundamental aspect of existence.

:)
 

Polymath257

Think & Care
Staff member
Premium Member
But wouldn't the goal be to create a machine "mind" that you CAN'T predict the consciousness-related behavior of? As in - emulating the human mind, such that its own growth makes absolute prediction harder and harder the more it learns and grows?


Well, that is certainly the case for even today's computers. Since they often interact in real time with an unpredictable outside world, the response is often quite far from predictable.

This is a fool's errand with even half a second's thought to the matter anyway. The moment your brain can be "uploaded" and "YOU" can sit there and stare at the machine that is now emulating your brain is the awkward (and probably terrifying) moment when you must surely realize that whatever the machine is, it is not "YOU." Not even close.

Well, it just means that there are then *two* consciousnesses with the same memories from some point back.
 

Polymath257

Think & Care
Staff member
Premium Member
I think we have exhausted the scope for further discussion. That will not change my respect for you.

I do not think that material ultimates, which are abstractions characterised by properties like mass, momentum, spin, and charge, can explain ‘self’, ‘discernment’, ‘will’, ‘intentionality’, ‘experience’, and many other phenomenal aspects.

Well, it's hard enough to describe the properties of a large molecule using only those characteristics. Getting a description for something like a neuron would be essentially impossible. But I don't think anyone disagrees with the statement that neurons are physical things.
 

atanu

Member
Premium Member
But wouldn't the goal be to create a machine "mind" that you CAN'T predict the consciousness-related behavior of? As in - emulating the human mind, such that its own growth makes absolute prediction harder and harder the more it learns and grows?

It may be the goal to create a machine mind that we cannot predict. But Scott Aaronson’s point is about the ‘knowability’ of the code that runs the machine. It is there in post 18.


This is a fool's errand with even half a second's thought to the matter anyway. The moment your brain can be "uploaded" and "YOU" can sit there and stare at the machine that is now emulating your brain is the awkward (and probably terrifying) moment when you must surely realize that whatever the machine is, it is not "YOU." Not even close.

There are many other potential paradoxical situations listed in Aaronson’s paper referenced in OP.
 

atanu

Member
Premium Member
I did. Feel free to answer. :cool:

Not a problem as such, just curiosity. I explained why – the topic has absolutely nothing to do with religion.

I do not agree. If you wish I can clarify.


You made a specific statement: “One who knows the code of an AI machine can predict all situations related to that machine” (my emphasis). You are quoting Aaronson talking about being able to predict enough of its responses to identify it as the AI you have the code for. These are complex and largely speculative questions in theoretical computing, not least because nobody has developed a true AI (nor is anyone anywhere close). We’re all guessing, even the likes of Aaronson.

What you say does not invalidate what Scott said, or what I said by paraphrasing him. For an AI we have a physical code; in the case of brains we have none.

The purpose was to highlight the immaturity of the claims of AI fanboys. You are basically saying the same thing. IMO, AI may turn out to be dangerous, not because it may imprison humans but because some humans may misuse it against other humans.

By your particular definition, quite possibly. Why would that be a problem? We might want to be more than complex machines but that isn’t sufficient reason to say that we must be.

I do not understand this.
 

A Vestigial Mote

Well-Known Member
Well, it just means that there are then *two* consciousnesses with the same memories from some point back.
Though then the two diverge, and the memories and any psychological development are necessarily different for the two "entities" from that point forward.
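A minimal sketch of that divergence (Python; the "memories" are of course a hypothetical stand-in): a perfect copy is identical only at the instant of copying.

```python
# Two "entities" share all memories up to the copy, then diverge.
import copy

original = {"memories": ["childhood", "upload day"]}
upload = copy.deepcopy(original)  # identical at the moment of copying

original["memories"].append("stared at the machine")
upload["memories"].append("woke up inside the machine")

print(original["memories"] == upload["memories"])  # False: they have diverged
```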

I guess my main point was simply that, if you are able to sit and witness the machine that is now supposedly "you", then it has to be realized that it cannot possibly ever be "you" in an exact sense - because there you both are, existing simultaneously, and yet experiencing things separately.
 

atanu

Member
Premium Member

Thank you for the link. The following conclusion is important, IMO.

“Some aspects of mind, such as understanding, agency and consciousness, might never be captured by digital brain simulations. Simulations that lack a representation of consciousness might be of limited use in understanding phenomena as complex as psychiatric conditions.”
This is similar to Bohr’s opinion cited in the OP and repeated below.

[W]e should doubtless kill an animal if we tried to carry the investigation of its organs so far that we could tell the part played by the single atoms in vital functions. In every experiment on living organisms there must remain some uncertainty as regards the physical conditions to which they are subjected, and the idea suggests itself that the minimal freedom we must allow the organism will be just large enough to permit it, so to say, to hide its ultimate secrets from us.
 