
Humans are like robots. Choice is determined.

LegionOnomaMoi

Veteran Member
Premium Member
Non-local effects of digital computers exist as well. Multi-core CPUs, physics processors, and graphics processors all work in coordination from non-local effects of the thematic or high-level objective of the software! Non-locality can even be observed in single CPUs!


Over the past two decades or so, what had formerly been only a thought experiment about quantum nonlocality was empirically confirmed. Particles which were separated, even by several miles, showed correlated behavior. You may have heard some form of the dictum "correlation does not imply causation". This is a bit inaccurate. It does imply causation, but it leaves open three possible causes. If x & y are correlated, then either x causes y, or y causes x, or a third thing causes both. Hence I can say that drinking water is correlated with substance abuse/drug use, but the "cause" of the correlation is simply that everybody drinks water.
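
To make the "third thing causes both" case concrete, here is a minimal sketch (Python, made-up numbers) of two variables that correlate strongly only because a common factor drives them both:

```python
import random

# Hypothetical illustration: a common cause z drives both x and y.
# Neither x nor y causes the other, yet they correlate strongly.
random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.3) for zi in z]   # x depends only on z (plus noise)
y = [zi + random.gauss(0, 0.3) for zi in z]   # y depends only on z (plus noise)

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

print(pearson(x, y))  # roughly 0.9, despite no causal link between x and y
```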

So if the behavior of paired photons which are separated by several miles is "correlated", what's the cause? That's what nonlocality is about. Not parallel processing or multiple active threads. It is that the correlations we observe between separated regions of the brain appear to occur at impossible speeds (or perhaps in "no time").

To put it another way, take the knowledge representation you have talked about: that idea of "concepts" structured in terms of inheritance, classes, all hierarchically arranged. Well, there is a paper (actually several), "Toward an adequate mathematical model of mental space: Conscious/unconscious dynamics on m-adic trees", which is exactly that: a model of "mental states", ideas, memory, etc., all designed to describe the brain and "mind" in terms of hierarchically structured mental "space." But what is this space? "in the conventional dynamical approach dynamical systems work in the real physical space of electric potentials and in our approach dynamical systems work in the m-adic mental space". In other words, the mental states don't exist in the physical world. Often (see e.g., "Quantum-like model of brain's functioning: Decision making from decoherence") the "space" is explicitly stated to be Hilbert space, but the point is that these structures of mental states, concepts, and so forth may appear very similar to those you have touched upon, but they cannot exist in the physical world as we know it.

Why on earth would a guy like Andrei Khrennikov, a professor in the Department of Mathematics, Statistics and Computer Sciences, create a model of how the brain works which can't actually occur in physical space? He makes this clear in an earlier paper, "Classical and quantum dynamics on p-adic trees of ideas":
"I think that the phenomenon of consciousness will be never reduced to ordinary physical phenomena. And the modern neurophysiological activity gives some evidences of this." So he deals with a mathematical space, the same way quantum physics does (and, as in quantum physics, the relationship between the mathematical formalisms and reality are unknown).

Most importantly, if correct, this means that we will never get a digital computer to be "conscious" or self-aware. That's the kind of nonlocality I refer to: an apparent (or actual) violation of classical causality. Not parallel processing or distributed storage being used by software.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yet software acts as a single entity coordinating in a meta-physical way all the disparate hardware located anywhere in the world!


So software is a metaphysical entity?



You still dig that ditch and fail to realize concepts are aggregates of rules, networked through relational features and attributes
That's because the only people who think that this is a way to actually describe what's going on in the "mind" are denying that the mind can actually be explained using reductionism. And if they are correct, then we can't have computers with "minds" or aware of concepts.


Here again you simply look at the hardware and not the software. Software must be aware of internal and external states in a meta-physical or thematic way.
I wasn't aware that many computer scientists, programmers, engineers, etc., deal with metaphysics.


Software is an entity whose physicality, as an objective, doesn't exist, any more than the theme of a book physically exists.
How does it not exist physically?



Where themes are collections of data that human brain neurons interpret through execution of neural systems formed by neural populations locally and non-locally that exchange spike trains. Computer software is a collection of data that CPU hardware executes locally and non-locally. Both can encode many layers of abstraction and network data.
The problem is that this inaccurately describes both neural activity and computer software.

So Watson is more rule-based, rather than applying mathematical equations as a 3D graphics program does.
The "rules" are mathematical. That's what "formal" means. Formal languages are mathematical languages, whether they deal with numbers and operators or the symbols from mathematical logic.
 

Copernicus

Industrial Strength Linguist
Over the past two decades or so, what had formerly been only a thought experiment about quantum nonlocality was empirically confirmed. Particles which were separated, even by several miles, showed correlated behavior. You may have heard some form of the dictum "correlation does not imply causation". This is a bit inaccurate. It does imply causation, but it leaves open three possible causes. If x & y are correlated, then either x causes y, or y causes x, or a third thing causes both. Hence I can say that drinking water is correlated with substance abuse/drug use, but the "cause" of the correlation is simply that everybody drinks water.

So if the behavior of paired photons which are separated by several miles is "correlated", what's the cause? That's what nonlocality is about. Not parallel processing or multiple active threads. It is that the correlations we observe between separated regions of the brain appear to occur at impossible speeds (or perhaps in "no time").

To put it another way, take the knowledge representation you have talked about: that idea of "concepts" structured in terms of inheritance, classes, all hierarchically arranged. Well, there is a paper (actually several), "Toward an adequate mathematical model of mental space: Conscious/unconscious dynamics on m-adic trees", which is exactly that: a model of "mental states", ideas, memory, etc., all designed to describe the brain and "mind" in terms of hierarchically structured mental "space." But what is this space? "in the conventional dynamical approach dynamical systems work in the real physical space of electric potentials and in our approach dynamical systems work in the m-adic mental space". In other words, the mental states don't exist in the physical world. Often (see e.g., "Quantum-like model of brain's functioning: Decision making from decoherence") the "space" is explicitly stated to be Hilbert space, but the point is that these structures of mental states, concepts, and so forth may appear very similar to those you have touched upon, but they cannot exist in the physical world as we know it.

Why on earth would a guy like Andrei Khrennikov, a professor in the Department of Mathematics, Statistics and Computer Sciences, create a model of how the brain works which can't actually occur in physical space? He makes this clear in an earlier paper, "Classical and quantum dynamics on p-adic trees of ideas":
"I think that the phenomenon of consciousness will be never reduced to ordinary physical phenomena. And the modern neurophysiological activity gives some evidences of this." So he deals with a mathematical space, the same way quantum physics does (and, as in quantum physics, the relationship between the mathematical formalisms and reality are unknown).

Most importantly, if correct, this means that we will never get a digital computer to be "conscious" or self-aware. That's the kind of nonlocality I refer to: an apparent (or actual) violation of classical causality. Not parallel processing or distributed storage being used by software.
Legion, the key to these arguments is some coherent argument that ties consciousness to the phenomenon of quantum entanglement. Otherwise, all we have is what is sometimes referred to as "quantum mysticism"--baseless speculation that makes reference to quantum level phenomena as if they were directly relevant to the macro-level phenomenon being explained. What experiment has shown information processing in the brain to take place at super-luminal speeds?

Even if you could show consciousness to somehow relate to quantum phenomena, bear in mind that we are also making great strides towards the development of quantum computers.
 

LegionOnomaMoi

Veteran Member
Premium Member
Legion, the key to these arguments is some coherent argument that ties consciousness to the phenomenon of quantum entanglement.
Actually, the nomenclature used by Khrennikov ("quantum-like") and others like him is there for a reason: it is to distinguish this approach from a description of the mind which actually seeks to explain it using quantum processes. Khrennikov does not. The formalism is there, but not QM itself. Khrennikov makes this particularly clear in e.g., "Quantum-like model of processing of information in the brain based on classical electromagnetic field". The idea behind this approach is mainly due to what is sometimes called the quantum-to-classical divide, cut, or transition. But the problem is that all these usually refer not to the divide between classical and quantum physics per se, but to how we derive classical laws from quantum ones. However, here it refers to the (very much related) issue of the incompleteness of classical physics, the possible incompleteness of quantum physics, and how the two correspond.

Thus the term "quantum-like" refers to the development of pre-quantum models. Khrennikov himself developed one (see esp. "A pre-quantum classical statistical model with infinite-dimensional phase space"). It does not refer to quantum physics, and therefore (if one treats the brain as a quantum computer), it shouldn't prevent any simulation using classical computation. The problem of simulation doesn't come from the use of quantum formalisms itself, but from the particular manner in which they are used. This is akin to Rosen's proof on the incomputability of living systems, in that models like Khrennikov's are not problematic because they are quantum-like, but because they ascribe to mental states an irreducible quality.

Otherwise, all we have is what is sometimes referred to as "quantum mysticism"--baseless speculation that makes reference to quantum level phenomena as if they were directly relevant to the macro-level phenomenon being explained.

They have to be, if QM is considered a complete theory. That is, as quantum physics is supposed to be a complete description of physical reality, whatever we observe at the macroscopic level must be derived from it. Unfortunately, for most of the 20th century the manner in which this was accomplished was to introduce a logical circularity and hope that things would somehow sort themselves out. This has not happened.

What experiment has shown information processing in the brain to take place at super-luminal speeds?

That's difficult to say, because physicists don't agree on the interpretation of nonlocality. It's also difficult because any experiment ultimately involves macroscopic systems saying something about a theoretically inaccessible system. See, for example,
Quantum-classical correspondence in the brain: scaling, action distances and predictability behind neural signals
Plausibility of Quantum Coherent States in Biological Systems
and for perhaps the most recent and comprehensive review:
Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction

As for experimental evidence, I'll give a few examples beginning with some I commented on earlier (yes, I'm that lazy):
However, there are issues with Tegmark's description of neural activity (some are merely the result of the study being over a decade old) as well as his use of decoherence. We don't even have to switch journals. The obvious place to start would be the reply to Tegmark in volume 65 by Hagan, Hameroff, and Tuszynski. But as it is Hameroff's model under attack, why not go with a neutral party? Volume 70 (2004) included a study by Rosa & Faber: "Quantum models of the mind: Are they compatible with environment decoherence?" The authors (like Tegmark) criticized the Penrose & Hameroff model and its account of coherence. However, they state:

"based on this difference, we do not conclude, as Tegmark does, that the quantum approach to the brain problem is refuted if we use decoherence instead of gravitational collapse. The first point is that we must also consider the time for building coherence, while the system either remains relatively isolated to sustain coherence or there is no coherent collective state...Our result does not discard the conjecture that quantum theory can help us to understand the functioning of the brain, and maybe also to understand consciousness...We still propose a new quantum model in the brain where the most important thing is the sequence of coherent states accumulating in the microtubule."

Then there's the issue of the change within the field of physics concerning quantum coherence: Tegmark got it wrong. In fact, a paper published in the 2011 Journal of Physics conference proceedings not only criticizes Tegmark's analysis, but proposes several components of biological systems which rely on quantum coherence. In the paper ("Plausibility of quantum coherent states in biological systems" by V Salari, J Tuszynski, M Rahnama, G Bernroider)...
Another paper ("Quantum mechanical aspects of cell microtubules: science fiction or realistic possibility?") published in the same volume (306) of the Journal of Physics conference proceedings has more of the same...

In fact, in Kurita's paper "Indispensable role of quantum theory in the brain dynamics" (published in the peer-reviewed journal BioSystems vol. 80, 2005) Tegmark's study in particular is heavily criticized as flawed.


To the above we might add (to support the quantum-like, rather than quantum, theories of mind):
Some remarks on an experiment suggesting quantum-like behavior of cognitive entities and formulation of an abstract quantum mechanical formalism to describe cognitive entity and its dynamics

On the Existence of Quantum Wave Function and Quantum Interference Effects in Mental States: An Experimental Confirmation During Perception and Cognition in Humans

as well as any number of other "studies" from the journal NeuroQuantology, a journal which shouldn't exist.

The problems with experimental support for quantum (or quantum-like) theories of mind are not all that different from any other. Neural correlates of consciousness depend upon a number of assumptions which have little empirical support, are circular in nature, and/or are unfounded. This holds true whether or not we introduce quantum formalism. In fact, some theoretical support for quantum theories of consciousness consists mainly of the interpretation of empirical studies (e.g., Quantum Transition Probabilities and the Level of Consciousness).

However, QM theories of mind face the additional difficulty of trying to demonstrate how measurements of something that can't be measured tell us anything.

On the other hand, any classical description of the brain has to deal with quantum mechanics. Period. Modern physics is quantum physics (at least at the level of systems), and so the "mind" is ultimately produced by quantum physics. The problem is that what was hoped to be a rather clear line between the quantum world and the classical world is not clear at all. In fact, the entire focus in quantum physics has changed from "measurements" (in which some quantum system is described via system states which do not correspond with physical reality in any known way) to the study of decoherence. That is, instead of trying to study quantum systems and describe them through the measurement process, the idea is to preserve a quantum system and study under what conditions the quantum processes decohere rather than cohere.


bear in mind that we are also making great strides towards the development of quantum computers.
I don't know about that. Quantum computation is mainly a new spin on modern physics: instead of dealing with the measurement problem, we turn the whole thing into information theory. Much of the work I've read on quantum computing differs little in content from that in a textbook I have on the subject published in 1998. Additionally, this has increased attention on a more serious problem modern physics faces (thanks to the relegation of quantum systems to mathematics) for the entire scientific program, namely violating classical causality:
Quantum Causality: Conceptual Issues in the Causal Theory of Quantum Mechanics (Studies in History and Philosophy of Science)
On physical and mathematical causality in quantum mechanics
and my personal favorite as far as that disaster we call quantum physics is concerned:
Causality Is Inconsistent With Quantum Field Theory
 

atanu

Member
Premium Member
Suppose some machine passed the Turing test, and suppose we ignore John Searle and assume that passing the Turing test means the presence of understanding. Then, who will know the feat? Conscious beings. No?


:eek:

The problem, as I see it, is that we have no middle ground for our theoretical models of cognition. -------- Myself, I favor using humans as seeing-eye dogs for robots in the near term. --------

.

Hello Copernicus

That is surely an improvement, from holding the view that humans are robots.

Further, I humbly request you to ponder over the fact that we Are seeing-eye dogs for the brain also.
 

Leonardo

Active Member
To put it another way, take the knowledge representation you have talked about: that idea of "concepts" structured in terms of inheritance, classes, all hierarchically arranged. Well, there is a paper (actually several), "Toward an adequate mathematical model of mental space: Conscious/unconscious dynamics on m-adic trees", which is exactly that: a model of "mental states", ideas, memory, etc., all designed to describe the brain and "mind" in terms of hierarchically structured mental "space." But what is this space? "in the conventional dynamical approach dynamical systems work in the real physical space of electric potentials and in our approach dynamical systems work in the m-adic mental space". In other words, the mental states don't exist in the physical world. Often (see e.g., "Quantum-like model of brain's functioning: Decision making from decoherence") the "space" is explicitly stated to be Hilbert space, but the point is that these structures of mental states, concepts, and so forth may appear very similar to those you have touched upon, but they cannot exist in the physical world as we know it.

This kind of approach makes for great pseudo science: "The neuron is the little black box no one can figure out, so little leprechauns live there and make everything happen!" :biglaugh:

Again...Your problem is the approach. Working with a generic form of meaning allows the use of another kind of space and that is relational space...:)


And I know you're going to ask: "relational space comparing what?" :biglaugh::sorry1: It's as far as I can go...
 

Leonardo

Active Member
That's because the only people who think that this is a way to actually describe what's going on in the "mind" are denying that the mind can actually be explained using reductionism. And if they are correct, then we can't have computers with "minds" or aware of concepts.

And there is a Santa Claus, and he has flying reindeer, and if you've been good he'll come down your chimney and give you a gift.

How does it not exist physically?

Tell me the weight or size of the theme of any book.

The problem is that this inaccurately describes both neural activity and computer software.

No it doesn't! But to put things in perspective, how about we go back in time and let Einstein figure out how my Samsung Galaxy works. He'd notice that the device uses electrical energy and can transmit several kinds of wavelengths, one range more powerful than the rest. He observes its behaviors and then opens up the phone to have a look inside. He notices wires and things that emit light, but he's not sure why. He finally reaches a thin black square with dozens of wires going into it. With permission from the president, because Einstein is so smart, they allow him to open the thin black square to see what's inside, knowing he might destroy it! When he does pry open the square, all he finds is a thin piece of silicon with metal pins terminating on its edges. He looks at it with a microscope, but it's hopeless.

He therefore concludes that reductionism is useless in explaining my phone...


The "rules" are mathematical. That's what "formal" means. Formal languages are mathematical languages, whether they deal with numbers and operators or the symbols from mathematical logic.


Ah...not entirely, Watson uses more procedural functions, like parsing strings, ranking words, and queries in a database which uses logic for comparisons. Nothing as intense as 3D graphics.
 

LegionOnomaMoi

Veteran Member
Premium Member
This kind of approach makes for great pseudo science: "The neuron is the little black box no one can figure out, so little leprechauns live there and make everything happen!" :biglaugh:

Ironically, it was the behaviorist framework (the one which understood actions as motivated by punishments and rewards) which treated the mind as a black box. You adopt the view of those who denied that we could ever have the ability to understand the physical processes of the brain and relate them to the "mind", so instead they attempted to understand it in terms of observable actions in response to rewards and to punishments.

Also, this isn't my view. I never said I agree with those who find quantum (or quantum-like) processes to be essential for consciousness, conceptual processing, and/or the "mind". I was merely explaining why the "nonlocality" you describe in computers is not equivalent to the nonlocality in the brain.

Again...Your problem is the approach. Working with a generic form of meaning allows the use of another kind of space and that is relational space...:)

I didn't describe my approach. I described another's. As for "relational space", the term is meaningless as far as computability is concerned until you can define it formally.

And there is a Santa Claus and he has flying reindeer and if you been good he'll come down your chimney and give you a gift.

So you compare scientific research to Santa? I didn't cite a philosophy journal or a religious blog. What I cited constitutes a part of scientific literature. You may not agree (and I may not as well), but ridicule is hardly a counter-argument.

Tell me the weight or size of the theme of any book.
I meant software. How does software, which we can completely describe through physical correspondences (i.e., in terms of the actual physical states of bits and logic gates), not have physical reality? We can do this with software now. We cannot with brains now. It may be that we can, but as numerous scientists in different fields have argued, it may be that we can't.

No it doesn't!
Neural populations don't exchange spike trains. Yet you said they did:
themes are collections of data that human brain neurons interpret through execution of neural systems formed by neural populations locally and non-locally that exchange spike trains
Nor is there anything "non-local" or metaphysical about software.
But to put things in perspective how about we go back in time and we let Einstein figure out how my Samsung galaxy works.
Why? Einstein spent the majority of his life trying to prove that quantum physics (a model of reality he was instrumental in founding) was inherently flawed. And he failed. Hilbert openly mocked Einstein's inabilities when it came to math. And Einstein's arguments about the nature of reality were regarded as baseless under the Copenhagen interpretation and in the standard models of physics today. And as for using him to understand some "black box", that's actually what he failed to do. He wanted to explain quantum physics in terms of classical reality, without Heisenberg's uncertainty or the quantum weirdness of empirical study, but failed.
Ah...not entirely, Watson uses more procedural functions, like parsing strings, ranking words, and queries in a database which uses logic for comparisons. Nothing as intense as 3D graphics.
I didn't say anything about 3D graphics, nor does anything you say negate or render inaccurate my description.
 

idav

Being
Premium Member
I work in cognitive neuropsychology, and I don't know how the brain does it. I have worked with a lot of people who (unlike me) have a PhD in this or a related field, and none of them do either. As I've said before, something so basic as neural encoding still sparks considerable debate. First, much of computational neuroscience has been devoted to what is often called the "labelled-line" or "labeled-line" theory. Simplistically, each individual "receptor" neuron in the eye carries unique information to the brain that collectively allow animals (in a particularly famous study done in 1959, the animal was a frog) to "see". In other words, there is a "line" from each receptor to some specific place (or even neuron) in the brain. In this model, visual neurons are more akin to "bits" in that although it takes a lot of them, each one is somehow "meaningful".
Yes I'd agree.

That's no longer considered true even for neural receptors. Volume 130 of the edited series Progress in Brain Research (Advances in Neural Population Coding; 2001) represents a turning point in computational neuroscience and neuroscience in general away from this idea. But the problem (and the reason for the volume) is what to replace it with: "If the brain uses distributed codes, as certainly seems to be the case, does this mean that neurons cannot be 'labeled lines'? Clearly, to support a code of any complexity, active populations must be discriminable from one another, which means that differences among the individual cells are important. Neurons cannot respond equally well to everything and form useful representations of different things. Thus, the sharp dichotomy between distributed coding and labeled lines seems to be a false one and the critical question is 'labeled how and with what'."
This doesn't throw out the former perception at all. It just gets more complicated because the neurons are redundant. It still treats a visual as a sort of bit of information that has meaning. The problem now is that when we see a car there are several neurons putting the pieces together and there is no single neuron that tells us "car."
That was back in 2001, before neuroimaging studies (and in particular fMRI studies) were as prevalent. Now we know more about how much farther away from understanding the "neural code" we are than previously believed. For one thing, it is now certain that the neural "bit" isn't typically based on the activity of individual neurons, but on the synchronization/correlation of their spike trains. Thus most of the time, the "minimal" meaningful information (the "bit") is a constantly changing level of correlated activity among a changing number of neurons.
As per what I said above, I don't see how it changes the fact that the brain encodes bits of information that can later be retrieved using proper stimuli.
But it gets worse. In a monograph published the same year as the volume referenced above (Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems), the authors baldly state "[t]he nature of the neural code is a topic of intense debate within the neuroscience community. Much of the discussion has focused on whether neurons use rate coding or temporal coding, often without a clear definition of what these terms mean." A later volume (Dynamical Systems in Neuroscience) from the same series (MIT's Computational Neuroscience) closes with a chapter on neural "bursts" in which the author remarks that "single spikes may be noise." In other words, the way that neurons work, as described in just about every undergrad textbook on neurophysiology, is just plain wrong. These textbooks start at the level of the neuron and offer a simplistic (or distorted) model of how neurons "fire". They then typically skip how this means anything and it is just assumed that this firing is somehow the basis of the "neural code". As it turns out, this description may be describing what is only "noise", rather than part of "the neural code". And it is certain that even if there is some meaning to the all-or-nothing action potentials (firing) described so often, this is not the typical "minimal unit".
OK so neurons are more complicated than we thought. This still doesn't change it. Yes, the brain is complex, but you need to explain more than complexity to get away from bits that are stored, retrieved, encoded and decoded in a way that becomes something meaningful in a particular language.
And things get even worse still. As stated in the article "Nonlocal mechanism for cluster synchronization in neural circuits" (from the journal Europhysics Letters), "one of the main enigmas in neuroscience" is not about the neural code per se (in that it isn't about how bursts or correlations of spike trains and so forth can be "units" of information), but about the level and speed of correlations of nonlocal neural populations. In other words, not only do we find that the "minimal unit" doesn't really exist except as a concept (in that the minimal unit is described as something which changes in size and nature), but the same coordinated activity which can make up a "minimal unit" within a neural population can be found among neural populations themselves. Moreover, this synchronization between different cortical neural populations occurs almost instantaneously. Which means that the "minimal unit" can be not only correlations among various neurons, but even correlations between correlated neural populations themselves.
That's interesting, I was wondering about that. It is certainly amazing that it is able to act as one unit. That many connections would be equivalent to a million computers with as many cores as we can fit, all able to communicate as one system.
One major theory about how brains can deal with concepts concerns such massively parallel and quantum-like (or actually quantum) properties (e.g., nonlocal correlations) of the brain. The theory goes something like this: we know that concepts are not Platonic ideals. That is, there isn't a concept "tree" which corresponds to any single representation of neural activity in our brain because there isn't any single concept "tree". A "web" can be a spider web, a method for knowledge representation, a "web of deceit" or of lies, the internet, etc. Single concepts are really not single at all: they are networks of abstractions which share certain relationships in terms of particular semantic content. For example, the interconnectedness and structure of a spider web is metaphorically mapped onto the idea of something like lots of intricate lies also "organized" to deceive, or a bunch of connected computers. It may be that the seemingly impossible level of coordination between and within neural populations allows us to process concepts by allowing us to represent related conceptual content in distinct cortical regions which are nonetheless strongly connected.
I think how complicated that is stems from the way it evolved. Not the most efficient way but certainly works and is redundant enough.
We can't even accurately model this level of coordination on a computer, let alone build computers capable of it. And it may very well be that no digital computer will ever be capable of what enables brains to deal with concepts rather than just "syntax" or formal rules.
We learn based on rules too. We create concepts and things like numbers and letters go to a certain part of the brain. If the neurons were not being decoded the syntax and coding would be pure gibberish.

Not at all. For one thing, machine learning has produced a great deal. I am simply distinguishing (as everyone who works in the cognitive sciences does) between qualitatively different types of awareness. More importantly, I am suggesting that the current work in A.I. cannot result in associative learning. It was largely based on how simple organisms which are purely reflexive "learn", and thus based on non-associative learning. That's not what we want if we want A.I. To continue to hope that more of the same (i.e., increasingly sophisticated neural network algorithms or pattern recognition algorithms) will somehow get us from non-associative to associative learning seems foolish. This is not to say we can't make this leap, or even that we can't do it on computers. Just that I don't think the current approach will get us anywhere and something else is needed.
Cognitive aspects would emerge somehow as AI evolves. People may never agree that a machine is sentient even if you were able to ask it and it responded with a firm "yes".
You might think of Watson in terms of the word problems from high school mathematics which almost everyone hates. They hate these because there is an extra step: turning the question into a mathematical equation, or equations, or mathematical expression or expressions. Once this is done, the word problem is no longer a word problem but is like the other math problems. With Watson, people who actually understood language built databases so that the "word problems" could be reduced to a bunch of equations.

Watson does far more than a calculator can do. Watson did instantaneously what some of us couldn't do with an hour of Google time, to the point of interpreting a painting to fit the ambiguous question-answer style. I couldn't have done that. It is a huge feat and demonstrated correlation and awareness of concepts. I believe the techniques are similar to the way the brain's memory works, since memory is really a collection of bits put together like a puzzle.
 

Leonardo

Active Member
Watson does far more than a calculator can do. Watson did instantaneously what some of us couldn't do with an hour of Google time, to the point of interpreting a painting to fit the ambiguous question-answer style. I couldn't have done that. It is a huge feat and demonstrated correlation and awareness of concepts. I believe the techniques are similar to the way the brain's memory works, since memory is really a collection of bits put together like a puzzle.

Actually, using Google to get results from the topics in Jeopardy, as they're written on the boards, works! Within the first page you get a relevant answer.
 

Leonardo

Active Member
Ironically, it was the behaviorist framework (the one which understood actions as motivated by punishments and rewards) which treated the mind as a black box. You adopt the view of those who denied that we could ever have the ability to understand the physical processes of the brain and relate them to the "mind", so instead they attempted to understand it in terms of observable actions in response to rewards and to punishments.

Again you don't understand the idea proposed. You keep confusing it with some ancient idea that never described a competitive arbitration system involving the limbic and sensory systems signaling that quantifies emotional gratification in grades of positive to negative states that resolves choice in any mammalian animal! Your comments are just more hyperbole. :p

Why? Einstein spent the majority of his life trying to prove that quantum physics (a model of reality he was instrumental in founding) was inherently flawed. And he failed. Hilbert openly mocked Einstein's inabilities when it came to math. And Einstein's arguments about the nature of reality were regarded as baseless under the Copenhagen interpretation and in the standard models of physics today. And as for using him to understand some "black box", that's actually what he failed to do. He wanted to explain quantum physics in terms of classical reality, without Heisenberg's uncertainty or the quantum weirdness of empirical study, but failed.

Wow...:facepalm: I mean wow...OK... then change Einstein to Bohr... :rolleyes:

I didn't say anything about 3D graphics, nor does anything you say negate or render inaccurate my description.

No, but I did, as a comparison of the type of mathematical calculations Watson was performing.
 

LegionOnomaMoi

Veteran Member
Premium Member
This doesn't throw out the former perception at all.
It's a completely different one.

As per what I said above, I don't see how it changes the fact that the brain encodes bits of information that can later be retrieved using proper stimuli.

Because it's all well and good to say that the brain encodes "bits", but if you can't show me anything in the brain which corresponds in general to a bit (and at the moment, no one can), then it doesn't mean much.

OK so neurons are more complicated than we thought. This still doesn't change it.
It's not that neurons are more complicated. It's that the neural "code" is. And it is a qualitative change. Think of it this way: why does your computer use a binary storage system, in which the minimal unit can only have 2 states? We've been storing data as only 0s and 1s for ages. Why did nobody ever think, "wait, we can double our storage capacity just by using four states instead of two!"? The answer is that this is a qualitative change and would result in an increase in complexity and issues such that it wouldn't be worth it.
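
As a throwaway illustration of the four-states point (a hypothetical back-of-the-envelope calculation, not how any real memory controller works): each four-state cell carries log2(4) = 2 bits, so you need half as many cells, but every circuit that reads and writes them must now discriminate four levels instead of two:

```python
from math import log2

data_bits = 64

# Cells needed if each cell can distinguish k states: bits / log2(k)
for states in (2, 4):
    cells = data_bits / log2(states)
    print(f"{states}-state cells needed for {data_bits} bits: {cells:.0f}")
# 2-state: 64 cells; 4-state: 32 cells. Half the cells,
# but every read/write circuit must now resolve 4 levels instead of 2.
```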

Most people (including psych students) think of the neural code in terms of action potentials, which are described as the neuron firing when it reaches a certain threshold. We've known this was wrong since the 40s. But we didn't know how wrong until more recently. We now know how strong the tendency for neurons to synchronize is (so strong that it can happen in brain slices preserved outside the brain, and even without synaptic transmission/signals). We also know that too much synchrony makes brain function impossible because the brain would almost be one neuron. So now, rather than trying to explain the neural "code" in terms of single spikes, we believe brains make use of all of the following (i.e., these all constitute meaningful "units" of information):

1) The ways in which the rate of firing changes over time
2) The rate itself
3) The correlations between the activity of neurons
4) The ability for neural populations to synchronize
5) Various neurophysiological properties that allow neurons to desynchronize given certain conditions (which aren't very clear) rather than do what it appears they do so readily
6) The size of a synchronized neural assembly
7) The frequency range at local and nonlocal scales

It quickly becomes very difficult to defend any notion of something like a "bit" in the brain when we know at least that if there is some minimal unit, it is constantly changing in a number of different ways. Which is why there has been an increased emphasis not on the neural code or neural firing, but on functional connectivity. What kind of conditions create what kind of neural assemblies, which (again, under what conditions) may synchronize with other (nonlocal) assemblies? The question is then not about code or bits, but more about how information is represented by patterns of coordinated neural activity distributed throughout the brain.
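
As a rough illustration of item 3 in the list above, here is a minimal, hypothetical sketch of one way "correlations between the activity of neurons" gets operationalized: bin two spike trains and correlate the binned counts (real analyses are far more involved; the spike times here are made up):

```python
# Hypothetical spike times (in ms) for two neurons; purely made-up numbers.
spikes_a = [2.1, 7.9, 15.2, 23.0, 31.4, 40.8, 48.9]
spikes_b = [2.4, 8.3, 15.0, 30.9, 41.2, 49.5]

def bin_counts(spike_times, t_max=50.0, bin_ms=5.0):
    """Count spikes in fixed-width time bins."""
    counts = [0] * int(t_max / bin_ms)
    for t in spike_times:
        if t < t_max:
            counts[int(t / bin_ms)] += 1
    return counts

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# A high correlation of binned counts is one (crude) signature of synchrony.
print(pearson(bin_counts(spikes_a), bin_counts(spikes_b)))  # about 0.8 here
```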

With computers, we know exactly how information is represented: through individual binary states, each one existing always in the same place, and always only in one of two states. By contrast, both the firing and the non-firing states of neurons form part of the minimal units of information, and neither is that minimal unit, which is a misnomer anyway.

Computers were designed explicitly to be organized. Every bit has a place and one of two states, and every step of every program corresponds to specific changes in specific bit states. They were designed for control and to work very precisely with very precise rules. That's why we can make computers so great at chess. We're dealing with something which can easily be broken down into rules. Where we have clear rules, we have math, and thus we can get a computational device to implement the rules.

Now we are increasingly using computers in ways they were never intended to be used. We employ fuzzy logic which violates the logic built into the computer itself. We create mathematical learning models which don't have explicit rules, in that the only rules are how the computer will, in general, adapt over time to input. And so on. But there is a problem. The artificial neural networks we program do not do what actual neural networks do. They can't. Because neurons do not have well-defined states, and neural networks do not consist of collections of discrete states. That is fundamentally what computers are: collections of discrete, binary states. So we take a simplified mathematical model of actual neural networks, and imitate this on a machine designed from the ground up to work differently.
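
To see how simplified the artificial version is, here is essentially the whole computation performed by one unit of a typical artificial neural network (a generic textbook formulation, not any particular library): a weighted sum passed through a squashing function, i.e., exactly the kind of precise, discrete-state arithmetic described above:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One 'neuron' of a standard artificial network: weighted sum + sigmoid.
    Every quantity is a perfectly well-defined number held in binary states."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # squash into (0, 1)

# Made-up inputs and weights, purely for illustration.
print(artificial_neuron([0.2, 0.9, 0.1], [1.5, -0.7, 0.3], bias=0.05))
```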

Again, I'm not saying this is a bad thing. From both a theoretical and applied perspective (i.e., in terms of understanding something like the brain as well as coming up with programs that recognize faces or recommend books for you automatically) we've done a lot. But everything we've done ultimately goes back to logic gates and binary states, and thus to pure computation rather than comprehension. It may be that the architecture cannot do what brains do. It may be that it can, but we need to understand how associative learning works in a way we aren't even close to yet. Whatever it is that is missing, it seems clear that it is a qualitative difference.


That's interesting, I was wondering about that. It is certainly amazing that it is able to act as one unit. That many connections would be equivalent to a million computers with as many cores as we can fit, all able to communicate as one system.

Keep in mind also that a single neuron (e.g., a Purkinje cell) can be connected to well over 100,000 other neurons. That is, a single neuron is constantly receiving input from over 100,000 other neurons.

I think how complicated that is stems from the way it evolved. Not the most efficient way but certainly works and is redundant enough.
"Certainly works"? Computers are extremely efficient. That's why it's so hard to get them to do what slugs and plants can do as far as learning is concerned. The complexity of the brain is what makes mammals and in particular humans capable of doing what no other known system can do.


We learn based on rules too. We create concepts and things like numbers and letters go to a certain part of the brain. If the neurons were not being decoded the syntax and coding would be pure gibberish.

First, concepts are not rules, and the problem is we don't know how something like a computer, which has only the ability to work with rules, can learn concepts. We don't know how we do. Second, "things like numbers and letters" do not "go to a certain part of the brain". This is what makes fMRI analysis such a challenge. A number or word or similar stimulus will increase activity in multiple places in the brain and will change each time in ways we don't understand. So if I want to say that certain regions are involved in, say, processing action words/verbs, I have to show that these regions are significantly more activated compared to some control, and further that the other regions which will also be significantly more activated are not somehow the "core" of whatever action word the subject is exposed to. This is a central area of debate in the cognitive sciences, because one group maintains that concepts like action words are represented apart from the motor and sensory systems of the brain. So they explain the activation in these regions during experimentation as indicative of something else. The other group maintains that cognition is fundamentally embodied, and that part of the representation of concepts like action verbs, and even abstract notions like hope, makes use of sensorimotor regions of the brain.

What remains true either way, however, is that we can't point to some place in your brain where the concept of "1" exists. It is somehow represented across distributed and changing neural assemblies. At best we can say there are regions which are likely to be involved in representing the concept "1".


Watson does far more than a calculator can do
That's because it's a bigger calculator.

Watson did instantaneously what some of us couldn't do with an hour of Google time.
How fast could you solve the following equation:

132,674.65754 * 13.1^8 / 0.234 = x?

That's a very simple equation. It's straightforward arithmetic. The rules are simple, but the calculations are difficult for us because we aren't calculators. Computers are. Nobody should be impressed that Watson could find the answers. That wasn't the challenging part. Computers are great at storing data and accessing it. The challenging part was getting Watson to parse the question and sort through an enormous database which had to be specially made so that Watson didn't have to understand words or language to calculate an answer. It's challenging because we had to turn language into something it isn't: math.
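
Just to underline the point, the arithmetic above is a one-liner for any machine (Python here, purely illustrative):

```python
x = 132_674.65754 * 13.1 ** 8 / 0.234
print(x)  # a number on the order of 10**14; trivial for a machine, tedious for a person
```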


I believe the techniques are similar to the way the brain's memory works, since memory is really a collection of bits put together like a puzzle
If that were true, then whenever someone asks you a question like "how's it going", you would root through an enormous database, find a bunch of possible matches that correspond to what the question might mean, calculate the probabilities that each is what the question means, select the most probable, and then return a programmed answer.
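
In code, the caricature looks something like this hypothetical sketch (an invented candidate list and scoring, just to show the shape of the pipeline being described):

```python
# Hypothetical retrieve-score-respond pipeline; nothing like how people answer.
CANDIDATES = {
    "how's it going": ["Fine, thanks.", "Not bad, you?"],
    "what's your name": ["I'm a demo script."],
}

def respond(question):
    question = question.lower().strip("?! ")
    best_key, best_score = None, 0.0
    for key in CANDIDATES:
        # Crude overlap score between the question and each stored key.
        overlap = len(set(question.split()) & set(key.split()))
        score = overlap / max(len(key.split()), 1)
        if score > best_score:
            best_key, best_score = key, score
    return CANDIDATES[best_key][0] if best_key else "No match found."

print(respond("How's it going?"))  # -> "Fine, thanks."
```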
 

LegionOnomaMoi

Veteran Member
Premium Member
Again you don't understand the idea proposed. You keep confusing it with some ancient idea that never described a competitive arbitration system involving the limbic and sensory systems signaling that quantifies emotional gratification in grades of positive to negative states that resolves choice in any mammalian animal!

The reason the behaviorist "reward/punishment" model was abandoned was (in part) how poorly it could explain the neural activity of rats. A series of studies done about 50+ years ago showed that rats learned and internally represented spatial concepts without either rewards or punishments. It's also a useless model because it depends upon an arbitrary classification of what constitutes a "reward" or a "punishment", and doesn't explain anything about how the mind works. Finally, it's an impossible model because it suggests that the "limbic and sensory systems" actually do have some quantifiable gradient of "emotional gratification", and that this non-existent activity can somehow be identified as responsible for "choice" (which, like rewards and punishments, is a rather arbitrary and ill-defined term, especially for a formal model).



No, but I did, as a comparison of the type of mathematical calculations Watson was performing.
Why?
 

Leonardo

Active Member
The reason the behaviorist "reward/punishment" model was abandoned was (in part) how poorly it could explain the neural activity of rats. A series of studies done about 50+ years ago showed that rats learned and internally represented spatial concepts without either rewards or punishments.

You see, you really don't understand the idea proposed. What is proposed is that the memory of experiences has qualities of emotional gratification, as do sensory systems. The learning described in the experiment you cite would be explained as the rat's sensory system and/or memory of experiences signaling and competing in response to the stimuli of the environment. Out of all the outcomes the rat's brain figures out or remembers, navigating a spatial area wins over being stationary because moving about produces emotional gratifications from the production of adrenaline, dopamine, etc. We can take the notion further, perhaps too anthropomorphically, and state that because of the similarity of mammalian brains, the rat's choice is motivated by satiating curiosity, which is a form of emotional gratification. The rewards and punishments are just terms used as analogies; they are not treats, but biochemical and spiking responses that signify a positive or negative emotional state from perceived outcomes. Also, the emotional signalling originates from the limbic and sensory systems.


Another example, so you get the point. When you are trying to solve a problem neurologically the brain is examining outcomes of various approaches. The conscious decision of which outcome to choose happens when you get the "Ah ha" effect. That "Ah ha" effect is a form of emotional gratification that motivates you to act on the solution. This is just the tip of the iceberg, and the evaluation of outcomes also brings about other memories that have emotional qualities that can and will affect the net emotional gratification of a perceived choice.
 

PolyHedral

Superabacus Mystic
But this is not the same as what is going on in the brain. Of course you are correct: being able to store data is pretty meaningless as far as running any program or software or whatever is concerned. But there is still a well-defined distinction between data storage (i.e., the states of "bits") and processing.
Not if you don't have an omniscient view of the subsystems, there isn't. The entire point of ORM and MVC structures is that the structure of the data I am requesting is completely disconnected from the structure of the data actually being stored. I therefore have no idea what sort of processing goes on to fulfil my request, or the types of objects that are stored to track it.
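
For instance, a minimal sketch of that separation (hypothetical class names, no real ORM library): the caller asks for a user by id and has no idea whether the answer comes from an in-memory dictionary, a file, or a remote database:

```python
# Hypothetical repository interface: callers see the same request shape
# regardless of how (or where) the data is actually stored and processed.
class InMemoryUsers:
    def __init__(self):
        self._rows = {1: {"id": 1, "name": "Ada"}}
    def get_user(self, user_id):
        return self._rows.get(user_id)

class FileBackedUsers:
    def __init__(self, path):
        self._path = path                    # parsing, caching, indexing hidden here
    def get_user(self, user_id):
        raise NotImplementedError("left out; the caller can't tell the difference")

def greet(repo, user_id):
    user = repo.get_user(user_id)            # the caller never sees the storage layout
    return f"Hello, {user['name']}!" if user else "No such user."

print(greet(InMemoryUsers(), 1))             # Hello, Ada!
```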

They do go hand in hand in order for a computer to do anything, but they are distinct. This is not true of the brain. Processing corresponds in some largely unknown way to correlations between spike trains among neurons and neural populations. That's also what storage is. The same physical processes, physical parts, and often (to an unknown degree) the same places are literally both "storing" and "processing."
Do you know what a memristor does? :p

The properties have no meaning to the computer.
I just outlined how they do.

If I say the class "tree" has certain properties such as being a plant, having limbs, having roots, etc., and that something which inherits from "tree" like "oak" will also have properties like leaves, these have meaning only to those who understand english.
...Unless you go on to define what those things are in terms the computer can understand. "Red" is just a string, but if the computer has an eye, and you tell it "Things which are 'red' produce this response in the eye", then it'll understand fine.
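
A minimal sketch of what that grounding might look like, assuming a toy "eye" that reports RGB pixel values (entirely hypothetical, just to make the point concrete):

```python
# Hypothetical grounding of the string "red" in a sensor response:
# "red" stops being an arbitrary token once it is tied to what the "eye" reports.
def eye_reports_red(pixel):
    """pixel is an (R, G, B) triple from the toy camera."""
    r, g, b = pixel
    return r > 150 and r > 1.5 * g and r > 1.5 * b

GROUNDED_PREDICATES = {"red": eye_reports_red}

print(GROUNDED_PREDICATES["red"]((200, 40, 30)))   # True: a reddish pixel
print(GROUNDED_PREDICATES["red"]((60, 180, 70)))   # False: a greenish pixel
```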

So if the behavior between paired photons which are seperated by several miles is "correlated", what's the cause?
The entanglement.

In other words, the mental states don't exist in the physical world. Often [...] the "space" is explicitly stated to be Hilbert space, but the point is that these structures of mental states, concepts, and so forth may appear very similar to those you have touched upon, but they cannot exist in the physical world as we know it.
Of course not - they are mental states, forged of pure logic. Pure logical constructs cannot be instantiated in reality, only encoded in it.

Most importantly, if correct, this means that we will never get a digital computer to be "conscious" or self-aware.
We have a logical model of a conscious mind? Cool! Let's write a calculator for it! ...It involves a Hilbert space? So what? Computers can do symbolic calculations fine. :cool:
 

LegionOnomaMoi

Veteran Member
Premium Member
You see, you really don't understand the idea proposed. What is proposed is that the memory of experiences has qualities of emotional gratification, as do sensory systems. The learning described in the experiment you cite would be explained as the rat's sensory system and/or memory of experiences signaling and competing in response to the stimuli of the environment. Out of all the outcomes the rat's brain figures out or remembers, navigating a spatial area wins over being stationary because moving about produces emotional gratifications from the production of adrenaline, dopamine, etc.

This misses the point of the experiments. They were conducted when behaviorism was in full swing, and (arguably) by behaviorists. The first was in 1930: "Introduction and removal of reward, and maze performance in rats" by Tolman & Honzik. In this experiment, three groups of rats were studied under different conditions. Each group had to learn to navigate the maze. The first group was rewarded with food for each successful navigation. The second was never rewarded. The third was unrewarded for ten days, after which the rats were rewarded like the first group.

The problem for the reward/punishment (or reinforcement) theory of learning was that the third group learned to navigate the maze faster than the first when they started being rewarded. In other words, for the first 10 days, when there was no incentive to learn anything, the rats did anyway such that when they had an incentive, they already knew enough about the maze to navigate faster than the "always rewarded" group.
In a second experiment (Tolman, Ritchie, and Kalish, 1946), the nature of what was learned was tested. Basically, they tested to see whether rats learned to navigate the maze in terms of rules (i.e., go left, go right, go right again, and presto! there's the food), or whether they somehow abstractly represented spatial data. They did this using two conditions. In the first, the food reward was always in the same place, but they changed the place at which the rats entered the maze. In the second, they changed the location of the food, but only in such a way that the rules for turning didn't change (that is, the same sequence of left/right turns would bring the rat to the food). The first group was consistently better at navigating the maze to get to the food. This showed that they weren't learning rules, because (unlike the second group) the rules changed depending on where the rats were allowed to enter the maze. Yet they were still better at getting the reward than the rats who only had to follow a set of procedures.
Tolman constructed a model of this new idea about learning which he called "cognitive maps" (his paper, "Cognitive maps in rats and men", is freely available online). He showed:
1) The stimulus-response theory of learning was inadequate, as it could not explain experimental results and
2) That somehow conceptual information, rather than mere procedural rules, was represented in the brains of rats.
This doesn't even bring us to the 50s, and we already have strong counter-evidence for the idea that learning can be reduced to rewards and punishments.



The rewards and punishments are just terms used as analogies; they are not treats, but biochemical and spiking responses that signify a positive or negative emotional state from perceived outcomes.
Which would be great, if there were any way for such spiking responses to exist such that we could observe them and know they were positive or negative. However, even if there were, these would still leave us with how this has anything to do with choice, understanding, awareness, etc. But as there is not, and as we know that brains are capable of learning without any such "spiking responses", from rats to humans, the fact that there are no such identifiable "spiking responses" doesn't pose a problem.


Another example, so you get the point. When you are trying to solve a problem, neurologically the brain is examining the outcomes of various approaches. The conscious decision of which outcome to choose happens when you get the "Ah ha" effect. That "Ah ha" effect is a form of emotional gratification that motivates you to act on the solution.
Upon what is this based? That is, if I'm conscious of this examination of approaches, then the "ah ha" effect hasn't explained anything, because it happens after conscious examination. If I'm not conscious, then you'd need some empirical support for the idea that my brain is sorting through various approaches and then further support to show that some "ah ha" effect exists and is the motivating factor.

This is just the tip of the iceberg: the evaluation of outcomes also brings up other memories that have emotional qualities, which can and will affect the net emotional gratification of a perceived choice.
It's circular reasoning unless you can point to an empirically established method of observing this "emotional gratification" gradient, using neuroimaging and reference to neurophysiology, such that you can simply look at the activity of the brain and distinguish different grades of "emotional gratification" without ever seeing what the individual is doing.
 

Leonardo

Active Member
The problem for the reward/punishment (or reinforcement) theory of learning was that the third group learned to navigate the maze faster than the first when they started being rewarded. In other words, for the first 10 days, when there was no incentive to learn anything, the rats did anyway such that when they had an incentive, they already knew enough about the maze to navigate faster than the "always rewarded" group.

You still aren't getting the idea... the reward is emotional gratification generated from the limbic system or the sensory system; it's irrelevant whether there's a food cache or not! The neurology could generate the signaling based on past experiences or current stimuli, including just exercising, satiating curiosity, scratching an itch, or taking in a view of some colorful object. The point here is that mammal brains are wired with sensory and limbic signaling that produces states that are positive or negative. The idea extends to Elman's eta idea that sensory systems are genetically coded to sensitize to the environment and preprocess information, allowing neocortical processes to digest or learn it. The sensory system is a conceptual framework of knowledge gleaned over hundreds of millions of years of evolution. The sensory system provides the foundation for any animal, humans included, for perceiving nutritious food or harmful substances, gender, motion, edges, potentially viable food sources from color, etc. This is all knowledge encoded in animals genetically, and these senses connect to the limbic system to create signaling that quantifies positive or negative emotional gratification.

So the signaling is based not just on physical rewards but on the evaluation, at a subconscious level, of the greater of the sensory or emotional gratifications from perceived outcomes.
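Put as a bare sketch (my own toy rendering in Python, with invented numbers, not anything taken from neuroscience data), the idea is that each perceived outcome carries a net gratification score from sensory and limbic signaling, and the highest score is what drives the choice:

# Illustrative only: the outcomes and scores below are made up for the example.
candidate_outcomes = {
    "explore the maze":   {"sensory": +0.6, "limbic": +0.3},   # novelty, movement
    "stay put":           {"sensory": +0.1, "limbic": -0.2},   # boredom
    "approach odd smell": {"sensory": +0.4, "limbic": -0.5},   # curiosity vs. wariness
}

def net_gratification(signals):
    # positive and negative signals simply sum into one scalar
    return signals["sensory"] + signals["limbic"]

choice = max(candidate_outcomes, key=lambda o: net_gratification(candidate_outcomes[o]))
print(choice)   # "explore the maze" wins under these invented numbers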

Upon what is this based? That is, if I'm conscious of this examination of approaches, then the "ah ha" effect hasn't explained anything, because it happens after conscious examination. If I'm not conscious, then you'd need some empirical support for the idea that my brain is sorting through various approaches and then further support to show that some "ah ha" effect exists and is the motivating factor.

"Ah ha" is an emotional feeling, at least for me it is, and it gets me to focus my attention to an idea without me choosing so! Key point here is "attention to an idea without choosing" I could be thinking about something completely different when I get an "ah ha" moment about a problem I was working on days ago. This proves that the brain can work on problems without conscious awareness. Your perspective is clueless to what people actual experience. That you're not aware of "ah ha" moments might explain why you don't understand this idea. :)
 

uberrobonomicon4000

Active Member
A few reasons why humans are not robots.

-Humans are capable of thinking for themselves.
-Humans are capable of surviving without being programmed to survive (that is, if you believe in evolution).
-Humans can have sex to reproduce, robots can’t.

I disagree with the OP and think it's just a bunch of nonsense to compare humans to computers or robots. Which gets into another subject altogether: I don't think computers should be classified as robots. Robots are capable of interacting with the world around them. Computers, not so much.
 

LegionOnomaMoi

Veteran Member
Premium Member
I therefore have no idea what sort of processing goes on to fulfil my request, or the types of objects that are stored to track it.
Which is an entirely different question. The above doesn't address whether there is a sharp, clear distinction between storage and processing. In fact, it doesn't even address anything about the internal make-up of the computer from the point of view necessary to make the relevant comparison to the brain. I'm not arguing that when programming, we know the state of every bit and how it is being changed. I simply said that there is a clear distinction between data storage and data processing. This distinction is at the heart of computer science and existed before computers (albeit not in exactly the same way). And it is fundamentally different from the way the brain works, where no such divide exists:

"Both living organisms and computers are “information-processing machines” that operate on the basis of internally stored programs, but the differences between these systems are also quite large. In the case of living organisms, self-assembly occurs following an internal program, and the nervous system and brain formed in this way function as an autonomous information machine. Unlike traditional computers which must be “driven” from the outside, biological systems have somehow incorporated within them rules on how to function. Moreover, in the case of biological entities for which there is no external blueprint, the design plan is entirely internal and is thought to undergo changes both in the evolution of species and in the development of individuals....
It is, however, true that a biological computer (or biocomputer) of a completely different nature from today’s electronic computers already exists in the form of the fundamental phenomenon of life. The most advanced machinery, a living organism, operates with functional elements that are of molecular dimensions and actually exploits the quantum-size effects of its components. Yet the quintessentially biological functions of living forms: autonomy, self-organization, self-replication, and development, as witnessed in both evolution and individual ontogeny, are completely absent from current computing machines" from Information Processing by Biochemical Systems: Neural Network-Type Configurations (Wiley, 2010).
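To make the storage/processing distinction concrete, here is a schematic Python sketch (a toy of my own; the "neural network" half is itself only a cartoon, not a claim about how brains actually work):

# In a conventional program, stored data and the procedure that processes it
# are separate things: wipe the data and the procedure survives untouched.
records = [3, 1, 4, 1, 5]          # storage: inert data sitting in memory

def total(xs):                     # processing: a procedure applied to data
    return sum(xs)

print(total(records))              # 14 -- processing acts on storage from "outside"

# In an artificial neural network, by contrast, the weights are simultaneously
# the "memory" of past training and the machinery that transforms the next
# input -- there is no separate store being handed to a separate processor.
weights = [0.2, -0.5, 0.9]         # these ARE the learned "memory"

def respond(inputs, w=weights):
    # the same numbers that encode past experience also do the computing
    return sum(x * wi for x, wi in zip(inputs, w))

print(respond([1.0, 2.0, 3.0]))    # 0.2 - 1.0 + 2.7 = 1.9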

Do you know what a memristor does? :p

Do you mean do I know what it is? Because it hasn't done much of anything yet (other than theoretically). The book I cited above is basically an extended study of an actually implemented basic biochemical neural network system which outperforms digital computers. Of course, as with so many of these systems (including quantum computers), what we have actually been able to do is vastly disproportionate to what we have said we might be able to do. Nanotechnology, biochips/biocomputing, quantum computers, and various hybrids are all exciting, and not just because of increased computational power. However, as I've said before, after ~70 years of A.I. being "just around the corner", as much as I am impressed by what we are capable of doing, I'm far less impressed by speculations on what this will actually mean in terms of artificial intelligence.


I just outlined how they do.

You mean here?
Though they perform the same function, the latter is far more meaningful, both to a human and, importantly, to the computer. It is meaningful to the computer because of reflection, which exposes its own inner workings as data - in this case, its own expectations of how things behave! Our software is capable of navel-gazing! Now that the computer knows what it's expecting, it can invent data to violate those expectations, and work through the model to determine a new answer.
(I know I only quoted part).

The problem is that this idea of introspection as reflection (or that reflection makes a computer's processing of code more "meaningful") is mainly impressive because of the way you describe it, as the results are something we've been capable of producing (and without reflection) since the paradigm shift in A.I. away from explicit procedures and the development of machine learning. I posted something (I think on this thread) on Samuel's work in the 50s and 60s teaching a computer to learn to play checkers rather than programming it with the rules.

You are still talking only about procedures. This:
As far as I'm concerned, this is 1) learning, 2) the concept of addition. (The bit missing from this being fully intelligent is that we haven't covered comparing the improved function to real-world evidences yet.)
is making the claim that the concept of addition is the ability to carry out addition. It is certainly true that a machine can be programmed to learn addition rather than be programmed with the rules (just as with checkers 50+ years ago). But there is nothing here that isn't procedural. This:
Now that the computer knows what it's expecting
involves a linguistic "trick" by using the words "knows" and "expecting". For one thing, your pseudo-code includes the addition operator in the definition. All you've done is place parameters, limiting an operation which was there to begin with. For another, you've defined "knowing" what is expected as the ability of the computer to do what computers do: compute. Third, in more complex learning algorithms, in which the actual rules are "learned", we're still limited to rules/procedures. It's still sea slug learning (just with a faster processor, more storage space, and a less complex design).

For this learning to be equated with knowing, we require knowing to be no more than the ability to carry out procedures. Which is a fine definition, but it doesn't help us much, because instead of giving us a definition of knowing or understanding which would help further work in A.I., we've defined what we want to achieve as something we've been capable of building for decades.
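Here, for instance, is roughly what "learning addition" amounts to when written out as code (my own minimal sketch, not the pseudo-code you posted): the model form a*x + b*y already contains the very operator being "learned", and training merely nudges two parameters toward 1.

import random

random.seed(1)
a, b, lr = 0.0, 0.0, 0.01

for _ in range(5000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    target = x + y                     # supervision: the "right" answers
    pred = a * x + b * y               # the + operator is built in from the start
    err = pred - target
    a -= lr * err * x                  # SGD step on the squared error
    b -= lr * err * y                  # (constant factor folded into lr)

print(round(a, 3), round(b, 3))        # both end up ~1.0
print(round(a * 2 + b * 3, 2))         # ~5.0: procedure executed, nothing "understood"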

...Unless you go on to define what those things are in terms the computer can understand. "Red" is just a string, but if the computer has an eye, and you tell it "Things which are 'red' produce this response in the eye", then it'll understand fine.
It will "produce" fine. And when sea slugs do this same non-associative learning to pair being poked with a noxious stimulus, they "understand fine" in exactly the same way. They can respond with the learned procedure. Once again, though, defining this to be understanding limits us, because we've simply collapsed two different types of learning into one, and left ourselves with nothing useful to get us from sea slug learning to something that actually internally represents concepts rather than procedures.


The entanglement.
"What's the entanglement"? "It's what allows paired photons to correlate across long distances". And 'round we go.


Of course not - they are mental states, forged of pure logic. Pure logical constructs cannot be instantiated in reality, only encoded in it.

They aren't "forged" of logic at all. They are descriptive models that can be implemented on a computer, but only in ways meaningful to humans because they rely on concepts.
 