
Is the internet conscious of itself yet?

LegionOnomaMoi

Veteran Member
Premium Member
I'm going to start with this:
Hofstadter, mostly. He describes the brain's conceptual engine as using copy-on-write prototyping, which fits both what I've heard and my own experience.

Yes, I read the book (my father bought it some years ago). So your current understanding of A.I. research and consciousness is based on a 40-year-old book?

Searle argues that consciousness is a physical property. That's preposterous in light of physics, so please excuse me if I take him less than totally seriously.
What Searle argues (at least what I was referring to) was that passing the Turing test doesn't necessarily indicate strong A.I.


(The answer to the Chinese room is, IMO, that the *book* is the thing that understands.) I don't really see how that's relevant to it being uncomputable.

1) The book is the thing that understands!??? The whole thought experiment concerns the implications of passing the "Turing test." It shows that an entity can output data given input (such as a question) accurately, or in the way a human would, and still not understand it.
2) I didn't say it had anything to do with being uncomputable. You brought up Turing, I was responding to part of that.

Complexity just means you need more powerful algorithms to succeed.
Since Turing, von Neumann, Shannon, etc., the algorithmic approach has qualitatively changed. "Learning" algorithms no longer specify procedures, nor do they resemble a type of Turing machine. These algorithms determine how weighted connections change given input (that's simplistic, but it's the general idea). Of course, they are still algorithms. But current research on consciousness (artificial or no) suggests that
1) an algorithmic approach may be fundamentally wrong or
2) if strong A.I. can be based on algorithms, then the emergent consciousness will be indeterministic, such that neither the program/computer nor the programmers will be capable of knowing how it emerged. We already have problems determining the returned values of some complex ANNs.
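
To make "how weighted connections change given input" concrete, here is a minimal, purely illustrative sketch in Python of a perceptron-style weight update (the data, learning rate, and names are my own invention, not anything from the research discussed):

```python
# Minimal sketch of a "learning" algorithm that does nothing but adjust weights.
# It specifies how connections change given input; it says nothing about what
# the pattern "means" to the system.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs; inputs is a list of 0/1, target is 0 or 1."""
    n = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # weighted sum of inputs -> threshold -> output
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # the entire "learning" step: nudge the weights toward the target
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy usage: "learn" logical OR from examples (illustrative data only).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```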


Personally, I find the concept of embodied cognition obvious. Of course the structure of the mind will be influenced by the hardware it's running on; abstract Turing, stack, and finite-state machines and so forth only exist in mathematics.

I'm not sure if you know what embodied cognition is. It has nothing to do with the hardware of the brain per se. It is a rejection of domain specificity and has to do with the relationship between cognitive processes and sensory-motor experience (i.e. language is based on, for example, spatial/temporal metaphors).
 

LegionOnomaMoi

Veteran Member
Premium Member
If a stimulus is given to the brain because you were frightened, that is simply a reaction to stimuli. The representation is artificially created for us.
Of course we react to stimuli. And we do have automated stimulus/response learning methods. The point is we have more, and consciousness and awareness IS the "more."

I thought that perhaps part of the problem here is that I'm not used to discussing these issues with people who aren't in the field. I learned quickly when teaching/tutoring students in math, Latin, Greek, etc., that I had to develop ways to simplify the material and yet get across enough of what was important. So I picked up a book today written for a wider audience rather than the books/papers I have which require a background in mathematics, neurobiology, information theory, etc. The following excerpts are from On Intelligence by Jeff Hawkins:

He begins by describing the type of thinking PolyHedral seems to refer to (classical cognitive science/A.I.): "To these scientists [intelligence involved] just programming problems. Computers could do anything a brain could do, and more, so why constrain your thinking by the biological messiness of nature's computer?...They believed it was best to study the ultimate limits of computation as best expressed in digital computers."
He's describing the state of research in the late 70s and early 80s. He continues:
"This struck me as precisely the wrong way to tack the problem. Intuitively I felt that the artificial intelligence approach would not only fail to create programs that do what humans can do, it would not teach us what intelligence is. Computers and brains are built on completely different principles. One is programmed, the other is self-learning. One has to be perfect to work at all, one is naturally flexible and tolerant of failures. One has a central processor, the other has no centralized control. The list of differences goes on and on."

As it turns out, he was right and they were wrong.

"Neural networks were a genuine improvement over the AI approach. Instead of programming computers, neural network researchers, also known as connectionists, were interested in learning what kinds of behaviors could be exhibited by hooking a bunch of neurons together...On the surface, neural networks seemed to be a great fit with my own interestes. But I quickly became disillusioned with the field. By this time I had formed an opinion that three things were essential to understanding the brain."
The three things he lists are time, feedback processes, and physical architecture. Unfortunately, "Most neural networks consisted of a small number of neurons connected in three rows...These simple neural networks only processed static patterns, did not use feedback, and didn't look anything like the brain...I thought the field would quickly move on to more realistic networks, but it didn't."

That's just a brief history of the rejection of classical AI and the lack of continued progress using a superior method. But what about intelligence/understanding/awareness? First, although Jeff Hawkins does not agree, he does note that "A surprising number of people, including a few neuroscientists, believe that somehow the brain and intelligence are beyond explanation. And some believe that even if we could understand them, it would be impossible to build machines that could work the same way, that intelligence requires a human body, neurons, and perhaps some new unfathomable laws of physics."
Personally, having read a great deal by scientists working in fields related to AI (cognitive science, neurobiology, neuroscience, computer science, etc.), I don't think Hawkins is accurately representing the number or arguments of those who don't believe we can build intelligent machines. But the book is designed to be simple, so it's not a big deal. At any rate, he disagrees.

Hawkins goes on (much later) to distinguish between the learning of single-celled organisms and plants and how these evolved into intelligence/awareness. Simple intelligence (dogs, cats, rats, etc.) is possible because of hierarchical processes which allow self-modification of neurons and the connections between them: "But with [the neocortex's] hierarchical structure, invariant representations, and prediction by analogy, the cortex allows mammals to exploit much more of the structure of the world than an animal without a neocortex can. Our cortically endowed ancestors could envision how to make a net and catch fish. The fish are not able to learn that nets mean death..."

His discussions of consciousness aren't very helpful (as he admits, that isn't his area), but he does mention some of the basic concepts involved: self-awareness and qualia. Qualia is still used, but it's a bit old school. Conceptual representation, semantic memory, and similar more precise scientific terms are used more frequently.

But the basic point (and Hawkins discusses and agrees with Searle's Chinese room argument when he illustrates this point) is the difference between responding and understanding. A system, whether it is a neural network or a Venus flytrap, can respond. But it cannot understand. It has no awareness of its responses, because awareness requires the storage of abstract concepts (bug, food, close, etc.) which can be generalized, extended, and modified by the system. Simple response learning involves no understanding or awareness because this "learning" is simply an involuntary change in response parameters. Once a computer program "learns" to recognize the word "food," all it has "learned" is: given X input, output Y. When a dog hears "food," the dog does this too, but much more: the dog activates a semantic (abstract concept) memory, which involves taking the X input (the audio), processing it, and relating it to this abstract concept, which is a generalization of and relatable to multiple specific instantiations of different Y values (this or that dog food, human table scraps, something which fell on the floor and is edible) as well as representations of related concepts (food bowl, the act of eating, the location associated with eating, etc.). And these can all be adapted by the dog itself.
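
To illustrate the contrast (this is a toy sketch I'm inventing for this post, not a model from Hawkins or anyone else), compare a bare input-output mapping with a crude stand-in for a stored concept:

```python
# "Given X input, output Y": the entire content of simple response learning.
reflex = {"food": "salivate"}

def reflex_response(stimulus):
    # pure stimulus/response; no representation of what "food" is
    return reflex.get(stimulus)

# A crude stand-in for semantic memory: an abstract concept linked to many
# possible instantiations and to related concepts, which the system itself
# could in principle extend and reorganize.
concept_food = {
    "instantiations": ["this kibble", "table scraps", "edible thing on the floor"],
    "related": ["food bowl", "the act of eating", "the place where eating happens"],
}

def conceptual_response(stimulus, memory):
    # activates the abstract concept and everything related to it
    concept = memory.get(stimulus)
    return None if concept is None else {"stimulus": stimulus, **concept}

print(reflex_response("food"))
print(conceptual_response("food", {"food": concept_food}))
```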


The knee jerk response is something that is already learned or preprogrammed.
And that's the type of "learning" single-cell organisms do. A Venus flytrap is preprogrammed to close its "jaws" under X condition. A neural network is preprogrammed to adjust its weights given X input. A system capable of awareness can self-adjust.

Through patterns and algorithms. That is essentially what you're describing.

Through patterns, yes. Algorithms? Maybe. We don't know. If we did, we could program them. To explain, let me address this:
I mentioned the quantum process but you mentioned "defy classical mechanics". I'm not really sure it does.

From Davies' paper in Re-Emergence of Emergence (Oxford University Press, 2006):
"Recent work by Max Bennett (Bennett and Barden, 2001) in Australia has determined that neurons continually put out little tendrils that can link up with others and effectively rewire the brain on a time scale of twenty minutes! This seems to serve the function of adapting the neuro-circuitry to operate more effectively in the light of various mental experiences (e.g. learning to play a video game). To the physicist this looks deeply puzzling. How can a higher-level phenomenon like ‘experience’, which is also a global concept, have causal control over microscopic regions at the sub-neuronal level? The tendrils will be pushed and pulled by local forces (presumably good old electromagnetic ones). So how does a force at a point in space (the end of a tendril) ‘know about’, say, the thrill of a game?"

From Chapter 10 (included in the partial book I provided you): "...synchronization links together processes in distant parts of the brain. According to a popular hypothesis, development of transient synchronous clusters in neural networks spanning the whole brain is responsible for the appearance of distinct mental states which make up the flow of human consciousness.
When large-scale synchronization of neuronal processes is discussed, one should avoid the mistake of assuming that it merely results from the synchronization of states of individual neurons. If this were the case, the whole brain or large parts would have behaved just like a single neuron....We show that interaction between networks can lead to mutual synchronization of their activity patterns and to spontaneous separation of the ensemble into coherent network clusters."


From Scott's paper in Evolution and Emergence (Oxford University Press, 2007):
"Under [strong downward causation], it is supposed that upper-level phenomena can act as efficient causal agents in the dynamics of lower levels. In other words, upper-level organisms can modify the physical and chemical laws governing their molecular constituents."

From Gur, Contreras, and Gur's paper in Indeterminacy: The Mapped, the Navigable, and the Uncharted (MIT Press, 2009): "...baseline state indeterminacy [of the brain] can be ontological, that is, the very structure of the brain dictates indeterministic states, independently of any observation..."

Aren't neurons really determined by our genetic code? At some point the program comes from somewhere.
As I think I mentioned before, genetics determines quite little. That's why geneticists are now concentrating more on epigenetics. Certainly, genetics doesn't "determine" neural activity. Nor does it seem that this activity can be equated with a program.

That kinda sounds like how that Jeopardy robot knows things.
Again, "knowing" and "storing data" are two very different things.
 

idav

Being
Premium Member
Of course we react to stimuli. And we do have automated stimulus/response learning methods. The point is we have more, and consciousness and awareness IS the "more."
The more is the complexity.
That's just a brief history of the rejection of classical AI and the lack of continued progress using a superior method. But what about intelligence/understanding/awareness? First, although Jeff Hawkins does not agree, he does note that "A surprising number of people, including a few neuroscientists, believe that somehow the brain and intelligence are beyond explanation. And some believe that even if we could understand them, it would be impossible to build machines that could work the same way, that intelligence requires a human body, neurons, and perhaps some new unfathomable laws of physics."
Personally, having read a great deal by scientists working in fields related to AI (cognitive science, neurobiology, neuroscience, computer science, etc.), I don't think Hawkins is accurately representing the number or arguments of those who don't believe we can build intelligent machines. But the book is designed to be simple, so it's not a big deal. At any rate, he disagrees.
Of course if your goal is to replicate a brain then the neuron is the route but this does not put down the achievements of the field of AI. Like I said before there is more than one way to skin a cat. As for awareness the neuron is not a requirement.
Hawkins goes on (much later) to distinguish between the learning of single-celled organisms and plants and how these evolved into intelligence/awareness. Simple intelligence (dogs, cats, rats, etc.) is possible because of hierarchical processes which allow self-modification of neurons and the connections between them: "But with [the neocortex's] hierarchical structure, invariant representations, and prediction by analogy, the cortex allows mammals to exploit much more of the structure of the world than an animal without a neocortex can. Our cortically endowed ancestors could envision how to make a net and catch fish. The fish are not able to learn that nets mean death..."
The answer is evolution which brought about the complexity you speak of.
And that's the type of "learning" single-cell organisms do. A Venus flytrap is preprogrammed to close its "jaws" under X condition. A neural network is preprogrammed to adjust its weights given X input. A system capable of awareness can self-adjust.
The difference is that we can reprogram and the plant cannot.


Through patterns, yes. Algorithms? Maybe. We don't know. If we did, we could program them. To explain, let me address this:
Many algorithms I'm sure. We do program them.

From Davies' paper in Re-Emergence of Emergence (Oxford University Press, 2006):
"Recent work by Max Bennett (Bennett and Barden, 2001) in Australia has determined that neurons continually put out little tendrils that can link up with others and effectively rewire the brain on a time scale of twenty minutes! This seems to serve the function of adapting the neuro-circuitry to operate more effectively in the light of various mental experiences (e.g. learning to play a video game). To the physicist this looks deeply puzzling. How can a higher-level phenomenon like ‘experience’, which is also a global concept, have causal control over microscopic regions at the sub-neuronal level? The tendrils will be pushed and pulled by local forces (presumably good old electromagnetic ones). So how does a force at a point in space (the end of a tendril) ‘know about’, say, the thrill of a game?"
That's something I didn't consider, but it makes sense.



From Scott's paper in Evolution and Emergence (Oxford University Press, 2007):
"Under [strong downward causation], it is supposed that upper-level phenomena can act as efficient causal agents in the dynamics of lower levels. In other words, upper-level organisms can modify the physical and chemical laws governing their molecular constituents."
That makes sense, and it's something I have brought up more than a few times. Emergence of upper levels of awareness through the evolutionary process.
From Gur, Contreras, and Gur's paper in Indeterminacy: The Mapped, the Navigable, and the Uncharted (MIT Press, 2009): "...baseline state indeterminacy [of the brain] can be ontological, that is, the very structure of the brain dictates indeterministic states, independently of any observation..."
Noted.
As I think I mentioned before, genetics determines quite little. That's why geneticists are now concentrating more on epigenetics. Certainly, genetics doesn't "determine" neural activity. Nor does it seem that this activity can be equated with a program.
The basic structure of our brain is more than just a little determined. Our whole ability to learn requires the neurons to react a certain way under certain circumstances. In fact there are various ways of learning, and the brain "knows" how to act given certain stimuli.
Again, "knowing" and "storing data" are two very different things.
As I said, that example of a computer does more than just store data, it literally knows the data it stores when asked.
 

LegionOnomaMoi

Veteran Member
Premium Member
so is this robot aware or self aware?
No. The fact that they used actual brain cells versus artificial neurons doesn't change the (comparative) simplicity of the network. Brain cells don't mean understanding, any more than neural networks do. We can build artificial neural networks that do the same exact thing. Awareness requires conceptual representation and (the related concept of) semantic memory.
 

idav

Being
Premium Member
No. The fact that they used actual brain cells versus artificial neurons doesn't change the (comparative) simplicity of the network. Brain cells don't mean understanding, any more than neural networks do. We can build artificial neural networks that do the same exact thing. Awareness requires conceptual representation and (the related concept of) semantic memory.
It needs conceptual representation to avoid a wall, don't you think?
 

idav

Being
Premium Member
Not at all. What concept is being represented?
Sometimes we don't learn all that fast. We could be blind as a bat and keep running into a wall but hopefully learn not to do that. The concept is not to do that. Whether you see the wall or feel or hear the wall in front of you, if you've learned to avoid it you have learned the concept of the wall.
 

LegionOnomaMoi

Veteran Member
Premium Member
The more is the complexity.
True, but qualitatively different complexity. That's why all of our attempts to model some form of consciousness have failed. Neural networks were an improvement, but they fall very, very short of awareness.

Of course if your goal is to replicate a brain then the neuron is the route but this does not put down the achievements of the field of AI.
AI research rejected the classical approach in favor of neural networks. The "computers are pretty much brains, maybe better" approach failed utterly.

Like I said before there is more than one way to skin a cat. As for awareness the neuron is not a requirement.
The only way we have been able to come up with computer programs or computer systems which imitate even the basic stimulus/response learning is by using artificial neural networks.

The answer is evolution which brought about the complexity you speak of.
The answer to what? I didn't ask a question.

The difference is that we can reprogram and the plant cannot.
The problem is equating thought/awareness with a program. Having studied brain data, code, neural networks, and learning paradigms, I can tell you that lumping it all under "program" renders the term meaningless. What we can do is self-determine, store abstract concepts, categorize and generalize, activate these concepts because we are aware, and so on. Other mammals can do this to a lesser extent. Computers (artificial neural networks or not) and simple organisms simply lack this capacity, because they aren't aware. It's a qualitative difference.



Many algorithms I'm sure. We do program them.

An algorithm is an explicit, well-defined rule. By program, I mean we could get a computer to do it. We haven't come close.


Emergence of upper levels of awareness through the evolutionary process.
Emergence is one explanation for awareness, and as far as we know the only things capable of awareness are capable because of advanced cortical functioning (as you say, through evolution). But this isn't a coherent model. The problem is that our knowledge of complex systems (our ability to model them), even extremely complex systems, is limited to those which depend on initial conditions. How a system can determine its own conditions, how neurons can act synchronically without any local causes, we don't know. We have guesses about what might be involved. But we have no explicit algorithms, equations, models, or any similar well-defined explanation.


The basic structure of our brain is more than just a little determined. Our whole ability to learn requires the neurons to react a certain way under certain circumstances. In fact there are various ways of learning, and the brain "knows" how to act given certain stimuli.

So little of this is determined by genetics.
As I said, that example of a computer does more than just store data, it literally knows the data it stores when asked.
What does it know? When someone asks me "what's an animal?" I can answer in any number of ways because I have stored a concept with graded membership, hierarchies, related concepts, etc. And I can pick and choose how I assemble the subcategories and their relations, and reorganize them. When a computer stores the data, I have to make it access these data, and it can only access exactly what I tell it to. It doesn't "know" the data, it doesn't "know" anything about it. Saying a computer "knows" the data stored is like saying a book "knows" the data it stores. One's just electronic.
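
Purely as an illustration of "graded membership" and hierarchy (the entries and numbers below are made up, not taken from any study), the idea looks something like this:

```python
# Hypothetical graded membership for the concept "animal":
# 1.0 = a prototypical member, values near 0 = a marginal member.
animal_membership = {
    "dog": 1.0,
    "sparrow": 0.9,
    "salmon": 0.8,
    "jellyfish": 0.5,
    "sponge": 0.3,
}

def membership(thing):
    """Graded, not yes/no, category membership."""
    return animal_membership.get(thing, 0.0)

# A concept also sits in a hierarchy and links to related concepts.
animal_concept = {
    "superordinate": "living thing",
    "subordinates": ["mammal", "bird", "fish"],
    "related": ["pet", "zoo", "habitat"],
}

print(sorted(animal_membership, key=membership, reverse=True))
```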
 

LegionOnomaMoi

Veteran Member
Premium Member
The concept is not to do that.
That's not necessarily a concept. It can be, but it isn't in this case. It is absolutely possible to "learn" without any awareness or conceptual representation. I've written programs which do it. Not only are they completely determined by the parameters in the algorithms I set, all they do is adjust weights between connections according to the algorithm I select. Such a program can recognize a pattern, but it has no idea what that pattern is, no "concept" of the pattern, no ability to generalize to other patterns, to organize these patterns, or anything else involved in awareness.


Whether you see the wall or feel or hear the wall in front of you, if you've learned to avoid it you have learned the concept of the wall.

A robot doesn't. It's preprogrammed to move. When it hits an obstacle, it's preprogrammed to adjust in particular ways. But it doesn't store any concept of a "wall" or of an "obstacle." These are abstractions. It's the equivalent of the Venus flytrap. My concept of "wall" is generalized. It can refer to any number of instantiations, from the Great Wall of China to a metaphorical wall. I have a concept I'm storing. The robot is storing a reaction.
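
A minimal sketch of what such a robot actually stores, i.e. a reaction rather than a concept (the sensor and action names are hypothetical):

```python
# The robot's entire "knowledge" of walls: a preprogrammed reaction.
# There is no representation of "wall" or "obstacle" that could be generalized,
# only a rule mapping a sensor reading to a motor command.
def step(bumper_pressed: bool) -> str:
    if bumper_pressed:
        return "turn_left"      # adjust in a fixed, preprogrammed way
    return "move_forward"

# Whatever it collides with (a wall, a chair leg, a foot), the robot "learns"
# nothing about the thing itself; it just executes the rule.
print(step(False))  # move_forward
print(step(True))   # turn_left
```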
 

idav

Being
Premium Member
That's not necessarily a concept. It can be, but it isn't in this case. It is absolutely possible to "learn" without any awareness or conceptual representation. I've written programs which do it. Not only are they completely determined by the parameters in the algorithms I set, all they do is adjust weights between connections according to the algorithm I select. Such a program can recognize a pattern, but it has no idea what that pattern is, no "concept" of the pattern, no ability to generalize to other patterns, to organize these patterns, or anything else involved in awareness.




A robot doesn't. It's preprogrammed to move. When it hits an obstacle, it's preprogrammed to adjust in particular ways. But it doesn't store any concept of a "wall" or of an "obstacle." These are abstractions. It's the equivalent of the Venus flytrap. My concept of "wall" is generalized. It can refer to any number of instantiations, from the Great Wall of China to a metaphorical wall. I have a concept I'm storing. The robot is storing a reaction.
You're taking for granted all the things we are preprogrammed to do. It doesn't matter that there has to be a programmer. In nature, nature is the programmer. We are programmed to have fear, love, lust, hunger, as well as the brain being programmed to run our heart, lungs, digestive system, etc.
 

LegionOnomaMoi

Veteran Member
Premium Member
You're taking for granted all the things we are preprogrammed to do. It doesn't matter that there has to be a programmer. In nature, nature is the programmer. We are programmed to have fear, love, lust, hunger, as well as the brain being programmed to run our heart, lungs, digestive system, etc.
The problem is the programming analogy only works to a certain extent, and it fails when it comes to awareness. I'm not saying we aren't heavily influenced by everything from genetics/epigenetics to upbringing. Or that every behavior/thought we have is not automated (plenty are). And in other mammals this is even more true. But our capacity for semantic memory and conceptual representation is not found in anything except advanced cortical structure. I can learn in the way the robot or an ANN does. When a former Israeli special ops and current tactical expert came to our krav maga class to teach us stick, edged-weapon, and gun defenses/attacks, he had us do push-ups if, after disarming our partners, we handed back the knife/gun. Why? Because a police officer was so used to doing this in training that he actually handed back the gun to an assailant he had disarmed. That's procedural memory, and it is quite similar to the way an ANN or simple organism learns. However, I also have the capacity for semantic memory. Which means I can store concepts. Which means I have the capacity for awareness. Programs don't.
 

idav

Being
Premium Member
The problem is the programming analogy only works to a certain extent, and it fails when it comes to awareness. I'm not saying we aren't heavily influenced by everything from genetics/epigenetics to upbringing. Or that every behavior/thought we have is not automated (plenty are). And in other mammals this is even more true. But our capacity for semantic memory and conceptual representation is not found in anything except advanced cortical structure. I can learn in the way the robot or an ANN does. When a former Israeli special ops and current tactical expert came to our krav maga class to teach us stick, edged-weapon, and gun defenses/attacks, he had us do push-ups if, after disarming our partners, we handed back the knife/gun. Why? Because a police officer was so used to doing this in training that he actually handed back the gun to an assailant he had disarmed. That's procedural memory, and it is quite similar to the way an ANN or simple organism learns. However, I also have the capacity for semantic memory. Which means I can store concepts. Which means I have the capacity for awareness. Programs don't.
Yet this bolded part is yet another form of programming that we are capable of. I am also aware that we have various forms of memory, which would mean different programming. The Jeopardy robot only tapped into one form of memory, so it only scratches the surface, as we have many forms of memory combined with multiple algorithms per memory type. This just means that the human mind is the most complex thing in the universe. However, other organisms certainly have awareness with far less complexity.
 

LegionOnomaMoi

Veteran Member
Premium Member
Yet this bolded part is yet another form of programming that we are capable of. I am also aware that we have various forms of memory, which would mean different programming. The Jeopardy robot only tapped into one form of memory, so it only scratches the surface, as we have many forms of memory combined with multiple algorithms per memory type. This just means that the human mind is the most complex thing in the universe. However, other organisms certainly have awareness with far less complexity.
Aleksander and Dunmall (2003) and Aleksander (2005) have developed an approach to machine consciousness based around five axioms, which they believe are minimally necessary for consciousness:
1. Depiction. The system has perceptual states that ‘represent’ elements of the world and their location.
2. Imagination. The system can recall parts of the world or create sensations that are like parts of the world.
3. Attention. The system is capable of selecting which parts of the world to depict or imagine.
4. Planning. The system has control over sequences of states to plan actions.
5. Emotion. The system has affective states that evaluate planned actions and determine the ensuing action.

Other mammals with advanced cortical structures are "aware." Simple organisms and our most advanced ANNs are not. Learning and awareness are simply not interchangeable. Nor can one equate consciousness with programming, because consciousness is self-determining. To say one is "programmed" means one is responding to code. If one is "re-writing" the "code" then that isn't programming.
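
Purely to make the five axioms above concrete, here is a hypothetical sketch of how they might be stated as an interface; nothing below comes from Aleksander and Dunmall, and the method names are my own invention:

```python
from abc import ABC, abstractmethod

class MachineConsciousnessCandidate(ABC):
    """Hypothetical interface mirroring the five axioms quoted above."""

    @abstractmethod
    def depict(self, world):
        """1. Depiction: perceptual states that 'represent' the world and locations."""

    @abstractmethod
    def imagine(self, cue):
        """2. Imagination: recall or create sensations that are like parts of the world."""

    @abstractmethod
    def attend(self, candidates):
        """3. Attention: select which parts of the world to depict or imagine."""

    @abstractmethod
    def plan(self, goal):
        """4. Planning: control sequences of states to plan actions."""

    @abstractmethod
    def evaluate(self, planned_action):
        """5. Emotion: affective states that evaluate planned actions."""
```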
 

idav

Being
Premium Member
Aleksander and Dunmall (2003) and Aleksander (2005) have developed an approach to machine consciousness based around five axioms, which they believe are minimally necessary for consciousness:
1. Depiction. The system has perceptual states that ‘represent’ elements of the world and their location.
2. Imagination. The system can recall parts of the world or create sensations that are like parts of the world.
3. Attention. The system is capable of selecting which parts of the world to depict or imagine.
4. Planning. The system has control over sequences of states to plan actions.
5. Emotion. The system has affective states that evaluate planned actions and determine the ensuing action.

Other mammals with advanced cortical structures are "aware." Simple organisms and our most advanced ANNs are not. Learning and awareness are simply not interchangeable. Nor can one equate consciousness with programming, because consciousness is self-determining. To say one is "programmed" means one is responding to code. If one is "re-writing" the "code" then that isn't programming.
I can understand your example of being conscious, but being aware is something quite different. The consciousness is the analyzing part, which is a product of the awareness an organism has of its environment. The awareness can grow to another level when the organism is able to look back at itself. So a simple organism will be hungry (sensory), but it takes consciousness/self-awareness for the organism to be aware that it's hungry.

Our brains are programmed to learn in a certain way. We are born with the basic functionality. When I say we rewrite programming I mean that we are learning. Learning is just programming that essentially overrides the original core programming which is why a programmer will not be able to predict what a learning machine is capable of.
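
Roughly, in code (a purely illustrative sketch with made-up names), the idea would be:

```python
# A built-in ("core") reaction that learned behavior can override.
core_program = {"loud_noise": "startle"}
learned_overrides = {}

def react(stimulus):
    # learned behavior takes precedence over the built-in default
    return learned_overrides.get(stimulus, core_program.get(stimulus))

print(react("loud_noise"))                   # startle (the default)
learned_overrides["loud_noise"] = "ignore"   # "learning" overrides the default
print(react("loud_noise"))                   # ignore
```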
 

LegionOnomaMoi

Veteran Member
Premium Member
I can understand your example of being conscious, but being aware is something quite different. The consciousness is the analyzing part, which is a product of the awareness an organism has of its environment. The awareness can grow to another level when the organism is able to look back at itself. So a simple organism will be hungry (sensory), but it takes consciousness/self-awareness for the organism to be aware that it's hungry.

Our brains are programmed to learn in a certain way. We are born with the basic functionality. When I say we rewrite programming I mean that we are learning. Learning is just programming that essentially overrides the original core programming which is why a programmer will not be able to predict what a learning machine is capable of.

Procedural learning isn't "rewriting code" the way that awareness/conceptual representation/semantic memory allows. I can provide you with some resources on learning, A.I. models, "awareness" (it's not really used in the way you use it, that's self-awareness), etc.
 

idav

Being
Premium Member
Procedural learning isn't "rewriting code" the way that awareness/conceptual representation/semantic memory allows. I can provide you with some resources on learning, A.I. models, "awareness" (it's not really used in the way you use it, that's self-awareness), etc.
I shouldn't have used rewriting. Overriding is more accurate.
 

PolyHedral

Superabacus Mystic
I'm going to start with this:
Yes, I read the book (my father bought it some years ago). So your current understanding of A.I. research and consciousness is based on a 40-year-old book?
Yes, although I forgot to mention Less Wrong. GEB isn't about the details of neurotransmitters and so forth, but more about the high-level reasoning involved in learning. As I said, it seems to do a good enough job of explaining the mind in terms of algorithms.

What Searle argues (at least what I was referring to) was that passing the Turing test doesn't necessarily indicate strong A.I.
I think he's trying to rationalize a prior assumption. If it quacks like a duck, walks like a duck, and swims like a duck, then as far as I'm concerned, it's a duck.

1) The book is the thing that understands!??? The whole thought experiment concerns the implications of passing the "Turing test." It shows that an entity can output data given input (such as a question) accurately, or in the way a human would, and still not understand it.
Of course the book('s contents) does. It'd be ridiculous to say that the human does, since the human is just a dumb machine that executes the book('s contents).

Since Turing, von Neumann, Shannon, etc., the algorithmic approach has qualitatively changed. "Learning" algorithms no longer specify procedures, nor do they resemble a type of Turing machine.
Of course they specify a procedure. Modern general purpose CPUs can't do anything except execute procedures. (Albeit have many tricks to execute the procedures non-linearly, and may execute multiple instructions in parallel.)

Of course, they are still algorithms. But current research on consciousness (artificial or no) suggests that
1) an algorithmic approach may be fundamentally wrong...
That seems very unlikely, since algorithms can rewrite and evaluate themselves just fine, and I've seen no reason to think "consciousness" is anything beyond that.

2) if strong A.I. can be based on algorithms, then the emergent consciousness will be indeterministic, such that neither the program/computer nor the programmers will be capable of knowing how it emerged. We already have problems determining the returned values of some complex ANNs.
You're basically saying that neither the program itself or us will be able to tell how human-level intelligence works in real-time. Isn't that obvious?

I'm not sure if you know what embodied cognition is. It has nothing to do with the hardware of the brain per se. It is a rejection of domain specificity and has to do with the relationship between cognitive processes and sensory-motor experience (i.e. language is based on, for example, spatial/temporal metaphors).
Again, that also seems obvious with the proviso that we're currently incapable of directly writing knowledge/concepts into the brain. One can only learn from "experience" (where experience includes someone explaining something to you) and since we experience 3D space, linear time, etc, most of our ideas will be based on that. The exception is esoteric mathematics, and that arose because someone generalized the symbols on paper, not because they arrived at the concept of (e.g.) 4D space in any natural way.

True, but qualitatively different complexity. That's why all of our attempts to model some form of consciousness have failed. Neural networks were an improvement, but they fall very, very short of awareness.
All attempts to model consciousness have failed because people can't make up their minds as to what they mean by consciousness.

The only way we have been able to come up with computer programs or computer systems which imitate even the basic stimulus/response learning is by using artificial neural networks.
Obviously I don't know for sure, but AFAIK Watson never uses any sort of neural net. It might use evolutionary methods, but there are also Bayesian probability models and so forth available.

Having studied brain data, code, neural networks, and learning paradigms, I can tell you that lumping it all under "program" renders the term meaningless
"Program" is still very meaningful. It's a method, an algorithm, a guide to achieve something. Just because it's easy to describe what it is, doesn't mean that they're limited by anything. "Being human" is a goal you need an algorithm for.

What we can do is self-determine, store abstract concepts, categorize and generalize, activate these concepts because we are aware, and so on. Other mammals can do this to a lesser extent. Computers (artificial neural networks or not) and simple organisms simply lack this capacity, because they aren't aware. It's a qualitative difference.
It also seems slightly circular. What exactly does being "aware" entail?

How a system can determine its own conditions, how neurons can act synchronically without any local causes, we don't know. We have guesses about what might be involved. But we have no explicit algorithms, equations, models, or any similar well-defined explanation.
A system can't determine its own conditions, by definition. And just because we have no idea what the algorithms involved are doesn't mean there aren't any.

When a computer stores the data, I have to make it access these data, and it can only access exactly what I tell it to. It doesn't "know" the data, it doesn't "know" anything about it. Saying a computer "knows" the data stored is like saying a book "knows" the data it stores. One's just electronic.
Well, there's nothing that prevents a book from telling you to rewrite it, or to do or think things based on the structure of the book itself. The latter has been put to great use in fiction, for instance.
 

LegionOnomaMoi

Veteran Member
Premium Member
I've provided some links to papers (and a book) I've uploaded which provide some background on the issues discussed. I've tried to ensure that most are more general reviews and less technical, but I've included some (for those so inclined) which require a background in math, physics, and/or biology.

First, the book: Re-Emergence of Emergence (Oxford University Press, 2006).
I was looking for an excerpt from this and found the whole book (meaning I wasted money buying it). It's an edited volume with various papers written by specialists from different fields, and divided into 4 sections.

The following two papers are on consciousness and neurobiology. Both are from peer-reviewed science journals, and both are reviews which (I think) are pretty accessible even for those without a background. They also differ in views and some of the concepts discussed:
Neuroontology, neurobiological naturalism, and consciousness- A challenge to scientific reduction and a solution

Neurobiology of consciousness- an overview

The following papers (again from peer-reviewed science journals) are mainly for idav, as they discuss levels of awareness/consciousness (e.g., for single-celled organisms) and related issues:
The concept possession hypothesis of self-consciousness
neural awareness and perceptual awareness

This review (also peer-reviewed science journal) is specifically on the state of A.I.
Progress in machine consciousness

This paper concerns the issues facing neuroscience, determinism, free-will, etc.
Neuroscience, Free will & moral responsibility

This is a rather technical account of complexity in artificial and computational neural networks
Controlling chaos in a chaotic neural network
 

LegionOnomaMoi

Veteran Member
Premium Member
Yes, although I forgot to mention Less Wrong. GEB isn't about the details of neurotransmitters and so forth, but more about the high-level reasoning involved in learning.
I've read it (albeit a while ago). And while it was an interesting read, and it is still referenced here and there, it's outdated. Additionally, when I first started reading about emergence it reminded me of his "strange loops."

As I said, it seems to do a good enough job of explaining the mind in terms of algorithms.
That's what classical cognitive science thought as well. I'm not saying the whole field now believes that the "mind" is indeterministic, nondeterministic (or non-computable, a term used in the Penrose-Hameroff model), but the algorithmic approach has definitely been abandoned in the classical computation sense (e.g., Turing and von Neumann), and there are a number of specialists in fields related to A.I. and cognitive science who believe that not even the types of algorithms used in ANNs (or any type of algorithm) are sufficient.

I think he's trying to rationalize a prior assumption. If it quacks like a duck, walks like a duck, and swims like a duck, then as far as I'm concerned, it's a duck.
Actually a form of his argument, or at least its implication, is now widely accepted. Once we started to build computers and programs which could learn and exhibit extremely complex behavior, and yet not even come close to consciousness, scientists across fields began to pay more attention to what it means to understand. And they were late to the game. The work on metaphor in cognition by Lakoff and Johnson (1980) and Lakoff (1986) should have already made cognitive scientists aware of what is involved in understanding, concepts, etc. Categorization, generalization, prototypicality, embodied cognition, etc., were all around, but these ideas were coming from linguists, and as the Chomskyan paradigm still dominated both linguistics and cognitive science, it took some time for the cognitive linguistic framework to gain much more widespread acceptance.



Of course the book('s contents) does. It'd be ridiculous to say that the human does, since the human is just a dumb machine that executes the book('s contents).

First, the original argument was that the room could process symbols (like a computer) without understanding. The first main counter-argument (or at least the one which Searle felt valid enough to change his argument) was to take the human as the machine, and then his critique holds. The issue is that processing and understanding are not the same. Or, more technically, pattern recognition and conceptual representation are not the same.



That seems very unlikely, since algorithms can rewrite and evaluate themselves just fine, and I've seen no reason to think "consciousness" is anything beyond that.

Because, as we learned when we started writing programs (and even building machines which took much more advantage of the massive connectivity of the neural system rather than simply simulating it through a program), there is a very large gap between recognition and understanding. Our most sophisticated learning machines/programs allow advanced responses, but (as we quickly learned) despite their ability to behave chaotically, adapt, and so on, conceptual representation and semantic memory are a whole different ballgame. So currently the issue among A.I. researchers is what a machine capable of "understanding" might even involve, if it is possible at all.

Isn't that obvious?

Rather, what I was trying to say is that if we can create a conscious entity, it will mean creating something which is self-determining and has "free will" in that its "mind" will allow it to choose actions which are at least partially determined by the "mind" itself in a non-computable manner.

Again, that also seems obvious with the proviso that we're currently incapable of directly writing knowledge/concepts into the brain. One can only learn from "experience" (where experience includes someone explaining something to you) and since we experience 3D space, linear time, etc, most of our ideas will be based on that. The exception is esoteric mathematics, and that arose because someone generalized the symbols on paper, not because they arrived at the concept of (e.g.) 4D space in any natural way.
There are embodied accounts of mathematics. But I'm still not sure you understand what I mean (which is my fault, as I haven't really explained it; then again, the subject is complex and is hotly debated). For example, at very basic levels of language we use spatial and temporal notions to illustrate abstract notions. I could be "going to the store" or "going to write a book." One is the natural use of a verb of motion, the other uses the same verb to express purpose. Or take the deictic pronoun "there." It's spatial. But most of the time it's used in highly abstract constructions: "there's a reason we haven't created programs with consciousness" or "there's the arrogant Dr. House we're used to. I knew it was an act." I can come to a ball game, but I can also come to a conclusion. I can have (hold, grasp) a slice of cake, or I can have an idea (in fact, even the perfect tense, "I have, but wherefore I know not...", comes from extending the notion of holding/grasping to possession, to "possessing" a completed action). fMRI studies also seem to indicate that when we store concepts like "hammer" or "cup," at least part of this storage involves a motor program. Other studies indicate that abstract concepts have spatial directionality: sad is down, hope is up, etc. Embodied cognition isn't just the notion that our thought is influenced by our environment, but that highly abstract levels of conceptual representation and categorization are extensions of concepts based in perceptual-motor experience.


All attempts to model consciousness have failed because people can't make up their minds as to what they mean by consciousness.
There is disagreement. But these are more details than anything else. They wouldn't prevent us from modeling consciousness if we had any idea how the brain does what it does which allows us to be self-aware, conscious, store abstract generalized concepts, categorize, etc. Things that 50 years ago (even 30 years ago) were thought to be straightforward and simple (like categorization) have since the 1990s attracted much more attention because of their complexity.

Obviously I don't know for sure, but AFAIK Watson never uses any sort of neural net.
It is. DeepQA (the underlying algorithms) is a "learning" connectionist network (neural network). It's a supervised ANN, which learns by adjusting weights. During the actual game, the way it "decides" to answer a question is whether or not the weights sum to the "neural threshold."
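
As a rough sketch of the general "answer only if the weighted evidence clears a threshold" idea (this is not IBM's actual DeepQA code; the scores, weights, and threshold are invented):

```python
# Toy confidence-threshold decision: combine weighted evidence scores for a
# candidate answer and only "buzz in" if the combined confidence clears a threshold.
def should_answer(evidence_scores, weights, threshold=0.7):
    """evidence_scores and weights are parallel lists of floats."""
    total_weight = sum(weights)
    confidence = sum(s * w for s, w in zip(evidence_scores, weights)) / total_weight
    return confidence >= threshold, confidence

# Hypothetical scores from three scoring components for one candidate answer.
decision, confidence = should_answer([0.9, 0.6, 0.8], [2.0, 1.0, 1.5])
print(decision, round(confidence, 2))  # True 0.8
```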

It might use evolutionary methods, but there are also Bayesian probability models and so forth available.

Bayesian models, fuzzy logic, etc., are ALL used in ANNs.


"Program" is still very meaningful. It's a method, an algorithm, a guide to achieve something. Just because it's easy to describe what it is, doesn't mean that they're limited by anything. "Being human" is a goal you need an algorithm for.

Actually, Being Human is a BBC show. In all seriousness, you're assuming this, and from what I can tell the basis of your beliefs about consciousness is an outdated and largely abandoned view. I could be wrong, of course, but so far you've mentioned a 30-year-old book and a website.


It also seems slightly circular. What exactly does being "aware" entail?
The term isn't usually used as I used it here, because I wasn't being technical.

A system can't determine its own conditions, by definition.
Not by definition. That's simply the limit of our capacity to model systems.

And just because we have no idea what the algorithms involved are doesn't mean there aren't any.
That's true. But there is good reason to think that there aren't.
 