If a stimulus is given to the brain because you were frightened, that is simply a reaction to stimuli. The representation is artificially created for us.
Of course we react to stimuli. And we do have automated stimulus/response learning methods. The point is we have more, and consciousness and awareness IS the "more."
I thought that perhaps part of the problem here is that I'm not used to discussing these issues with people who aren't in the field. I learned quickly when teaching/tutoring students in math, Latin, Greek, etc., that I had to develop ways to simplify the material while still getting across what was important. So I picked up a book today written for a wider audience rather than the books/papers I have, which require a background in mathematics, neurobiology, information theory, etc. The following excerpts are from
On Intelligence by Jeff Hawkins:
He begins by describing the type of thinking PolyHedral seems to refer to (classical cognitive science/A.I.): "To these scientists [intelligence involved] just programming problems. Computers could do anything a brain could do, and more, so why constrain your thinking by the biological messiness of nature's computer?...They believed it was best to study the ultimate limits of computation as best expressed in digital computers."
He's describing the state of research in the late 70s and early 80s. He continues:
"This struck me as precisely the wrong way to tackle the problem. Intuitively I felt that the artificial intelligence approach would not only fail to create programs that do what humans can do, it would not teach us what intelligence is. Computers and brains are built on completely different principles. One is programmed, the other is self-learning. One has to be perfect to work at all, one is naturally flexible and tolerant of failures. One has a central processor, the other has no centralized control. The list of differences goes on and on."
As it turns out, he was right and they were wrong.
"Neural networks were a genuine improvement over the AI approach. Instead of programming computers, neural network researchers, also known as
connectionists, were interested in learning what kinds of behaviors could be exhibited by hooking a bunch of neurons together...On the surface, neural networks seemed to be a great fit with my own interests. But I quickly became disillusioned with the field. By this time I had formed an opinion that three things were essential to understanding the brain."
The three things he lists are time, feedback processes, and physical architecture. Unfortunately, "Most neural networks consisted of a small number of neurons connected in three rows...These simple neural networks only processed static patterns, did not use feedback, and didn't look anything like the brain...I thought the field would quickly move on to more realistic networks, but it didn't."
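To make the limitation concrete, here is a minimal sketch of the kind of network Hawkins describes: three "rows" (input, hidden, output), purely feedforward, processing one static pattern at a time. The weights and layer sizes are made up for illustration; they are not from the book.

```python
import math

def sigmoid(x):
    # Standard squashing function used in classic connectionist models.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each output unit is a weighted sum of the inputs, squashed by a sigmoid.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Hypothetical weights: 3 inputs -> 2 hidden units -> 1 output unit.
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
w_output = [[1.0, -1.0]]

pattern = [1.0, 0.0, 1.0]  # a single static input pattern
output = layer(layer(pattern, w_hidden), w_output)
```

Note what is missing: no time, no feedback connections, and nothing resembling cortical anatomy, which are exactly the three ingredients Hawkins says such networks lacked.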
That's just a brief history of the rejection of classical AI and the lack of continued progress using a superior method. But what about intelligence/understanding/awareness? First, although Jeff Hawkins does not agree, he does note that "A surprising number of people, including a few neuroscientists, believe that somehow the brain and intelligence are beyond explanation. And some believe that even if we could understand them, it would be impossible to build machines that could work the same way, that intelligence requires a human body, neurons, and perhaps some new unfathomable laws of physics."
Personally, having read a great deal by scientists working in fields related to AI (cognitive science, neurobiology, neuroscience, computer science, etc.), I don't think Hawkins accurately represents the number or the arguments of those who don't believe we can build intelligent machines. But the book is designed to be simple, so it's not a big deal. At any rate, he disagrees.
Hawkins goes on (much later) to distinguish between the learning of single-celled organisms and plants and how these evolved into intelligence/awareness. Simple intelligence (dogs, cats, rats, etc.) is possible because of hierarchical processes which allow self-modification of neurons and the connections between them: "But with [the neocortex's] hierarchical structure, invariant representations, and prediction by analogy, the cortex allows mammals to exploit much more of the structure of the world than an animal without a neocortex can. Our cortically endowed ancestors could envision how to make a net and catch fish. The fish are not able to learn that nets mean death..."
His discussions of consciousness aren't very helpful (as he admits, that isn't his area), but he does mention some of the basic concepts involved: self-awareness and
qualia. "Qualia" is still used, but it's a bit old school; more precise scientific terms like conceptual representation and semantic memory are used more frequently.
But the basic point (and Hawkins discusses and agrees with Searle's Chinese Room argument when he illustrates this point) is the difference between responding and understanding. A system, whether it is a neural network or a Venus flytrap, can respond. But it cannot understand. It has no awareness of its responses, because awareness requires the storage of abstract concepts (bug, food, close, etc.) which can be generalized, extended, and modified by the system. Simple response learning involves no understanding or awareness because this "learning" is simply an involuntary change in response parameters. Once a computer program "learns" to recognize the word "food," all it has "learned" is: given X input, output Y. When a dog hears "food," the dog does this too, but much more: the dog activates a semantic (abstract concept) memory. That involves taking the X input (the audio), processing it, and relating it to an abstract concept which is a generalization of, and relatable to, multiple specific instantiations of different Y values (this or that dog food, human table scraps, something edible that fell on the floor) as well as representations of related concepts (the food bowl, the act of eating, the location associated with eating, etc.). And these can all be adapted by the dog itself.
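The contrast can be sketched in a few lines of code. This is purely illustrative (the names and structures are mine, not anyone's actual model): a pure stimulus-response learner is just a fixed input-to-output mapping, while the "semantic memory" described for the dog links the same stimulus to a generalizable concept with many instantiations and related concepts, a structure the system itself can extend.

```python
# Stimulus-response: given X input, output Y -- nothing more.
responder = {"food": "salivate"}

# Semantic memory: the stimulus activates a concept that generalizes over
# many instantiations and connects to related concepts.
concepts = {
    "food": {
        "instantiations": ["kibble", "table scraps", "dropped sandwich"],
        "related": ["food bowl", "eating", "kitchen"],
    },
}

def respond(stimulus):
    return responder.get(stimulus)

def understand(stimulus):
    concept = concepts.get(stimulus)
    if concept is not None:
        # The structure itself can be extended by the system (self-modification):
        concept["instantiations"].append("novel edible thing")
    return concept
```

The responder can only ever map "food" to "salivate"; the concept structure gives access to generalizations and relations, and grows with experience.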
The knee-jerk response is something that is already learned or preprogrammed.
And that's the type of "learning" single-cell organisms do. A Venus flytrap is preprogrammed to close its "jaws" under X condition. A neural network is preprogrammed to adjust its weights given X input. A system capable of awareness can self-adjust.
Through patterns and algorithms. That is essentially what you're describing.
Through patterns, yes. Algorithms? Maybe. We don't know. If we did, we could program them. To explain, let me address this:
I mentioned the quantum process but you mentioned "defy classical mechanics". I'm not really sure it does.
From Davies' paper in
Re-Emergence of Emergence (Oxford University Press, 2006):
"Recent work by Max Bennett (Bennett and Barden, 2001) in Australia has determined that neurons continually put out little tendrils that can link up with others and effectively rewire the brain on a time scale of twenty minutes! This seems to serve the function of adapting the neuro-circuitry to operate more effectively in the light of various mental experiences (e.g. learning to play a video game). To the physicist this looks deeply puzzling. How can a higher-level phenomenon like experience, which is also a global concept, have causal control over microscopic regions at the sub-neuronal level? The tendrils will be pushed and pulled by local forces (presumably good old electromagnetic ones). So how does a force at a point in space (the end of a tendril) know about, say, the thrill of a game?"
From Chapter 10 (included in the partial book I provided you): "...synchronization links together processes in distant parts of the brain. According to a popular hypothesis, development of transient synchronous clusters in neural networks spanning the whole brain is responsible for the appearance of distinct mental states which make up the flow of human consciousness.
When large-scale synchronization of neuronal processes is discussed, one should avoid the mistake of assuming that it merely results from the synchronization of states of individual neurons. If this were the case, the whole brain or large parts would have behaved just like a single neuron....We show that interaction between networks can lead to mutual synchronization of their activity patterns and to spontaneous separation of the ensemble into coherent network clusters."
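A standard toy model for this kind of mutual synchronization is the Kuramoto model of coupled phase oscillators (my choice of illustration, not the model used in the chapter): each oscillator has its own natural frequency, but with sufficient coupling the phases pull toward one another and lock into a coherent group.

```python
import math

def kuramoto_step(phases, freqs, coupling, dt=0.01):
    # Each oscillator drifts at its natural frequency and is pulled toward
    # the phases of the others; 'coupling' sets the pull strength.
    n = len(phases)
    return [
        (p + dt * (w + coupling / n * sum(math.sin(q - p) for q in phases)))
        % (2 * math.pi)
        for p, w in zip(phases, freqs)
    ]

phases = [0.0, 2.0, 4.0]   # initially scattered phases
freqs = [1.0, 1.1, 0.9]    # slightly different natural frequencies
for _ in range(5000):
    phases = kuramoto_step(phases, freqs, coupling=2.0)
```

With strong enough coupling the phase differences shrink to small constant offsets: a minimal analogue of a "coherent cluster," though nothing like the scale or complexity of whole-brain dynamics.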
From Scott's paper in
Evolution and Emergence (Oxford University Press, 2007):
"Under [strong downward causation], it is supposed that upper-level phenomena can act as efficient causal agents in the dynamics of lower levels. In other words,
upper-level organisms can modify the physical and chemical laws governing their molecular constituents."
From Gur, Contreras, and Gur's paper in
Indeterminacy: The Mapped, the Navigable, and the Uncharted (MIT Press, 2009): "...baseline state indeterminacy [of the brain] can be ontological, that is, the very structure of the brain dictates indeterministic states, independently of any observation..."
Aren't neurons really determined by our genetic code? At some point the program comes from somewhere.
As I think I mentioned before, genetics determines quite little. That's why geneticists are now concentrating more on epigenetics. Certainly, genetics doesn't "determine" neural activity, nor does it seem that this activity can be equated with a program.
That kinda sounds like how that Jeopardy robot knows things.
Again, "knowing" and "storing data" are two very different things.