Yet several computations and mass storage are exactly what is needed for the brain to do it.
I work in cognitive neuropsychology, and I don't know how the brain does it. I have worked with a lot of people who (unlike me) have a PhD in this or a related field, and none of them do either. As I've said before, something as basic as neural encoding still sparks considerable debate. First, much of computational neuroscience has been devoted to what is often called the "labelled-line" (or "labeled-line") theory. Simplistically, each individual "receptor" neuron in the eye carries unique information to the brain, and these signals collectively allow animals (in a particularly famous study done in 1959, the animal was a frog) to "see". In other words, there is a "line" from each receptor to some specific place (or even neuron) in the brain. In this model, visual neurons are more akin to "bits" in that, although it takes a lot of them, each one is somehow "meaningful".
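If it helps to see the contrast in code, here is a deliberately crude sketch (my own toy illustration, not a model of actual retinal neurons): in a labeled-line scheme the identity of the active unit is the message, while in a distributed scheme only the pattern across the whole population is.

```python
# Toy illustration (not a model of real neurons): contrast a "labeled-line"
# scheme, where the identity of the active unit carries the meaning, with a
# distributed scheme, where meaning lives in the pattern across the population.

# Labeled-line: one dedicated unit per stimulus.
labeled_lines = {
    "small_dark_spot": 0,    # unit 0 fires only for this (hypothetical) stimulus
    "large_moving_edge": 1,  # unit 1 fires only for this one
}

def labeled_line_decode(active_unit):
    # Knowing *which* unit fired is enough to recover the stimulus.
    return {unit: stim for stim, unit in labeled_lines.items()}[active_unit]

# Distributed: every stimulus activates many units; only the whole pattern is
# diagnostic, and any single unit participates in several patterns.
distributed_codes = {
    "small_dark_spot":   (1, 0, 1, 1, 0),
    "large_moving_edge": (1, 1, 0, 1, 1),
}

def distributed_decode(pattern):
    # No single unit identifies the stimulus; match the full population vector.
    return {code: stim for stim, code in distributed_codes.items()}[tuple(pattern)]

print(labeled_line_decode(0))               # small_dark_spot
print(distributed_decode((1, 1, 0, 1, 1)))  # large_moving_edge
```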
That's no longer considered true even for neural receptors. Volume 130 of the edited series Progress in Brain Research (Advances in Neural Population Coding; 2001) represents a turning point in computational neuroscience, and neuroscience in general, away from this idea. But the problem (and the reason for the volume) is what to replace it with: "If the brain uses distributed codes, as certainly seems to be the case, does this mean that neurons cannot be 'labeled lines'? Clearly, to support a code of any complexity, active populations must be discriminable from one another, which means that differences among the individual cells are important. Neurons cannot respond equally well to everything and form useful representations of different things. Thus, the sharp dichotomy between distributed coding and labeled lines seems to be a false one and the critical question is 'labeled how and with what'."
That was back in 2001, before neuroimaging studies (and in particular fMRI studies) were as prevalent. We now know that we are much farther from understanding the "neural code" than was previously believed. For one thing, it is now certain that the neural "bit" isn't typically based on the activity of individual neurons, but on the synchronization/correlation of their spike trains. Thus, most of the time, the "minimal" meaningful piece of information (the "bit") is a constantly changing level of correlated activity among a changing number of neurons.
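To make "correlated activity" a little more concrete, here is a toy calculation (spike trains binned into 0/1 and compared with a plain Pearson correlation; real analyses use richer measures): the information-bearing thing is the relationship between trains, not any one spike.

```python
# Toy sketch: what "correlated activity" between spike trains looks like as a
# number. Spike trains are binned into 0/1 (no spike / spike) per time bin.
from math import sqrt

def pearson(a, b):
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

# Two neurons that tend to spike in the same time bins ...
neuron_1 = [1, 0, 1, 1, 0, 0, 1, 0]
neuron_2 = [1, 0, 1, 0, 0, 0, 1, 0]
# ... and a third that spikes just as often, but never in the same bins.
neuron_3 = [0, 1, 0, 0, 1, 1, 0, 1]

print(pearson(neuron_1, neuron_2))  # high: a candidate "unit" of correlated activity
print(pearson(neuron_1, neuron_3))  # negative: same overall rate, no synchrony
```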
But it gets worse. In a monograph published the same year as the volume referenced above (Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems), the authors baldly state "[t]he nature of the neural code is a topic of intense debate within the neuroscience community. Much of the discussion has focused on whether neurons use rate coding or temporal coding, often without a clear definition of what these terms mean." A later volume (Dynamical Systems in Neuroscience) from the same series (MIT's Computational Neuroscience) closes with a chapter on neural "bursts" in which the author remarks that "single spikes may be noise." In other words, the description of how neurons work given in just about every undergrad textbook on neurophysiology is just plain wrong. These textbooks start at the level of the neuron and offer a simplistic (or distorted) model of how neurons "fire". They then typically skip over how this firing means anything; it is simply assumed to be the basis of the "neural code". As it turns out, what they describe may be only "noise" rather than part of "the neural code". And it is certain that even if the all-or-nothing action potentials (firing) described so often do carry some meaning, they are not the typical "minimal unit".
And things get even worse still. As stated in the article "Nonlocal mechanism for cluster synchronization in neural circuits" (from the journal Europhysics Letters), "one of the main enigmas in neuroscience" is not the neural code per se (that is, it isn't about how bursts or correlated spike trains and so forth can be "units" of information), but the level and speed of correlations among nonlocal neural populations. In other words, not only does the "minimal unit" not really exist except as a concept (since it is something that changes in size and nature), but the same coordinated activity that can make up a "minimal unit" within a neural population can also be found among neural populations themselves. Moreover, this synchronization between different cortical neural populations occurs almost instantaneously. Which means that the "minimal unit" can be not only correlations among various neurons, but even correlations between correlated neural populations themselves.
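Just to make "synchronization between populations" concrete, here is a generic coupled-oscillator sketch (standard Kuramoto-style phase oscillators, not the circuit model from that paper): two populations, each collapsed to a single phase, that pull each other into lock once they are coupled.

```python
# Deliberately generic sketch (Kuramoto-style coupled phase oscillators, NOT
# the mechanism proposed in the Europhysics Letters article): two "populations"
# are each reduced to one phase variable, and coupling pulls them into step.
import math

def simulate(coupling, steps=2000, dt=0.001):
    theta1, theta2 = 0.0, 2.0    # start well out of phase
    w1, w2 = 10.0, 10.5          # slightly different natural frequencies (rad/s)
    for _ in range(steps):
        theta1 += dt * (w1 + coupling * math.sin(theta2 - theta1))
        theta2 += dt * (w2 + coupling * math.sin(theta1 - theta2))
    # A small phase difference (mod 2*pi) means the two populations are locked.
    return abs(math.remainder(theta1 - theta2, 2 * math.pi))

print(simulate(coupling=0.0))  # uncoupled: the phase difference grows/drifts
print(simulate(coupling=5.0))  # coupled: the difference shrinks to a small constant
```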
One major theory about how brains can deal with concepts concerns such massively parallel and quantum-like (or actually quantum) properties (e.g., nonlocal correlations) of the brain. The theory goes something like this: we know that concepts are not Platonic ideals. That is, there isn't a concept "tree" which corresponds to any single representation of neural activity in our brain, because there isn't any single concept "tree". A "web" can be a spider web, a method for knowledge representation, a "web of deceit" or of lies, the internet, etc. Single concepts are really not single at all: they are networks of abstractions which share certain relationships in terms of particular semantic content. For example, the interconnectedness and structure of a spider web is metaphorically mapped onto the idea of lots of intricate lies also "organized" to deceive, or onto a bunch of connected computers. It may be that the seemingly impossible level of coordination between and within neural populations allows us to process concepts by allowing us to represent related conceptual content in distinct cortical regions which are nonetheless strongly connected.
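As a toy illustration of that last point (purely a data structure, with no claim about how the brain actually stores anything), think of "web" as a small network of senses joined by shared relational content rather than as a single entry:

```python
# Toy data structure only: the word "web" names a network of related senses,
# linked by shared relational content, rather than one Platonic concept.

concept_web = {
    "senses": ["spider web", "web of lies", "the web (internet)", "food web"],
    "shared_structure": ["many elements", "connections between elements",
                         "something can be caught or carried along the connections"],
    "links": {
        ("spider web", "web of lies"): "intricate structure built to trap",
        ("spider web", "the web (internet)"): "nodes joined by many threads/links",
        ("web of lies", "the web (internet)"): "hard to see the whole from any one point",
    },
}

# No single entry *is* the concept; "web" is the whole relational pattern,
# which is the sort of thing the cross-population coordination described
# above might plausibly support.
for pair, shared in concept_web["links"].items():
    print(pair, "->", shared)
```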
We can't even accurately model this level of coordination on a computer, let alone build computers capable of it. And it may very well be that no digital computer will ever be capable of what enables brains to deal with concepts rather than just "syntax" or formal rules.
I understand you believe we are cheating to mimic awareness, but I think there are more than several ways to be aware.
Not at all. For one thing, machine learning has produced a great deal. I am simply distinguishing (as everyone who works in the cognitive sciences does) between qualitatively different types of awareness. More importantly, I am suggesting that the current work in A.I. cannot result in associative learning. It was largely based on how simple organisms which are purely reflexive "learn", and is thus based on non-associative learning. That's not what we want if we want A.I. To continue to hope that more of the same (i.e., increasingly sophisticated neural network algorithms or pattern recognition algorithms) will somehow get us from non-associative to associative learning seems foolish. This is not to say we can't make this leap, or even that we can't do it on computers. Just that I don't think the current approach will get us anywhere, and something else is needed.
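To be clear about the distinction I'm drawing, here is a bare-bones sketch (my own framing, not a model from the literature): non-associative learning just changes the response to one repeated stimulus, while associative learning links two things so that one comes to predict the other.

```python
# Minimal sketch of the distinction (a toy framing, not a published model):
# non-associative learning = the response to ONE stimulus changes with
# repetition (habituation); associative learning = TWO things become linked,
# so that one comes to predict the other.

class Habituator:
    """Non-associative: the response to a repeated stimulus simply decays."""
    def __init__(self):
        self.response = 1.0
    def stimulate(self):
        self.response *= 0.8  # respond a little less each time
        return self.response

class Associator:
    """Associative: after repeated pairings, stimulus A predicts outcome B."""
    def __init__(self):
        self.strength = 0.0
    def pair(self, a_present, b_present):
        if a_present and b_present:
            self.strength = min(1.0, self.strength + 0.2)
    def expects_b_given_a(self):
        return self.strength > 0.5

h = Habituator()
print([round(h.stimulate(), 2) for _ in range(5)])  # shrinking reflex: 0.8 ... 0.33

a = Associator()
for _ in range(4):
    a.pair(a_present=True, b_present=True)
print(a.expects_b_given_a())  # True: A now predicts B
```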
How was Watson not aware of all that data it was required to sift through and analyze?
The same way a pocket calculator isn't aware of things like algebra. Computers compute (hence the name). They were built to carry out mathematical operations. The fundamental method for doing this is the design of logic gates, which allow us to carry out basic logical operations automatically. As a lot of mathematics can be reduced to these logical operations, as long as we can come up with well-defined mathematical functions, we can (at least approximately) implement these on a computer.
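This is the standard textbook reduction, and it can be shown in a few lines (a toy ripple-carry adder, not how any particular chip is laid out): addition built from nothing but AND, OR, and XOR, with no "understanding" of arithmetic anywhere in the circuit.

```python
# Arithmetic from logic gates: a one-bit full adder is just AND/OR/XOR wired
# together; chain enough of them and you can add arbitrary integers.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x_bits, y_bits):
    # Ripple-carry addition, least significant bit first.
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 6 + 3 = 9, as bit lists (LSB first): [0,1,1] + [1,1,0] -> [1,0,0,1]
print(add([0, 1, 1], [1, 1, 0]))
```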
Watson did that. It did a lot of automated mathematics. In order to make Watson capable of answering anything, humans had to build specialized databases which were annotated (or labeled) in a particular way to enable mathematical procedures to sort through them without understanding anything. To say that Watson was "aware" of the data is like saying your calculator is aware of addition, pi, trig, etc., simply because you can make it calculate the answers to math problems.
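As a toy analogy only (nothing like Watson's actual pipeline), imagine a hand-annotated list of facts and a purely mechanical scoring rule; all of the apparent "knowledge" sits in labels that humans attached ahead of time.

```python
# Toy analogy (NOT Watson's architecture): a hand-annotated "database" of
# facts plus a mechanical scoring rule that ranks entries by counting labels
# shared with the query. The procedure understands nothing; the labels do
# all the work, and humans supplied the labels.

facts = [
    {"text": "The Eiffel Tower is in Paris.",      "labels": {"landmark", "paris", "france"}},
    {"text": "Mount Everest is in the Himalayas.", "labels": {"mountain", "nepal", "himalayas"}},
    {"text": "The Louvre is in Paris.",            "labels": {"museum", "paris", "france"}},
]

def best_match(query_labels):
    # Pick the entry whose human-supplied labels overlap the query the most.
    return max(facts, key=lambda f: len(f["labels"] & query_labels))["text"]

print(best_match({"museum", "france"}))  # The Louvre is in Paris.
print(best_match({"mountain"}))          # Mount Everest is in the Himalayas.
```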
You might think of Watson in terms of the word problems from high school mathematics that almost everyone hates. People hate these because there is an extra step: turning the question into one or more equations or mathematical expressions. Once this is done, the word problem is no longer a word problem but is just like the other math problems. With Watson, people who actually understood language built databases so that the "word problems" could be reduced to a bunch of equations.
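And here is that "word problem" step in miniature (a deliberately trivial stand-in, not anything IBM actually did): a regular expression pulls the numbers out of one pre-decided sentence pattern and hands them to ordinary arithmetic.

```python
# Trivial stand-in for the "word problem -> equation" step: a regex extracts
# numbers from one narrow sentence template and feeds them to arithmetic.
# Nothing here understands apples, buying, or having; it works only because
# a human decided in advance what shape of sentence to expect.
import re

PATTERN = re.compile(r"has (\d+) .* and buys (\d+) more")

def answer(word_problem):
    match = PATTERN.search(word_problem)
    if match is None:
        return None  # outside the pre-built template, the "solver" is helpless
    a, b = int(match.group(1)), int(match.group(2))
    return a + b

print(answer("Maria has 3 apples and buys 4 more. How many does she have?"))  # 7
print(answer("What is the capital of France?"))                               # None
```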