This doesn't throw out the former perception at all.
It's a completely different one.
As per what I said above, I don't see how it changes the fact that the brain encodes bits of information that can later be retrieved using the proper stimuli.
Because it's all well and good to say that the brain encodes "bits", but if you can't show me anything in the brain which corresponds in general to a bit (and at the moment, no one can), then it doesn't mean much.
OK so neurons are more complicated than we thought. This still doesn't change it.
It's not that neurons are more complicated. It's that the neural "code" is. And it is a qualitative change. Think of it this way: why does your computer use a binary storage system, in which the minimal unit can only have 2 states? We've been storing data as only 0s and 1s for ages. Why did nobody ever think, "wait, we can double our storage capacity just by using four states instead of two"? The answer is that this is a qualitative change: it would increase complexity and introduce enough new problems that it wouldn't be worth it.
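To put numbers on that intuition: the capacity of a single storage cell is log2(number of states) bits, so a four-state cell really would hold exactly twice what a two-state cell holds. A quick sketch of the arithmetic (just the capacity math, not a claim about any particular hardware):

```python
from math import log2

# Capacity of one storage cell with n distinguishable states is log2(n) bits.
for states in (2, 4, 8):
    print(f"{states} states per cell -> {log2(states):.0f} bit(s) per cell")
```

The gain is real; the catch, as noted above, is that every extra state makes each cell harder to read and write reliably, which is exactly the complexity trade-off in question.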
Most people (including psych students) think of the neural code in terms of action potentials, described as the neuron firing when it reaches a certain threshold. We've known this was wrong since the 40s, but we didn't know how wrong until more recently. We now know how strong the tendency for neurons to synchronize is (so strong that it can happen in brain slices preserved outside the brain, even without synaptic transmission/signals). We also know that too much synchrony makes brain function impossible, because the brain would almost act as a single neuron. So now, rather than trying to explain the neural "code" in terms of single spikes, we believe brains make use of
all of the following (i.e., each of these constitutes a meaningful "unit" of information):
1) The ways in which the rate of firing changes over time
2) The rate itself
3) The correlations between the activity of neurons
4) The ability for neural populations to synchronize
5) Various neurophysiological properties that allow neurons to desynchronize under certain conditions (which aren't well understood), rather than synchronize as readily as they otherwise appear to
6) The size of a synchronized neural assembly
7) The frequency range of synchronized activity at local and nonlocal scales
It quickly becomes very difficult to defend any notion of something like a "bit" in the brain when we know, at the very least, that if there is some minimal unit, it is constantly changing in a number of different ways. This is why there has been an increased emphasis not on the neural code or neural firing, but on functional connectivity. What kinds of conditions create what kinds of neural assemblies, which (again, under what conditions) may synchronize with other (nonlocal) assemblies? The question is then not about code or bits, but about how information is represented by patterns of coordinated neural activity distributed throughout the brain.
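As a loose illustration of point 4, even a deliberately crude toy model shows how strong the pull toward synchrony can be among coupled oscillators. The sketch below uses the classic Kuramoto model, which is a mathematical cartoon and not a claim about how real neurons work; the point is only that synchrony emerges from coupling alone, which is part of why the brain's ability to resist it (point 5) is so interesting:

```python
import numpy as np

# Kuramoto model: N oscillators with random natural frequencies, each
# pulled toward the population's mean phase with coupling strength K.
rng = np.random.default_rng(0)
N, K, dt, steps = 100, 3.0, 0.01, 2000
omega = rng.normal(0.0, 1.0, N)           # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)    # initial phases

def order_parameter(theta):
    """r = 1 means perfect synchrony; r near 0 means none."""
    return abs(np.mean(np.exp(1j * theta)))

print(f"before coupling: r = {order_parameter(theta):.2f}")  # near 0
for _ in range(steps):
    z = np.mean(np.exp(1j * theta))       # mean field of the population
    theta += dt * (omega + K * abs(z) * np.sin(np.angle(z) - theta))
print(f"after coupling:  r = {order_parameter(theta):.2f}")  # climbs toward 1
```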
With computers, we know exactly how information is represented: through individual binary states, each always in the same place, and each always in exactly one of two states. By contrast, both the firing and the non-firing states of neurons contribute to whatever the minimal units of information are, and neither alone is that minimal unit ("minimal unit" is really a misnomer here anyway).
Computers were designed explicitly to be organized. Every bit has a place and one of two states, and every step of every program corresponds to specific changes in specific bit states. They were designed for control and to work very precisely with very precise rules. That's why we can make computers so great at chess. We're dealing with something which can easily be broken down into rules. Where we have clear rules, we have math, and thus we can get a computational device to implement the rules.
Now we are increasingly using computers in ways they were never intended to be used. We employ fuzzy logic which violates the logic built into the computer itself. We create mathematical learning models which don't have explicit rules, in that the only rules are how the computer will, in general, adapt over time to input. And so on. But there is a problem. The artificial neural networks we program do not do what actual neural networks do. They can't, because neurons do not have well-defined states, and neural networks do not consist of collections of discrete states. That is fundamentally what computers are: collections of discrete, binary states. So we take a simplified mathematical model of actual neural networks and imitate it on a machine designed from the ground up to work differently.
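To give a sense of just how simplified that model is, here is the entire "neuron" of a typical artificial network: a weighted sum passed through a fixed squashing function, with "learning" amounting to nudging the weights. This is the standard textbook unit sketched from scratch, not taken from any particular library, and it has none of the rate, timing, or synchrony properties listed earlier:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """An artificial 'neuron': weighted sum -> sigmoid squashing.
    Its entire state is a handful of fixed numbers; there is no
    timing, no firing rate, no synchrony."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Learning" is just one rule: nudge the weights to reduce error.
# Below, a single gradient-descent step on a squared error (toy numbers).
x, target = [0.5, -1.0, 0.25], 1.0
w, b, lr = [0.1, 0.4, -0.2], 0.0, 0.5

y = artificial_neuron(x, w, b)
grad = (y - target) * y * (1 - y)   # d(error)/dz for squared error + sigmoid
w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
b -= lr * grad
print(f"before: {y:.3f}  after one step: {artificial_neuron(x, w, b):.3f}")
```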
Again, I'm not saying this is a bad thing. From both a theoretical and applied perspective (i.e., in terms of understanding something like the brain as well as coming up with programs that recognize faces or recommend books for you automatically), we've done a lot. But everything we've done ultimately goes back to logic gates and binary states, and thus to pure computation rather than comprehension. It may be that the architecture cannot do what brains do. It may be that it can, but we need to understand how associative learning works in a way we aren't even close to yet. Whatever it is that is missing, it seems clear that it is a qualitative difference.
That's interesting; I was wondering about that. It is certainly amazing that it is able to act as one unit. That many connections would be equivalent to a million computers, each with as many cores as we can fit, all able to communicate as one system.
Keep in mind also that a single neuron (e.g., a Purkinje cell) can be connected to well over 100,000 other neurons. That is, a single neuron is constantly receiving input from over 100,000 other neurons.
I think how complicated it is stems from the way it evolved. It's not the most efficient way, but it certainly works and is redundant enough.
"Certainly works"? Computers are extremely efficient. That's why it's so hard to get them to do what slugs and plants can do as far as learning is concerned. The complexity of the brain is what makes mammals and in particular humans capable of doing what no other known system can do.
We learn based on rules too. We create concepts, and things like numbers and letters go to a certain part of the brain. If the neurons were not being decoded, the syntax and coding would be pure gibberish.
First, concepts are not rules, and the problem is we don't know how something like a computer, which can only work with rules, can learn concepts. We don't even know how we do. Second, "things like numbers and letters" do not "go to a certain part of the brain". This is what makes fMRI analysis such a challenge. A number or word or similar stimulus will increase activity in multiple places in the brain, and will do so differently each time in ways we don't understand. So if I want to say that certain regions are involved in, say, processing action words/verbs, I have to show that these regions are significantly more activated compared to some control, and further that the other regions which will also be significantly more activated are not somehow the "core" of whatever action word the subject is exposed to. This is a central area of debate in the cognitive sciences, because one group maintains that concepts like action words are represented apart from the motor and sensory systems of the brain, so they explain the activation in these regions during experimentation as indicative of something else. The other group maintains that cognition is fundamentally embodied, and that part of the representation of concepts like action verbs, and even abstract notions like hope, makes use of sensorimotor regions of the brain.
What remains true either way, however, is that we can't point to some place in your brain where the concept of "1" exists. It is somehow represented across distributed and changing neural assemblies. At best we can say there are regions which are likely to be involved in representing the concept "1".
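For what it's worth, the statistical reasoning described above can be caricatured in a few lines. This is a drastically simplified toy with made-up numbers: one hypothetical region, a plain two-sample t-test, and none of the multiple-comparison corrections real fMRI analysis requires:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_trials = 40

# Hypothetical per-trial activation of one region of interest,
# under an action-word condition vs. a control condition.
action_word = rng.normal(1.7, 1.0, n_trials)   # made-up effect
control = rng.normal(1.0, 1.0, n_trials)       # made-up baseline

t, p = ttest_ind(action_word, control)
print(f"t = {t:.2f}, p = {p:.4f}")
# Even a significant p only says this region was MORE active than the
# control condition -- not that it is where the concept "lives", and
# other regions will pass the same test on the same scans.
```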
Watson does far more than a calculator can do.
That's because it's a bigger calculator.
Watson did instantaneously what some of us couldn't do with an hour of Google time.
How fast could you solve the following equation:
132,674.65754 * 13.1^8 / 0.234 = x?
That's a very simple equation; it's straightforward arithmetic. The rules are simple, but the calculations are difficult for us because we aren't calculators. Computers are. Nobody should be impressed that Watson could find the answers; that wasn't the challenging part. Computers are great at storing data and accessing it. The challenging part was getting Watson to parse the question and sort through an enormous database, one which had to be specially made so that Watson didn't have to understand words or language to calculate an answer. It's challenging because we had to turn language into something it isn't: math.
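To underline the point, the entire "problem" above is a single expression for a machine, evaluated in well under a microsecond:

```python
# The "hard" question above, as one line of arithmetic.
x = 132_674.65754 * 13.1**8 / 0.234
print(f"{x:,.2f}")   # roughly 4.9e14
```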
I believe the techniques are similar to the way the brain's memory works, since memory is really a collection of bits put together like a puzzle.
If that were true, then whenever someone asks you a question like "how's it going", you would root through an enormous database, find a bunch of possible matches that correspond to what the question might mean, calculate the probabilities that each is what the question means, select the most probable, and then return a programmed answer.
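For illustration, the kind of pipeline being described looks roughly like this. The three-entry "database" and the overlap score are invented for the sketch; it resembles Watson's actual architecture about as little as it resembles a brain, which is the point:

```python
# Toy question-answering pipeline: retrieve candidates, score them,
# return the answer attached to the most probable match.
DATABASE = {
    "how's it going": "Fine, thanks!",
    "what time is it": "I don't have a clock.",
    "what's your name": "I'm a toy program.",
}

def tokens(text):
    return {word.strip("?!.,") for word in text.lower().split()}

def overlap_score(question, key):
    """Crude stand-in for a probability: fraction of shared words."""
    q, k = tokens(question), tokens(key)
    return len(q & k) / len(q | k)

def answer(question):
    # Score every stored question; return the best match's canned reply.
    best = max(DATABASE, key=lambda key: overlap_score(question, key))
    return DATABASE[best]

print(answer("How's it going?"))   # matched by token overlap, not meaning
```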