Memory just holds the data in whatever form can be meaningful for a particular language. The processing is doing something with that data.
This is a conceptual distinction. With a computer, we can certainly contrast RAM and the CPU with this approach in a meaningful way, but what about the brain? Where is data "held" such that something else "processes" it (and what is that something else)? You have a constantly active brain, with memory distributed throughout, and no clear way to separate "data" from "data processing". In fact, it seems likely that any such distinction is at least largely arbitrary, and probably fundamentally flawed.
Essentially that's what cells do. They communicate.
You misunderstood. I meant "yes, I am saying that neither groups nor individual neurons speak a language." And the fact that cells "communicate" doesn't say much at all about how memory, processing, and cognition in general happen. All cells communicate, many as part of a larger system of interconnected cells, but very few such systems allow associative learning/conceptual processing. In fact, at the moment only brains do. Neurons communicate, as do all cells, but most neural (nervous) systems are incapable of associative learning. Poking a sea slug causes neural communication and non-associative learning, but this will never turn into something even rats are capable of: internal representation of abstractions/concepts.
It is meaningful because a neuron is quite complex: all sorts of spike patterns, and the ability to connect to 100,000 other neurons simultaneously.
Weather is really complex. A swinging pendulum is really complex. Single-celled organisms are really, really complex. But complexity doesn't make something a computer. Cloud dynamics are a very complex result of lots of connected processes and systems. But clouds are not computers.
Fundamentally, learning is a process that animals share. The difference with humans doesn't change that.
No. But the difference between the types of learning certain animals are capable of, and certain others are not, makes all the difference in the world. The most important, and most basic, distinction here is associative vs. non-associative. Some animals can do both, but most can do only the second (non-associative). It is this kind of learning (learning without understanding or awareness) that computers are currently capable of.
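To make the distinction concrete, here is a toy Python sketch, entirely my own illustration: the class names, decay rate, and update rule are invented for the example, not a model of any real organism. Non-associative learning, like habituation, just dampens a response to one repeated stimulus; associative learning links a neutral cue to a stimulus, along the lines of a Rescorla-Wagner-style update.

```python
# Toy contrast between the two kinds of learning. All names and numbers
# here are invented for illustration.

class Habituator:
    """Non-associative: the response to a repeated poke simply decays."""
    def __init__(self):
        self.response = 1.0

    def poke(self):
        self.response *= 0.7  # same stimulus, progressively weaker response
        return self.response

class Conditioner:
    """Associative: a cue paired with a stimulus acquires predictive value."""
    def __init__(self):
        self.association = 0.0

    def pair(self, cue_present, stimulus_present):
        if cue_present and stimulus_present:
            # Rescorla-Wagner-style step toward full association
            self.association += 0.2 * (1.0 - self.association)
        return self.association

slug = Habituator()
print([round(slug.poke(), 2) for _ in range(5)])  # response fades: nothing is linked to anything

dog = Conditioner()
print([round(dog.pair(True, True), 2) for _ in range(5)])  # the cue comes to predict the stimulus
```

The sea slug case is the first pattern: repeated poking changes the response, but no two things ever get connected. Only the second pattern builds a link between one thing and another.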
What you're wanting is for a computer to be able to go back and critique itself and its answers. I'm sure a robot would have no problem doing the type of learning you're asking for with proper programming.
That's not what I want out of a computer. Assessing states (external and internal) through procedures is nothing new. Internal representation of abstract, conceptual content, on the other hand, would be something quite new, and that's what I would like.
Data can be anything, but it certainly doesn't mean anything if you don't speak the language of pebbles, coins, or skin cells, etc.
Pebbles, coins, and skin cells don't have language.
Binary is a way to send data in terms of on/off switches. That would go for anything, really: it either happens or it doesn't.
Let's use actual language here and see if we can't find a way in which this appears to be incorrect. When I was in 2nd grade, I "read" Moby Dick. The scare quotes are there because while it is true that I read every word on every page, I understood almost none of it. In fact, it may not even be true to say I read every word: can you really say you read a word or phrase you didn't understand? You can scan the characters in the line καίτοι ἀληθές γε ὡς ἔπος εἰπεῖν οὐδὲν εἰρήκασιν, but if you can't read ancient Greek, can you really be said to have read these words? I use this example because it came up in a conversation not long ago about literature and reading, and my aunt told me that what I had done "wasn't really reading." And in most ways that matter, it wasn't. But not in all ways.
Another example: there's a joke (not PC, and offensive to many) I've come across on things like coffee mugs which reads "AA is for quitters". Is someone who quits smoking or drinking a "quitter"? If I decide to stop doing something, under what exact, binary conditions have I "quit" doing that something? For example, if I become comatose, incapable of higher cognitive functions, have I quit "thinking" or quit "talking" in the same way I can quit smoking or quit a job?
If I drive someone crazy, have I driven them anywhere? If I make someone angry, upset, feel insulted, and hurt, all through a single insult, how is this binary? And have I made them angry in the same way I might make my bed?
Things get even worse when we get to properties/features of things, which we want computers to recognize and understand. If a person is tall at 7 feet, are they not tall at 6? Is a cell which is part of an animal "alive" in the same way a single celled organism is? Is a virus alive?
Information theory, even in physics, runs into a problem rather quickly when trying to deal with language. It's all well and good to say everything is data or bits, but unless you have some method of explaining how bits get turned into concepts, it doesn't help that much here.
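To see both halves of this point in miniature, here's a minimal Python sketch (the message and the choice of UTF-8 are arbitrary, my own example): any text can indeed be rendered as on/off switches, but the switches only become text again because both sides agreed on a scheme in advance, and nothing in the bits themselves gets us from switches to concepts.

```python
# Text to on/off switches and back. The round trip works only because
# encoding and decoding share the same agreed-upon scheme (UTF-8 here).
message = "car"
bits = "".join(format(byte, "08b") for byte in message.encode("utf-8"))
print(bits)  # 011000110110000101110010

decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")
print(decoded)  # "car" -- recovered only because we fixed the encoding up front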
You're just speaking to the method. Encoded in the neuron(s) is a picture of your car as you remember it. There isn't a JPEG in there!
The method is what matters. Simply defining "learning" as processing bits doesn't get us very far. It doesn't tell us what makes humans capable of the things computers can't do but that we want them to be able to do.
The neurons are communicating something, be it a car, truck, or plane.
To whom or what are they communicating it?
Because we take our knowledge for granted.
This is true, but in quite the opposite way you think. Before computers were around, when the Turing machine was only theoretical, the mere idea of automated computation already prompted speculations about A.I. Once we had computers, A.I. seemed a sure thing, only a few years away. Until people realized how extremely complex all the things they took for granted were. Evidence suggests that newborn infants can recognize their mothers' faces. Learning novel concepts, recognizing objects, etc., comes so easily to us that it wasn't until we tried to get computers to do these things that we realized how hard they really are. It's hard because computers deal only with rules, with procedures. Which means that in order to get them to do anything, you need an explicit procedure. But we could construct the most sophisticated learning machine on the planet using current algorithms and cutting-edge technology, expose it to enormous amounts of language use (far, far more than humans require), and it would never learn language. Because formal rules (the type computers need) involve specific procedures to deal with specific input and do specific things as a result of this input. Like an electric drill, or a jackhammer, or even a crank-operated generator, computers are machines that deal with doing and nothing else. I can make a car do all sorts of nifty stuff (well, maybe I can't, but stunt drivers and mechanics can). I can make computers do this too. But it's all about making the machine take the input, mindlessly do whatever it is I constructed it to do, and return some result.
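A caricature of that rule-bound "doing" in Python (my own toy example; the RULES table and interpret function are invented purely for illustration): the procedure handles exactly the inputs it was written for and nothing else, which is why the "quit thinking" cases above cause trouble.

```python
# An explicit procedure: it "does" precisely what its rules cover, no more.
RULES = {
    "quit smoking": "stopped an activity",
    "quit a job": "left a position",
}

def interpret(phrase):
    # No rule, no result: the machine has no way to extend "quit" to a new use.
    return RULES.get(phrase, "no rule for this input")

print(interpret("quit smoking"))   # stopped an activity
print(interpret("quit thinking"))  # no rule for this input
```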
When we learn language, we don't just learn rules. We learn words & phrases. We even learn words that mean one thing in one context and something totally different in another. Even before Chomsky, philosophers of language were trying to treat language in terms of formal logic. And for decades, linguists have tried to reduce language to rules manipulating words (or atomic units of language), such that they could generate any grammatical sentence from only the meaning of those words (or atomic units) and rules. They couldn't. Because even allowing for word meaning isn't enough. In human languages, grammar (the rules) is often meaningful as well, perhaps always meaningful. So how do we get a machine to understand what a language means using only meaningless procedures/rules?
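As a sketch of the kind of programme being described (the grammar and lexicon here are mine and purely illustrative, not any linguist's actual system): a handful of rules over atomic units will happily generate well-formed strings, while the machine running them understands none of it.

```python
# A toy context-free grammar: rules rewrite symbols into words with no
# regard for meaning, producing grammatical but meaning-free sentences.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "Adj", "N"], ["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {
    "Det": ["the", "a"],
    "Adj": ["colorless", "green"],
    "N":   ["ideas", "whale", "pebble"],
    "V":   ["sleep", "reads", "quits"],
}

def generate(symbol="S"):
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))  # e.g. "the colorless ideas sleep" -- well-formed, understood by nothing
```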
It takes us years to really get language; a machine should be allowed the same if you want to make the comparison, and for language to be truly meaningful to a robot.
Humans take years because (among other things) it takes a certain amount of time to obtain the necessary exposure to language. That's not true for computers. With my learning algorithms in place, I get the computer to "experience" in a few minutes the exposure to language a human might not get in a lifetime.
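For a rough sense of what that compressed "experience" looks like, here's a toy stand-in (not the actual learning algorithms I have in mind; the corpus stream and co-occurrence counting are placeholders): even a trivial learner can be fed more word-pair exposures in one sitting than a person hears in years.

```python
# A trivial learner churning through a million "exposures" in one run.
from collections import Counter
from itertools import islice

def sentences():
    # Stand-in for a real corpus stream; imagine millions of distinct lines here.
    while True:
        yield "the cat sat on the mat".split()

cooccur = Counter()
for sent in islice(sentences(), 1_000_000):  # a million sentences in minutes
    for i, word in enumerate(sent[:-1]):
        cooccur[(word, sent[i + 1])] += 1    # count adjacent word pairs

print(cooccur.most_common(3))
```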
Babies don't just learn language overnight. It takes years of hearing people explain things and associating objects with words.
That's because it takes years of exposure and trial and error for children to get the necessary experience with language. It doesn't for a computer, because I can run thousands of trials and provide exposure to enormous amounts of language in under a day.
Babies also go through a period where facial recognition isn't possible and they can't really tell people apart.
It appears that from birth they may be able to recognize their mothers.
The brain works very hard to sort all this out for us, as any processing requires. Yes, it takes a sophisticated program, no doubt.
The issue isn't whether it takes a sophisticated program, but rather what type of sophistication it takes.