I know they don't process the same. They weren't built to process the same. They are however both processing something from memory.
How are you distinguishing processing from memory?
You're just saying that it isn't a single neuron but a group of neurons. Does this mean the group isn't speaking a language?
Yes.
Yes, one single neuron is like a computer.
It isn't. Not in any meaningful way.
But it takes us years to learn this stuff, so don't take it for granted. Give the computer the years of learning experience a human requires to learn language, and then you can compare.
Done. It doesn't work. Why? Because current machine learning is fundamentally different from a type of learning humans (among other animals) can do.
The neurons just send a code that has to be decrypted.
Signals & information do not necessarily a code or language make. I can describe pebbles, coins, skin cells, etc., in terms of information. But a coin only becomes a binary unit of information if it is flipped and some interpreter is there to interpret the resulting state. Neural networks certainly involve signals. But it is misleading (especially if one is not familiar with how the brain works) to refer to this as code or as language.
Within any given network there are tons of languages passing through, all as ones and zeros.
This is only one particular type of "language": binary. It is certainly not the only type of information, let alone language, there is. And it does not seem to conform to the fundamental structure of reality. Either way, to assume all codes or languages are 0s and 1s is very problematic. It forces you to reduce all signals to a sequence of equally likely binary states.
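To put that last point in information-theoretic terms: a binary symbol carries a full bit of information only when its two states are equally likely. A quick sketch (the `entropy` helper is just the standard Shannon formula, written out here for illustration):

```python
import math

def entropy(p):
    # Shannon entropy (in bits) of a binary source with P(1) = p.
    # The degenerate cases p = 0 and p = 1 carry no information at all.
    if p in (0, 1):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(entropy(0.5))   # fair coin: 1.0 bit per symbol
print(entropy(0.9))   # biased source: about 0.47 bits per symbol
```

So even within a purely binary framework, "a sequence of equally likely binary states" is a special case, not the general shape of a signal.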
Same must be true for neurons.
Why? It isn't true of qubits, which can be in a superposition of both |0> and |1> simultaneously.
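For what it's worth, the superposition point can be sketched in a few lines of plain Python (no quantum library, just the textbook amplitude arithmetic):

```python
import math

# A qubit's state is a pair of complex amplitudes, not a single bit:
# |psi> = alpha*|0> + beta*|1>. Here, the equal superposition.
alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes,
# and they must sum to 1 (the state is normalized).
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(p0, p1)  # both outcomes equally likely for this state
assert math.isclose(p0 + p1, 1.0)
```

The state itself is neither 0 nor 1 until measured; only the measurement outcome is binary.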
When a person remembers what their car looks like, what is it that is happening? First the person had to see their car and store it in memory. When trying to remember the car, the code from the neurons must be translated into something meaningful for our consciousness. There is a language being used that has to be decoded.
Again, this separates memory, code, and processing. One central reason behind the ability of humans to understand concepts is probably that this separation doesn't happen. When you remember your car, you aren't required to recall any single (or multiple) particular memory. In fact, you can picture your car without recalling any actual experience with it. More importantly, what "code from the neurons" allows you to do this?
We've proved that our ability to think at high levels isn't as far out as we thought.
We haven't. In fact, there are proofs that a computer is incapable of processing concepts. These aren't by any means universally accepted, but then again the Church-Turing thesis is not without critics.
What we've done is find out that things we took for granted as "simple" are much more complicated than we thought (which was why as soon as computers were developed A.I. was only years away; and it remained "only years away" for decades).
Everyone should agree that the brain is the most powerful machine we know of in the universe. How is it more than a machine?
But there is a difference between a machine and a computer. Certainly, brains are in some sense machines, and they do compute. But this does not make them computers. It does not mean that all thought can be a matter of computation. And if it were true that all thought is computation, then this could be proved.
Yeah, it's all about math when babies are learning it too, as their brains map out proper configurations.
How so?
I'd say "how's what going".
It takes experience to learn those meanings, so why hold that against a computer?
Because it doesn't take the same kind of experience, nor would an infinite amount of experience using current learning algorithms somehow make a computer go from meaningless number crunching to A.I. That's sci-fi ("They say it got smart. A new order of intelligence"). When someone asks you "how's it going" you don't need to root through a database filled with all of your experiences, sorting through and calculating probabilities just to figure out what the question means.
More importantly, you can readily process novel concepts and abstract from specifics. For example, imagine I had no idea what a kangaroo was. I'm out exploring Australia with a guy named Dundee (also known as Crocodile Dundee), and I see this weird-looking giant rabbit-like creature hopping around. I ask Dundee, "What's that?" He says it's a kangaroo. A few hours later, I see another similar-looking giant rabbit-like creature. But I don't ask this time. Because even though this isn't the same kangaroo I saw before, I now have a new category, "kangaroo", which I can readily extend to specific kangaroos.
By contrast, if I had a sophisticated program hooked up to a camera, and fed the program a picture of the first kangaroo, not only could it not recognize the second kangaroo as a kangaroo, it couldn't even recognize the same kangaroo from a different angle. It would take multiple training sessions to get the computer to distinguish "kangaroo" from "non-kangaroo". And if one kangaroo happened to have a baby kangaroo in its pouch, or if I showed it a wallaby, all that learning might be completely destroyed and the computer may be back at square one. This is because, unlike humans, computers do not represent abstractions (like the concept "kangaroo"), so in order to get a computer to distinguish a kangaroo from other animals, I need to repeatedly expose it to kangaroos and to other animals so that over time it adapts (like the sea snail) and pairs particular features such that given input with these features, it will output "kangaroo", and given input without, it will output "not kangaroo". But once I introduce a wallaby, which has many of the same features, the computer may output "kangaroo". And when I input "wrong", the pattern of features by which the computer distinguishes "kangaroo" from "not kangaroo" may shift so radically that it can't distinguish kangaroos anymore.
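That failure mode can be sketched with a toy perceptron. Everything here is made up for illustration: the binary features, the animals, and the single-layer learner stand in for a real vision system, but the interference effect is genuine:

```python
def predict(w, b, x):
    # Classic perceptron rule: answer "kangaroo" (1) if the weighted sum is positive.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def update(w, b, x, y, lr=1.0):
    # Standard perceptron update: on error, shift the weights toward (or away from) x.
    err = y - predict(w, b, x)
    return [wi + lr * err * xi for wi, xi in zip(w, x)], b + lr * err

# Hypothetical binary features: [has_pouch, hops, is_large]
kangaroo = [1, 1, 1]
dog      = [0, 0, 1]
wallaby  = [1, 1, 0]   # shares the pouch and the hopping

# Train on kangaroo-vs-dog until the toy data is learned.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(10):
    for x, y in [(kangaroo, 1), (dog, 0)]:
        w, b = update(w, b, x, y)

print(predict(w, b, kangaroo), predict(w, b, dog))  # learned: 1 0

# One corrective update on the wallaby ("wrong, not a kangaroo")...
w2, b2 = update(w, b, wallaby, 0)

# ...and the kangaroo itself is now misclassified: the shared features
# (pouch, hopping) were exactly what the learner had come to rely on.
print(predict(w2, b2, kangaroo))  # 0
```

A human who learns "wallabies aren't kangaroos" doesn't thereby lose the concept "kangaroo"; this toy learner, which only stores a pattern of feature weights, does.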