
Humans are like robots. Choice is determined.

LegionOnomaMoi

Veteran Member
Premium Member
You still aren't getting the idea... the reward is emotional gratification generated by the limbic system or the sensory system; it's irrelevant whether there's a food cache or not!
And until you show me an empirically based model of these systems and how they produce what can be verified independently of context, outside of the neurophysiological processes which make them up, and then demonstrate how this model explains what we actually know about the brain (especially where it contradicts our current understanding), you haven't generated anything which is useful for understanding choices, consciousness, etc.

Basically, you've stated that certain systems are involved in processes (those that create "emotional gratification" states) which you claim all choices can be understood in terms of. You haven't pointed to any evidence that such processes exist in any meaningful way, nor shown how our understanding of the limbic or sensory systems makes your model plausible. You've simply created a vague model, asserted that it is somehow produced by certain parts of the brain, and then declared all choices to be understood by it. As stated, your theory cannot be falsified, because everything can count as verification. That's not science.

The point here is that mammal brains are wired with sensory and limbic signaling that produce states that are positive or negative.

And how can we determine (just by looking at brain activity) that
1) Seeing certain activity means a positive state, while seeing other activity means a negative state
& more importantly
2) How this means that it is these states to which all choices are reducible.

The idea extends to Elman's eta (η) idea that sensory systems are genetically coded to sensitize to the environment and preprocess information in a way that allows neocortical processes to digest or learn it.
Elman's η idea about genetic coding? Jeffrey Elman? The guy who co-authored Rethinking Innateness? Here's an article by him: "Learning and development in neural networks: the importance of starting small".

He begins the entire paper (not the abstract) with "Humans differ from other species along many dimensions, but two are particularly noteworthy. Humans display an exceptional capacity to learn; and humans are remarkable for the unusually long time it takes to reach maturity. The adaptive advantage of learning is clear, and it may be argued that, through culture, learning has created the bases for a non-genetically based transmission of behaviors which may accelerate the evolution of our species."

All this time you've talked about reinforcement, and it was simply based on a learning parameter in one type of artificial neural network?
 
Last edited:

idav

Being
Premium Member
It's a completely different one.

The core has to be the same and is.

Because it's all well and good to say that the brain encodes "bits", but if you can't show me anything in the brain which corresponds in general to a bit (and at the moment, no one can), then it doesn't mean much.
Memory is what you're looking for. Storage and retrieval. A bit is merely an amount of data. Doesn't the brain hold data?

It's not that neurons are more complicated. It's that the neural "code" is. And it is a qualitative change. Think of it this way: why does your computer use a binary storage system, in which the minimal unit can only have 2 states? We've been storing data as only 0s and 1s for ages. Why did nobody ever think "wait, we can double our storage capacity just by using four states instead of two!"? The answer is that this would be a qualitative change, and it would increase complexity and problems to the point where it wouldn't be worth it.
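To make the arithmetic of that trade-off concrete, here is a toy sketch (nothing here is specific to any real hardware; the state counts are just examples):

```python
import math

# Information carried by one storage cell with k distinguishable states is log2(k) bits.
for states in (2, 4, 8):
    print(f"a {states}-state cell stores {math.log2(states):.0f} bit(s)")

# Each doubling of the state count adds only one more bit per cell, while the
# difficulty of reliably telling the states apart keeps growing.
```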
No doubt, but quantum computing doesn't change the essential nature of memory and retrieval. A quantum bit means far more storage and complexity without needing as much space and power, which would make it 100 times easier to get to the level of processing and networking the human mind does. Nowhere are you describing a system that doesn't require data, retrieval, and representation of that data.

Most people (including psych students) think of the neural code in terms of action potentials, which are described as the neuron firing when it reaches a certain threshold. We've known this was wrong since the 40s. But we didn't know how wrong until more recently. We now know how strong the tendency for neurons to synchronize is (so strong that it can happen in brain slices preserved outside the brain, and even without synaptic transmission/signals). We also know that too much synchrony makes brain function impossible because the brain would almost be one neuron. So now, rather than trying to explain the neural "code" in terms of single spikes, we believe brains make use of all of the following (i.e., these all constitute meaningful "units" of information):
Again with the complexity. This doesn't change that neurons speak a language that needs to be decoded. Whatever the neurons are "saying" represents data that has been stored through experience.

1) The ways in which the rate of firing changes over time
2) The rate itself
3) The correlations between the activity of neurons
4) The ability for neural populations to synchronize
5) Various neurophysiological properties that allow neurons to desynchronize given certain conditions (which aren't very clear) rather than do what they appear to do so readily
6) The size of a synchronized neural assembly
7) The frequency range at local and nonlocal scales

It quickly becomes very difficult to defend any notion of something like a "bit" in the brain when we know at least that if there is some minimal unit, it is constantly changing in a number of different ways. Which is why there has been an increased emphasis not on the neural code or neural firing, but on functional connectivity. What kind of conditions create what kind of neural assemblies which (again, under what conditions) may synchronize with other (nonlocal) assemblies? The question is then not about code or bits, but more about how information is represented by patterns of coordinated neural activity distributed throughout the brain.
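To make a couple of the items in that list concrete, here is a toy sketch with invented spike trains; it only illustrates items 2 and 3, a firing rate and a pairwise correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented spike trains over 1000 one-millisecond bins (1 = spike, 0 = no spike).
neuron_a = rng.binomial(1, 0.05, size=1000)                          # roughly 50 Hz
neuron_b = np.roll(neuron_a, 2) | rng.binomial(1, 0.01, size=1000)   # loosely follows A

rate_a = neuron_a.sum() / 1.0                      # spikes per second over the 1 s window
corr_ab = np.corrcoef(neuron_a, neuron_b)[0, 1]    # correlation between the two trains

print(f"firing rate of A: {rate_a:.0f} Hz, correlation A-B: {corr_ab:.2f}")
```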
It isn't hard. Redundancy is overrated here. Data is data no matter how it is coded or by what means.

Now we are increasingly using computers in ways they were never intended to be used. We employ fuzzy logic which violates the logic built into the computer itself. We create mathematical learning models which don't have explicit rules, in that the only rules are how the computer will, in general, adapt over time to input. And so on. But there is a problem. The artificial neural networks we program do not do what actual neural networks do. They can't. Because neurons do not have well-defined states, and neural networks do not consist of collections of discrete states. That is fundamentally what computers are: collections of discrete, binary states. So we take a simplified mathematical model of actual neural networks, and imitate this on a machine designed from the ground up to work differently.
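As a minimal illustration of how fuzzy logic departs from the two-valued logic built into the hardware (the membership degrees below are invented):

```python
# Boolean logic: a proposition is either true or false.
tall_bool, strong_bool = True, False
print(tall_bool and strong_bool)        # False, full stop

# Fuzzy logic: truth comes in degrees, and AND is commonly taken as the minimum.
tall_fuzzy, strong_fuzzy = 0.7, 0.4     # "fairly tall", "somewhat strong"
print(min(tall_fuzzy, strong_fuzzy))    # 0.4: partially true
```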
This certainly doesn't mean the neurons are speaking gibberish. Do they store data or not?
Again, I'm not saying this is a bad thing. From both a theoretical and applied perspective (i.e., in terms of understanding something like the brain as well as coming up with programs that recognize faces or recommend books for you automatically) we've done a lot. But everything we've done ultimately goes back to logic gates and binary states, and thus to pure computation rather than comprehension. It may be that the architecture cannot do what brains do. It may be that it can, but we need to understand how associative learning works in a way we aren't even close to yet. Whatever it is that is missing, it seems clear that it is a qualitative difference.
Advanced forms of computation are comprehension. No doubt we will have a hard time proving machines are aware, but this is a philosophical problem. Awareness is possible in many forms but hard to prove without asking the organism.

"Certainly works"? Computers are extremely efficient. That's why it's so hard to get them to do what slugs and plants can do as far as learning is concerned. The complexity of the brain is what makes mammals and in particular humans capable of doing what no other known system can do.
So we don't make machines inefficient like lifeforms. Yes the complexity makes it possible but this hardly separates us from machines.



First, concepts are not rules, and the problem is we don't know how something like a computer, which has only the ability to work with rules, can learn concepts. We don't know how we do. Second, "things like numbers and letters" do not "go to a certain part of the brain". This is what makes fMRI analysis such a challenge. A number or word or similar stimulus will increase activity in multiple places in the brain, and will change each time in ways we don't understand. So if I want to say that certain regions are involved in, say, processing action words/verbs, I have to show that these regions are significantly more activated compared to some control, and further that the other regions which will also be significantly more activated are not somehow the "core" of whatever action word the subject is exposed to. This is a central area of debate in the cognitive sciences, because one group maintains that concepts like action words are represented apart from the motor and sensory systems of the brain. So they explain the activation in these regions during experimentation as indicative of something else. The other group maintains that cognition is fundamentally embodied, and that part of the representation of concepts like action verbs and even abstract notions like hope makes use of sensorimotor regions of the brain.
This sounds like a philosophical debate. Representations of concepts in the brain are as much computations as anything else the brain does. That is what the brain does: processing based on stimuli.
What remains true either way, however, is that we can't point to some place in your brain where the concept of "1" exists. It is somehow represented across distributed and changing neural assemblies. At best we can say there are regions which are likely to be involved in representing the concept "1".
Maybe, but 1 isn't just some subjective concept in the brain. The brain remembers it based on representations of real-life events. We correlate to make it possible and the same is possible for any machine.

That's because it's a bigger calculator.
Much more than that.

How fast could you solve the following equation:

132,674.65754 * 13.1^8 / 0.234 = x?

That's a very simple equation. It's straightforward arithmetic. The rules are simple, but the calculations are difficult for us because we aren't calculators. Computers are. Nobody should be impressed that Watson could find the answers. That wasn't the challenging part. Computers are great at storing data and accessing it. The challenging part was getting Watson to parse the question and sort through an enormous database which had to be specially made so that Watson didn't have to understand words or language to calculate an answer. It's challenging because we had to turn language into something it isn't: math.
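For what it's worth, here is the same equation handed to a machine (one line of Python, evaluated exactly as written):

```python
# The arithmetic above: instant for a computer, tedious by hand.
x = 132_674.65754 * 13.1**8 / 0.234
print(f"{x:.3e}")   # prints roughly 4.9e+14
```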
As I said, Watson knew art and how to analyze it against information and an ambiguous questioning scheme. This cannot be done by a calculator; it would need something close to facial recognition technology.

If that were true, then whenever someone asks you a question like "how's it going", you would root through an enormous database, find a bunch of possible matches that correspond to what the question might mean, calculate the probabilities that each is what the question means, select the most probable, and then return a programmed answer.
How is this not similar to human memory retrieval? Humans do the exact same thing: when a word is said, dozens of representations of that word come to our conscious or subconscious to affect what we envision. We pick the most accurate representation based on context, just like Watson. This also happens to be what makes us so imaginative; imagination in the brain really turns out to be reconstruction of already-known objects.
 

atanu

Member
Premium Member
The computer.

In that case, you could name it more meaningfully. The term 'computer' is useless. I will say that Brahman suits perfectly, since the word means what you ascribe to a computer.

The only problem is that a human will be creating Brahman --and that is too much ego.
 

Copernicus

Industrial Strength Linguist
A few reasons why humans are not robots.

-Humans are capable of thinking for themselves.
So are all creatures with brains, and so are robots. That is why we describe robots as autonomous machines. Now I grant you that there are a lot of ways to construe the meaning of "think for themselves", but you didn't specify what you meant by that expression.

-Humans are capable of surviving without being programmed to survive (that is, if you believe in evolution).
Again, you don't really define what you mean by "programmed". Humans (and, let's not forget, animals) adapt to their environments through a combination of innate "programming" and learned experience. Robots already do that on a more primitive level, and there is every reason to believe that it is possible to design machines to "program" themselves through machine learning techniques. Machine learning is an important branch of AI.

-Humans can have sex to reproduce, robots can’t
This is a bizarre criterion, but I guess we know what's on your mind now. ;) Machines can be designed to self-replicate. They can also be designed to have fun while self-replicating. Why not?

I disagree with the OP and think it's just a bunch of nonsense to compare humans to computers or robots. Which gets into another subject altogether. I don't view or think computers should be classified as robots. Robots are capable of interacting with the world around them. Computers, not so much.
I don't mind you disagreeing with me. That's kind of the point of posting in a debate forum--to get positive and negative feedback. But where did you get the idea that I was equating computers to robots? I was equating humans and robots. My position is that humans are essentially robots made out of biological materials created by the replication and complex folding of complex DNA molecules. The same is true of all creatures with brains. Brains are essentially guidance systems for bodies that move.

It's not hard to follow. Just remember humans are not robots and Copernicus wants everyone to think humans are programmed to act and think a certain way.
I don't really want to make everybody think that. My ambitions really go no further than the handful of readers following this thread. :) I am advocating the view that humans are essentially robots. It is truly ironic in these continual debates on determinism that people seem so repelled by the thought that they might be nothing more than robots--i.e. robots horrified at the thought of being thought robots.
 
Last edited:

Copernicus

Industrial Strength Linguist
And they want to say that though all are programmed robots, they are special -- they are privy to special knowledge.
Where did you get this special knowledge about what they want? I'm guessing telepathy. :sarcastic
 

uberrobonomicon4000

Active Member
So are all creatures with brains, and so are robots. That is why we describe robots as autonomous machines. Now I grant you that there are a lot of ways to construe the meaning of "think for themselves", but you didn't specify what you meant by that expression.
No, I'm sorry, robots are not capable of thinking for themselves. Autonomous just means they are capable of learning how to interact with their environment.
Either you think humans are capable of thinking for themselves or they aren't.
Again, you don't really define what you mean by "programmed". Humans (and, let's not forget, animals) adapt to their environments through a combination of innate "programming" and learned experience. Robots already do that on a more primitive level, and there is every reason to believe that it is possible to design machines to "program" themselves through machine learning techniques. Machine learning is an important branch of AI.

You created this topic. You should revise your own terms before asking me to define them for you.

This is a bizarre criterion, but I guess we know what's on your mind now. ;) Machines can be designed to self-replicate. They can also be designed to have fun while self-replicating. Why not?
Care to give an example where machines self-replicate without human intervention?

I don't mind you disagreeing with me. That's kind of the point of posting in a debate forum--to get positive and negative feedback. But where did you get the idea that I was equating computers to robots? I was equating humans with robots. My position is that humans are essentially robots made out of biological materials created by the unfolding of complex DNA molecules. The same is true of all creatures with brains. Brains are essentially guidance systems for bodies that move.


I don't really want to make everybody think that. My ambitions really go no farther than the handful of readers following this thread. :) I am advocating the view that humans are essentially robots. It is truly ironic in these continual debates on determinism that people seem so repelled by the thought that they might be nothing more than robots--i.e. robots horrified at the thought of being thought robots.
I see your point. Humans can or may only think one way without (using the old phrase here of) thinking outside the box. But if that is the case, people would be like robots, running into walls 50 times a day until they realize x doesn't compute and they need to go a different direction.
 

LegionOnomaMoi

Veteran Member
Premium Member
I was doing some reading thanks to another thread, and happened upon an article which displays all the optimism I have criticized (and been criticized for criticizing). However, the one thing I actually found interesting about the article was the passing remarks the authors made regarding naysayers:
"Not all experts believe that the time is ripe for a return to the original goals. Craig Silverstein of Google (a company that carries out a huge amount of narrow-AI research under the supervision of AI guru Peter Norvig) recently told a reporter that such computers are “hundreds of years away”. Marc Andreesen, the founder of Netscape, said that “we are no closer to a computer that thinks like a human than we were fifty years ago”. Some AI researchers, such as Selmer Bringsjord (chair of RPI's Department of Cognitive Science) even doubt that computers will ever display humanlike intelligence. And the standard university AI curriculum continues to focus almost entirely on narrow AI, with only passing reference to the field's original grand goals."

I don't know why the authors restricted themselves to these few, as they are neither representative of those who think that A.I. is farther away from the current work than commonly thought, nor representative of the various types of naysayers (from those who argue that a soul is necessary to those who argue that we simply need better algorithms which are not likely to be created in the near future). But I have repeatedly said that we are simply doing with computers what we have always done (meaningless manipulation), only faster and with wider application. And it was interesting to hear that the founder of Netscape (not a philosopher, not a cognitive scientist, not a theologian, not a neuroscientist, but a visionary in the computer industry who is currently involved with the most popular developments/sites of web 2.0) thought exactly the same thing. Obviously, I'm not asserting this makes either of us correct (and I would bet that both of us would be happy if proved wrong, although clearly I can't speak for Andreessen). But this does at least indicate that there are those whose interests are in commerce, and who are very familiar with the cutting-edge R & D, who are at least as skeptical as I.
 

LegionOnomaMoi

Veteran Member
Premium Member
The core has to be the same and is.

As there is plenty of evidence that it is not, why say it "has to be the same"?

Memory is what you're looking for. Storage and retrieval. A bit is merely an amount of data. Doesn't the brain hold data?

A bit is more than just an amount of data. It is a formally defined minimal unit of information: The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.

This is where things like "chunking" come into play for humans. If I rattle off a string of 10 numbers, you'll be unlikely to be able to repeat it back to me. Short term memory has certain limits. However, phone numbers have 10 digits, and we can often remember them without a problem. Likewise, I could ask you to memorize a bunch of letters I am going to say (one at a time), and it wouldn't be long before there were too many letters for you to repeat back. But if I listed letters like
the-quick-brown-fox-jumped-over-the-lazy-dog, or the numbers 1-555-328-7682, you may very well be able to bypass the restrictions on short-term memory. This is because if I give you the letters "sdjkrs", each one is a unit of information (a bit). However, if the letters are "computation" then you don't process the letters in the same way at all. It becomes a word. The bit, or minimal unit, is context-dependent. In a computer, it is clearly and easily identified: it is the binary storage, the 0's and 1's.

The brain has nothing like this.
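A toy way to see the chunking point (the word list and the grouping below are invented purely for illustration):

```python
# Raw letters: each one is its own unit, so the short-term load is the letter count.
print(len("sdjkrs"), "units to hold")                    # 6

# A known word can be held as a single chunk, so the load collapses to 1.
known_words = {"computation"}                            # stand-in for a mental lexicon
item = "computation"
print(1 if item in known_words else len(item), "unit to hold")

# The same trick with the phone number: many digits, only a few chunks.
number = "1-555-328-7682"
print(len(number.replace("-", "")), "digits, but only", len(number.split("-")), "chunks")
```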

More importantly, the bits in a computer are safely secluded from not only the processor, but the software. If I am running a web browser or playing a game, the states of bits will change, but the software isn't going to suddenly create a different hard drive, or alter the way the CPU works at an architectural level. That's what the brain does. Memory isn't safely tucked away, only to be accessed by some program, implemented by some processor. There is no clear distinction between the three. Unlike computers, brains actually (and continually) change the physical make-up of the processor & memory because both are part of the "software" somehow.

No doubt but doing quantum computing doesn't change the essential nature of memory and retrieval.
That's because it doesn't involve either. And to the extent it might (or in the few basic implementations we have) it absolutely does change the essential nature.

A quantum bit
"Actual quantum computation processes are very different from those of a classical counterpart. In a classical computer, we input the data from a keyboard or other input devices and the signal is sent to the I/O port of the computer, which is then stored in the memory, then fed into the microprocessor, and the result is stored in the memory before it is printed or it is displayed on the screen. Thus information travels around the circuit. In contrast, information in quantum computation is stored in a register, first of all, and then external fields, such as oscillating magnetic fields, electric fields or laser beams are applied to produce gate operations on the register. These external fields are designed so that they produce desired gate operation, i.e., unitary matrix acting on a particular set of qubits. Therefore the information sits in the register and they are updated each time the gate operation acts on the register." (emphasis added)
pp. 64-65 of Quantum Computing: From Linear Algebra to Physical Realizations (CRC press, 2008).
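As a toy illustration of what "a unitary matrix acting on a particular set of qubits" looks like, here is the standard single-qubit Hadamard example (nothing here is taken from that book):

```python
import numpy as np

# One qubit sitting in the register, initially in state |0>.
state = np.array([1.0, 0.0], dtype=complex)

# A gate operation is just a unitary matrix applied to the register.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = hadamard @ state

print(state)               # [0.707+0j  0.707+0j], an equal superposition
print(np.abs(state) ** 2)  # measurement probabilities: [0.5  0.5]
```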


Again with the complexity. This doesn't change that neurons speak a language that needs to be decoded. Whatever the neurons are "saying" represents data that has been stored through experience.

It does indeed change that. Because it seems that for the most part, unlike the binary units of computers, neurons do not "speak" any language. Neural assemblies do. More importantly, the "data" is also the processor, and is also the software. So not only does the "code" continually change, the physical implementation device it "runs" on isn't actually separated from it, but is it.


Data is data no matter how it is coded or by what means.

This is rather fundamentally inaccurate. In a famous sentence which was designed to show the difference between syntax (rules) and semantics, Chomsky came up with "colorless green ideas sleep furiously" which obeys all the "rules" of grammar, but is nonsensical. It can be made sensical (my grandfather published a paper on this in Linguistics), but not simply by rules. Even if I could theoretically design a program which had all the rules of grammar (and, as someone who finds the cognitive linguistic framework far more sound than any generative grammar, I don't), there would still be nothing to make nonsensical but grammatically correct sentences nonsensical. The sentence is still data. I can type it and it is obviously then represented somewhere in my computer. But it doesn't make any sense. What is it about this sentence that doesn't make sense? It's not the rules, as Chomsky showed. It's meaning. To say that "data is data" is akin to saying that because we're all made up of atoms there is no difference between a plant and a rock and a computer. It ignores what makes rocks what they are, compared to plants, or compared to computers: some sort of organization unique to each which makes them more than just a bunch of atoms, but a particular, special bunch.
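A toy sketch of that syntax/semantics gap; the part-of-speech "grammar" below is a crude stand-in invented for illustration:

```python
# A crude "rule of grammar": adjective adjective noun verb adverb.
pattern = ["ADJ", "ADJ", "NOUN", "VERB", "ADV"]
tags = {"colorless": "ADJ", "green": "ADJ", "ideas": "NOUN",
        "sleep": "VERB", "furiously": "ADV"}

sentence = "colorless green ideas sleep furiously".split()
print([tags[word] for word in sentence] == pattern)   # True: the rules are satisfied

# Nothing in this check, or in any purely rule-based check, says whether the
# sentence means anything. The meaning isn't in the rules at all.
```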


This certainly doesn't mean the neurons are speaking gibberish. Do they store data or not?
No. They don't. The reason that computer programmers talk about storage, while cognitive and neuroscientists talk about "representation", has (in part) to do with the fact that the brain doesn't have a hard drive. Our memory is not some distinct, identifiable sequence of binary states, but is represented by the same "thing" that processes it. Neurons can represent conceptual structures somehow, but one key reason they can is likely that unlike data storage, the actual "hard drive" is not distinct from the processor or software. Thus to say that neurons "store data" would be like saying your web browser stores data. Only it doesn't. On a computer, whether we're talking RAM or a hard drive (or other storage), we are dealing with physical states distinct from the processor, which allows the processor to use code to change these states and implement programs. The brain doesn't have this.

Advanced forms of computation are comprehension. No doubt we will have a hard time proving machines are aware, but this is a philosophical problem.

It's not that we will have a hard time proving computers are aware. The problem is that proof involves reference to formal languages, like mathematics and logic (not that the two are really that distinct). But these are designed to negate the ambiguities of natural language, and to bypass meaning as much as possible. It is these languages which allow us to get computers to do anything. How, though, does one formally define "meaning"? Any definition necessarily depends upon meaning. So to prove that computers can't be "aware" involves using computer-like language to define meaning. If this could be done, however, then we would have a way to get computers to understand meaning. It is the fact that, after decades of work, we have found no way to prove that concepts are impossible to reach from computation, nor to prove that they are, which is perhaps most suggestive (apart from philosophical arguments or appeals to intuition) that the two are fundamentally different.


Maybe, but 1 isn't just some subjective concept in the brain. The brain remembers it based on representations of real-life events.
Not really. In fact, people who have memory disorders which prevent them from recalling "real life events" (episodic memory) can still understand language and numbers. Episodic memory may be necessary to learn concepts, in that there must be a time in which you are taught them, but once they are learned, even people who cannot recall or create new episodic memories can still understand them.


As I said, Watson knew art and how to analyze it against information and an ambiguous questioning scheme.
Yes, you said that. But it isn't correct. This is why the wrong answers it did give were so bizarre.

This cannot be done by a calculator; it would need something close to facial recognition technology.

Facial recognition technology is all about math. You can read an entire book on the subject and never get a line of code. This is true for machine learning in general. Why? Because all learning that computers do is meaningless mathematical manipulation. It's the same that your calculator does. It is all rules and procedures.
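A bare-bones caricature of that point, with toy "faces" as pixel vectors (nothing here resembles a real face-recognition system):

```python
import numpy as np

# Pretend each "face" has already been flattened into a vector of pixel intensities.
known_faces = {
    "alice": np.array([0.9, 0.1, 0.4, 0.7]),
    "bob":   np.array([0.2, 0.8, 0.5, 0.3]),
}
new_face = np.array([0.85, 0.15, 0.45, 0.65])

# "Recognition" is nothing but arithmetic: pick the smallest Euclidean distance.
best = min(known_faces, key=lambda name: np.linalg.norm(known_faces[name] - new_face))
print(best)   # "alice": rules and procedures, no comprehension anywhere
```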


How is this not similar to human memory retrieval?
How much Bayesian probability are you familiar with? How about linear algebra and calculus? If the answer is "not a whole lot", then there's the difference.
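For reference, the flavor of arithmetic being pointed at is Bayes' rule; the numbers below are invented purely for illustration:

```python
# Bayes' rule: P(meaning | phrase) is proportional to P(phrase | meaning) * P(meaning).
prior      = {"greeting": 0.7, "literal question": 0.3}   # made-up priors
likelihood = {"greeting": 0.9, "literal question": 0.2}   # made-up likelihoods

unnormalized = {m: prior[m] * likelihood[m] for m in prior}
total = sum(unnormalized.values())
posterior = {m: round(v / total, 3) for m, v in unnormalized.items()}

print(posterior)   # the "most probable interpretation" is just the argmax of this
```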

Humans do the exact same thing: when a word is said, dozens of representations of that word come to our conscious or subconscious to affect what we envision.
That's not what computers do.

We pick the most accurate representation based on context, just like Watson.
Really? So when someone asks you "how's it going", you don't understand the question until you have compared it against thousands of examples from speech, calculated probabilities that humans are in general not capable of at all, and all that not to answer the question, but just to understand it?
 
Last edited:

Copernicus

Industrial Strength Linguist
No, I'm sorry, robots are not capable of thinking for themselves. Autonomous just means they are capable of learning how to interact with their environment.
You could just look the word up in a dictionary, but never mind. It means that the machines are capable of interaction with their environment independently of external control.

Either you think humans are capable of thinking for themselves or they aren't.
Tautologies are always true. I think that humans are machines that are capable of thinking for themselves. You may disagree, but I want you to understand what I am actually claiming.

You created this topic. You should revise your own terms before asking me to define them for you.
I was commenting on your usage, not my own. I have no telepathic powers that would allow me to know what you mean by words like "programmed". I put scare quotes around the word, because "program" is being used in a loose metaphorical sense.

Care to give an example where machines self-replicate without human intervention?
Of course not. I never claimed to be talking about current technology. However, self-replicating robots are theoretically possible. Why wouldn't they be? And we could introduce sex into the self-replication process, if we wanted to use that means to incentivize procreation.

I see your point. Humans can or may only think one way without (using the old phrase here of) thinking outside the box. But if that is the case, people would be like robots, running into walls 50 times a day until they realize x doesn't compute and they need to go a different direction.
People do occasionally run into walls, but your idea of robotic behavior seems to be based on childhood experiences with wind-up toys. In any case, you can bring yourself more up-to-date by reading about the DARPA Grand Challenges.
 

atanu

Member
Premium Member
Care to give an example where machines self-replicate without human intervention?

However, self-replicating robots are theoretically possible. Why wouldn't they be? And we could introduce sex into the self-replication process, if we wanted to use that means to incentivize procreation.

The question is "---without human intervention". And the answer is "------we could introduce sex-----".

Yes. I actually do want to run into a wall.:D
 

Leonardo

Active Member
And until you show me an empirically based model of these systems and how they produce what can be verified independently of context, outside of the neurophysiological processes which make them up, and then demonstrate how this model explains what we actually know about the brain (especially where it contradicts our current understanding), you haven't generated anything which is useful for understanding choices, consciousness, etc.

And you still don't get it, I called it an idea. It's an approach that can be coded for a machine to understand and create choices not just based on logic but on qualities of experiences, and that is the point. But to digress, remember Einstein or was it Bohr that had the problem with the cell phone that was beyond reductionism or was it the lack of tools, like an electron microscope...:rolleyes:

Basically, you've stated that certain systems are involved in processes (those that create "emotional gratification" states) which you claim all choices can be understood in terms of. You haven't pointed to any evidence that such processes exist in any meaningful way, nor shown how our understanding of the limbic or sensory systems makes your model plausible. You've simply created a vague model, asserted that it is somehow produced by certain parts of the brain, and then declared all choices to be understood by it. As stated, your theory cannot be falsified, because everything can count as verification. That's not science.

Yes it is science; it's called a hypothesis, and testing it isn't so far off in the future. Many are confident that the ability to monitor individual neurons in vivo will be possible. :flirt:


Elman's η idea about genetic coding? Jeffrey Elman? The guy who co-authored Rethinking Innateness? Here's an article by him: "Learning and development in neural networks: the importance of starting small".

All this time you've talked about reinforcement, and it was simply based on a learning parameter in one type of artificial neural network?

You didn't read the book, and so again you don't get it...:facepalm:
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
You didn't read the book, and so again you don't get it...:facepalm:
I'll start here. I did read the book. I have also read much more of his work. If you would like to challenge this, feel free. Cite any part of the book, or any other work by him, which you think supports your conception and we'll go from there.

And you still don't get it, I called it an idea. It's an approach that can be coded for a machine to understand and create choices not just based on logic but on qualities of experiences, and that is the point.
Ideas are great. But in order for them to pan out (especially as models for science) they need to be more than just speculation. And as your idea is based upon a misunderstanding not only of the brain, or even of neuroscience, but also of the very work you base it upon, you need something more than "it's an idea" to make it worth considering from a scientific perspective.

But to digress, remember Einstein or was it Bohr that had the problem with the cell phone that was beyond reductionism or was it the lack of tools, like an electron microscope...:rolleyes:

What?



Yes it is science; it's called a hypothesis, and testing it isn't so far off in the future. Many are confident that the ability to monitor individual neurons will be possible. :flirt:

We can currently monitor individual neurons. That's how we know that individual neurons can't be the key here. And as for testing your model, it is based upon the work of a researcher who explicitly rejects your model. Furthermore, it has been tested. There is no support for it anywhere.
 

idav

Being
Premium Member
As there is plenty of evidence that it is not, why say it "has to be the same"?
'Cause data can be just about anything and it is always a matter of interpreting the data.
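A tiny illustration of that "interpreting the data" point (the byte string is arbitrary):

```python
import struct

raw = b"ABCD"   # four arbitrary bytes

print(raw.decode("ascii"))          # read as text: ABCD
print(struct.unpack(">I", raw)[0])  # read as a big-endian integer: 1094861636
print(list(raw))                    # read as raw byte values: [65, 66, 67, 68]
```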


This is where things like "chunking" come into play for humans. If I rattle off a string of 10 numbers, you'll be unlikely to be able to repeat it back to me. Short term memory has certain limits. However, phone numbers have 10 digits, and we can often remember them without a problem. Likewise, I could ask you to memorize a bunch of letters I am going to say (one at a time), and it wouldn't be long before there were too many letters for you to repeat back. But if I listed letters like
the-quick-brown-fox-jumped-over-the-lazy-dog, or the numbers 1-555-328-7682, you may very well be able to bypass the restrictions on short-term memory. This is because if I give you the letters "sdjkrs", each one is a unit of information (a bit). However, if the letters are "computation" then you don't process the letters in the same way at all. It becomes a word. The bit, or minimal unit, is context-dependent. In a computer, it is clearly and easily identified: it is the binary storage, the 0's and 1's.
I know they don't process the same. They weren't built to process the same. They are, however, both processing something from memory.



That's because it doesn't involve either. And to the extent it might (or in the few basic implementations we have) it absolutely does change the essential nature.
This doesn't change the fundamental aspect of learning and using the data to process new information.

"Actual quantum computation processes are very different from those of a classical counterpart. In a classical computer, we input the data from a keyboard or other input devices and the signal is sent to the I/O port of the computer, which is then stored in the memory, then fed into the microprocessor, and the result is stored in the memory before it is printed or it is displayed on the screen. Thus information travels around the circuit. In contrast, information in quantum computation is stored in a register, first of all, and then external fields, such as oscillating magnetic fields, electric fields or laser beams are applied to produce gate operations on the register. These external fields are designed so that they produce desired gate operation, i.e., unitary matrix acting on a particular set of qubits. Therefore the information sits in the register and they are updated each time the gate operation acts on the register." (emphasis added)
pp. 64-65 of Quantum Computing: From Linear Algebra to Physical Realizations (CRC press, 2008).

Cool.


It does indeed change that. Because it seems that for the most part, unlike the binary units of computers, neurons do not "speak" any language. Neural assemblies do. More importantly, the "data" is also the processor, and is also the software. So not only does the "code" continually change, the physical implementation device it "runs" on isn't actually separated from it, but is it.

You're just saying that it isn't a single neuron but a group of neurons. Does this mean the group isn't speaking a language? :facepalm:

Yes, one single neuron is like a computer and a group of neurons would be like a network of computers acting towards a goal.


This is rather fundamentally inaccurate. In a famous sentence which was designed to show the difference between syntax (rules) and semantics, Chomsky came up with "colorless green ideas sleep furiously" which obeys all the "rules" of grammar, but is nonsensical. It can be made sensical (my grandfather published a paper on this in Linguistics), but not simply by rules. Even if I could theoretically design a program which had all the rules of grammar (and, as someone who finds the cognitive linguistic framework far more sound than any generative grammar, I don't), there would still be nothing to make nonsensical but grammatically correct sentences nonsensical. The sentence is still data. I can type it and it is obviously then represented somewhere in my computer. But it doesn't make any sense. What is it about this sentence that doesn't make sense? It's not the rules, as Chomsky showed. It's meaning. To say that "data is data" is akin to saying that because we're all made up of atoms there is no difference between a plant and a rock and a computer. It ignores what makes rocks what they are, compared to plants, or compared to computers: some sort of organization unique to each which makes them more than just a bunch of atoms, but a particular, special bunch.
But it takes us years to learn this stuff, so don't take it for granted. Give the computer the years of learning experience a human requires to learn language and then you can compare.

No. They don't. The reason that computer programmers talk about storage, while cognitive and neuroscientists talk about "representation", has (in part) to do with the fact that the brain doesn't have a hard drive. Our memory is not some distinct, identifiable sequence of binary states, but is represented by the same "thing" that processes it. Neurons can represent conceptual structures somehow, but one key reason they can is likely that unlike data storage, the actual "hard drive" is not distinct from the processor or software. Thus to say that neurons "store data" would be like saying your web browser stores data. Only it doesn't. On a computer, whether we're talking RAM or a hard drive (or other storage), we are dealing with physical states distinct from the processor, which allows the processor to use code to change these states and implement programs. The brain doesn't have this.
The neurons just send a code that has to be decrypted. Within any given network there are tons of languages passing through, all as ones and zeros. They are fundamentally the same in that respect, but if you can decode all the languages you can recreate the picture of the data being passed through the wires. The same must be true for neurons. When a person remembers what their car looks like, what is it that is happening? First the person had to see their car and store it in memory. When trying to remember the car, the code from the neurons must be translated into something meaningful for our consciousness. There is a language being used that has to be decoded.
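A toy version of the ones-and-zeros half of that analogy (ASCII text only, chosen for simplicity; it says nothing about whether neurons actually work this way):

```python
message = "my car is blue"

# Encode: every character becomes eight 0/1 bits on the "wire".
bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
print(bits[:24], "...")

# Decode: anyone who knows the coding scheme recovers the original exactly.
decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("ascii")
print(decoded)   # my car is blue
```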


It's not that we will have a hard time proving computers are aware. The problem is that proof involves reference to formal languages, like mathematics and logic (not that the two are really that distinct). But these are designed to negate the ambiguities of natural language, and to bypass meaning as much as possible. It is these languages which allow us to get computers to do anything. How, though, does one formally define "meaning"? Any definition necessarily depends upon meaning. So to prove that computers can't be "aware" involves using computer-like language to define meaning. If this could be done, however, then we would have a way to get computers to understand meaning. It is the fact that, after decades of work, we have found no way to prove that concepts are impossible to reach from computation, nor to prove that they are, which is perhaps most suggestive (apart from philosophical arguments or appeals to intuition) that the two are fundamentally different.
We've proved that our ability to think at high levels isn't as far out as we thought. Everyone should agree that the brain is the most powerful machine we know of in the universe. How is it more than a machine?





Facial recognition technology is all about math. You can read an entire book on the subject and never get a line of code. This is true for machine learning in general. Why? Because all learning that computers do is meaningless mathematical manipulation. It's the same that your calculator does. It is all rules and procedures.
Yeah, it's all about math when babies are learning it too, as their brains map out the proper configurations.

Really? So when someone asks you "how's it going", you don't understand the question until you have compared it against thousands of examples from speech, calculated probabilities that humans are in general not capable of at all, and all that not to answer the question, but just to understand it?
I'd say "hows what going". ;)

It takes experience to learn those meanings, so why hold that against a computer?
 

Leonardo

Active Member
We can currently monitor individual neurons. That's how we know that individual neurons can't be the key here. And as for testing your model, it is based upon the work of a researcher who explicitly rejects your model. Furthermore, it has been tested. There is no support for it anywhere.

Legion... Why are you taking things out of context? I clearly said monitoring neurons in vivo. The best that can be done today requires surgery and it's not effective at collecting large samples, as in thousands or millions of neurons. The researcher's book is titled "Rethinking Innateness"; what do you think that means? You didn't read the book and it's obvious...:yes:
 

uberrobonomicon4000

Active Member
You could just look the word up in a dictionary, but never mind. It means that the machines are capable of interaction with their environment independently of external control.


Tautologies are always true. I think that humans are machines that are capable of thinking for themselves. You may disagree, but I want you to understand what I am actually claiming.


I was commenting on your usage, not my own. I have no telepathic powers that would allow me to know what you mean by words like "programmed". I put scare quotes around the word, because "program" is being used in a loose metaphorical sense.


Of course not. I never claimed to be talking about current technology. However, self-replicating robots are theoretically possible. Why wouldn't they be? And we could introduce sex into the self-replication process, if we wanted to use that means to incentivize procreation.


People do occasionally run into walls, but your idea of robotic behavior seems to be based on childhood experiences with wind-up toys. In any case, you can bring yourself more up-to-date by reading about the DARPA Grand Challenges.
That is nice that you want to get all technical and everything now, since you weren’t very technical in your OP. When you say “choice is a determined process” in relation to a robot, most people can automatically assume they are programmed to act a certain way or perform certain tasks. Even you used the word programmed in your OP, so I’m not really sure where the confusion is.

I don’t need to look up any words in the dictionary. I already know what autonomous means and you basically said the same thing I did.

There are some clear distinctions between people and machines (robots), which is why I gave a few of the more obvious ones. Like sex: I wouldn’t say sex is determined. It just happens.

And my last comment about people being like robots, running into walls 50 times before they realize X doesn’t compute, was more along the lines of sarcasm. Maybe you missed it, but it was intended as a pun.

Now where were we?
 