And until you show me an empirically based model of these systems and how they produce what can be verified independent of context outside of the neurophysiological processes which make them up, and then demonstrate how this model explains what we actually know about the brain (especially the contradictions it has with our understanding), then you haven't generated anything which is useful for understanding choices, consciousness, etc.

You still aren't getting the idea... the reward is emotional gratification generated from the limbic system or the sensory system; it's irrelevant whether there's a food cache or not!
The point here is that mammal brains are wired with sensory and limbic signaling that produce states that are positive or negative.
Elman's η idea about genetic coding? Jeffrey Elman? The guy who co-authored Rethinking Innateness? Here's an article by him: "Learning and development in neural networks: the importance of starting small".

The idea extends to the notion that sensory systems are genetically coded to sensitize to the environment and preprocess information in ways that allow neocortical processes to digest or learn it.
I wish I could follow this thread.
It's a completely different one.
Memory is what you're looking for. Storage and retrieval. A bit is merely an amount of data. Doesn't the brain hold data?

Because it's all well and good to say that the brain encodes "bits", but if you can't show me anything in the brain which corresponds in general to a bit (and at the moment, no one can), then it doesn't mean much.
No doubt, but doing quantum computing doesn't change the essential nature of memory and retrieval. A quantum bit means far more storage and complexity without needing as much space and power, which would make it 100 times easier to get to the level of processing and networking the human mind does. Nowhere are you describing a system that doesn't require data, retrieval, and representation of that data.

It's not that neurons are more complicated. It's that the neural "code" is. And it is a qualitative change. Think of it this way: why does your computer use a binary storage system, in which the minimal unit can only have 2 states? We've been storing data as only 0s and 1s for ages. Why did nobody ever think "wait, we can double our storage capacity just by using four states instead of two!"? The answer is that this is a qualitative change and would result in an increase in complexity/issues such that it wouldn't be worth it.
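The "four states instead of two" point can be made concrete: the information one storage cell carries grows only logarithmically with its number of states. A minimal sketch, illustrative only and not tied to any particular hardware:

```python
import math

def bits_per_symbol(num_states: int) -> float:
    # Information capacity, in bits, of one storage cell that can
    # reliably hold `num_states` distinguishable states.
    return math.log2(num_states)

print(bits_per_symbol(2))  # binary cell: 1.0 bit
print(bits_per_symbol(4))  # four-state cell: 2.0 bits (double per cell)
```

So four states really would double the information per cell; the reply's point is that the engineering cost of distinguishing four analog levels reliably outweighs that gain.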
Again with the complexity. This doesn't change the fact that neurons speak a language that needs to be decoded. Whatever the neurons are "saying" represents data that has been stored through experience.

Most people (including psych students) think of the neural code in terms of action potentials, described as the neuron firing when it reaches a certain threshold. We've known this was wrong since the 40s. But we didn't know how wrong until more recently. We now know how strong the tendency for neurons to synchronize is (so strong that it can happen in brain slices preserved outside the brain, and even without synaptic transmission/signals). We also know that too much synchrony makes brain function impossible, because the brain would almost be one neuron. So now, rather than trying to explain the neural "code" in terms of single spikes, we believe brains make use of all of the following (i.e., these all constitute meaningful "units" of information):
It isn't hard. Redundancy is overrated here. Data is data no matter how it is coded or by what means.

1) The ways in which the rate of firing changes over time
2) The rate itself
3) The correlations between the activity of neurons
4) The ability for neural populations to synchronize
5) Various neurophysiological properties that allow neurons to desynchronize given certain conditions (which aren't very clear) rather than do what it appears they do so readily
6) The size of a synchronized neural assembly
7) The frequency range at local and nonlocal scales
It quickly becomes very difficult to defend any notion of something like a "bit" in the brain when we know at least that if there is some minimal unit, it is constantly changing in a number of different ways. Which is why there has been an increased emphasis not on the neural code or neural firing, but on functional connectivity. What kinds of conditions create what kinds of neural assemblies, which (again, under what conditions) may synchronize with other (nonlocal) assemblies? The question is then not about code or bits, but more about how information is represented by patterns of coordinated neural activity distributed throughout the brain.
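Item 3 in the list above, correlation between the activity of two neurons, is often measured by binning each spike train and correlating the counts. A minimal sketch; the spike counts here are made up purely for illustration:

```python
def pearson(x, y):
    # Plain Pearson correlation between two equal-length count sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented spike counts per 10 ms bin for two neurons.
neuron_a = [0, 2, 1, 0, 3, 1, 0, 2]
neuron_b = [0, 2, 1, 1, 3, 1, 0, 2]
print(pearson(neuron_a, neuron_b))  # high value -> strongly correlated firing
```

Real analyses use cross-correlograms and control for firing rate, but the idea of correlated activity as a unit of information is the same.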
This certainly doesn't mean the neurons are speaking gibberish. Do they store data or not?

Now we are increasingly using computers in ways they were never intended to be used. We employ fuzzy logic, which violates the logic built into the computer itself. We create mathematical learning models which don't have explicit rules, in that the only rules are how the computer will, in general, adapt over time to input. And so on. But there is a problem. The artificial neural networks we program do not do what actual neural networks do. They can't. Because neurons do not have well-defined states, and neural networks do not consist of collections of discrete states. That is fundamentally what computers are: collections of discrete, binary states. So we take a simplified mathematical model of actual neural networks, and imitate this on a machine designed from the ground up to work differently.
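The fuzzy logic mentioned above replaces the hardware's two truth values with degrees of truth in [0, 1]. A toy sketch using the standard min/max/complement operators; the example values are invented:

```python
# Degrees of truth in [0, 1] instead of the hardware's {0, 1}.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

warm, humid = 0.7, 0.4  # "the room is warm" is 70% true, etc.
print(fuzzy_and(warm, humid))            # 0.4
print(fuzzy_or(warm, fuzzy_not(humid)))  # 0.7
```

Of course, as the post says, this is still simulated on binary gates underneath; the fuzziness lives in the software layer.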
Advanced forms of computation are comprehension. No doubt we will have a hard time proving machines are aware, but this is a philosophical problem. Awareness is possible in many forms but hard to prove without asking the organism.

Again, I'm not saying this is a bad thing. From both a theoretical and applied perspective (i.e., in terms of understanding something like the brain as well as coming up with programs that recognize faces or recommend books for you automatically) we've done a lot. But everything we've done ultimately goes back to logic gates and binary states, and thus to pure computation rather than comprehension. It may be that the architecture cannot do what brains do. It may be that it can, but we need to understand how associative learning works in a way we aren't even close to yet. Whatever it is that is missing, it seems clear that it is a qualitative difference.
So we don't make machines inefficient like lifeforms. Yes, the complexity makes it possible, but this hardly separates us from machines.

"Certainly works"? Computers are extremely efficient. That's why it's so hard to get them to do what slugs and plants can do as far as learning is concerned. The complexity of the brain is what makes mammals, and in particular humans, capable of doing what no other known system can do.
This sounds like a philosophical debate. Representations of concepts in the brain are as much computations as anything else the brain does. That is what the brain does: processing based on stimuli.

First, concepts are not rules, and the problem is we don't know how something like a computer, which has only the ability to work with rules, can learn concepts. We don't know how we do. Second, "things like numbers and letters" do not "go to a certain part of the brain". This is what makes fMRI analysis such a challenge. A number or word or similar stimulus will increase activity in multiple places in the brain, and will change each time in ways we don't understand. So if I want to say that certain regions are involved in, say, processing action words/verbs, I have to show that these regions are significantly more activated compared to some control, and further that the other regions which will also be significantly more activated are not somehow the "core" of whatever action word the subject is exposed to. This is a central area of debate in the cognitive sciences, because one group maintains that concepts like action words are represented apart from the motor and sensory systems of the brain. So they explain the activation in these regions during experimentation as indicative of something else. The other group maintains that cognition is fundamentally embodied, and that part of the representation of concepts like action verbs, and even abstract notions like hope, makes use of sensorimotor regions of the brain.
Maybe but 1 isn't just some subjective concept in the brain. It remembers it based on representation of real life events. We correlate to make it possible and the same is possible for any machine.

What remains true either way, however, is that we can't point to some place in your brain where the concept of "1" exists. It is somehow represented across distributed and changing neural assemblies. At best we can say there are regions which are likely to be involved in representing the concept "1".
Much more than that.

That's because it's a bigger calculator.
As I said Watson knew art and how to analyze it against information and an ambiguous questioning scheme. This cannot be done by a calculator; it would need something close to facial recognition technology.

How fast could you solve the following equation:
132,674.65754 * 13.1^8 / 0.234 = x?
That's a very simple equation. It's straightforward arithmetic. The rules are simple, but the calculations are difficult for us because we aren't calculators. Computers are. Nobody should be impressed that Watson could find the answers. That wasn't the challenging part. Computers are great at storing data and accessing it. The challenging part was getting Watson to parse the question and sort through an enormous database which had to be specially made so that Watson didn't have to understand words or language to calculate an answer. It's challenging because we had to turn language into something it isn't: math.
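A one-line script makes the point: what is tedious arithmetic for us is trivial for a machine.

```python
# The equation from the post, evaluated directly.
x = 132_674.65754 * 13.1 ** 8 / 0.234
print(f"{x:.4e}")  # on the order of 5e14
```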
How is this not similar to human memory retrieval? Humans do the exact same thing when a word is said and dozens of representations of that word come to our conscious or subconscious to affect what we envision. We pick the most accurate representation based on context, just like Watson. This also happens to be what makes us so imaginative; in the brain it really turns out to be reconstruction of already known objects.

If that were true, then whenever someone asks you a question like "how's it going", you would root through an enormous database, find a bunch of possible matches that correspond to what the question might mean, calculate the probabilities that each is what the question means, select the most probable, and then return a programmed answer.
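The retrieve-score-select loop described above can be sketched in a few lines. This uses a simple word-overlap (Jaccard) score as a stand-in for whatever probabilistic scoring a real system would use; the candidate answers are made up:

```python
def jaccard(question, candidate):
    # Word-overlap score between question and candidate, in [0, 1].
    q = set(question.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / len(q | c)

def best_answer(question, candidates):
    # The "select the most probable" step: highest-scoring candidate wins.
    return max(candidates, key=lambda c: jaccard(question, c))

candidates = ["the capital of France is Paris",
              "the capital of Spain is Madrid"]
print(best_answer("what is the capital of France", candidates))
```

The machine only ranks strings by a number; nothing in the loop requires it to understand what any of the words mean, which is the reply's point.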
The computer.
So are all creatures with brains, and so are robots. That is why we describe robots as autonomous machines. Now I grant you that there are a lot of ways to construe the meaning of "think for themselves", but you didn't specify what you meant by that expression.

A few reasons why humans are not robots.
-Humans are capable of thinking for themselves.
Again, you don't really define what you mean by "programmed". Humans (and, let's not forget, animals) adapt to their environments through a combination of innate "programming" and learned experience. Robots already do that on a more primitive level, and there is every reason to believe that it is possible to design machines to "program" themselves through machine learning techniques. Machine learning is an important branch of AI.

-Humans are capable of surviving without being programmed to survive (that is, if you believe in evolution).
This is a bizarre criterion, but I guess we know what's on your mind now. Machines can be designed to self-replicate. They can also be designed to have fun while self-replicating. Why not?

-Humans can have sex to reproduce, robots can't
I don't mind you disagreeing with me. That's kind of the point of posting in a debate forum--to get positive and negative feedback. But where did you get the idea that I was equating computers to robots? I was equating humans and robots. My position is that humans are essentially robots made out of biological materials created by the replication and complex folding of complex DNA molecules. The same is true of all creatures with brains. Brains are essentially guidance systems for bodies that move.

I disagree with the op and think it's just a bunch of nonsense to compare humans to computers or robots. Which gets into another subject altogether. I don't view or think computers should be classified as robots. Robots are capable of interacting with the world around them. Computers, not so much.
I don't really want to make everybody think that. My ambitions really go no further than the handful of readers following this thread. I am advocating the view that humans are essentially robots. It is truly ironic in these continual debates on determinism that people seem so repelled by the thought that they might be nothing more than robots--i.e. robots horrified at the thought of being thought robots.

It's not hard to follow. Just remember humans are not robots and Copernicus wants everyone to think humans are programmed to act and think a certain way.
I wish I could follow this thread.
It's not hard to follow. Just remember humans are not robots and Copernicus wants everyone to think humans are programmed to act and think a certain way.
Where did you get this special knowledge about what they want? I'm guessing telepathy. :sarcastic

And they want to say that though all are programmed robots, they are special -- they are privy to special knowledge.
No, I'm sorry, robots are not capable of thinking for themselves. Autonomous just means they are capable of learning how to interact with their environment.

So are all creatures with brains, and so are robots. That is why we describe robots as autonomous machines. Now I grant you that there are a lot of ways to construe the meaning of "think for themselves", but you didn't specify what you meant by that expression.
Again, you don't really define what you mean by "programmed". Humans (and, let's not forget, animals) adapt to their environments through a combination of innate "programming" and learned experience. Robots already do that on a more primitive level, and there is every reason to believe that it is possible to design machines to "program" themselves through machine learning techniques. Machine learning is an important branch of AI.
Care to give an example where machines self replicate without human intervention?

This is a bizarre criterion, but I guess we know what's on your mind now. Machines can be designed to self-replicate. They can also be designed to have fun while self-replicating. Why not?
I see your point. Humans can or may only think one way without (using the old phrase here) thinking outside the box. But if that is the case, people would be like robots, running into walls 50 times out the day until they realize x doesn't compute and they need to go a different direction.

I don't mind you disagreeing with me. That's kind of the point of posting in a debate forum--to get positive and negative feedback. But where did you get the idea that I was equating computers to robots? I was equating humans with robots. My position is that humans are essentially robots made out of biological materials created by the unfolding of complex DNA molecules. The same is true of all creatures with brains. Brains are essentially guidance systems for bodies that move.
I don't really want to make everybody think that. My ambitions really go no farther than the handful of readers following this thread. I am advocating the view that humans are essentially robots. It is truly ironic in these continual debates on determinism that people seem so repelled by the thought that they might be nothing more than robots--i.e. robots horrified at the thought of being thought robots.
Where did you get this special knowledge about what they want? I'm guessing telepathy. :sarcastic
The core has to be the same and is.
Memory is what you're looking for. Storage and retrieval. A bit is merely an amount of data. Doesn't the brain hold data?
That's because it doesn't involve either. And to the extent it might (or in the few basic implementations we have), it absolutely does change the essential nature.

No doubt but doing quantum computing doesn't change the essential nature of memory and retrieval.
"Actual quantum computation processes are very different from those of a classical counterpart. In a classical computer, we input the data from a keyboard or other input devices and the signal is sent to the I/O port of the computer, which is then stored in the memory, then fed into the microprocessor, and the result is stored in the memory before it is printed or it is displayed on the screen. Thus information travels around the circuit. In contrast, information in quantum computation is stored in a register, first of all, and then external fields, such as oscillating magnetic fields, electric fields or laser beams are applied to produce gate operations on the register. These external fields are designed so that they produce desired gate operation, i.e., unitary matrix acting on a particular set of qubits. Therefore the information sits in the register and they are updated each time the gate operation acts on the register." (emphasis added)

A quantum bit
Again with the complexity. This doesn't change the fact that neurons speak a language that needs to be decoded. Whatever the neurons are "saying" represents data that has been stored through experience.
Data is data no matter how it is coded or by what means.
No. They don't. The reason that computer programmers talk about storage, while cognitive and neuroscientists talk about "representation", has (in part) to do with the fact that the brain doesn't have a hard drive. Our memory is not some distinct, identifiable sequence of binary states, but is represented by the same "thing" that processes it. Neurons can represent conceptual structures somehow, but one key reason they can is likely that, unlike data storage, the actual "hard drive" is not distinct from the processor or software. Thus to say that neurons "store data" would be like saying your web browser stores data. Only it doesn't. On a computer, whether we're talking RAM or a hard drive (or other storage), we are dealing with physical states distinct from the processor, which allows the processor to use code to change these states and implement programs. The brain doesn't have this.

This certainly doesn't mean the neurons are speaking gibberish. Do they store data or not?
Advanced forms of computation is comprehension. No doubt we will have a hard time proving machines are aware but this is a philosophical problem.
Not really. In fact, people who have memory disorders which prevent them from recalling "real life events" (episodic memory) can still understand language and numbers. Episodic memory may be necessary to learn concepts, in that there must be a time in which you are taught them, but once learned, even people who cannot recall, or create, new episodic memories can still understand.

Maybe but 1 isn't just some subjective concept in the brain. It remembers it based on representation of real life events.
Yes, you said that. But it isn't correct. This is why the wrong answers it did give were so bizarre.

As I said Watson knew art and how to analyze it against information and an ambiguous questioning scheme.
This cannot be done by a calculator, it would need something close to the facial recognition technology.
How much of Bayesian probability are you familiar with? How about linear algebra and calculus? If the answer is "not a whole lot", then there's the difference.

How is this not similar to human memory retrieval?
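For readers wondering what the Bayesian probability mentioned here amounts to, the core is just Bayes' rule: posterior = prior × likelihood / evidence. The numbers below are invented purely for illustration:

```python
def posterior(prior, likelihood, evidence):
    # Bayes' rule: P(H | D) = P(H) * P(D | H) / P(D)
    return prior * likelihood / evidence

# Invented numbers for one candidate interpretation of an ambiguous phrase.
p = posterior(prior=0.3, likelihood=0.8, evidence=0.5)
print(p)  # ~0.48
```

Systems like Watson apply updates of this shape over huge candidate sets; the point of the reply is that humans demonstrably do not perform these calculations explicitly.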
That's not what computers do.

Humans do the exact same thing when a word is said and dozens of representations of that word come to our conscious or subconscious to affect what we envision.
Really? So when someone asks you "how's it going", you don't understand the question until you have compared it against thousands of examples from speech, calculated probabilities that humans are in general not capable of at all, and all that not to answer the question, but just to understand it?

We pick the most accurate representation based on context, just like Watson.
You could just look the word up in a dictionary, but never mind. It means that the machines are capable of interaction with their environment independently of external control.

No, I'm sorry, robots are not capable of thinking for themselves. Autonomous just means they are capable of learning how to interact with their environment.
Tautologies are always true. I think that humans are machines that are capable of thinking for themselves. You may disagree, but I want you to understand what I am actually claiming.

Either you think humans are capable of thinking for themselves or they aren't.
I was commenting on your usage, not my own. I have no telepathic powers that would allow me to know what you mean by words like "programmed". I put scare quotes around the word, because "program" is being used in a loose metaphorical sense.

You created this topic. You should revise your own terms before asking me to define them for you.
Of course not. I never claimed to be talking about current technology. However, self-replicating robots are theoretically possible. Why wouldn't they be? And we could introduce sex into the self-replication process, if we wanted to use that means to incentivize procreation.

Care to give an example where machines self replicate without human intervention?
People do occasionally run into walls, but your idea of robotic behavior seems to be based on childhood experiences with wind-up toys. In any case, you can bring yourself more up-to-date by reading about the DARPA Grand Challenges.

I see your point. Humans can or may only think one way without (using the old phrase here) thinking outside the box. But if that is the case, people would be like robots, running into walls 50 times out the day until they realize x doesn't compute and they need to go a different direction.
Care to give an example where machines self replicate without human intervention?
However, self-replicating robots are theoretically possible. Why wouldn't they be? And we could introduce sex into the self-replication process, if we wanted to use that means to incentivize procreation.
And until you show me an empirically based model of these systems and how they produce what can be verified independent of context outside of the neurophysiological processes which make them up, and then demonstrate how this model explains what we actually know about the brain (especially the contradictions it has with our understanding), then you haven't generated anything which is useful for understanding choices, consciousness, etc.
Basically, you've stated that certain systems are involved in processes (those that create "emotional gratification" states) which you claim all choices can be understood in terms of. You haven't pointed to any evidence that such processes exist in any meaningful way, nor how our understanding of the limbic or sensory systems makes your model plausible. You've simply created a vague model, asserted that it is somehow produced by certain parts of the brain, and then declared all choices to be understood by it. As stated, your theory cannot be falsified, because everything can count as verification. That's not science.
Elman's η idea about genetic coding? Jeffrey Elman? The guy who co-authored Rethinking Innateness? Here's an article by him: "Learning and development in neural networks: the importance of starting small".
All this time you've talked about reinforcement, it was simply based on a learning parameter in one type of artificial neural network?
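For context, a learning parameter of this kind is typically just a step size (often written η) in a gradient update. This sketch shows the generic role such a parameter plays and makes no claim about Elman's specific networks:

```python
def sgd_step(weights, grads, eta=0.1):
    # One gradient-descent update: w <- w - eta * gradient.
    # eta (the learning rate) scales how far each step moves.
    return [w - eta * g for w, g in zip(weights, grads)]

w = [0.5, -0.2]
g = [1.0, -1.0]
print(sgd_step(w, g))  # each weight nudged by eta in the downhill direction
```

The objection in the thread is that a scalar knob like this is a property of a training algorithm, not evidence about reinforcement in brains.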
I'll start here. I did read the book. I have also read much more of his work. If you would like to challenge this, feel free. Cite any part of the book, or any other work by him, which you think supports your conception and we'll go from there.

You didn't read the book, and so again you don't get it...
Ideas are great. But in order for them to pan out (especially as models for science) they need to be more than just speculation. And as your idea is based upon a misunderstanding not only of the brain, or even neuroscience, but also of the very work you base it upon, you need something more than "it's an idea" to make it worth considering from a scientific perspective.

And you still don't get it, I called it an idea. It's an approach that can be coded for a machine to understand and create choice not just based on logic but on qualities of experiences, and that is the point.
But to digress: remember Einstein, or was it Bohr, who had the problem with the cell phone that was beyond reductionism, or was it the lack of tools, like an electron microscope...
Yes it is science; it's called a hypothesis, and testing it isn't so far off in the future. Many are confident that the ability to monitor individual neurons will be possible. :flirt:
Because data can be just about anything and it is always a matter of interpreting the data.

As there is plenty of evidence that it is not, why say it "has to be the same"?
I know they don't process the same. They weren't built to process the same. They are, however, both processing something from memory.

This is where things like "chunking" come into play for humans. If I rattle off a string of 10 numbers, you'll be unlikely to be able to repeat it back to me. Short-term memory has certain limits. However, phone numbers have 10 digits, and we can often remember them without a problem. Likewise, I could ask you to memorize a bunch of letters I am going to say (one at a time), and it wouldn't be long before there were too many letters to repeat back. But if I listed letters like
the-quick-brown-fox-jumped-over-the-lazy-dog, or the numbers 1-555-328-7682, you may very well be able to bypass the restrictions on short-term memory. This is because if I give you the letters "sdjkrs", each one is a unit of information (a bit). However, if the letters are "computation" then you don't process the letters in the same way at all. It becomes a word. The bit, or minimal unit, is context-dependent. In a computer, it is clearly and easily identified: it is the binary storage, the 0's and 1's.
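The chunking point can be shown directly: the same ten digits held as ten separate units versus three familiar chunks.

```python
digits = "5553287682"
chunks = [digits[:3], digits[3:6], digits[6:]]  # 555-328-7682
print(len(digits))  # 10 separate units to hold in short-term memory
print(len(chunks))  # only 3 chunks to hold
```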
This doesn't change the fundamental aspect of learning and using the data to process new information.

That's because it doesn't involve either. And to the extent it might (or in the few basic implementations we have), it absolutely does change the essential nature.
"Actual quantum computation processes are very different from those of a classical counterpart. In a classical computer, we input the data from a keyboard or other input devices and the signal is sent to the I/O port of the computer, which is then stored in the memory, then fed into the microprocessor, and the result is stored in the memory before it is printed or it is displayed on the screen. Thus information travels around the circuit. In contrast, information in quantum computation is stored in a register, first of all, and then external fields, such as oscillating magnetic fields, electric fields or laser beams are applied to produce gate operations on the register. These external fields are designed so that they produce desired gate operation, i.e., unitary matrix acting on a particular set of qubits. Therefore the information sits in the register and they are updated each time the gate operation acts on the register." (emphasis added)
pp. 64-65 of Quantum Computing: From Linear Algebra to Physical Realizations (CRC press, 2008).
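The "gate operation acting on the register" in the quoted passage can be sketched in a few lines: a one-qubit state is a length-2 vector, and a gate is a unitary matrix applied to it. Here is the standard Hadamard gate turning the |0> basis state into an equal superposition:

```python
import math

def apply_gate(gate, state):
    # Multiply a 2x2 gate matrix by a length-2 state vector.
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

h = 1 / math.sqrt(2)
hadamard = [[h, h], [h, -h]]  # a standard one-qubit unitary gate
ket0 = [1.0, 0.0]             # the |0> basis state
print(apply_gate(hadamard, ket0))  # equal superposition: [~0.707, ~0.707]
```

This is only the linear algebra of the formalism; the physical register and external fields the quote describes are what distinguish a real quantum computer from this classical simulation.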
It does indeed change that. Because it seems that for the most part, unlike the binary units of computers, neurons do not "speak" any language. Neural assemblies do. More importantly, the "data" is also the processor, and is also the software. So not only does the "code" continually change, the physical implementation device it "runs" on isn't actually separated from it, but is it.
But it takes us years to learn this stuff, so don't take it for granted. Give the computer the years of learning experience required by a human to learn language and then you can compare.

This is rather fundamentally inaccurate. In a famous sentence which was designed to show the difference between syntax (rules) and semantics, Chomsky came up with "colorless green ideas sleep furiously", which obeys all the "rules" of grammar but is nonsensical. It can be made sensical (my grandfather published a paper on this in Linguistics), but not simply by rules. Even if I could theoretically design a program which had all the rules of grammar (and, as someone who finds the cognitive linguistic framework far more sound than any generative grammar, I don't), there would still be nothing to make nonsensical but grammatically correct sentences nonsensical. The sentence is still data. I can type it and it is obviously then represented somewhere in my computer. But it doesn't make any sense. What is it about this sentence that doesn't make sense? It's not the rules, as Chomsky showed. It's meaning. To say that "data is data" is akin to saying that because we're all made up of atoms there is no difference between a plant and a rock and a computer. It ignores what makes rocks what they are, compared to plants, or compared to computers: some sort of organization unique to each which makes them more than just a bunch of atoms, but a particular, special bunch.
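Chomsky's point can be made mechanical: a toy part-of-speech check happily accepts the sentence even though it is meaningless. The tags and the single allowed pattern here are invented for illustration:

```python
# Invented part-of-speech tags and one allowed tag pattern.
POS = {"colorless": "ADJ", "green": "ADJ", "ideas": "NOUN",
       "sleep": "VERB", "furiously": "ADV"}

def grammatical(sentence):
    # Purely syntactic check: does the tag sequence match the pattern?
    tags = [POS[word] for word in sentence.split()]
    return tags == ["ADJ", "ADJ", "NOUN", "VERB", "ADV"]

print(grammatical("colorless green ideas sleep furiously"))  # True
```

The check passes on rules alone; nothing in it can register that the sentence is semantically empty, which is exactly the gap the post describes.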
The neurons just send a code that has to be decrypted. Within any given network there are tons of languages passing through, all as ones and zeros. They are fundamentally the same in that respect, but if you can decode all the languages you can recreate the picture of the data being passed through the wires. The same must be true for neurons. When a person remembers what their car looks like, what is it that is happening? First the person had to see their car and store it in memory. When trying to remember the car, the code from the neurons must be translated into something meaningful for our consciousness. There is a language being used that has to be decoded.

No. They don't. The reason that computer programmers talk about storage, while cognitive and neuroscientists talk about "representation", has (in part) to do with the fact that the brain doesn't have a hard drive. Our memory is not some distinct, identifiable sequence of binary states, but is represented by the same "thing" that processes it. Neurons can represent conceptual structures somehow, but one key reason they can is likely that, unlike data storage, the actual "hard drive" is not distinct from the processor or software. Thus to say that neurons "store data" would be like saying your web browser stores data. Only it doesn't. On a computer, whether we're talking RAM or a hard drive (or other storage), we are dealing with physical states distinct from the processor, which allows the processor to use code to change these states and implement programs. The brain doesn't have this.
We've proved that our ability to think at high levels isn't as far out as we thought. Everyone should agree that the brain is the most powerful machine we know of in the universe. How is it more than a machine?

It's not that we will have a hard time proving computers are aware. The problem is that proof involves reference to formal languages, like mathematics and logic (not that the two are really that distinct). But these are designed to negate the ambiguities of natural language, and to bypass meaning as much as possible. It is these languages which allow us to get computers to do anything. How, though, does one formally define "meaning"? Any definition necessarily depends upon meaning. So to prove that computers can't be "aware" involves using computer-like language to define meaning. If this could be done, however, then we would have a way to get computers to understand meaning. It is the fact that, after decades of work, we have found no way to prove that concepts are impossible to reach from computation, nor to prove that they are, which is perhaps most suggestive (apart from philosophical arguments or appeals to intuition) that the two are fundamentally different.
Yeah, it's all about math when babies are learning it too, as their brains map out proper configurations.

Facial recognition technology is all about math. You can read an entire book on the subject and never see a line of code. This is true for machine learning in general. Why? Because all the learning that computers do is meaningless mathematical manipulation. It's the same as what your calculator does. It is all rules and procedures.
I'd say "how's what going?".

Really? So when someone asks you "how's it going", you don't understand the question until you have compared it against thousands of examples from speech, calculated probabilities that humans are in general not capable of at all, and all that not to answer the question, but just to understand it?
We can currently monitor individual neurons. That's how we know that individual neurons can't be the key here. And as for testing your model, it is based upon the work of a researcher who explicitly rejects your model. Furthermore, it has been tested. There is no support for it anywhere.
That is nice, you want to get all technical and everything now, since you weren't very technical in your OP. When you say "choice is a determined process" in relation to a robot, most people can automatically assume they are programmed to act a certain way or perform certain tasks. Even you used the word programmed in your OP, so I'm not real sure where the confusion is.

You could just look the word up in a dictionary, but never mind. It means that the machines are capable of interaction with their environment independently of external control.
Tautologies are always true. I think that humans are machines that are capable of thinking for themselves. You may disagree, but I want you to understand what I am actually claiming.
I was commenting on your usage, not my own. I have no telepathic powers that would allow me to know what you mean by words like "programmed". I put scare quotes around the word, because "program" is being used in a loose metaphorical sense.
Of course not. I never claimed to be talking about current technology. However, self-replicating robots are theoretically possible. Why wouldn't they be? And we could introduce sex into the self-replication process, if we wanted to use that means to incentivize procreation.
People do occasionally run into walls, but your idea of robotic behavior seems to be based on childood experiences with wind-up toys. In any case, you can bring yourself more up-to-date by reading about the DARPA Grand Challenges.