Humans are like robots. Choice is determined.

uberrobonomicon4000

Active Member
Yeah, the media is always saying stupid things about robots (and automation in general). In this case, they are trying to play on the Luddite fears of their audience. That is the "hook" or angle that the writer uses to justify publishing it as news. But you are right that it all depends on what kind of robot you are talking about. I'm not talking about the kinds of robots that actually exist today. I hope that that much is clear to you. I'm talking about what kind of robot it is possible to develop in principle. At this point in time, given what we know about human cognition, there is no reason to believe that humans are anything other than very complex flesh-and-blood robots. There is nothing about our "free will" that is incompatible with determinism, unless you define "free will" in such a way that it really isn't about the freedom to choose to do what you want.
I just think that using the word “robot” in some instances carries too many connotations. But if you were to say that humans, on a molecular-biological level, are like wet, well-lubricated machines or even computers, then I would agree with you. Once you look at a cell, it’s easy to see its similarities to a computer, and even to a robot in some cases. What I mean is that a cell has a nucleus (in a eukaryotic cell), which acts like a CPU for the cell.

Then you have the DNA, which acts like a set of instructions or a program for a computer (possibly a robot) to run. Then you have temporary storage in the RNA, which helps with cell replication and carries out the actual instructions provided by the DNA. And if you look at the mitochondria, you can view them as the cell’s powerhouse. Other people are talking about the firing of neurons, which I don’t care to get deep into because I’m not a neuroscientist, but they can be viewed as the electrical components of a computer, which trigger events and transmit data. So yeah, I would agree and say humans are like wet machines composed of flesh and blood.

I just don’t like the word robot because it usually refers to something that is automated. But with the advancement of computers and robots, I think artificial intelligence might one day be possible. Then robots would be a lot like humans. It’s fun to think about. :)
 

uberrobonomicon4000

Active Member
But what the heck. Might as well show you a video of how someone with paralysis can control a robotic limb through the firing of neurons: the signals are picked up by BrainGate2 technology and relayed through a computer, which translates them into machine language (computer language). It's pretty cool.

[youtube]5s8NsgllTvg[/youtube]
Paralyzed Woman Uses Mind To Control Robotic Arm - YouTube

The full article can be found here: People with paralysis control robotic arms using brain-computer interface | Brown University News and Events
 

LegionOnomaMoi

Veteran Member
Premium Member
Again you’ve taken things out of context to discredit an idea you don't understand. The whole point of my statement was that it IS A GENERIC MECHANISM FOR NON-GENETIC ADAPTATION! Can't you READ?
I can read. And I'm still waiting for anything which supports your "emotional gratification" explanation, and for why you "extend" a parameter in a particular type of ANN, especially when the model you use was developed by one who fundamentally disagrees with you, and who differentiates the type of learning humans are capable of from that of all other animals.

How your nose can sense a woman from pheromones was coded genetically
Seriously? Does the term "vestigial" mean anything to you? Humans don't use pheromones the way that other animals do.

how your retina converts light into neural signals was coded genetically, how you can feel heat, pressure and pain was ALL CODED GENETICALLY!
How?


What’s not genetic is what’s learned from the sensory signaling…
Pain, heat, pressure, smell, etc., are all learned from "sensory signaling". You are contradicting yourself.
 
Last edited:

Leonardo

Active Member
This means that, within the time during which the decision has been made, a single neuron could have fired only 4 or 5 times! The perception of a visual scene involves a concerted action of a population of neurons. We see that exchange of information between them should take place within such a short time that only a few spikes are generated by each neuron.
All that could be needed is for the neuron to generate a single spike, which explains why slow-switching neurons can function within time frames suitable for real-time applications.
Therefore, information cannot be encoded only in the rates of firing and the phases (that is, the precise moments of firing) are important. In other words, phase relationships in the spikes of individual neurons in a population are essential and the firing moments of neurons should be correlated."

The precise moments of neurons firing are not as necessary as one might think. For example, the simulation engine we developed at work has every component operating asynchronously; literally, they all operate on their own timers. Each component requires inputs from various components, sometimes hundreds of them. The signals don’t all arrive at the same time, and each signal has its own path, passing through a maze of components before reaching any one component. But the system works because each component knows to generate its signal or signals when all its inputs finally arrive. A similar asynchronous strategy could work using chemical sensitizing of electrical signals, where chemical weights are modified through repetitive electrical pulses. The memory of the chemical sensitization lasts long enough to allow for asynchronous operation. So each neuron knows to fire when its chemical weight reaches its threshold.

This also allows the neural coding to be more complex for each neuron, since the firing rate then has an impact. Say a coding is set up where twenty neurons each fire a single spike to a receiving neuron “A”. Now say one of those input neurons, “B”, is induced into a spike train, firing at say 50 Hz, and by itself drives A’s chemical weight to threshold. That allows B to force A to fire rather than requiring all twenty inputs. This would be indicative of, say, an exception to the rule that otherwise requires all 20 inputs.

From a digital perspective, what I just described is a multilayered gate, where a single component can act like different gate types depending on the inputs it receives. So where there's one coding with 20 neurons, there could be another coding with just one or two neurons, and yet others with varying codings that cause A to fire.
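To make the idea concrete, here is a minimal sketch in Python of the threshold behavior described above. It is an illustration only, not our actual simulation engine, and every name and number in it is made up.

[code]
import time

# Each "neuron" accumulates a decaying chemical weight from asynchronous
# inputs and fires once the accumulated weight crosses its threshold.
class ThresholdNeuron:
    def __init__(self, threshold, decay_per_sec=0.5):
        self.threshold = threshold          # firing threshold
        self.weight = 0.0                   # accumulated "chemical" weight
        self.decay_per_sec = decay_per_sec  # sensitization fades over time
        self.last_update = time.monotonic()

    def _decay(self):
        now = time.monotonic()
        elapsed = now - self.last_update
        self.weight = max(0.0, self.weight - self.decay_per_sec * elapsed)
        self.last_update = now

    def receive(self, amount=1.0):
        """Handle one asynchronous input spike; fire if threshold is reached."""
        self._decay()
        self.weight += amount
        if self.weight >= self.threshold:
            self.weight = 0.0               # reset after firing
            return True                     # spike!
        return False

# Twenty single spikes arriving whenever they like will trigger a fire,
# but a fast spike train from one input ("B" at ~50 Hz) could push the
# weight over the same threshold on its own.
a = ThresholdNeuron(threshold=19.5)
fired = any(a.receive(1.0) for _ in range(20))
print(fired)  # True: the twentieth input pushes the weight over the threshold
[/code]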
 
Last edited:

Copernicus

Industrial Strength Linguist
Copernicus

I am repeating this as an earlier post was ignored.

We are programmed. But on account of some mechanism, you have come to discern it. And now you can discriminate between what is merely pleasurable and what is good, and act based on wisdom. The momentum of old acts persists for some time.

This is what I understand. So, do we or do we not have this discriminative faculty?
I don't mean to ignore you, atanu, but you have not addressed an issue or asked a question that seems relevant to the OP or thread discussion. Perhaps I could respond better if you would show me where you are going with this. What does it have to do with the question of whether humans are essentially flesh-and-blood robots?
 

atanu

Member
Premium Member
I don't mean to ignore you, atanu, but you have not addressed an issue or asked a question that seems relevant to the OP or thread discussion. Perhaps I could respond better if you would show me where you are going with this. What does it have to do with the question of whether humans are essentially flesh-and-blood robots?

Actually, it has been pointed out that a) there is no AI and b) there is no understanding of what consciousness is.

But these are always ignored, and the claim that humans are essentially robots is asserted again and again. What is the use of such a debate?

You tell me whether your finding my earlier post to be irrelevant to this thread was the work of a robot or not.

Second: you said in the OP that there is a difference between humans and robots, in that humans are products of evolution while robots are products of artificial intelligence. I say that if humans are robots, then the non-human robots are also products of nature -- and determined.

But I do not agree that humans are robots. We are worse than robots, since we have pain, as long as we are unconscious and associate ourselves with name and form. Once we dissociate our intelligence from the name-form, we are no longer robots.

Decisions taken in the mode of name-form carry on deterministically. But delinking from name-form also delinks the consciousness from the fate of the name-form.
 

Copernicus

Industrial Strength Linguist
Actually, it has been pointed out that a) there is no AI and b) there is no understanding of what consciousness is.
I would disagree with both those claims. There is AI, and we have some understanding of what consciousness is. That understanding will grow over time, as scientists investigate the phenomenon.

But these are always ignored, and the claim that humans are essentially robots is asserted again and again. What is the use of such a debate?
The claim is actually quite rare, although a lot of folks like you attribute it to people who don't make the claim. I am making it here in order to stimulate a discussion on the free will debate.

You tell me whether your finding my earlier post to be irrelevant to this thread was the work of a robot or not.
Metaphorically speaking, yes.

Second: you said in the OP that there is a difference between humans and robots, in that humans are products of evolution while robots are products of artificial intelligence. I say that if humans are robots, then the non-human robots are also products of nature -- and determined.
That's right. Non-human robots are products of human invention.

But I do not agree that humans are robots. We are worse than robots, since we have pain, as long as we are unconscious and associate ourselves with name and form. Once we dissociate our intelligence from the name-form, we are no longer robots.

Decisions taken in the mode of name-form carry on deterministically. But delinking from name-form also delinks the consciousness from the fate of the name-form.
You lose me when you talk about "name-form". I have no idea what you are talking about.
 

atanu

Member
Premium Member
I would disagree with both those claims. There is AI, and we have some understanding of what consciousness is. That understanding will grow over time, as scientists investigate the phenomenon.

Has a machine passed the Turing test? And what consciousness is, is not even agreed upon. Knowing it is out of the question.

Even if the above two are met, there is
http://www.religiousforums.com/forum/3209326-post164.html

The claim is actually quite rare, although a lot of folks like you attribute it to people who don't make the claim. I am making it here in order to stimulate a discussion on the free will debate.

This claim will only make sense when a robot makes a man.

Metaphorically speaking, yes.

Metaphor is not the real thing.

That's right. Non-human robots are products of human invention.

Just as human-built robots work as per the dictates of a program and have no understanding, similarly, if humans were really only robots, then human work would also be devoid of any understanding.

This is not true, as this thread demonstrates.

You lose me when you talk about "name-form". I have no idea what you are talking about.

I will try to keep it simple. Creation of a bangle (a shape and a name) does not destroy the gold. Again, destruction of the bangle does not destroy the gold.

The coming and going of ego-beings does nothing to the consciousness, which is the reality.
 

Leonardo

Active Member
Seriously? Does the term "vestigial" mean anything to you? Humans don't use pheromones the way that other animals do.

Ah... I hate to break this to you, and again this is another example of how you seem to lack some experiences, like “Ah-ha” moments, that most other people have, but the effects of pheromones are very real in humans. That you aren’t aware of this is very suspicious; are you a robot? :biglaugh:

Pain, heat, pressure, smell, etc., is all learned from "sensory signaling". You are contradicting yourself.

You're confusing learning to cope with pain, heat and pressure with the sensory system's ability to code it by generating a signal from nerve endings! :facepalm:

So I'm not contradicting myself, and you prove yet again that you don't understand what is being proposed. :(
 
Last edited:

Leonardo

Active Member
Actually, it has been pointed out that a) there is no AI and b) there is no understanding of what consciousness is.

But these are always ignored, and the claim that humans are essentially robots is asserted again and again. What is the use of such a debate?

The argument for "humans are machines" is that all brain activity resolves to neural interactions governed by physics. The notion of consciousness or awareness doesn't define itself well in terms of physical effects or measurements of electrical or chemical processes. So how can awareness come about from mechanisms, given that that is how human brains work? The answer is that awareness is an emergent property of the sum of all the brain's mechanisms; this is analogous to, say, the theme of a book or movie. All the actors, directors, camera operators, makeup artists, stunt people, etc., act as one, and that becomes the movie. When you evaluate each contributor's work, it all just boils down to physics, and the theme is not present in any physical way, yet the theme is encoded in the movie. We call that an emergent property.
 

idav

Being
Premium Member
How are you distinguishing processing from memory?
Memory just holds the data in whatever form can be meaningful for a particular language. The processing is doing something with that data.


Essentially that's what cells do. They communicate.


It isn't. Not in any meaningful way.

It is meaningful because a neuron is quite complex: all sorts of spike patterns, and the ability to connect to 100,000 other neurons simultaneously.


Done. It doesn't work. Why? Because current machine learning is fundamentally different from a type of learning humans (among other animals) can do.
Fundamentally, learning is a process that animals share. The difference with humans doesn't change that. What you're wanting is a computer to be able to go back and critique itself and its answers. I'm sure a robot would have no problem doing the type of learning you're asking for, with proper programming.


Signals & information do not necessarily a code or language make. I can describe pebbles, coins, skin cells, etc., in terms of information. But a coin only becomes a binary unit of information if it is flipped and some interpreter is there to interpret the resulting state. Neural networks certainly involve signals. But it is misleading (especially if one is not familiar with how the brain works) to refer to this as code or as language.
Data can be anything, but it certainly doesn't mean anything if you don't speak the language of pebbles, coins, skin cells, etc.


This is only one particular type of "language". Binary. It is certainly not the only type of information, let alone language, there is. And it does not seem to conform to the fundamental structure of reality. Either way, to assume all code or languages are 0's and 1's is very problematic. It forces you to reduce all signals to a sequence of equally likely binary states.
Binary is a way to send data in terms of on/off switches. That would go for anything, really; it either happens or it doesn't.

Why? It isn't true of qubits. These can be in a superposition of both |0> and |1> simultaneously.

You're just speaking to the method. Encoded in the neuron(s) is a picture of your car as you remember it. There isn't a jpeg in there!

Again, this separates memory, code, and processing. One central reason behind the ability of humans to understand concepts is probably that this doesn't happen. When you remember your car, you aren't required to call on a single (or multiple) particular memories. In fact, you can picture your car without recalling any actual experience with it. More importantly, what "code from the neurons" allows you to do this?
I've not argued the efficiency of the brain. The code is the signals, the spikes coming from the neuron. The neurons are communicating something, be it a car, truck, or plane.

We haven't. In fact, there are proofs that a computer is incapable of processing concepts. These aren't by any means universally accepted, but then again the Church-Turing thesis is not without critics.
Because we take our knowledge for granted. Concepts are all in what you know about something. It takes us years to really get language; a machine should be allowed the same if you want to make the comparison and for language to be truly meaningful to a robot.
What we've done is find out that things we took for granted as "simple" are much more complicated than we thought (which was why as soon as computers were developed A.I. was only years away; and it remained "only years away" for decades).
No doubt, and we will likely need to relearn a few things. This doesn't mean it isn't possible.


But there is a difference between a machine and a computer. Certainly, brains are in some sense machines, and they do compute. But this does not make them computers. It does not mean that all thought can be a matter of computation. And if it were true that all thought is computation, then this could be proved.


I said a brain is a machine, just like any other organ in our body. The brain just happens to have a lot more functions crucial to cognition.
Babies don't just learn language overnight. It takes years of hearing people explain things and associating objects with words.


Because it doesn't take the same kind of experience, nor would an infinite amount of experience using current learning algorithms somehow make a computer go from meaningless number crunching to A.I. That's sci-fi ("They say it got smart. A new order of intelligence"). When someone asks you "how's it going," you don't need to root through a database filled with all of your experiences, sorting through and calculating probabilities just to figure out what the question means.
It works because, if you have the data and the knowledge, then presumably you have the answers.
More importantly, you can readily process novel concepts and abstract from specifics. For example, imagine I had no idea what a kangaroo was. I'm out exploring Australia with a guy named Dundee (also known as Crocodile Dundee), and I see this weird-looking, giant, rabbit-like creature hopping around. I ask Dundee, "What's that?" He says it's a kangaroo. A few hours later, I see another similar-looking, giant, rabbit-like creature. But I don't ask this time. Because even though this isn't the same kangaroo I saw before, I now have a new category, "kangaroo," which I can readily extend to specific kangaroos.
I understand that issue. A kid might do the same thing, like calling a lion a "kitty" or something. Babies also go through a period where facial recognition isn't possible and they can't really tell people apart. The brain works very hard to sort all this out for us, as any processing requires. Yes, it takes a sophisticated program, no doubt.
 

Copernicus

Industrial Strength Linguist
Has a machine passed the Turing test? And what consciousness is, is not even agreed upon. Knowing it is out of the question.
AI is about more than a computer being able to pass the Turing Test, and scientists have made a lot of progress in understanding the neurological underpinnings of many human mental functions, including consciousness. It is easy to make facile claims about our level of ignorance, but you should not assume that you know what that level is on the basis of your own ignorance.

Just as human-built robots work as per the dictates of a program and have no understanding, similarly, if humans were really only robots, then human work would also be devoid of any understanding.
The question is not whether robots built under our current level of technology are able to understand things the way humans do. It is whether robots can have free will in the same way that humans do.

This thread isn't really about consciousness or understanding in robots, which are different mental functions. It is about the decision-making process. How would it be different, in principle, in a robot than in a human? We can build machines that make decisions in much the same way that humans do--by calculating options and weighting actions against desired outcomes. What is "free will" beyond the ability to choose to do what one wants? Desires are not a matter of choice, but actions are. Robots behave the same as humans in that respect. It does not matter whether the desires and goals were programmed in by humans, gods, or happenstance. They are what determine choices.
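To make that concrete, here is a minimal sketch in Python of such a decision procedure. The actions, outcomes, and weights are purely illustrative assumptions, not a claim about how any real robot (or brain) is coded.

[code]
# The agent scores each available action against its desires and picks
# whichever one it "most desires": the winning desire determines the choice.
def choose(actions, desires):
    """actions: action name -> {outcome: how strongly the action produces it}
    desires: outcome -> how much the agent wants that outcome"""
    def desirability(action):
        return sum(strength * desires.get(outcome, 0.0)
                   for outcome, strength in actions[action].items())
    return max(actions, key=desirability)

actions = {
    "eat cake":   {"pleasure": 0.9, "health": -0.4},
    "go for run": {"pleasure": 0.3, "health": 0.8},
}
desires = {"pleasure": 1.0, "health": 0.7}
print(choose(actions, desires))  # "go for run": that desire won the competition
[/code]

Nothing in the sketch cares whether the desires were installed by programmers, gods, or happenstance; they simply determine the choice.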

Now, we can go on to talk about consciousness and self-awareness, but bear in mind that most of our choices are not made on a conscious level. We focus our attention on certain aspects of our behavior, but the mind operates on many different levels of awareness. The more sophisticated autonomous machines nowadays have a rudimentary awareness of their environment through peripheral sensors, and of their own "bodies"--again, through sensors. They calculate actions and update plans on the basis of information that comes in from such sensors. They are not self-aware in the same sense that animals with very complex brains are, but biological evolution has been shaping that complexity over many hundreds of millions of years. Robot "brains" haven't been around quite that long yet, but humans interacting with their machines have discovered that the machines need to be aware of their own systems in order to function more usefully. Self-awareness is coming.

I will try to keep it simple. Creation of a bangle (a shape and a name) does not destroy the gold. Again, destruction of the bangle does not destroy the gold.
Indeed not, but there was no bangle until it was created. That is something to meditate on.

The coming and going of ego-beings does nothing to the consciousness, which is the reality.
I get the bangle bit. This last sentence strikes me as gibberish. :shrug:
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
The argument for the "humans are machines" is that all brain activity resolves to neural interaction that is governed by physics.
The second part (about physics) is actually an important one. What does it mean to say that the brain is "governed by physics"? Usually we take such statements to mean that we can explain something like the brain entirely (at least in principle) by understanding all the molecules, atoms, cells, etc., the forces acting on them and produced by them, and their interactions. In other words, we can reduce the brain (or any physical system) to its constituent parts and the laws of physics, and understand it entirely. Syropoulos, in his book Hypercomputation: Computing Beyond the Church–Turing Barrier (Springer, 2008), makes an interesting statement on this:
"However, it is quite surprising, particularly for nonspecialists, that biologists are gradually abandoning reductionism in favor of emergence, which is roughly the idea that some properties of a system are irreducible." (p. 105; italics in original). Syropoulos may count as a "specialist" rather than a "nonspecialist" when it comes to biology, in that, as a researcher in computability theory and computer science, one of his areas of focus is molecular computing. That said, his work has clearly been more about computability theory than biology, and I'm not sure to what extent his statement is accurate. It is certainly true that systems biology is characterized by this non-reductionist standpoint. But biology is quite a broad field. Nor is the emergence stance unique to biology. It is found in diverse fields, from physics and computational sciences to philosophy and religious studies. More importantly, it's not very clear what emergence is (or may be). Specialists from the likes of Daniel Dennett (philosopher of mind) to Philip W. Anderson (Nobel-prize-winning physicist) endorse "emergence" in some sense. But Dennett (who references emergence in passing) doesn't appear to be very concerned with a definition, and Anderson states that emergent phenomena are "not violations" of physics, but not "logical consequences" of them either.

But neither definition precludes building a machine capable of producing a "mind" as an emergent property. The real problem, I think, is what it means to say that something is, or is not, a violation of physical laws.

For one, as Paavo Pylkkänen points out in Mind, Matter, and the Implicate Order (a thorough exposition on his adoption/adaptation of Bohmian mechanics and Bohm's own approach to the "mind"), emergence without explanation doesn't help us much. Pylkkänen also goes beyond other treatments of physics and the mind (esp. Stapp's and Penrose's) by saying not only that consciousness cannot be explained by classical physics, but even that a new physics for biology and/or consciousness may be necessary. As radical as this may seem, it was the inability of biological physicists to deal with biological systems which formed a major impetus behind the systems biology framework. However, in order to know what kind of "new physics," if any, may be called for, we'd have to compare it to modern physics.

Modern physics, unfortunately, faced a crisis at the turn of the century, and apparently did not succeed at sweeping it under the rug. The two foundations of modern physics, relativity and quantum theory, stand in rather stark contradiction to one another: "Furthermore, although both relativity and quantum theory work brilliantly in their own domains, their basic concepts seem to be in complete contradiction with each other. Thus, as Bohm has underlined, relativity emphasizes continuity, locality, and determinism, while quantum theory suggests that the exact opposite is fundamental, namely discontinuity, non-locality, and indeterminism" (ibid).

Even on their own, however, both QM and relativity have come up against increasing challenges to their respective epistemological frameworks. For relativity, superluminal signals, causal paradoxes, and even the ontological nature of spacetime are problematic. For QM, the problem is mainly understanding what the problems are, as the problem of interpretation/measurement in QM, supposedly settled (or ignored, depending on whom one refers to) by the Copenhagen interpretation and the subsequent "standard" model(s), turns out to be not really settled at all. Our increasing ability to work with quantum systems (and allow them to "cohere" in quantum states) has increased our ability to move from seemingly paradoxical thought experiments to actual empirical implementation of these.

As mainstream physics literature has (increasingly) consisted of papers/studies demonstrating contradictions between different aspects of modern physics (or with fundamental ideas about the nature of reality, such as causation, reductionism, locality, etc.), what would "violating" modern physics even entail? If we have (and we do) experimental evidence of things like the superposition state (or "delocalized state") of a 430 atom compound (far from the subatomic level), experimental measurements which appear to violate classical logic by observing the same entity as two different things at the same time, experimental evidence of retrocausation, and so on, what's left to violate?
 
Last edited:

LegionOnomaMoi

Veteran Member
Premium Member
Memory just holds the data in whatever form can be meaningful for a particular language. The processing is doing something with that data.

This is a conceptual distinction. With a computer, we can certainly contrast RAM and the CPU with this approach in a meaningful way, but what about the brain? Where is data "held" such that something else "processes" it (and what is that something else)? You have a constantly active brain, with memory distributed throughout, and no clear way to separate "data" from "data processing". In fact, it seems likely that any such distinction is at least largely arbitrary and probably rather fundamentally flawed.



Essentially that's what cells do. They communicate.

You misunderstood. I meant, "Yes, I am saying that neither groups of neurons nor individual neurons speak a language." And the fact that cells "communicate" doesn't say much at all about how memory, processing, and cognition in general happen, as all cells communicate, many as part of a larger system of interconnected cells, but very few such systems allow associative learning/conceptual processing. In fact, at the moment only brains do. Neurons communicate, as do all cells, but most neural (nervous) systems are incapable of associative learning. Poking a sea slug causes neural communication and non-associative learning, but this will never turn into something even rats are capable of: internal representation of abstractions/concepts.


It is meaningful because a neuron is quite complex: all sorts of spike patterns, and the ability to connect to 100,000 other neurons simultaneously.
Weather is really complex. A swinging pendulum is really complex. Single-celled organisms are really, really complex. But complexity doesn't make something a computer. Cloud dynamics are a very complex result of lots of connected processes and systems. But clouds are not computers.


Fundamentally, learning is a process that animals share. The difference with humans doesn't change that.
No. But the difference between the types of learning certain animals are capable of, and certain others are not, makes all the difference in the world. The most important and most basic distinction here is associative vs. non-associative learning. Some animals can do both, but most can do only the second. It is this kind of learning (learning without understanding or awareness) that computers are currently capable of.
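To illustrate the distinction, here is a minimal sketch in Python. The decay factor and learning rate are arbitrary illustrations, and the update rule is merely Rescorla-Wagner-like; it is not a model of any real organism.

[code]
# Non-associative learning (habituation): the response to a repeated
# stimulus simply fades. Nothing is linked to anything else.
class SeaSlug:
    def __init__(self):
        self.response = 1.0
    def poke(self):
        self.response *= 0.8   # same stimulus, weaker response
        return self.response

# Associative learning: repeatedly pairing stimulus A with stimulus B
# strengthens a link, so that A comes to predict B.
class Associator:
    def __init__(self):
        self.link = 0.0
    def pair(self, rate=0.3):
        self.link += rate * (1.0 - self.link)  # link grows toward 1.0
    def expects_b_given_a(self):
        return self.link > 0.5

slug = SeaSlug()
for _ in range(5):
    slug.poke()                # response decays; no association is formed

learner = Associator()
for _ in range(5):
    learner.pair()
print(learner.expects_b_given_a())  # True: A now predicts B
[/code]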

What you're wanting is a computer to be able to go back and critique itself and its answers. I'm sure a robot would have no problem doing the type of learning you're asking for, with proper programming.

That's not what I want out of a computer. Assessing states (external and internal) through procedures is nothing new. Internal representation of abstract, conceptual content, on the other hand, would be something quite new, and that's what I would like.

Data can be anything, but it certainly doesn't mean anything if you don't speak the language of pebbles, coins, skin cells, etc.

Pebbles, coins, and skin cells don't have language.

Binary is a way to send data in terms of on/off switches. That would go for anything, really; it either happens or it doesn't.

Let's use actual language here and see if we can't find a way in which this appears to be incorrect. When I was in 2nd grade, I "read" Moby Dick. The scare quotes there are because, while it is true that I read every word and every page, I understood almost none of it. In fact, it may not even be true to say I read every word, as can you really say you read a word or phrase you didn't understand? You can scan the characters in the line καίτοι ἀληθές γε ὡς ἔπος εἰπεῖν οὐδὲν εἰρήκασιν, but if you can't read ancient Greek, can you really be said to have read these words? I use this example because it came up in a conversation not long ago about literature and reading, and my aunt told me that what I had done "wasn't really reading." And in most ways that matter, it wasn't. But not in all ways.

Another example: there's a joke (not PC, and offensive to many) I've come across on things like coffee mugs which reads "AA is for quitters". Is someone who quits smoking or drinking a "quitter"? If I decide to stop doing something, under what exact, binary conditions have I "quit" doing that something? For example, if I become comatose, incapable of higher cognitive functions, have I quit "thinking" or quit "talking" in the same way I can quit smoking or quit a job?

If I drive someone crazy, have I driven them anywhere? If I make someone angry, upset, feel insulted, and hurt, all through a single insult, how is this binary? And have I made them angry in the same way I might make my bed?

Things get even worse when we get to properties/features of things, which we want computers to recognize and understand. If a person is tall at 7 feet, are they not tall at 6? Is a cell which is part of an animal "alive" in the same way a single celled organism is? Is a virus alive?

Information theory, even in physics, runs into a problem rather quickly when trying to deal with language. It's all well and good to say everything is data or bits, but unless you have some method of explaining how bits get turned into concepts, it doesn't help that much here.
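To see the problem concretely, compare a binary predicate with a graded one. This is a minimal sketch; the cutoff, midpoint, and curve are arbitrary assumptions, not a theory of how "tall" actually works.

[code]
import math

def tall_binary(height_ft):
    return height_ft >= 6.5  # hard cutoff: 6.49 ft "not tall", 6.5 ft "tall"

def tall_graded(height_ft, midpoint=6.0, spread=0.5):
    """Smooth membership in "tall", between 0 and 1 (a logistic curve)."""
    return 1.0 / (1.0 + math.exp(-(height_ft - midpoint) / spread))

for h in (5.5, 6.0, 6.5, 7.0):
    print(h, tall_binary(h), round(tall_graded(h), 2))
# 5.5 False 0.27 / 6.0 False 0.5 / 6.5 True 0.73 / 7.0 True 0.88
[/code]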

You're just speaking to the method. Encoded in the neuron(s) is a picture of your car as you remember it. There isn't a jpeg in there!

The method is what matters. Simply defining "learning" as processing bits doesn't get us very far. It doesn't tell us what makes humans capable of things computers can't do but that we want them to be able to do.

The neurons are communicating something, be it a car, truck, or plane.

To whom or what are they communicating it?


Because we take our knowledge for granted.
This is true, but in quite the opposite way you think. Before computers were around, and the Turing machine was only theoretical, the fact that we had an idea of automated computation already meant speculation about A.I. Once we had computers, A.I. was a sure thing in a matter of a few years. Until people realized how extremely complex all the things they took for granted were. Evidence suggests that newborn infants can recognize their mothers' faces. Learning novel concepts, recognizing objects, etc., comes so easily to us that it wasn't until we tried to get computers to do this that we realized how hard it really was. It's hard because computers deal only with rules, with procedures. Which means that in order to get them to do anything, you need an explicit procedure. But we could construct the most sophisticated learning machine on the planet using current algorithms and cutting-edge technology, expose it to enormous amounts of language use (far, far, far, far more than humans require), and it would never learn language. Because formal rules (the type computers need) involve specific procedures to deal with specific input and do specific things as a result of this input. Like an electric drill, or a jackhammer, or even a crank-operated generator, computers are machines that deal with doing and nothing else. I can make a car do all sorts of nifty stuff (well, maybe I can't, but stunt drivers and mechanics can). I can make computers do this too. But it's all about making the machine take the input, mindlessly do whatever it is I constructed it to do, and return some result.
When we learn language, we don't just learn rules. We learn words & phrases. We even learn words that mean one thing in one context, and something totally different in another. Even before Chomsky, philosophers of language were trying to treat language in terms of formal logic. And for decades, linguists have tried to reduce language to rules manipulating words (or atomic units of language) such that they could generate any grammatical sentence from only the meaning of those words (or atomic units) and rules. They couldn't. Because even allowing for word meaning isn't enough. In human languages, grammar (the rules) is often meaningful as well, perhaps always meaningful. So how do we get a machine to understand what a language means using only meaningless procedures/rules?
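To make the point concrete, here is a toy sketch in Python of rules blindly generating grammatical strings. The grammar is a made-up illustration, not a claim about any real parser or about Chomsky's formalism; note that nothing in it knows what any word means.

[code]
import random

# A tiny context-free grammar: each symbol maps to possible expansions.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "Adj", "N"]],
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"], ["furious"]],
    "N":   [["ideas"], ["pebbles"]],
    "V":   [["sleep"], ["argue"]],
    "Adv": [["furiously"], ["quietly"]],
}

def generate(symbol="S"):
    """Expand a symbol by mechanically applying rules; no meaning involved."""
    if symbol not in GRAMMAR:
        return [symbol]                    # terminal word
    production = random.choice(GRAMMAR[symbol])
    return [word for sym in production for word in generate(sym)]

print(" ".join(generate()))  # e.g. "colorless green ideas sleep furiously"
[/code]

Every output is grammatical by construction, and none of it means anything to the machine.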


It takes us years to really get language; a machine should be allowed the same if you want to make the comparison and for language to be truly meaningful to a robot.

Humans take years because (among other things) it takes a certain amount of time to obtain the necessary exposure to language. That's not true for computers. With my learning algorithms in place, I get the computer to "experience" in a few minutes the exposure to language a human might not get in a lifetime.


Babies don't just learn language overnight. It takes years of hearing people explain things and associating objects with words.
That's because it takes years of exposure and trial and error for children to get the necessary experience with language. It doesn't for a computer, because I can run thousands of trials and provide exposure to enormous amounts of language in under a day.



Babies also go through a period where facial recognition isn't possible and they can't really tell people apart.
It appears that from birth they may be able to recognize their mothers.

The brain works very hard to sort all this out for us, as any processing requires. Yes, it takes a sophisticated program, no doubt.
The issue isn't whether it takes a sophisticated program, but rather what type of sophistication it takes.
 
Last edited:

atanu

Member
Premium Member
The answer is that awareness is an emergent property of the sum of all the brain's mechanisms...

If you consider human awareness an emergent property of a particular structure, then it can only arise from that particular structure --- and not from anything else.
 

atanu

Member
Premium Member
AI is about more than a computer being able to pass the Turing Test, ----

What more? Please specify.
Has any machine passed the Turing test?

This thread isn't really about consciousness or understanding in robots, which are different mental functions. ------ We can build machines that make decisions in much the same way that humans do--by calculating options and weighting actions against desired outcomes. What is "free will" beyond the ability to choose to do what one wants? Desires are not a matter of choice, but actions are. Robots behave the same as humans in that respect. It does not matter whether the desires and goals were programmed in by humans, gods, or happenstance. They are what determine choices.

Desires are also a matter of choice. That actually is the only choice that humans have. Once a desire is picked, the desire unfolds its law.

Robots do not know that, unless some wise programmer installs that logic.

Now, we can go on to talk about consciousness and self-awareness, but bear in mind that most of our choices are not made on a conscious level.

That is where we differ. Yes, 99 times out of 100, we are unconscious. But the 1 out of 100 makes the difference.

Self-awareness is coming.

This is worse than gibberish. To bring about self-awareness in a robot, you yourself have to be 100 percent self-aware. That is impossible in terms of mathematics, as per Gödel. That is also impossible as per Hawking. That is impossible as per all spiritual knowledge. A product cannot know its source. A novel cannot understand its writer.

What we term as consciousness is the manifest aspect. We do not know what it actually is or where it resides.

Indeed not, but there was no bangle until it was created. That is something to meditate on.

The bangle is in the mind of the jeweller as an idea. And the whole world is just that.

I get the bangle bit. This last sentence strikes me as gibberish. :shrug:

You need to mature a bit more. :D
 
Last edited:

PolyHedral

Superabacus Mystic
A product cannot know its source. A novel cannot understand its writer.
A novel is not a mind or algorithm. Children can understand their parents. They can also surpass them.

Self-awareness is coming.


(More informative posting will resume... some time when I'm less busy.)
 
Last edited:

Willamena

Just me
Premium Member
Please explain what you mean.
Originally Posted by Copernicus
Everyone has choices. Choice always involves conflicting desires, and choice is determined by what the agent (human or robot) most desires--the desire that wins out in the competition. What we don't consciously choose--the determining element here--is what we most desire to do. In most cases, we tend to think of free will as the unrestrained ability to satisfy our greatest desire in the case of conflicting goals or desires.
Which desire won out, once that is determined (if anyone bothers to determine it), is determined after the fact. Even (and especially) in explaining human behavior, we analyse after the fact, and an explanation, however good, may just as easily become rationalization.
 
Last edited: