• Welcome to Religious Forums, a friendly forum to discuss all religions in a friendly surrounding.


Humans are like robots. Choice is determined.

idav

Being
Premium Member
It's nice that you want to get all technical now, since you weren't very technical in your OP. When you say "choice is a determined process" in relation to a robot, most people will automatically assume robots are programmed to act a certain way or perform certain tasks. You even used the word "programmed" in your OP, so I'm not really sure where the confusion is.

I don’t need to look up any words in the dictionary. I already know what autonomous means and you basically said the same thing I did.

There are some clear distinctions between people and machines (robots), which is why I gave a few of the more obvious ones. Like sex: I wouldn't say sex is determined. It just happens.

And my last comment about people being like robots, running into walls 50 times before they realize X doesn't compute, was more along the lines of sarcasm. Maybe you missed it, but it was intended as a pun.

Now where were we?
Similar to machines being programmed to do such and such. Humans are programmed to want sex. Good example.
 

Willamena

Just me
Premium Member
Similar to machines being programmed to do such and such. Humans are programmed to want sex. Good example.
I think things that are hard-wired stand in contrast to programming.

Programming allows us to say "no" to something like sex.
 

uberrobonomicon4000

Active Member
Similar to machines being programmed to do such and such. Humans are programmed to want sex. Good example.
Well, my previous example was simply that robots can't reproduce. Sure, there are some similarities and differences, but as long as people toss around words like "similar" and "like", it's safe to say humans are not robots.
 

idav

Being
Premium Member
I think things that are hard-wired stand in contrast to programming.

Programming allows us to say "no" to something like sex.

For the most part biological functions are preprogrammed, which is good in some cases, because I might forget to breathe or to keep my heart beating.
 

Thief

Rogue Theologian
I think things that are hard-wired stand in contrast to programming.

Programming allows us to say "no" to something like sex.

I'm not so sure.
I've said no ....on more than one occasion.

If my program had been up and running the response would have been there.

I see denial as a point of spirit....not program.
 

LegionOnomaMoi

Veteran Member
Premium Member
I know they don't process the same. They weren't built to process the same. They are however both processing something from memory.

How are you distinguishing processing from memory?


You're just saying that it isn't a single neuron but a group of neurons. Does this mean the group isn't speaking a language?:facepalm:
Yes.


Yes one single neuron is like a computer
It isn't. Not in any meaningful way.


But it takes us years to learn this stuff, so don't take it for granted. Give the computer the years of learning experience required by a human to learn language and then you can compare.

Done. It doesn't work. Why? Because current machine learning is fundamentally different from a type of learning humans (among other animals) can do.


The neurons just send a code that has to be decrypted.
Signals & information do not necessarily a code or language make. I can describe pebbles, coins, skin cells, etc., in terms of information. But a coin only becomes a binary unit of information if it is flipped and some interpreter is there to interpret the resulting state. Neural networks certainly involve signals. But it is misleading (especially if one is not familiar with how the brain works) to refer to this as code or as language.


Within any given network there are tons of languages passing through all as ones and zeros.
This is only one particular type of "language". Binary. It is certainly not the only type of information, let alone language, there is. And it does not seem to conform to the fundamental structure of reality. Either way, to assume all code or languages are 0's and 1's is very problematic. It forces you to reduce all signals to a sequence of equally likely binary states.

Same must be true for neurons.
Why? It isn't true of qubits. These can be in a superposition of both |0> and |1> simultaneously.
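To make the contrast concrete: a classical bit is always exactly 0 or 1, while a qubit's state is a pair of complex amplitudes whose squared magnitudes give the measurement probabilities. A minimal sketch in plain Python (no quantum library assumed, and the state is of course only simulated):

```python
import math

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1;
# a classical bit, by contrast, is always exactly 0 or 1.
# An equal superposition of |0> and |1>:
a = b = 1 / math.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0 = abs(a) ** 2  # probability of measuring 0
p1 = abs(b) ** 2  # probability of measuring 1

print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Until measured, the state is genuinely both at once, which is exactly what a sequence of definite 0s and 1s cannot express.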

When a person remembers what their car looks like, what is it that is happening? First the person had to see their car and store it in memory. When trying to remember the car, the code from the neurons must be translated into something meaningful for our consciousness. There is a language being used that has to be decoded.

Again, this separates memory, code, and processing. One central reason behind the ability of humans to understand concepts is probably that this doesn't happen. When you remember your car, you aren't required to recall any single (or multiple) particular memory. In fact, you can picture your car without recalling any actual experience with it. More importantly, what "code from the neurons" allows you to do this?

We've proved that our ability to think at high levels isn't as far out as we thought.
We haven't. In fact, there are proofs that a computer is incapable of processing concepts. These aren't by any means universally accepted, but then again the Church-Turing thesis is not without critics.

What we've done is find out that things we took for granted as "simple" are much more complicated than we thought (which was why as soon as computers were developed A.I. was only years away; and it remained "only years away" for decades).

Everyone should agree that the brain is the most powerful machine we know of in the universe. How is it more than a machine?

But there is a difference between a machine and a computer. Certainly, brains are in some sense machines, and they do compute. But this does not make them computers. It does not mean that all thought can be a matter of computation. And if it were true that all thought is, then this could be proved.

Yeah it's all about math when babies are learning it too as their brains map out proper configurations.

How so?


I'd say "hows what going". ;)

It takes experience to learn those meanings, why hold that against a computer.
Because it doesn't take the same kind of experience, nor would an infinite amount of experience using current learning algorithms somehow make a computer go from meaningless number crunching to A.I. That's sci-fi ("They say it got smart. A new order of intelligence"). When someone asks you "how's it going" you don't need to root through a database filled with all of your experiences, sorting through and calculating probabilities, just to figure out what the question means.

More importantly, you can readily process novel concepts and abstract from specifics. For example, imagine I had no idea what a kangaroo was. I'm out exploring Australia with a guy named Dundee (also known as Crocodile Dundee), and I see this weird-looking giant rabbit-like creature hopping around. I ask Dundee, "What's that?" He says it's a kangaroo. A few hours later, I see another similar-looking giant rabbit-like creature. But I don't ask this time. Because even though this isn't the same kangaroo I saw before, I now have a new category, "kangaroo", which I can readily extend to specific kangaroos.

By contrast, if I had a sophisticated program set up with a camera, and fed the program a picture of the first kangaroo, not only could it not recognize the second kangaroo as a kangaroo, it couldn't even recognize the same kangaroo from a different angle. It would take multiple training sessions to get the computer to distinguish "kangaroo" from "non-kangaroo". And if one kangaroo happened to have a baby kangaroo in its pouch, or if I showed it a wallaby, all that learning might be completely destroyed and the computer might be back at square one. This is because, unlike humans, computers do not represent abstractions (like the concept "kangaroo"), so in order to get a computer to distinguish a kangaroo from other animals, I need to repeatedly expose it to kangaroos and to other animals so that over time it adapts (like the sea snail) and pairs particular features such that, given input with these features, it will output "kangaroo", and given input without them, it will output "not kangaroo". But once I introduce a wallaby, which has many of the same features, the computer may output "kangaroo". And when I input "wrong", the pattern of features which represents the way the computer distinguishes "kangaroo" from "not kangaroo" may radically shift, such that it can't distinguish kangaroos anymore.
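The failure mode described above can be sketched with a toy perceptron. The feature vectors and labels below are invented purely for illustration (this is not a real vision system): the learner pairs surface features with the label, and anything sharing those features gets the same answer.

```python
# Toy perceptron: learns "kangaroo" (1) vs "not kangaroo" (0) from
# hand-made feature vectors [hops, has_pouch, large]. All data is
# invented to illustrate the point, not a real vision system.

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train(w, samples, epochs=10, lr=1.0):
    for _ in range(epochs):
        for x, label in samples:
            err = label - predict(w, x)  # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

samples = [([1, 1, 1], 1),   # kangaroo: hops, pouch, large
           ([0, 0, 1], 0),   # e.g. a cow: large, doesn't hop
           ([1, 0, 0], 0)]   # e.g. a rabbit: hops, small
w = train([0.0, 0.0, 0.0], samples)

# A wallaby shares most kangaroo features ([1, 1, 0]): the learner,
# trained only on the samples above, cannot tell it apart.
print(predict(w, [1, 1, 1]))  # kangaroo -> 1
print(predict(w, [1, 1, 0]))  # wallaby  -> also 1
```

And if we now feed the wallaby back in labeled "not kangaroo" and retrain, the weights shift, which is the "radical shift" described above: the new boundary can undo what was learned about kangaroos.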
 

Leonardo

Active Member
That's sci-fi ("They say it got smart. A new order of intelligence"). When someone asks you "how's it going" you don't need to root through a database filled with all of your experiences, sorting through and calculating probabilities, just to figure out what the question means.

Ah... you can't say the brain isn't sorting through information, nor calculating probabilities. You have no evidence to make that claim, and in fact you claim no one has any clue as to how the brain processes information!


But many animals do calculate a risk-to-reward analysis, so probabilities are managed. The most likely approach to bio-neural probability analysis is event frequencies: those events that happen more often have a higher probability of happening than those with less frequency, and since brains use an associative memory, event frequencies can be categorized with various exceptions!
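The event-frequency idea sketched above is just counting: the more often an event is observed, the higher its estimated probability. A minimal sketch (event names invented for illustration):

```python
from collections import Counter

# Frequency-based probability estimate: events observed more often get
# a higher estimated probability. Event names are made up.
events = ["food", "food", "threat", "food", "mate", "food", "threat"]

counts = Counter(events)                                  # tally each event
probs = {event: n / len(events) for event, n in counts.items()}

print(probs["food"])   # 4 of 7 observations -> highest estimate
print(probs["mate"])   # 1 of 7 -> lowest estimate
```

An associative memory could then key these counts by context, so "threat" near water and "threat" in open grass accumulate separate frequencies, giving the categorized exceptions mentioned above.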


As for machines: software, say an OS, can capture the status of a machine. I could write code so that when one asks the machine "how's it goin" it gives a response relative to its pertinent states, such as power, CPU temperature, frequency of errors, internet connectivity, etc., and gives a health level or even a frustration level. We could even quantify it as either a positive or negative status, with comparative grades against past experiences, effectively turning the status captures into a form of episodic memory that forms a gestalt! :flirt:
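A minimal sketch of that idea; the state values below are hard-coded stand-ins for what a real OS query (e.g. reading /proc, or a library like psutil) would supply:

```python
# Answer "how's it goin" from machine state. The thresholds and the
# state values are invented stand-ins for real OS measurements.

def mood(cpu_temp_c, error_rate, online):
    score = 0
    score += 1 if cpu_temp_c < 70 else -1    # running cool?
    score += 1 if error_rate < 0.01 else -1  # few recent errors?
    score += 1 if online else -1             # network reachable?
    if score >= 2:
        return "Pretty good, thanks for asking."
    if score >= 0:
        return "Hanging in there."
    return "Honestly? Not great."

print(mood(cpu_temp_c=55, error_rate=0.001, online=True))
```

Logging each score over time would give the comparative grades against past experiences described above, a crude form of the "episodic memory" being suggested.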
 

Willamena

Just me
Premium Member
For the most part biological functions are preprogrammed which is good in some cases cause I might forget to breathe or keep my heart beating.
I still think they are hard-wired, rather than programmed or preprogrammed.
 

Copernicus

Industrial Strength Linguist
That is nice you want to get all technical and everything now since you weren’t very technical in your OP. When you say: “choice is a determined process” in relation to a robot, most people can automatically assume they are programmed to act certain way or perform certain tasks. Even you used the word programmed in your OP so I’m not real sure where the confusion is.
It is important to remember that brains are not general computing devices that execute machine instructions. It is their physical configuration that determines behavior, not an actual "program". Thought takes place as brains reconfigure themselves. I still think that the OP was as technical as it needed to be. Legion and Leonardo are getting too tangled up in the weeds, as far as I'm concerned, although their discussion has raised a lot of interesting issues, not all of which bear on the OP.

I don’t need to look up any words in the dictionary. I already know what autonomous means and you basically said the same thing I did.
Sorry if I wasn't clear about what I objected to in your definition. You defined the concept in terms of a machine learning to adjust to its environment. The concept of autonomy is about control, not factors that might be useful in enabling autonomy.

There are some clear distinctions between people and machines (robots). Which is why I gave a few of the more obvious ones. Like sex, I wouldn’t say sex is determined. It just happens.
I'm not really sure what your point is here. Are you saying that sex is random, not caused by anything? AFAIK, it is a sequence of causally-related events. What aspect of sex do you believe is not determined? And please remember that this is a family-oriented Internet forum. :)

And my last comment about people being like robots, running into walls 50 times before they realize X doesn't compute, was more along the lines of sarcasm. Maybe you missed it, but it was intended as a pun.
And I responded in kind. I'm glad that you have a sense of humor, but your sarcasm did not contribute anything useful to the discussion in this case. Both robots and humans are "programmed" not to run into obstacles. When either of them does that, it is usually an unintended consequence that may require a trip to the repair shop and/or doctor.

Now where were we?
The relevant question is where we are now. Your move.
 

uberrobonomicon4000

Active Member
It is important to remember that brains are not general computing devices that execute machine instructions. It is their physical configuration that determines behavior, not an actual "program". Thought takes place as brains reconfigure themselves. I still think that the OP was as technical as it needed to be. Legion and Leonardo are getting too tangled up down in the weeds, as far as I'm concerned, although their discussion has raised a lot of interesting issues, not all of which bear on the OP.


Sorry if I wasn't clear about what I objected to in your definition. You defined the concept in terms of a machine learning to adjust to its environment. The concept of autonomy is about control, not factors that might be useful in enabling autonomy.


I'm not really sure what your point is here. Are you saying that sex is random, not caused by anything? AFAIK, it is a sequence of causally-related events. What aspect of sex do you believe is not determined? And please remember that this is a family-oriented Internet forum. :)


And I responded in kind. I'm glad that you have a sense of humor, but your sarcasm did not contribute anything useful to the discussion in this case. Both robots and humans are "programmed" not to run into obstacles. When either of them does that, it is usually an unintended consequence that may require a trip to the repair shop and/or doctor.


The relevant question is where we are now. Your move.
:biglaugh:– It just seems like you are all over the place with this subject.

Okay, so are you saying people can’t change their minds about things, because they only have one way of processing information(one way of thinking)? I strongly disagree with that.

That is like saying Joe doesn't like spaghetti because the only spaghetti he has ever eaten is his mom's.
Since Joe hasn't tried any other spaghetti, he can't make a clear rational choice as to why he doesn't like spaghetti. He just knows he doesn't like it. So if you were to tell Joe the local pizza place has some of the best spaghetti in town, he would automatically cave and not even think about trying it out.

On the other hand, someone that doesn’t have a choice that is already determined wouldn’t act the same way as Joe. I wouldn’t classify all people as robots. All people don’t act or behave, or even think the same way. That is clear in our discourse over this topic.

Robots go through a learning phase, which is why I said they can interact with their environment. It really depends on what kind of robot you are talking about, because not all events, even with a robot, are determined or even predetermined. It might have some pre-configured capabilities or functions, but that is about it. Here is one I saw on CNN the other day, called Baxter. There is a short video clip on him. They were talking about how computers and robots may end up taking people's jobs because they can perform tasks at a cheaper rate than people.
Baxter Robot Heads to Work - WSJ.com
 

LegionOnomaMoi

Veteran Member
Premium Member
Legion...Why are you taking things out of context? I clearly said monitoring neurons in vivo.

We can do that too.
The best that can be done today requires surgery, and it's not effective at collecting large samples, as in thousands or millions of neurons. The researcher's book is titled "Rethinking Innateness"; what do you think that means? You didn't read the book and it's obvious...
Well, given that the entire book is about "rethinking innateness" in that it denies what you claim is innate (and is about, instead, a rebuttal against the "nature" over "nurture" side), I'd say it means that you didn't read it, or didn't understand it, and that you don't have any idea about what Elman thinks. However, if you would like to quote from it to show that I am incorrect, and to show that you are supported by anything in it, feel free. For me, the list on page 359 pretty much sums up a case against yours.
Ah.. you can't say the brain isn't sorting through information, nor calculating probabilities, you have no evidence to make that claim and in fact you claim no one has any clue as to how the brain processes information!
I didn't say they have no clue. I said we don't know how the brain is able to deal with concepts. But we do know a lot about how the brain works and about neural activity. We know, for example, through response time studies as well as neuroimaging, that the brain acts fundamentally differently from a computer:

"The biological “hardware” on which the brain is based is extremely slow. A typical interval between the spikes of an individual neuron is about 50 ms and the time needed to propagate a signal from one neuron to another is not much shorter than such an interval. This corresponds to a characteristic frequency of merely 100 Hz. Recalling that modern digital computers should operate at a frequency of 10^9 Hz and yet are not able to reproduce its main functions, we are lead to conclude that the brain should work in a way fundamentally different from digital information processing.

Simple estimates indicate that spiking in populations of neurons must be synchronized in order to yield the known brain operations. “Humans can recognize and classify complex (visual) scenes within 400-500 ms. In a simple reaction time experiment, responses are given by pressing or releasing a button. Since movement of the finger alone takes about 200-300 ms, this leaves less than 200 ms to make the decision and classify the visual scene” [Gerstner (2001)]. This means that, within the time during which the decision has been made, a single neuron could have fired only 4 or 5 times! The perception of a visual scene involves a concerted action of a population of neurons. We see that exchange of information between them should take place within such a short time that only a few spikes are generated by each neuron. Therefore, information cannot be encoded only in the rates of firing and the phases (that is, the precise moments of firing) are important. In other words, phase relationships in the spikes of individual neurons in a population are essential and the firing moments of neurons should be correlated."

From Manrubia, Susanna C.; Mikhailov, Alexander S.; Zanette, Damian H. (2004). Emergence of Dynamical Order: Synchronization Phenomena in Complex Systems. World Scientific Publishing Co., p. 312.
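The quoted estimate can be checked with quick arithmetic; the numbers below are taken directly from the passage above:

```python
# Back-of-envelope check: how many times can a single neuron fire
# during the decision window the quoted passage describes?

interspike_interval_ms = 50  # typical gap between a neuron's spikes
reaction_ms = 450            # scene classified within 400-500 ms
movement_ms = 250            # finger movement takes 200-300 ms

decision_window_ms = reaction_ms - movement_ms  # ~200 ms to decide
spikes_per_neuron = decision_window_ms // interspike_interval_ms

print(spikes_per_neuron)  # 4 -- only a handful of spikes per neuron
```

With only about 4 spikes available per neuron, a pure firing-rate code cannot carry enough information in time, which is why the passage concludes that spike timing (phase) must matter.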
 

Leonardo

Active Member
Well, given that the entire book is about "rethinking innateness" in that it denies what you claim is innate (and is about, instead, a rebuttal against the "nature" over "nurture" side), I'd say it means that you didn't read it, or didn't understand it, and that you don't have any idea about what Elman thinks. However, if you would like to quote from it to show that I am incorrect, and to show that you are supported by anything in it, feel free. For me, the list on page 359 pretty much sums up a case against yours.

Just as a test Legion what do I claim is innate?
 

Copernicus

Industrial Strength Linguist
:biglaugh:– It just seems like you are all over the place with this subject.
I'm not surprised that it seems that way to you, because you don't seem to understand my "position" very well. Yuk. Yuk.

Okay, so are you saying people can’t change their minds about things, because they only have one way of processing information(one way of thinking)? I strongly disagree with that.
I have never said that humans or robots were incapable of changing their minds, just that humans do it better. Both have alternative methods for assessing situations. A robot that couldn't change its mind would indeed keep running into walls.

That is like saying Joe doesn't like spaghetti because the only spaghetti he has ever eaten is his mom's. Since Joe hasn't tried any other spaghetti, he can't make a clear rational choice as to why he doesn't like spaghetti. He just knows he doesn't like it. So if you were to tell Joe the local pizza place has some of the best spaghetti in town, he would automatically cave and not even think about trying it out.
And you don't know anyone who behaves like Joe? I could imagine a robot with greater flexibility than Joe, assuming that it could be programmed to enjoy spaghetti.

On the other hand, someone that doesn’t have a choice that is already determined wouldn’t act the same way as Joe. I wouldn’t classify all people as robots. All people don’t act or behave, or even think the same way. That is clear in our discourse over this topic.
You either haven't read what I've been saying on this topic, or you didn't understand it. Everyone has choices. Choice always involves conflicting desires, and choice is determined by what the agent (human or robot) most desires--the desire that wins out in the competition. What we don't consciously choose--the determining element here--is what we most desire to do. In most cases, we tend to think of free will as the unrestrained ability to satisfy our greatest desire in the case of conflicting goals or desires.

Look at it this way. Human behavior is not totally unpredictable. Why? Because we understand what motivates people--the desires that drive their choices. Nor is human behavior totally predictable. Why? Because we don't know with absolute certainty what the desires are that motivate people. If someone holds a gun to your head and demands your money, he or she can reasonably assume a level of compliance. But you might go the suicidal route. It all depends on how much you value your life and think it in jeopardy. So you have free will even with a gun held to your head.

Robots go through a learning phase, which is why I said they can interact with their environment. It really depends on what kind of robot you are talking about, because not all events, even with a robot, are determined or even predetermined. It might have some pre-configured capabilities or functions, but that is about it. Here is one I saw on CNN the other day, called Baxter. There is a short video clip on him. They were talking about how computers and robots may end up taking people's jobs because they can perform tasks at a cheaper rate than people.
Baxter Robot Heads to Work - WSJ.com
Yeah, the media is always saying stupid things about robots (and automation in general). In this case, they are trying to play on the Luddite fears of their audience. That is the "hook" or angle that the writer uses to justify publishing it as news. But you are right that it all depends on what kind of robot you are talking about. I'm not talking about the kinds of robots that actually exist today. I hope that that much is clear to you. I'm talking about what kind of robot it is possible to develop in principle. At this point in time, given what we know about human cognition, there is no reason to believe that humans are anything other than very complex flesh-and-blood robots. There is nothing about our "free will" that is incompatible with determinism, unless you define "free will" in such a way that it really isn't about the freedom to choose to do what you want.
 

LegionOnomaMoi

Veteran Member
Premium Member
Just as a test Legion what do I claim is innate?
This:

The point here is that mammal brains are wired with sensory and limbic signaling that produce states that are positive or negative. The idea extends to the notion of Elman's eta idea that sensory systems are genetically coded to sensitize to the environment and preprocess information that allow neocortical processes to digest or learn.

First, "Elman's eta idea" is not an idea. It's a parameter in an artificial neural network. Second, Elman has explicitly stated that he clearly differentiates the human mind and learning from that of other mammals. Third, this learning is not about any genetic coding to "sensitize to the environment and preprocess information that allow neocortical processes to digest or learn." It's about a generic mechanism which allows non-genetic adaptation. Finally, you have applied his reinforcement algorithms to artificial neural networks (which say nothing about biological neural networks), taken the "reinforcement" of supervised learning in ANN programming out of context, and interpreted it as meaningful in the context of mammalian "innate" learning mechanisms in general.
 

Willamena

Just me
Premium Member
Everyone has choices. Choice always involves conflicting desires, and choice is determined by what the agent (human or robot) most desires--the desire that wins out in the competition. What we don't consciously choose--the determining element here--is what we most desire to do. In most cases, we tend to think of free will as the unrestrained ability to satisfy our greatest desire in the case of conflicting goals or desires.
The only problem I have with this is that it allows for using determinism as rationalization.
 

uberrobonomicon4000

Active Member
I'm not surprised that it seems that way to you, because you don't seem to understand my "position" very well. Yuk. Yuk.


I have never said that humans or robots were incapable of changing their minds, just that humans do it better. Both have alternative methods for assessing situations. A robot that couldn't change its mind would indeed keep running into walls.


And you don't know anyone who behaves like Joe? I could imagine a robot with greater flexibility than Joe, assuming that it could be programmed to enjoy spaghetti.
I'm not saying there are no people like Joe. I'm just saying not everyone is like Joe or acts like Joe.
I think a robot would have a hard time eating spaghetti. :D It might be capable of learning how to recharge itself, though, when its cell is low. Actually, I think there are robots that are already capable of doing that.
Yeah, the media is always saying stupid things about robots (and automation in general). In this case, they are trying to play on the Luddite fears of their audience. That is the "hook" or angle that the writer uses to justify publishing it as news. But you are right that it all depends on what kind of robot you are talking about. I'm not talking about the kinds of robots that actually exist today. I hope that that much is clear to you. I'm talking about what kind of robot it is possible to develop in principle. At this point in time, given what we know about human cognition, there is no reason to believe that humans are anything other than very complex flesh-and-blood robots. There is nothing about our "free will" that is incompatible with determinism, unless you define "free will" in such a way that it really isn't about the freedom to choose to do what you want.
Yeah, I agree. I was kind of angry, and thought the people presenting the news on robots knew little about them and their uses, same as for technology in general. But it does lead to another good debate topic: if robots do end up taking people's jobs, how should companies that replace people with bots be taxed? If someone's job is taken because a robot is cheaper to use, should people who are unable to get a job receive some type of incentive? All of that is another debate, but like I said, it brings up some good questions.
 

atanu

Member
Premium Member
So, my position is that choice is a determined process and that animals, including humans, are the same as robots, in principle. The big difference is that life forms came about through a process of evolutionary design, whereas robots are usually invented through a process of intelligent design.

Comments?

Copernicus

I am repeating this as an earlier post was ignored.

We are programmed. But on account of some mechanism you have come to discern it. And now you can discriminate between what is merely pleasurable and what is good and act based on wisdom. The momentum of old acts persist for some time.

This is what I understand. So, do we or do we not have this discriminative faculty?
 

Leonardo

Active Member
Third, this learning is not about any genetic coding to "sensitize to the environment and preprocess information that allow neocortical processes to digest or learn." It's about a generic mechanism which allows non-genetic adaptation.

Again you've taken things out of context to discredit an idea you don't understand. The whole point of my statement was that it IS A GENERIC MECHANISM FOR NON-GENETIC ADAPTATION! Can't you READ? The sensory system is GENETICALLY CODED! The sensory system preprocesses information into components that neocortical processes can learn. THE GENETIC COMPONENT IS THE SENSOR'S ABILITY TO TURN STIMULI INTO NEURAL SIGNALLING!

That ability to turn stimuli into neural signaling is innate. How your nose can sense a woman from pheromones was coded genetically, how your retina converts light into neural signals was coded genetically, how you can feel heat, pressure and pain was ALL CODED GENETICALLY! :facepalm:

What’s not genetic is what’s learned from the sensory signaling…
 