
Is Kaku correct?

charlie sc

Well-Known Member
Deciding to keep or quit my job requires that I at least consider my motivations. But character, feelings, motivations and desires can be programmed or learned. All the different leaders in Civilization VI have their own sets of character, feelings, motives, and desires. They act on these just as we do.
I see your misunderstanding of bots. These bots or AI opponents do not have feelings, motives and desires. They have algorithms for certain scenarios, but they do not want to survive like we do. They do not know desires or have feelings.

I play Crusader Kings 2, a grand strategy game where the AI characters have all sorts of traits, personalities, boons and items, and gain them over time. Some of them are in religions and some convert. They all die of old age, and so on. However, these traits are arbitrarily used and decided upon. The only anthropomorphism going on is on your part and mine, when I suspend disbelief to enjoy the game. In CK2, to make an alliance you need to get another ruler's opinion of you to 50+, and so on. Everything is based on numbers. All these numbers are completely arbitrary, and people make tons of mods to improve the AI and make it much more realistic or difficult. Everything in CK2 becomes predictable after a while.

If you don't believe me, try talking to any so-called AI out there on the web. I heard this one is good: Eviebot, at www.eviebot.com. It says, "Note that the Evie and Cleverbot chatbots learn from people, so things said may seem inappropriate." I tried it out and it's like talking to a brick wall. Do you think this bot has feelings, desires and motivations? Show me one AI or bot you can have an ok-ish conversation with.

As I said earlier, you sound like the author of this article from 45 years ago... (who concluded that no computer program would ever beat the best humans).

Humans are still better than AI at StarCraft—for now
Song, 29, said the bots approached the game differently from the way humans do. “We professional gamers initiate combat only when we stand a chance of victory with our army and unit-control skills,” he said in a post-competition interview with MIT Technology Review. In contrast, the bots tried to keep their units alive without making any bold decisions.
Wow! Military simulation war games have been around for many years. In my experience, none initiate combat when they don't stand a chance of achieving a favorable outcome. If the bot programmers made such a mistake, that should be easily correctable.

Same link
Kim Kyung-joong, the Sejong University computer engineering professor who organized the competition, said the bots were constrained, in part, by the lack of widely available training data related to StarCraft.

That will change soon. In August, DeepMind and the games company Blizzard Entertainment released a long-awaited set of AI development tools compatible with StarCraft II, the version of the game that is most popular among professional players.

Other experts now predict that bots will be able to vanquish professional StarCraft players once they are trained properly. “When AI bots are equipped with [high-level] decision-making systems like AlphaGo, humans will never be able to win,” says Jung Han-min, a computer science and engineering professor at the University of Science and Technology in Korea.

So it's no different from a person learning by observing, studying and practising. Isn't that exactly what Song Byung-gu did?



ETA: Emphases in the above quotes are mine.
In any case, I'll be eager to see how well DeepMind does without any handicaps placed on the other players. Let's wait and see ;)
 

ecco

Veteran Member
I see your misunderstanding of bots. These bots or AI opponents do not have feelings, motives and desires. They have algorithms for certain scenarios, but they do not want to survive like we do. They do not know desires or have feelings.

Not a misunderstanding at all, just a difference of opinion.

You say they do not have feelings, motives and desires; they just have algorithms.
We work off of algorithms also. Some are genetically instilled, like the need to eat: a newborn knows when it's hungry and makes that known to anyone within earshot. Other than that, a newborn has no desires or motivations. These are learned / acquired. These desires and motivations are stored in neurons. The ability to weigh factors in making decisions is stored in neurons, as is the information on which we base those decisions. Computer memory is just like those stored neurons, except for differences in capacity and in the ability to correlate data.

The will to survive, in humans, is tempered by many factors. Will a human always choose to survive? Will a father run into a burning building in an attempt to save a child, knowing full well that he may die?

Can similar "decisions" be programmed/acquired? Sure. It's just a matter of how the goals are set.



I play Crusader Kings 2, a grand strategy game where the AI characters have all sorts of traits, personalities, boons and items, and gain them over time. Some of them are in religions and some convert. They all die of old age, and so on. However, these traits are arbitrarily used and decided upon. The only anthropomorphism going on is on your part and mine, when I suspend disbelief to enjoy the game. In CK2, to make an alliance you need to get another ruler's opinion of you to 50+, and so on. Everything is based on numbers.
Is it any different in real life? When someone says they want to be your friend, you go through a series of calculations:
On a scale of 1 to 100...
Does he have a good reason to want to be your friend?
Does he have ulterior motives?
Is he trustworthy?
Do you have a good reason to want to be friends with him?
Etc.

You may not be aware of this process, but it happens. If the average of the scores comes out way less than 50, you would probably say "I'll think about it", or, if you are (programmed to be) very rude, "Bug off, twit".
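In code, that kind of evaluation could look like this toy Python sketch (the factor names, numbers and thresholds are invented purely for illustration, not a claim about how brains or any particular program work):

```python
# Toy sketch of the 1-to-100 "friendship calculation" above.
# Factor names and thresholds are invented for illustration only.

def evaluate_friendship(scores):
    """Average the factor scores and pick a (programmed) response."""
    average = sum(scores.values()) / len(scores)
    if average >= 50:
        return "Sure, let's be friends."
    if average >= 30:
        return "I'll think about it."
    return "Bug off, twit."  # the very rude branch

print(evaluate_friendship({
    "good_reason_to_want_my_friendship": 70,
    "no_ulterior_motives": 40,
    "trustworthy": 55,
    "good_reason_for_me_to_want_his": 60,
}))  # average 56.25 -> "Sure, let's be friends."
```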


All these numbers are completely arbitrary, and people make tons of mods to improve the AI and make it much more realistic or difficult. Everything in CK2 becomes predictable after a while.

If people are making tons of mods to improve the AI, then the numbers are not completely arbitrary. If the numbers are arbitrary, then nothing is predictable. You can't have it both ways.

Nevertheless, the same is true for humans. All people have different traits.

If you don't believe me, try talking to any so-called AI out there on the web.
I heard this one is good: Eviebot, at www.eviebot.com. It says, "Note that the Evie and Cleverbot chatbots learn from people, so things said may seem inappropriate." I tried it out and it's like talking to a brick wall.

I believe the thread is about the future of AI. In 1949, a mere 70 years ago, the average person would not have believed that in 2019:
  • You could facechat on a handheld cell phone. (What's a cell phone?)
  • You could talk to your TV remote to select from thousands of options. (What's a TV remote? Heck, what's a TV?)
  • Computers would beat the best chess players. (What's a computer?)
  • People would be able to have online forum discussions about the quality of video game AIs. (What's an online forum? What's a video game? What's AI?)

Do you think this bot has feelings, desires and motivations?

Feelings, desires and motivations are learned or acquired and are stored in neurons. Can a "bot" be programmed to respond positively to praise and negatively to criticism? Sure. Just like people learn they would rather be praised than criticised.
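As a toy illustration of that kind of programming (the update rule and numbers below are invented for illustration; no real bot is claimed to work this way):

```python
# Toy sketch of a bot "learning" to prefer praise over criticism.
# The update rule and numbers are invented for illustration.

class FeedbackBot:
    def __init__(self):
        self.mood = 0.0  # internal score the bot is built to keep high

    def receive(self, feedback):
        if feedback == "praise":
            self.mood += 1.0   # praise raises the score...
        elif feedback == "criticism":
            self.mood -= 1.0   # ...criticism lowers it

    def respond(self):
        return "positive" if self.mood >= 0 else "negative"

bot = FeedbackBot()
for feedback in ["criticism", "criticism", "praise"]:
    bot.receive(feedback)
print(bot.respond())  # -> "negative" (mood is -1.0)
```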
 

ecco

Veteran Member
So, a computer can experience orgasm? Maybe.
Sure. Portions of AI programs evaluate things on a scale of 1 to 100 (highly oversimplified).

If ...
an AI is programmed to continually improve its score
an AI is given 60 points for hugging an avatar
an AI is given 65 points for harvesting food
an AI is given 90 points for "intercoursing" an avatar

...it will become an overriding "Desire"/"Motivation" to engage in intercourse.

But sexual orgasms are not necessarily the highest reward. Ask any heroin addict. Getting a shot may result in a 98. On the other hand, the computer could be programmed to give only a 5 for a shot of heroin. In that case, there would be a lot more satyrs and nymphomaniacs than junkies at the end of the game.
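To make the point system concrete, here is a minimal Python sketch; the reward values come from the list above, while the score-maximizing "agent" is an invented illustration of "continually improve its score":

```python
# Toy sketch of the point system above. Reward values are from the post;
# the score-maximizing agent logic is an invented illustration.

REWARDS = {
    "hug avatar": 60,
    "harvest food": 65,
    "intercourse with avatar": 90,
    "shot of heroin": 5,  # or 98, depending on how it's programmed
}

def choose_action(rewards):
    # An agent told only to maximize its score picks the highest-valued
    # action every time, so the top reward behaves like an overriding
    # "desire" or "motivation".
    return max(rewards, key=rewards.get)

print(choose_action(REWARDS))  # -> "intercourse with avatar"
```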
 

TagliatelliMonster

Veteran Member
Is Kaku correct that AI will be dangerous as AI machines attain self-awareness?

Who is right about AI: Mark Zuckerberg or Elon Musk?

I think AI, like any other tool, will be used for profit and for evil purposes by some. That, imo, is the real danger.


AI is dangerous in the sense of things like what Cambridge Analytica did: complete and utter invasions of our privacy, with the resulting manipulation of elections and overall behaviour, etc.

Dangerous in the sense of Terminator and the rise of the machines? Not so much.
 

atanu

Member
Premium Member
Sure. Portions of AI programs evaluate things on a scale of 1 to 100 (highly oversimplified).

If ...
an AI is programmed to continually improve its score
an AI is given 60 points for hugging an avatar
an AI is given 65 points for harvesting food
an AI is given 90 points for "intercoursing" an avatar

...it will become an overriding "Desire"/"Motivation" to engage in intercourse.

But sexual orgasms are not necessarily the highest reward. Ask any heroin addict. Getting a shot may result in a 98. On the other hand, the computer could be programmed to give only a 5 for a shot of heroin. In that case, there would be a lot more satyrs and nymphomaniacs than junkies at the end of the game.

I beg to differ. Where is the thrill? Where is the first-person experience?
 

charlie sc

Well-Known Member
You say they do not have feelings, motives and desires; they just have algorithms.
We work off of algorithms also
This is an interesting proposition. Games are programmed very simply: if this, then this; if not, then that; and so on. And even assuming our brains work like this, games haven't considered emotions. Emotions are the drive to do something and, I think, require suffering contrasted with pleasure. These emotions are integrated with numerous sense perceptions like touch, taste, hearing, etc. However, weak AI does not have access to any of this, nor may it have the capacity to store what it learns in a meaningful manner. For instance, animals have senses and emotions, but I think it's safe to assume they haven't achieved self-awareness.

Other than that, a newborn has no desires or motivations. These are learned / acquired. These desires and motivations are stored in neurons.
I'm not sure how true this is. There are quite a few studies on babies and baby monkeys showing they need certain things. For instance, many studies have shown we need tactile comfort, love and a safe base for social development and natural curiosity. If these are not given, anti-social behaviour becomes prominent.

Is it any different in real life? When someone says they want to be your friend, you go through a series of calculations:
On a scale of 1 to 100...
Does he have a good reason to want to be your friend?
Does he have ulterior motives?
Is he trustworthy?
Do you have a good reason to want to be friends with him?
Etc.

You may not be aware of this process, but it happens. If the averages of the scores come out to be way less than 50, you would probably say "I'll think about it", or, if you are (programmed to be) very rude; "Bug off twit".
This may be the case, but it's an assertion. We don't really know how the brain makes choices or how they're weighed.

If people are making tons of mods to improve the AI, then the numbers are not completely arbitrary. If the numbers are arbitrary, then nothing is predictable. You can't have it both ways.

Nevertheless, the same is true for humans. All people have different traits.
They are arbitrary in the sense that someone chose the numbers based on... who knows what? And I'm not sure what you mean by "if they're arbitrary then nothing is predictable". Btw, you should look at the game Dwarf Fortress if you want to see the most advanced AI in a game.

I believe the thread is about the future of AI. In 1949, a mere 70 years ago, the average person would not have believed that in 2019:
  • You could facechat on a handheld cell phone. (What's a cell phone?)
  • You could talk to your TV remote to select from thousands of options. (What's a TV remote? Heck, what's a TV?)
  • Computers would beat the best chess players. (What's a computer?)
  • People would be able to have online forum discussions about the quality of video game AIs. (What's an online forum? What's a video game? What's AI?)
Indeed, it's very impressive. Nonetheless, with all the sci-fi stuff we've seen, we still can't have a conversation with a bot or AI that feels real or resembles a living person. Not only do these conversations lack any feeling, but the bot is not aware. All it's doing is repeating what it's supposed to say. It's a bit like trying to talk to a toaster.

Feelings, desires and motivations are learned or acquired and are stored in neurons. Can a "bot" be programmed to respond positively to praise and negatively to criticisms? Sure. Just like people learn they would rather be praised than criticised.
Can it emulate human responses? Yes, I think so, and hope so in time (lots of time). Can it actually become self-aware? I'm not sure about that.
 

ecco

Veteran Member
Sure. Portions of AI programs evaluate things on a scale of 1 to 100 (highly oversimplified).

If ...
an AI is programmed to continually improve its score
an AI is given 60 points for hugging an avatar
an AI is given 65 points for harvesting food
an AI is given 90 points for "intercoursing" an avatar

...it will become an overriding "Desire"/"Motivation" to engage in intercourse.

But sexual orgasms are not necessarily the highest reward. Ask any heroin addict. Getting a shot may result in a 98. On the other hand, the computer could be programmed to give only a 5 for a shot of heroin. In that case, there would be a lot more satyrs and nymphomaniacs than junkies at the end of the game.

I beg to differ. Where is the thrill? Where is the first-person experience?

Where is it in your brain? It's just some neurons that light up. Some other neurons evaluate those neurons lighting up as "way better" than the neurons for eating a Reese's Peanut Butter Bar.

In "naturals" it all comes down to just chemicals reacting with chemicals.

In AI's it comes down to programming concepts reacting to stored bits.
 

Jumi

Well-Known Member
...
Yep, yep, yep. It's a good thing people never change their minds and make mistakes. That would imply that God had a few bugs in his creations.
I don't know how you made the jump from the programming of the Civ 1 "AI" to a creator God, or why.
 

ecco

Veteran Member
This is an interesting proposition. Games are programmed very simply: if this, then this; if not, then that; and so on. And even assuming our brains work like this, games haven't considered emotions. Emotions are the drive to do something and, I think, require suffering contrasted with pleasure. These emotions are integrated with numerous sense perceptions like touch, taste, hearing, etc. However, weak AI does not have access to any of this, nor may it have the capacity to store what it learns in a meaningful manner.

"Weak AI" "Strong AI" This conversation is starting to remind me of conversations with creationists arguing about microevolution vs macroevolution.

For just a moment, put yourself back 70 years and look around. Now look around at today and try to project the same rate of advancement into the next 70 years. No one will be talking about weak or strong AI. They may not even be using the A at all.

For instance, animals have senses and emotions, but I think it's safe to assume they haven't achieved self-awareness.

Elephants, monkeys and chickens know their place in society. For an entity to know its place in society, it must be aware of self and of others.

I'm not sure how true this is. There are quite a few studies on babies and baby monkeys showing they need certain things. For instance, many studies have shown we need tactile comfort, love and a safe base for social development and natural curiosity. If these are not given, anti-social behaviour becomes prominent.

With AI, social behavior depends on other factors, since there is no direct way to provide tactile comfort. However, as with everything else, it can be simulated.

This may be the case, but it's an assertion. We don't really know how the brain makes choices or how they're weighed.

If a shoddily dressed man approaches you on the street and says "Let's have lunch", and a college acquaintance calls and says "Let's have lunch", your brain "subconsciously" goes through a quick evaluation of the two situations.

Btw, you should look at the game Dwarf Fortress if you want to see the most advanced AI in a game.
I sure don't need to learn another new game, especially one with a poor user interface. If an AI is too good I will always lose. That's no fun. It's also no fun if the AI is really poor. With the Civ series, there are various levels of AI. I can readily beat the lowest level, I have no chance against the highest level. If I took more time to calculate things out every move, I could play against a higher level, but that's not my style.

In any case, I haven't played Civ in a couple of years. I spent some time in Cities Skylines and more in Trainz. Now I'm back to flying - IL-2 Sturmovik: 1946 - and getting my butt severely kicked.


Can it emulate human responses? Yes, I think so, and hope so in time (lots of time). Can it actually become self-aware? I'm not sure about that.
What is self-aware? Have you asked a bot if it is self-aware? I just took a look, and chatbot and ALICE want sign-ins, which I don't want to do.
 

Jumi

Well-Known Member
"Weak AI" "Strong AI" This conversation is starting to remind me of conversations with creationists arguing about microevolution vs macroevolution.
The discussion is actually a lot more relevant; the distinction is discussed in AI courses at uni level, unlike the creationists' ideas.
 

ecco

Veteran Member
I wouldn't call the Civ AI artificial intelligence. It's just a few sets of IFs that the bots follow. There's no learning, no intelligence, just a set of rules the programmer made. Even though Gandhi feels like he's made his own choices by becoming a conqueror, it's because there's a bug causing the unexpected behavior.
I don't know how you made the jump from the programming of the Civ 1 "AI" to a creator God, or why.
You made the comment that Gandhi's change in behavior is a programming bug. I was merely pointing out that changes in behavior are not necessarily caused by bugs.


You made the comment that the Civ AI was just a few sets of IFs. Isn't that how the human brain makes evaluations?

You're driving down the street and ahead the traffic light turns yellow.
If you stay at the same speed, you may not get through before it turns red. If you don't get through before it turns red, you may get a ticket, or you may crash. If you speed up, you may make it. And if you've been driving for a while: has this situation come up before, at this speed and distance, and what was done then?

If, if, if all done very quickly.
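As a toy Python sketch (the time thresholds and the mph-to-ft/s conversion are invented for illustration, not a model of real driving):

```python
# The yellow-light decision as a quick chain of IFs.
# Thresholds are invented for illustration only.

def yellow_light_decision(speed_mph, distance_ft, worked_before=False):
    seconds_to_cross = distance_ft / (speed_mph * 1.467)  # mph -> ft/s
    if seconds_to_cross <= 3.0:   # a typical yellow lasts ~3-5 seconds
        return "keep going"       # you'll clear it before the red
    if worked_before:             # "what was done then?"
        return "speed up"         # risky: a ticket or a crash if wrong
    return "brake"

print(yellow_light_decision(speed_mph=35, distance_ft=120))  # -> "keep going"
```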
 

charlie sc

Well-Known Member
"Weak AI" "Strong AI" This conversation is starting to remind me of conversations with creationists arguing about microevolution vs macroevolution.

For just a moment, put yourself back 70 years and look around. Now look around at today and try to project the same rate of advancement into the next 70 years. No one will be talking about weak or strong AI. They may not even be using the A at all.
I'm not saying AI is not possible, just that it's unlikely for a long time. Look at this link: The Myth of Sentient Machines | Psychology Today. He differentiates the two, and I think they're important distinctions. Your conversation reminds me of someone so blinded by hope/dreams/faith that they're unwilling to look at things sceptically.
Remember, scepticism is a good thing ;)

Elephants, monkeys and chickens know their place in society. For an entity to know its place in society, it must be aware of self and of others.
Really? Describe to me how they know their place in society.

With AI, social behavior depends on other factors, since there is no direct way to provide tactile comfort. However, as with everything else, it can be simulated.
AI social behaviour, what?

If a shoddily dressed man approaches you on the street and says "Let's have lunch", and a college acquaintance calls and says "Let's have lunch", your brain "subconsciously" goes through a quick evaluation of the two situations.
Yes, but we don't know the mechanism of this process. If you'd like to link some peer-reviewed papers on this, I'd love to read them.

I sure don't need to learn another new game, especially one with a poor user interface. If an AI is too good I will always lose. That's no fun. It's also no fun if the AI is really poor. With the Civ series, there are various levels of AI. I can readily beat the lowest level, I have no chance against the highest level. If I took more time to calculate things out every move, I could play against a higher level, but that's not my style.

In any case, I haven't played Civ in a couple of years. I spent some time in Cities Skylines and more in Trainz. Now I'm back to flying - IL-2 Sturmovik: 1946 - and getting my butt severely kicked.
By this quote, you seem to think that AI means being more proficient than human beings. I don't think this way. To me, better AI means realism and emulating a real thing. Dwarf Fortress has realism to the extreme, but bad graphics and a bad interface. :p

What is self-aware? Have you asked a bot if it is self-aware? I just took a look, and chatbot and ALICE want sign-ins, which I don't want to do.
Give me a link to whichever super-smart AI chatbot you want me to talk to. I'll show you how it's more entertaining and real to talk to a cup. All their responses are preprogrammed. There's no thought or awareness involved at all.
 

atanu

Member
Premium Member
Where is it in your brain? It's just some neurons that light up. Some other neurons evaluate those neurons lighting up as "way better" than the neurons for eating a Reese's Peanut Butter Bar.

In "naturals" it all comes down to just chemicals reacting with chemicals.

In AI's it comes down to programming concepts reacting to stored bits.

Some neurons light up and there is sexual thrill? Who feels the thrill, the neurons or the chemicals?
 

ecco

Veteran Member
I'm not saying AI is not possible, just that it's unlikely for a long time. Look at this link: The Myth of Sentient Machines | Psychology Today. He differentiates the two, and I think they're important distinctions.
Bobby Azarian, Ph.D., is a cognitive neuroscientist affiliated with George Mason University and a freelance journalist.

He is not an expert in computers, software or AI.

Your conversation reminds me of someone so blinded by hope/dreams/faith that they're unwilling to look at things sceptically.
Remember, scepticism is a good thing

I am one of the most skeptical people you will ever run into. I have often been labeled negative. I see BS posted all the time and I'm big on asking for evidence. I'll not ask you to look up my posting history, but keep this in mind when you read my posts going forward.

On the other hand, I'm also aware that people who have said "it's just too impossible" in the past have almost always been proved wrong. I'll just repeat what I posted earlier...

I believe the thread is about the future of AI. In 1949, a mere 70 years ago, the average person would not have believed that in 2019:
  • You could facechat on a handheld cell phone. (What's a cell phone?)
  • You could talk to your TV remote to select from thousands of options. (What's a TV remote? Heck, what's a TV?)
  • Computers would beat the best chess players. (What's a computer?)
  • People would be able to have online forum discussions about the quality of video game AIs. (What's an online forum? What's a video game? What's AI?)
RE:
ecco said:
Elephants, monkeys and chickens know their place in society. For an entity to know its place in society, it must be aware of self and of others.
Really? Describe to me how they know their place in society.

If you are asking how this works in their brains, I don't have an answer. If you are questioning whether or not it occurs I suggest you google "pecking order". Even bees and rodents know their place in society in relation to other members of the group.


RE: Evaluating circumstances...
Yes, but we don't know the mechanism of this process. If you'd like to link some peer-reviewed papers on this, I'd love to read them.

How it works from a chemical/biological perspective isn't germane to this discussion. All that is important is that computers can make decisions by evaluating different factors.

By this quote, you seem to think that AI means being more proficient than human beings.

No. At least, not yet.



Give me a link to whichever super-smart AI chatbot you want me to talk to. I'll show you how it's more entertaining and real to talk to a cup. All their responses are preprogrammed. There's no thought or awareness involved at all.

I have no interest in hearing about your conversations with a bot. Again, this thread is not so much about now as it is about the future. I showed where we were 70 years ago and where we are now. I can project that into the future (somewhat). If you want to believe that we will make fewer strides in the future than we have in the past, then you are ignoring history.
 

charlie sc

Well-Known Member
Bobby Azarian, Ph.D., is a cognitive neuroscientist affiliated with George Mason University and a freelance journalist.
He is not an expert in computers, software or AI.
Since we're the only definite and empirical example of sentient life that is self-aware and able to question our place in the world, I think it's important we don't dismiss neuroscience, hmmm?

I am one of the most skeptical people you will ever run into.
Looks like your scepticism is faltering. If I were you, I'd contemplate your bias in this matter.

On the other hand, I'm also aware that people who have said "it's just too impossible" in the past have almost always been proved wrong. I'll just repeat what I posted earlier...
I can see your biases if you start straw-manning me. I never said it's impossible.

If you are asking how this works in their brains, I don't have an answer. If you are questioning whether or not it occurs I suggest you google "pecking order". Even bees and rodents know their place in society in relation to other members of the group.


RE: Evaluating circumstances...
You couldn't answer my question and you realise that (that's good, at least), so I'll take that as you retracting your claim. Thank you.

How it works from a chemical biological perspective isn't germane to this discussion. All that is important is that computers can make decisions by evaluating different factors.
That's a pretty slim and narrow idea of what AI is or is supposed to be. Fine :p

I have no interest in hearing about your conversations with a bot. Again, this thread is not so much about now as it is about the future. I showed where we were 70 years ago and where we are now. I can project that into the future (somewhat). If you want to believe that we will make fewer strides in the future than we have in the past, then you are ignoring history.
Your last post seemed to suggest that bots are aware. I was willing to demonstrate to you how they aren't. OK, great, now this is about the future. I'm sure technology will improve, but we have yet to invent AI that's responsive enough to hold a decent conversation. The best we can do is wait and see. My only suggestion is not to fall for con Kickstarter-type ideas and waste your money. I'm helping you here, my friend. When it comes, it comes. Cool ;)
 

ecco

Veteran Member
Since we're the only definite and empirical example of sentient life that is self-aware and able to question our place in the world, I think it's important we don't dismiss neuroscience, hmmm?
If I want to know about computers and programming and specifically AI, I'll rely on experts in those fields. If I need a heart transplant I choose a cardiologist, not a neurologist. I think you would do the same.

Looks like your scepticism is faltering.
Accepting historical progress has nothing to do with scepticism or optimism. It has to do with accepting historical progress.

I can see your biases if you start straw-manning me. I never said it's impossible.
I never said YOU used the word "impossible". I said...

On the other hand, I'm also aware that people who have said "it's just too impossible" in the past have almost always been proved wrong.
I then posted a list of examples. You were not in that list, were you?
Care to make some more spurious allegations about me?

You couldn't answer my question and you realise that (that's good, at least), so I'll take that as you retracting your claim. Thank you.
I wrote:
If you are asking how this works in their brains, I don't have an answer. If you are questioning whether or not it occurs I suggest you google "pecking order". Even bees and rodents know their place in society in relation to other members of the group.​

So, no I'm not retracting my claim about "pecking orders".
peck·ing or·der
/ˈpekiNG ˌôrdər/
noun
  1. a hierarchy of status seen among members of a group of people or animals, originally as observed among hens.
Also...
Pecking order - Wikipedia

Your last post seemed to suggest that bots are aware.
"Seemed to suggest"? Care to show what I wrote that led you to that assumption?

My only suggestion is not to fall for con Kickstarter-type ideas and waste your money. I'm helping you here, my friend. When it comes, it comes. Cool ;)

If you think I need your help with financial matters, you are wrong.

However, since this is the second time you've mentioned Kickstarter cons, I'm getting the impression that you may have been burned.
 

Polymath257

Think & Care
Staff member
Premium Member
I beg to differ. Where is the thrill? Where is the first-person experience?

They are both part of the brain's response. Give the AI the 'feeling' of its 'stomach turning' and it will have thrills and emotions also.
 