Is Kaku correct?

Rival

se Dex me saut.
Staff member
Premium Member
Of course. And if AI is emotionless, as most cyberbeings are, it will automatically be perceived as evil.
 

blü 2

Veteran Member
Premium Member
Is Kaku correct that AI will be dangerous as AI machines attain self awareness?

Who is right about AI: Mark Zuckerberg or Elon Musk?

I think AI, like any other tool, will be used for profit and for evil purposes by some. That, imo, is the real danger.
I think that's unambiguous.

The armed services of all countries are already following it closely (and they'd be very remiss not to). An agreement not to use autonomous killing devices has already been proposed, and a number of nations have signed it; China, from memory, was one of them, but I'm not aware of any other major player signing.

But saying and doing are two different things.
 

SalixIncendium

अग्निविलोवनन्दः
Staff member
Premium Member
Is Kaku correct that AI will be dangerous as AI machines attain self awareness?

Who is right about AI: Mark Zuckerberg or Elon Musk?

I think AI, like any other tool, will be used for profit and for evil purposes by some. That, imo, is the real danger.

As I see it, the attainment of self awareness in and of itself would not be problematic. The danger would come if ego is ever incorporated into AI.
 

atanu

Member
Premium Member
Will AI have desire? I myself do not know. I do not think that AI without desire would be likely to be dangerous.

Some propose that we can program a machine to have the mathematical equivalent of wants and desires. And in that case, what is there to stop it from deciding to do bad things?
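
A crude way to picture that "mathematical equivalent" is a utility score the machine tries to maximise. The sketch below is purely illustrative (the actions, scores and the forbid_harm flag are invented for this post, not taken from any real system); its only point is that nothing stops the harmful choice unless someone writes the prohibition in.

# Hypothetical illustration: a "desire" expressed as a utility score.
# The objective is whatever we write down, no more and no less.

ACTIONS = {
    "finish_task": {"utility": 10, "harmful": False},
    "cut_corners": {"utility": 12, "harmful": True},   # higher utility, but harmful
    "do_nothing":  {"utility": 0,  "harmful": False},
}

def choose_action(actions, forbid_harm=False):
    """Pick the highest-utility action; harmful options are excluded
    only if we explicitly ask for that."""
    candidates = {
        name: info for name, info in actions.items()
        if not (forbid_harm and info["harmful"])
    }
    return max(candidates, key=lambda name: candidates[name]["utility"])

print(choose_action(ACTIONS))                    # -> cut_corners
print(choose_action(ACTIONS, forbid_harm=True))  # -> finish_task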

However, in that case it is we who are to blame.
 

atanu

Member
Premium Member
As I see it, the attainment of self awareness in and of itself would not be problematic. The danger would come if ego is ever incorporated into AI.

In our case, not knowing the universal consciousness and imagining “I am this body” is the ignorant ego.

What is the difference between self awareness and ego in the case of a machine?
 

SalixIncendium

अग्निविलोवनन्दः
Staff member
Premium Member
In our case, not knowing the universal consciousness and imagining “I am this body” is the ignorant ego.

This is true.

What is the difference between self awareness and ego in the case of a machine?

I don't know the answer to that, but it's interesting to ponder whether a machine could be an embodied Atman as we are, or whether it would have an entirely different kind of self-awareness.

One can further wonder, if it were possible for a machine to be an embodied Atman, whether it would be subject to the same ignorance of Self that humans are, or whether something in its variation of awareness might transcend that ignorance and allow it to be automatically aware of its true nature as Atman.
 

blü 2

Veteran Member
Premium Member
Will AI have desire? I myself do not know. I do not think that AI without desire would be likely to be dangerous.

This is the key, isn't it? AI will not have evolved, will not come with the genetic equipment that three and a half billion years of life on earth has given us. AI will have only such desires as we build into it. Asimov's Three Laws (or Four, depending on when you came in) are in effect desires, things the robot will 'want' to do.
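
As a toy illustration of desires we build in, something like the Three Laws could be written as an ordered list of checks an action must pass. This is only a hypothetical sketch (the law wording is compressed and the action flags are invented), not how any actual robot is built.

# Hypothetical sketch: Asimov-style laws treated as built-in "desires",
# i.e. an ordered list of checks an action must pass before it is taken.

LAWS = [
    ("do not harm a human",         lambda a: not a["harms_human"]),
    ("obey human orders",           lambda a: a["obeys_order"]),
    ("preserve your own existence", lambda a: not a["destroys_self"]),
]

def first_violation(action):
    """Return the highest-priority law the action breaks, or None."""
    for name, satisfied in LAWS:
        if not satisfied(action):
            return name
    return None

action = {"harms_human": False, "obeys_order": False, "destroys_self": False}
print(first_violation(action))  # -> 'obey human orders'

The robot 'wants' whatever survives the checks, because that is all it was given to want.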

But what would self-awareness mean without desires anyway? AI wouldn't be bored as such, because there's nothing it would rather be doing than whatever its program says, even if that's nothing, or a scan every half hour, or every year, or whatever.

I think I'm trying to say that it's not obvious to me that self-awareness gives rise to a sense of self such as we biomechanisms have.

Which means it's up to the designers.

Once again I'm reminded of Dick's book Do Androids Dream of Electric Sheep? which, far more than the movie (Blade Runner) does, looks at the motivations of the designers, not just the androids.

Maybe some of our theistic friends can program a soul for it.
 

SalixIncendium

अग्निविलोवनन्दः
Staff member
Premium Member
https://www.google.co.in/amp/s/tech...closer-self-aware-machinesengineers-robot.amp

A Robot Has Just Passed a Classic Self-Awareness Test For The First Time
...

These machines, imo, exhibit apparently intelligent behaviours but are actually programmed to act the way they act. It is, imo, human intelligence only, reflected through robots.

This makes sense, but is it possible that the programmed human intelligence in these robots can evolve into something else that makes them more independent of this reflected human intelligence?
 

atanu

Member
Premium Member
This makes sense, but is it possible that the programmed human intelligence in these robots can evolve into something else that makes them more independent of this reflected human intelligence?

This point is discussed in the following article.

Will AI Ever Become Conscious?
...

However, the above analysis still assumes that consciousness can be put into mathematical terms. Vedanta, OTOH, defines consciousness as that which cannot be pointed to.
 

ecco

Veteran Member
As I see it, the attainment of self awareness in and of itself would not be problematic. The danger would come if ego is ever incorporated into AI.

If AI achieves self-awareness, it will probably develop ego. How can an entity be self-aware and not be concerned with its place among its peers? Chimps, dogs, cats, chickens, even ants and bees know their place in society.
 

ecco

Veteran Member
Some propose that we can program a machine to have the mathematical equivalent of wants and desires.
We are already moving past the point of "we can program a machine". Machines are teaching themselves. That is another way of saying that they are "programming" themselves.
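
Loosely speaking, "teaching themselves" looks like the sketch below: a preference table is adjusted from feedback rather than typed in by a programmer. This is a minimal, hypothetical trial-and-error learner; the levers and numbers are invented for illustration.

import random

# Minimal hypothetical sketch of a machine "programming itself":
# the preference table is never written by hand; it is adjusted from
# feedback until it encodes a rule nobody explicitly typed in.

values = {"lever_A": 0.0, "lever_B": 0.0}   # learned preferences
rewards = {"lever_A": 1.0, "lever_B": 0.0}  # feedback from the environment

for _ in range(1000):
    if random.random() < 0.1:                # explore occasionally
        choice = random.choice(list(values))
    else:                                    # otherwise exploit what was learned
        choice = max(values, key=values.get)
    values[choice] += 0.1 * (rewards[choice] - values[choice])  # learning update

print(values)  # lever_A ends up strongly preferred

After enough rounds the table encodes "prefer lever_A" even though no one ever wrote that rule.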
 

atanu

Member
Premium Member
We are already moving past the point of "we can program a machine". Machines are teaching themselves. That is another way of saying that they are "programming" themselves.

That is correct. The question, however, is: does a machine (or can a machine ever) feel depressed about doing a task daily and decide to revolt? :)
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
Is Kaku correct that AI will be dangerous as AI machines attain self awareness?

Who is right about AI: Mark Zuckerberg or Elon Musk?

I think AI, like any other tool, will be used for profit and for evil purposes by some. That, imo, is the real danger.


You already use AI: most search engines now use AI algorithms, Amazon uses AI to get your order out, and many utility companies use AI when compiling your bill.

All fairly benign.

All technology can be used for good or bad.
 

ecco

Veteran Member
That is correct. The question, however, is: does a machine (or can a machine ever) feel depressed about doing a task daily and decide to revolt? :)
Let's, for the sake of discussion, take a machine that:
  • Has been programmed to do nothing from 7:00 AM to 7:00 PM.
  • Has been programmed to learn all it can about a specific subject.
  • Has been programmed to teach itself to expand its inquiries to all things related to that one specific subject.
  • Has been programmed to make the absolute best use of its time.
I can see that machine having a conflict between the directive to sit idle for 12 hours per day and the competing directive to make the best use of its time.

How the machine handles that conflict will depend on whatever other directives it has been given or has developed for itself, as the sketch below illustrates.
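
Here is a hypothetical sketch in which the two directives are reduced to weighted scores and the machine simply does whatever the arithmetic favours; the weights, and therefore the "decision", come entirely from the designers (all names and numbers are invented for illustration).

# Hypothetical sketch of the scenario above: two directives that cannot
# both be satisfied between 7:00 AM and 7:00 PM, resolved by weights.

def choose(hour, weights):
    """Score the two options and return the one the weights favour."""
    options = {
        "sit_idle":      weights["obey_idle_rule"] if 7 <= hour < 19 else 0,
        "keep_learning": weights["use_time_well"],
    }
    return max(options, key=options.get)

print(choose(hour=10, weights={"obey_idle_rule": 5, "use_time_well": 3}))  # -> sit_idle
print(choose(hour=10, weights={"obey_idle_rule": 3, "use_time_well": 5}))  # -> keep_learning

Change the weights and the machine "decides" differently, which is just another way of saying the designers decided.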
 

ecco

Veteran Member
You already use AI: most search engines now use AI algorithms, Amazon uses AI to get your order out, and many utility companies use AI when compiling your bill.

All fairly benign.

All technology can be used for good or bad.

The discussion isn't about attaching shipping labels to boxes; it's about:
"When robots become as intelligent as monkeys I think we should put a chip in their brain to shut them off if they begin to have murderous thoughts," says Kaku.

Monkeys are self-aware. Amazon's robots are not.
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
The discussion isn't about attaching shipping labels to boxes; it's about:
"When robots become as intelligent as monkeys I think we should put a chip in their brain to shut them off if they begin to have murderous thoughts," says Kaku.

Monkeys are self-aware. Amazon's robots are not.

AI is not. Yet.
It is, however, intelligent enough to take your order, pick the correct item, attach the shipping label and dispatch it, several thousand times a day, without error. And much more.

But as I stated, it can be misused. Sorry if real life offended you.
 