
Is Kaku correct?

Thief

Rogue Theologian
to make the best use of its time.
computers already out-think humans
thousands of calculations per second

the danger of AI…
is what you give it power over

your vacuum cleaner?
your bank account?
your nuclear reactor?
 

Jumi

Well-Known Member
We're not close to a self-aware "real AI" yet. If the day comes, we will soon know if it's dangerous...
 

charlie sc

Well-Known Member
We're a long way off from AI. Self-recognition is different from self-awareness. Self-awareness has more nuance to it: it means the ability to identify yourself as an entity separate from the environment, and to question that. I'm not sure AI will ever achieve this state, because we have emotions and responses to pleasurable versus painful stimuli, and these may be necessary elements for AI to develop. I'm not sure logic alone is enough to give rise to the desire to contemplate one's existential self.


You’re welcome.
 

Jumi

Well-Known Member
What's confusing is that there are multiple levels/concepts being discussed as "AI". Some exist and have existed for a long time; others, such as self-aware "robots" that can work out their own philosophy, are far off. Maybe quantum computing will provide something like what Asimov (and others) envisioned. Data from Star Trek and Skynet... not in the foreseeable future.
 

ecco

Veteran Member
We're a long way off from AI. Self-recognition is different from self-awareness. Self-awareness has more nuance to it: it means the ability to identify yourself as an entity separate from the environment, and to question that. I'm not sure AI will ever achieve this state, because we have emotions and responses to pleasurable versus painful stimuli, and these may be necessary elements for AI to develop. I'm not sure logic alone is enough to give rise to the desire to contemplate one's existential self.

Even the most basic avatars from computer games of 35 years ago had something called "hit points". When these were diminished, the avatar, "feeling the pain", could no longer fight as well. When all were gone, the avatar died.
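To make that concrete, here's a minimal Python sketch of the hit-points mechanic (names and numbers invented): the avatar's effectiveness degrades as the "pain" accumulates, and it dies at zero.

```python
class Avatar:
    def __init__(self, name, max_hp=100, base_attack=10):
        self.name = name
        self.max_hp = max_hp
        self.hp = max_hp
        self.base_attack = base_attack

    def attack_power(self):
        # Effectiveness degrades in proportion to remaining hit points:
        # the wounded avatar "feels the pain" and fights less well.
        return self.base_attack * (self.hp / self.max_hp)

    def take_damage(self, amount):
        self.hp = max(0, self.hp - amount)
        if self.hp == 0:
            print(f"{self.name} has died.")

knight = Avatar("knight")
knight.take_damage(60)
print(knight.attack_power())   # 4.0 -- fighting at 40% strength
```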

In 1991, Sid Meier came out with a game called Civilization. At its most basic, the human plays against a number of AI "leaders". The leaders all have different personalities, some being more aggressive, some more deceitful, some more adventurous, etc. These AI leaders react to circumstances. I'm not saying they experience pleasure and pain in the same way that humans do. But if you walk across the territory of an aggressive leader, you will probably start a war. As wars go on, leaders feel the weight of their losses, weigh them against gains, and may sue for peace.
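A couple of personality weights plus a running ledger of losses versus gains is already enough to get that kind of behavior. A toy sketch, with invented numbers (not Civilization's actual code):

```python
import random

class Leader:
    def __init__(self, name, aggression, patience):
        self.name = name
        self.aggression = aggression  # 0..1: how readily war is declared
        self.patience = patience      # net losses tolerated before seeking peace
        self.losses = 0
        self.gains = 0
        self.at_war = False

    def on_border_crossed(self):
        # An aggressive leader will probably start a war over trespass.
        if random.random() < self.aggression:
            self.at_war = True
            print(f"{self.name} declares war!")

    def after_battle(self, units_lost, cities_gained):
        # "Feel the weight of losses" and evaluate them against gains.
        self.losses += units_lost
        self.gains += cities_gained * 5   # weight captured cities above units
        if self.at_war and self.losses - self.gains > self.patience:
            self.at_war = False
            print(f"{self.name} sues for peace.")

genghis = Leader("Genghis", aggression=0.9, patience=3)
genghis.on_border_crossed()                           # almost certainly: war
genghis.after_battle(units_lost=10, cities_gained=1)  # war-weary: peace
```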

What our brains do with electrical impulses and chemicals, avatars do with 1s and 0s. "Fuzziness" can be established with just 1s and 0s.
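For instance, in fuzzy logic a property like "warm" is a degree of membership between 0.0 and 1.0 rather than a yes/no bit, yet the float underneath is still stored as 1s and 0s. A minimal sketch:

```python
def warm_membership(temp_c):
    """Degree to which a temperature counts as 'warm' (a triangular ramp)."""
    if temp_c <= 10 or temp_c >= 35:
        return 0.0
    if temp_c <= 22:
        return (temp_c - 10) / 12   # ramp up from 10 C to 22 C
    return (35 - temp_c) / 13       # ramp down from 22 C to 35 C

for t in (5, 16, 22, 30):
    print(t, round(warm_membership(t), 2))   # 0.0, 0.5, 1.0, 0.38
```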

For now brains have more cells than computers have bits. For now.
This site has a great visualization of the increases in processing power since ~1960:
https://www.visualcapitalist.com/visualizing-trillion-fold-increase-computing-power/

The 7090 at the top of the chart cost $2.9 million (equivalent to $19 million in 2018) and had a blazing speed of about 2 million flops.
By comparison, a PS4 can churn out 1,800 billion flops and costs about $300.
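Taking those figures at face value, the ratio is easy to check:

```python
# The post's own figures, taken at face value:
ibm_7090_flops = 2e6    # ~2 million flops, 1960
ps4_flops = 1.8e12      # ~1,800 billion flops

print(f"{ps4_flops / ibm_7090_flops:,.0f}x faster")  # 900,000x faster
print(19_000_000 / ibm_7090_flops)   # ~$9.50 per flops then (2018 dollars)
print(300 / ps4_flops)               # ~1.7e-10 dollars per flops now
```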
 

charlie sc

Well-Known Member
Even the most basic avatars from computer games of 35 years ago had something called "hit points". When these were diminished, the avatar, "feeling the pain", could no longer fight as well. When all were gone, the avatar died.

In 1991, Sid Meier came out with a game called Civilization. At its most basic, the human plays against a number of AI "leaders". The leaders all have different personalities, some being more aggressive, some more deceitful, some more adventurous, etc. These AI leaders react to circumstances. I'm not saying they experience pleasure and pain in the same way that humans do. But if you walk across the territory of an aggressive leader, you will probably start a war. As wars go on, leaders feel the weight of their losses, weigh them against gains, and may sue for peace.

What our brains do with electrical impulses and chemicals, avatars do with 1s and 0s. "Fuzziness" can be established with just 1s and 0s.

For now brains have more cells than computers have bits. For now.
This site has a great visualization of the increases in processing power since ~1960:
https://www.visualcapitalist.com/visualizing-trillion-fold-increase-computing-power/

The 7090 at the top of the chart cost $2.9 million (equivalent to $19 million in 2018) and had a blazing speed of about 2 million flops.
By comparison, a PS4 can churn out 1,800 billion flops and costs about $300.
Here, you're describing simulating AI, not actual AI. I suppose it's possible there's no difference, but I argue that actual AI has to come to grips with its existential nature. This, I think, is not something you can program.
 

ecco

Veteran Member
Here, you're describing simulating AI, not actual AI.

No, I am describing simulating I (intelligence). Since it is a simulation, we can refer to it as A (artificial).


I suppose it's possible there's no difference, but I argue that actual AI has to come to grips with its existential nature. This, I think, is not something you can program.

What does "existential nature" even mean? Self-awareness? Did you not get the point I was making about the AI leaders in Civilization?

Anything can be programmed.
A computer was programmed to win at chess. A later version had the computer "learning" to play chess. A computer has now been programmed to learn to play Go.
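To illustrate what "programmed to learn" means in miniature, here is a toy reinforcement-learning loop (my own illustrative example, not DeepMind's method) that teaches itself the game of Nim purely from win/lose feedback, with no strategy coded in:

```python
import random
from collections import defaultdict

# Nim: 10 stones, players alternate taking 1-3, taking the last stone wins.
# Q[(stones_left, move)] -> learned value of taking `move` stones.
Q = defaultdict(float)
ALPHA, EPSILON = 0.1, 0.1

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    if random.random() < EPSILON:                # explore now and then
        return random.choice(moves(stones))
    return max(moves(stones), key=lambda m: Q[(stones, m)])

for _ in range(50_000):                          # self-play training games
    stones, history = 10, []
    while stones > 0:
        m = choose(stones)
        history.append((stones, m))
        stones -= m
    # Whoever took the last stone won; credit moves backwards, alternating sign.
    reward = 1.0
    for state_move in reversed(history):
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward

# The learned policy should rediscover the classic strategy of leaving the
# opponent a multiple of 4 stones (typically prints 2, then 3):
print(max(moves(10), key=lambda m: Q[(10, m)]))
print(max(moves(7), key=lambda m: Q[(7, m)]))
```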

Thirteen minutes, worth watching. Toward the end, it discusses fears and ethics.
 

charlie sc

Well-Known Member
No, I am describing simulating I (intelligence). Since it is a simulation, we can refer to it as A (artificial).
The opposite of artificial intelligence is natural intelligence. We are naturally intelligent. I suppose the term is convoluted, because when I say AI, I mean sentience, not just something simply following algorithms. Simulating intelligence, or emulating it, is quite different from actually being intelligent. I have to direct you to the Chinese room problem (see Wikipedia). How do we know, or can we ever know, that something is sentient? This is what I'm talking about.

What does "existential nature" even mean? Self-awareness? Did you not get the point I was making about the AI leaders in Civilization?

Anything can be programmed.
A computer was programmed to win at chess. A later version had the computer "learning" to play chess. A computer has now been programmed to learn to play Go.
This is all very impressive, but it doesn't address what I said. They said DeepMind played millions of games to improve, and compared its play-style to intuition. I'm not entirely sure how far any so-called AI can get with this. Intuition may not work this way; it may draw on an accumulation of emotions, experience, and taking chances (i.e., gut feelings). No doubt they are getting the experience part, but they're missing vital ingredients for actual AI.


In this video at 58:02, DeepMind had no idea what to do. It was akin to a robot not even comprehending what's going on. I won't even say animal, because even animals know when to give up. Even after millions upon millions of games, DeepMind did not stop attempting to reach the air units. As the game went on, it did not understand or consider taking a chance on a base race. It did not give up, and when it had only one unit left, its APM was still 300. These are all indications of a robot that can learn through repetition, not AI. It was unable to adapt intuitively. This reminds me of movies/Star Trek :p where a robot faces a logical problem and explodes, or a robot with some error continuously hits itself against the wall.
 

atanu

Member
Premium Member
... Self-recognition is different from self-awareness. Self-awareness has more nuance to it: it means the ability to identify yourself as an entity separate from the environment, and to question that. ...

To me, self-awareness is one step more. Self-awareness of separate selves inevitably leads to pain. The individual self, empowered by true intelligence, however, has the competence to see through the veil of separateness and overcome pain.
 

charlie sc

Well-Known Member
To me, self-awareness is one step more. Self-awareness of separate selves inevitably leads to pain. The individual self, empowered by true intelligence, however, has the competence to see through the veil of separateness and overcome pain.
Yes, I think this is an important part of becoming self-aware: the dichotomy of avoiding pain and seeking pleasure, and how it relates to the self. Basically, desire is, I think, a necessary property.
 

A Vestigial Mote

Well-Known Member
One of the things I always marveled at in the sci-fi movies where some future AI gets "out of control" because it becomes too self-aware: the one thing they never take into account is that we've all talked about it, and seen movies about it, for so long that any designer would be horribly remiss not to include some form of "kill switch" that instantly renders the supporting computing power, or the software itself, inert and powerless. In other words, safeguards against it "getting out" or doing any damage.

As a very simple example, you force the AI's encoding to treat binary "0" as "1" and "1" as "0", and you have all its requests or attempts to read/write information run through an entirely physical mechanism (whose internal workings it can't see) that translates the data for consumption by itself or by outside systems. Then, if it did somehow "write itself" to some other drive, this physical translator wouldn't be in place: all data would look like gobbledygook to it, and all data it attempted to push out would look like gobbledygook to any other system. It would be like stepping out of a cave thinking you had found freedom, only to realize that you're blind, deaf and completely lost.
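A toy sketch of that translator idea in Python (purely illustrative): the out-of-sight stage flips every bit, so data only round-trips correctly when it passes through the box both ways.

```python
def translate(data: bytes) -> bytes:
    """The out-of-sight hardware stage: flip every bit (XOR with 0xFF)."""
    return bytes(b ^ 0xFF for b in data)

message = b"I have escaped!"
stored = translate(message)     # what actually lands on the drive

print(translate(stored))        # read back through the box: b'I have escaped!'
print(stored)                   # read without the box: b'\xb6\xdf\x97...' gibberish
```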
 