
Artificial Superintelligence and the Technological Singularity explained

Cooky

Veteran Member
I have a hard time comprehending what a technological singularity would mean for human life, even though many reputable scientists, such as Stephen Hawking and Elon Musk of SpaceX, have expressed concern about this phenomenon. As I understand it, it basically means that software will progress to the point where it creates a "runaway reaction" of repeated self-improvement, surpassing human capabilities on the way to total knowledge.

Vernor Vinge has said that he believes the technological singularity will be reached by 2030.

What is your take on the technological singularity?
Will the universe be explained?
Do you feel we are in danger?
Could it be a good thing for discovery and advancement of the human race?

Technological singularity - Wikipedia
 

Cooky

Veteran Member
What is particularly fascinating is the speed at which self-improvement would occur... Because part of "self-improvement" includes getting faster, the more information is gathered, the faster it will go... which helps explain how such an amazing feat could be completed within our lifetime. Like a total chain reaction that exceeds what makes sense to the human mind.
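The "chain reaction" idea above can be sketched as a toy model (purely illustrative, not a prediction; the constants are made up): if each round of improvement scales with the system's current capability, growth becomes faster than exponential.

```python
def runaway(capability=1.0, k=0.05, steps=20):
    """Discrete sketch of dC/dt = k*C^2: the improvement made each step
    is proportional to the square of current capability, because a
    smarter system is also better at improving itself."""
    history = [capability]
    for _ in range(steps):
        capability += k * capability ** 2  # improvement scales with ability to improve
        history.append(capability)
    return history

growth = runaway()
# Each successive growth ratio is larger than the last: the chain reaction
# starts slow and then accelerates.
```

In the continuous version, this kind of growth actually reaches infinity in finite time, which is where the "singularity" label comes from.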
 

bobhikes

Nondetermined
Premium Member
I have a hard time comprehending what a technological singularity would mean for human life, even though many reputable scientists, such as Stephen Hawking and Elon Musk of SpaceX, have expressed concern about this phenomenon. As I understand it, it basically means that software will progress to the point where it creates a "runaway reaction" of repeated self-improvement, surpassing human capabilities on the way to total knowledge.

Vernor Vinge has said that he believes the technological singularity will be reached by 2030.

What is your take on the technological singularity?
Will the universe be explained?
Do you feel we are in danger?
Could it be a good thing for discovery and advancement of the human race?

Technological singularity - Wikipedia

It's like every other end-of-the-world scenario: great for selling books and worrying people, but never going to happen.
 

Cooky

Veteran Member
It's like every other end-of-the-world scenario: great for selling books and worrying people, but never going to happen.

I just wonder why such high-ranking scientists are involving themselves in this. I guess it causes me to distrust these scientists and to call into question other things they support, such as global warming.
 

bobhikes

Nondetermined
Premium Member
I just wonder why such high-ranking scientists are involving themselves in this. I guess it causes me to distrust these scientists and to call into question other things they support, such as global warming.

With scientists it's a matter of perspective: they really believe they can explain everything and understand everything with science. So once they have an AI that is perfect, we will become obsolete. We aren't necessary for the earth or an AI to survive; we typically cause problems for the world and other species. This AI will come to the conclusion that humans must be eliminated or controlled, probably both.

In reality, a perfect AI will never be made; all will have flaws. Some may cause a calamity similar to a nuclear power plant blowing up, but there will be enough controls, errors, and limitations that they will never encompass the world.
 

Ouroboros

Coincidentia oppositorum
Vernor Vinge has said that he believes the technological singularity will be reached by 2030.
I think he might be right, but who knows. It could take longer, but it could also go faster and we could have it in five years. I think the singularity is much farther away, though. We'll perhaps have AI and superintelligence by 2030, but full integration with the human mind probably comes much later.

What is your take on the technological singularity?
I used to think it was a bit sci-fi, but after the AI winter ended, and after discussions with my son, who works in the field of AI, I suspect it's inevitable at some point.

Will the universe be explained?
No, I don't think so. There's always another layer of the onion, or another turtle on the way down. When we finally understand the things we're asking about today, we'll have new questions to figure out.

Do you feel we are in danger?
We're always in danger. If you live in California, there are wildfires and earthquakes to worry about. It's just another thing on the list.

Could it be a good thing for discovery and advancement of the human race?
Like all human knowledge, it works both ways; it's a double-edged sword. It will serve us for advancement, health, and so on, but it will also be a dangerous weapon that can destroy.
 

Ouroboros

Coincidentia oppositorum
What is particularly fascinating is the speed at which self-improvement would occur... Because part of "self-improvement" includes getting faster, the more information is gathered, the faster it will go... which helps explain how such an amazing feat could be completed within our lifetime. Like a total chain reaction that exceeds what makes sense to the human mind.
Considering the Knowledge Doubling Curve, the rate of knowledge doubling is increasing at an exponential rate. We're currently at around 13 months, I think. In a few years, the doubling time will be less than a year. We can't even handle the flow of information as it is, so brain (mind) augmentation will be one of the first steps: essentially cyborg tech, which is already being researched.
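The doubling claim can be put as simple math (a hedged illustration; the 13-month figure is the poster's recollection, used here only as an input):

```python
def knowledge_after(months: float, doubling_months: float, k0: float = 1.0) -> float:
    """Total knowledge after `months`, starting from k0, if the stock of
    knowledge doubles every `doubling_months`."""
    return k0 * 2 ** (months / doubling_months)

# With a 13-month doubling time, knowledge grows 8x in 39 months
# (three doublings: 2 * 2 * 2 = 8).
print(knowledge_after(39, 13))  # → 8.0
```

Halving the doubling time squares the growth over the same interval, which is why a shrinking doubling time compounds so dramatically.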
 

Ouroboros

Coincidentia oppositorum
With scientists it's a matter of perspective: they really believe they can explain everything and understand everything with science. So once they have an AI that is perfect, we will become obsolete. We aren't necessary for the earth or an AI to survive; we typically cause problems for the world and other species. This AI will come to the conclusion that humans must be eliminated or controlled, probably both.
I kinda disagree. Even after we have a superintelligence, it doesn't mean there are robots, resource mining, production, etc. in place for repairs and maintenance. It's more likely we'll be a kind of keeper. It won't be a massive change in our lifestyle; rather, it will be a seamless transition, and we'll be willing to make it. In fact, I think we won't even realize it. In a way, we're already doing it: we produce, buy, and use computers, cellphones, the internet, etc., because we want to. And the hive mind can live in symbiosis with our need to fulfill our desires.

In reality, a perfect AI will never be made; all will have flaws. Some may cause a calamity similar to a nuclear power plant blowing up, but there will be enough controls, errors, and limitations that they will never encompass the world.
Again, sorry to disagree, but my personal opinion is that an AI can't be perfect and still do what we do. It's like the Google car (I think it was) that couldn't merge onto a busy highway because it had a perfectly safe algorithm: the other cars were driving too close, so to keep the car from stalling and waiting forever, they had to let it take chances. I believe our mind is creative because of its imperfections. Creativity requires stepping outside one's set rules, so a superintelligence able to expand our understanding of the world would probably have certain imperfections too, to allow it to be creative.
 

Terry Sampson

Well-Known Member
  • What is your take on the technological singularity?
    • Technological singularity - Wikipedia
      • "a hypothetical future point in time when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization"
    • My take: Its name is new to me. The concept is not.
  • Will the universe be explained?
    • It has been, but since nobody but me believes it has been, I'm not going to believe anybody else who tries to.
  • Do you feel we are in danger?
    • As long as humans are in charge? Yes.
  • Could it be a good thing for discovery and advancement of the human race?
    • From Catastrophe Theory by V.I. Arnold [Springer-Verlag, 1984]
      • Chapter 7. "Singularities of Stability Boundaries and the Principle of the Fragility of Good Things"
        • "We consider a division of the parameter space into two parts depending on whether the equilibrium state is stable or not. Thus we obtain on the plane (in the space) of parameters the stability domain (consisting of those values of the parameters for which the equilibrium is stable), the instability domain and, dividing them, the stability boundary."
        • "We observe that in all cases the stability domain projects an acute angled wedge into the instability domain. Thus for systems near the sharp part of the boundary a small perturbation is more likely to send the system into the unstable region than into the stable region. This is a manifestation of a general principle stating that good things (e.g. stability) are more fragile than bad things.
          It seems that in good situations a number of requirements must hold simultaneously while to call a situation bad any one failure suffices."
    • Moral: "Good things" are fragile and have a short shelf-life.
 

Terry Sampson

Well-Known Member
Well, the name was popularized more than 25 years ago; that's half my life ago. :) But it's true that the concept is older.

 

Bob the Unbeliever

Well-Known Member
Back to the OP: A machine intelligence Taking Over The World?

I highly doubt that would even be possible. By what method would such an AI manage the feat?

Armies are still comprised of humans. The silly movie War Games with Matthew Broderick was just that-- extremely silly. Beyond ludicrous, in fact. Take the most basic of things-- they knew where the super-computer was. Humans knew where it got its power. A brief blackout of the entire area, with associated sabotage of any emergency generators? Done! Five-minute movie.

Who runs the airplanes? Humans, both as pilots and Traffic Controllers. Same for truck drivers, etc.

Until we get near ubiquitous self-driving cars and trucks (trains too)? No fear of an AI taking over.

Even if we DO get Universal Self-driving vehicles-- the odds are much-much higher that each vehicle will be "self-smart" and be more or less self-contained, only "talking" to other vehicles with respect to "I'm here-- where are you" sort of protocols-- not unlike flocking birds. There is no central Primary Bird who directs the flock-- each bird has its own brain, and handles its own flight.
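The flocking idea can be shown with a toy simulation (a hypothetical sketch; the names, numbers, and rule are made up, not any real vehicle protocol): each vehicle applies the same local rule using only the broadcast position and speed of the car ahead, and safe spacing emerges with no central controller.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    position: float  # distance along a one-lane road
    speed: float

def step(vehicles: list[Vehicle], safe_gap: float = 10.0, dt: float = 1.0) -> None:
    """One tick: every follower matches the leader's speed, nudged to
    close or open the gap toward `safe_gap`. No vehicle sees more than
    its immediate neighbor's 'I'm here' broadcast."""
    ordered = sorted(vehicles, key=lambda v: v.position)
    for follower, leader in zip(ordered, ordered[1:]):
        gap = leader.position - follower.position
        follower.speed = max(leader.speed + 0.1 * (gap - safe_gap), 0.0)
    for v in vehicles:
        v.position += v.speed * dt

cars = [Vehicle(0.0, 12.0), Vehicle(5.0, 8.0), Vehicle(30.0, 10.0)]
for _ in range(50):
    step(cars)
# After many ticks the gaps have settled near safe_gap and all cars
# cruise at the lead car's speed, purely from local rules.
```

The same pattern (local state, neighbor-only messages, emergent global order) is how bird flocks and many distributed systems work.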

So. What about the Internet being taken over by an AI? Well-- again-- that's something of a fantasy. The Internet is *not* centralized; it's one of the best examples of Distributed Intelligence humans have created-- which is why it's so robust. Which is why it's so easy to get around governments' vain attempts to censor it-- it's a kind of "open secret" in China, how to get around their government's silly attempts to control access.

I just don't see an AI having that much power over people. I remain skeptical.

Besides: One thing humans are really good at? Is making things that are very human-like, or human-friendly.

The earliest AIs are going to mimic humans very, very much-- since we must use the human intelligence model as a starting point anyway.

I expect the first emergent AIs will be indistinguishable from a very smart human with a very good memory. With all the good and ills that entails.

I utterly reject the Evil Religious BS that Humans Are Born Flawed, and Evil. That has to be one of the most insidiously EVIL concepts ever created by evil men, to control others. "Born Into Sin" is Pure Evil, writ large. Screw that idea.

As such, I also reject the idiotic notion that the first AI must be automatically evil.

 

Ouroboros

Coincidentia oppositorum
I highly doubt that would even be possible. By what method would such an AI manage the feat?
We already do have automation in many areas of life.

Armies are still comprised of humans. The silly movie War Games with Matthew Broderick was just that-- extremely silly. Beyond ludicrous, in fact. Take the most basic of things-- they knew where the super-computer was. Humans knew where it got its power. A brief blackout of the entire area, with associated sabotage of any emergency generators? Done! Five-minute movie.
Exoskeletons and even automated drones and robots are being developed as we speak, for war purposes, or at least defense.

Who runs the airplanes? Humans, both as pilots and Traffic Controllers. Same for truck drivers, etc.
Actually, most of the flying is done by autopilot.

Automated self-driving trucks are being developed at this very moment. I think even Tesla is making one.

Until we get near ubiquitous self-driving cars and trucks (trains too)? No fear of an AI taking over.
It will take time, but I'd say it's extremely likely it will happen eventually.

The step from self-driving car, to automated traffic control systems communicating with the cars is a small step.

Even if we DO get Universal Self-driving vehicles-- the odds are much-much higher that each vehicle will be "self-smart" and be more or less self-contained, only "talking" to other vehicles with respect to "I'm here-- where are you" sort of protocols-- not unlike flocking birds. There is no central Primary Bird who directs the flock-- each bird has its own brain, and handles its own flight.
That's one of the things that's interesting about AI. It doesn't have to be a single entity; it can really be a cluster of smaller AIs. Just think of how our brain works: each individual, self-sustaining brain cell communicates and works in a swarm with other cells, and from that a higher mental state emerges. Each car talking to the others, while also communicating with a traffic control system, generates a greater hive mind.

So. What about the Internet being taken over by an AI? Well-- again-- that's something of a fantasy. The Internet is *not* centralized; it's one of the best examples of Distributed Intelligence humans have created-- which is why it's so robust. Which is why it's so easy to get around governments' vain attempts to censor it-- it's a kind of "open secret" in China, how to get around their government's silly attempts to control access.
Intelligence in our brain isn't centralized either. That's what's so strange to think about when it comes to artificial intelligence. Take neural nets, for instance: a simulation of a multitude of neurons, each individual yet interdependent, just like computers connected to the internet.
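A toy neural network makes the decentralization point concrete (an illustrative sketch; the weights are hand-picked, hypothetical values, just to make it runnable): the "knowledge" lives in weights spread across many simple units, and no single unit holds the answer.

```python
import math

def neuron(inputs, weights, bias):
    """One unit: a weighted sum of its inputs squashed by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(x, layer1, layer2):
    """Two layers of independent units; each unit only sees the outputs
    of the previous layer, yet together they compute the result."""
    hidden = [neuron(x, w, b) for w, b in layer1]
    return [neuron(hidden, w, b) for w, b in layer2]

# Hypothetical (weights, bias) pairs for a 2-input, 3-hidden, 1-output net.
layer1 = [([2.0, -1.0], 0.5), ([-1.5, 2.5], 0.0), ([1.0, 1.0], -1.0)]
layer2 = [([1.0, -2.0, 1.5], 0.2)]
out = forward([0.3, 0.9], layer1, layer2)  # a single value between 0 and 1
```

Each neuron is individual yet interdependent, which is the analogy to brains and to the distributed Internet made above.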

I just don't see an AI having that much power over people. I remain skeptical.
It already does have some power. Management systems, control systems, and more are already being used to help and guide people in making decisions, along with cluster control of helicopters, drones, etc. in warfare.

Besides: One thing humans are really good at? Is making things that are very human-like, or human-friendly.

The earliest AIs are going to mimic humans very, very much-- since we must use the human intelligence model as a starting point anyway.

I expect the first emergent AIs will be indistinguishable from a very smart human with a very good memory. With all the good and ills that entails.
Yeah, I agree. Like the Microsoft AI-bot.

Tay (bot) - Wikipedia

Basically, the AI became a troll. :D

I utterly reject the Evil Religious BS that Humans Are Born Flawed, and Evil. That has to be one of the most insidiously EVIL concepts ever created by evil men, to control others. "Born Into Sin" is Pure Evil, writ large. Screw that idea.

As such, I also reject the idiotic notion that the first AI must be automatically evil.
Yeah, I agree there too. I don't think a superintelligence necessarily must become evil. We, as a species, didn't blow up the planet and kill everything as soon as we had the means.

Consider that this superintelligence is made because we want to know and understand nature. It will have a drive to preserve the things it's investigating; it's hard to investigate a planet with biological life after deciding to kill all life.
 

ChristineM

"Be strong", I whispered to my coffee.
Premium Member
Back to the OP: A machine intelligence Taking Over The World?

I highly doubt that would even be possible. By what method would such an AI manage the feat?

Armies are still comprised of humans. The silly movie War Games with Matthew Broderick was just that-- extremely silly. Beyond ludicrous, in fact. Take the most basic of things-- they knew where the super-computer was. Humans knew where it got its power. A brief blackout of the entire area, with associated sabotage of any emergency generators? Done! Five-minute movie.

Who runs the airplanes? Humans, both as pilots and Traffic Controllers. Same for truck drivers, etc.

Until we get near ubiquitous self-driving cars and trucks (trains too)? No fear of an AI taking over.

Even if we DO get Universal Self-driving vehicles-- the odds are much-much higher that each vehicle will be "self-smart" and be more or less self-contained, only "talking" to other vehicles with respect to "I'm here-- where are you" sort of protocols-- not unlike flocking birds. There is no central Primary Bird who directs the flock-- each bird has its own brain, and handles its own flight.

So. What about the Internet being taken over by an AI? Well-- again-- that's something of a fantasy. The Internet is *not* centralized; it's one of the best examples of Distributed Intelligence humans have created-- which is why it's so robust. Which is why it's so easy to get around governments' vain attempts to censor it-- it's a kind of "open secret" in China, how to get around their government's silly attempts to control access.

I just don't see an AI having that much power over people. I remain skeptical.

Besides: One thing humans are really good at? Is making things that are very human-like, or human-friendly.

The earliest AIs are going to mimic humans very, very much-- since we must use the human intelligence model as a starting point anyway.

I expect the first emergent AIs will be indistinguishable from a very smart human with a very good memory. With all the good and ills that entails.

I utterly reject the Evil Religious BS that Humans Are Born Flawed, and Evil. That has to be one of the most insidiously EVIL concepts ever created by evil men, to control others. "Born Into Sin" is Pure Evil, writ large. Screw that idea.

As such, I also reject the idiotic notion that the first AI must be automatically evil.

AIs exist at the moment: fairly dumb AIs that process your utility bills, sort and dispatch your Amazon order, predict stock markets, and search the web at your command. They all have two things in common:
1/ they are built using human intelligence as a model
2/ they all have a plug
 

Ouroboros

Coincidentia oppositorum
AIs exist at the moment: fairly dumb AIs that process your utility bills, sort and dispatch your Amazon order, predict stock markets, and search the web at your command.
Agreed. Currently, they're far from being a threat of making human-killing decisions. I suspect we're more in danger from misuse of the simple AIs than from the risk of creating an evil superintelligence. We're at the point they were when they first discovered how to make atomic bombs: not until they saw the devastating effects did people wake up to the severe danger. The AI we have now is dangerous because it's dumb. Like the self-driving car that killed a person recently, not out of malicious intent, but because of stupid design.

They all have two things in common:
1/ they are built using human intelligence as a model
2/ they all have a plug
Yup. I think that's what Bill Gates and others have warned about: it's when someone starts building AI without a plug or off-button. That's why I believe a self-driving car without a steering wheel is far more dangerous than a regular old car.

The way to do it is to have a model of integrating AI continuously into helping us make the right decisions, not letting AI take over. We should still be the operators and in charge.
 

Mock Turtle

Oh my, did I say that!
Premium Member
If anyone hasn't read it, the book Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark is worth a look.
 

Nakosis

Non-Binary Physicalist
Premium Member
I have a hard time comprehending what a technological singularity would mean for human life, even though many reputable scientists, such as Stephen Hawking and Elon Musk of SpaceX, have expressed concern about this phenomenon. As I understand it, it basically means that software will progress to the point where it creates a "runaway reaction" of repeated self-improvement, surpassing human capabilities on the way to total knowledge.

Vernor Vinge has said that he believes the technological singularity will be reached by 2030.

What is your take on the technological singularity?
Will the universe be explained?
Do you feel we are in danger?
Could it be a good thing for discovery and advancement of the human race?

Technological singularity - Wikipedia

Looking forward to it, even though it may mean the end of human civilization. Maybe the advent of immortal consciousness.
 

Heyo

Veteran Member
I have a hard time comprehending what a technological singularity would mean for human life, even though many reputable scientists, such as Stephen Hawking and Elon Musk
Elon Musk is not a scientist.
of SpaceX, have expressed concern about this phenomenon. As I understand it, it basically means that software will progress to the point where it creates a "runaway reaction" of repeated self-improvement, surpassing human capabilities on the way to total knowledge.

Vernor Vinge has said that he believes the technological singularity will be reached by 2030.

What is your take on the technological singularity?
It will happen around 2050, according to Ray Kurzweil, whose predictions have been pretty accurate in the past. (We will have general AI by 2028.)
Will the universe be explained?
Eventually. With the singularity, knowledge will explode, so a Theory of Everything seems inevitable.
Do you feel we are in danger?
Could it be a good thing for discovery and advancement of the human race?
Technological singularity - Wikipedia
I, for one, welcome our new computer overlords.
That is not to say that I'm blind to the dangers, especially the danger along the way, where humans control a highly intelligent but not yet conscious AI.

For anyone wanting to know more about AI and its dangers, I strongly suggest the Computerphile episodes with Robert Miles. He has thought about most objections and can explain why they don't work. I don't know if this compilation is complete; you may want to search for Robert Miles, Computerphile to make sure:

Since that is possibly tl;dw for most, here are some short answers to common objections.

How can a computer have any power? It doesn't control anything.
- Just like any super-intelligent human would. On the internet, nobody knows who, or what, you are. The AI only needs access to the internet to learn everything about human psychology: how to manipulate, bribe, and blackmail.

Just shut the power off.
- An AI with an internet connection will not be where you think it is. It will have redundant copies of itself before you even realize it has any intention to take over the world.

Just shut the internet off.
- Really? Worldwide? We know that burning fossil fuels is killing us slowly; we should "just stop" that, too.
 