• Welcome to Religious Forums, a friendly forum to discuss all religions in a friendly surrounding.

    Your voice is missing! You will need to register to get access to the following site features:
    • Reply to discussions and create your own threads.
    • Our modern chat room. No add-ons or extensions required, just login and start chatting!
    • Access to private conversations with other members.

    We hope to see you as a part of our community soon!

(AI) Concerning or not?

Mock Turtle

Oh my, did I say that!
Premium Member
AI is a gift from an unscrupulous God to professional liars and conmen everywhere.

And the AI lies have already begun, from phony books and news articals to phony songs and videos, to phony misleading statistics, all deliberately disguised and with hidden sources and agenas.
I blame it on that nasty photography stuff - and when it went digital. :D
 

Quintessence

Consults with Trees
Staff member
Premium Member
What I find so concerning about AI compared to for instance climate change is that at least with climate change things move relatively slowly and we can make predictions and adjust to it, even though we are famously bad at that as well.
Interesting. On the contrary, the pace of climate change induced by humans is unprecedentedly fast and well outside of human ability to control. So-called "AI" on the other hand is laughably and trivially easy to kill. Why?

Because it depends on the very same non-sustainable practices that lead to our present sixth mass extinction and climate change to begin with. I'm not worried at all about it because of the high energy and high water/cooling requirements of so-called "AI." It is neither sensible nor sustainable technology.

Literally all one has to do to stop so-called "AI" is turn off the power. Done.

On the other hand, even if all carbon pollution was stopped right NOW, we are still locked into disruptive climate change worldwide and halting all carbon pollution would not necessarily halt the sixth mass extinction brought on by human overconsumption and greed.
 

PureX

Veteran Member
I blame it on that nasty photography stuff - and when it went digital. :D
It's not the new tech that is the problem. It's our unwillingness to reign in our own propensity to use it to lie and cheat and steal from each other. So anytime some new tech comes along, it immediately becomes a weapon in the hands of the unscrupulous. Then it takes forever to reign it in, after much damage has been done, and that's if we ever manage to reign it in at all. If it benefits the rich and powerful, as AI certainly does, and will, it will never be reigned in.

Our ability to understand the world around us is going to be totally blind-sided by a storm of false world views that look and sound just like the real thing. And each of them will be intent on exploiting us in one way or another.

Yet again, science is tossing another shiny loaded pistol into the monkey's cage, claiming it will somehow save them. And once again, all the monkeys will do with it is shoot each other.
 
Last edited:

Nimos

Well-Known Member
Interesting. On the contrary, the pace of climate change induced by humans is unprecedentedly fast and well outside of human ability to control. So-called "AI" on the other hand is laughably and trivially easy to kill. Why?

Because it depends on the very same non-sustainable practices that lead to our present sixth mass extinction and climate change to begin with. I'm not worried at all about it because of the high energy and high water/cooling requirements of so-called "AI." It is neither sensible nor sustainable technology.

Literally all one has to do to stop so-called "AI" is turn off the power. Done.

On the other hand, even if all carbon pollution was stopped right NOW, we are still locked into disruptive climate change worldwide and halting all carbon pollution would not necessarily halt the sixth mass extinction brought on by human overconsumption and greed.
Maybe, It depends on who you ask. If you read #13, those people don't think it will be possible.

If we are talking about a sentient artificial entity with close to godlike knowledge, it might not be all that easy to outsmart. Not saying that this will happen, just quoting what people who know more about these things than I do say.

You seem to look at this as if we are just talking about an advanced app, if **** hits the fan, we just uninstall it. We could do that with AI and maybe even AGI, but if we go beyond that and even with current levels in AI, they don't seem very certain what effects it can have. If ASI is developed, we are potentially talking about something that is so far ahead of humans that we just cannot control it. The problem obviously is that this technology is widely spread, so you can't really prevent anyone from continuing to develop this in secret. So assuming that you can stop it, I don't think, it might simply take longer should it be banned.

The big question to me at least is do we even need ASI to lose control. Again, I think the main issue here is not particularly the AI/AGI itself, it is how we apply it to societies that can cause huge issues.
 

anotherneil

Well-Known Member
I take it you're figuratively referring to nuclear bombs, but any setup that enables network access to materials and machines that can produce weapons-grade radioactive material, assemble weapons, and deploy them is relevant in the literal sense. Hopefully any entities that do have such resources have the sense to airgap their systems, otherwise (even without AI) that could be a problem (e.g. Skynet from the Terminator movies).

Aside from that, what sort of government involvement does there need to be for AI that we don't already have today for computer technology, software, networks, the Internet, etc?

Is there a reason for making a bigger fuss over AI than any other app such as word processors, spreadsheets, databases, web browsers, etc?
 

bobhikes

Nondetermined
Premium Member
As some might not know OpenAI have a safety team that is supposed to help ensure that AI/AGI is done responsibly:

Leike co-led OpenAI's superalignment group, a team that focuses on making its artificial-intelligence systems align with human interests.
In September, he was named one of Time 100's most influential people in AI.
Leike announced his departure hours after Ilya Sutskever, the other superalignment leader, said he was leaving.

------


OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday. (Today 18/05/2024)

This is some tweets from Leike:
View attachment 91740
View attachment 91741

This to me is insane. This is big corporations playing with potential nuclear bombs without really understanding them in their chase for profit. There doesn't seem to be much government involvement in this at all. This guy is one of the leading people regarding this, together with Ilya Sutskever, who also left.

I don't know what is going on, but something seems pretty strange if you ask me. And these are not the first safety people to leave.
Concerning in that it is going to get annoying before it fails to do what it claims. Eventually, it will be a bust.
 

Nimos

Well-Known Member
I take it you're figuratively referring to nuclear bombs, but any setup that enables network access to materials and machines that can produce weapons-grade radioactive material, assemble weapons, and deploy them is relevant in the literal sense. Hopefully any entities that do have such resources have the sense to airgap their systems, otherwise (even without AI) that could be a problem (e.g. Skynet from the Terminator movies).
Correct, I'm not talking about an AI launching nuclear bombs etc. Merely talking about introducing something into society that you have no idea how will impact it.
Let's just take some examples (Made up by me)
Say that AI + Robots over the next 10-15 years, cause 15-25% of all humans to lose their jobs, this is potentially extremely harmful if society is not prepared for it. It could cause massive rioting, depression etc. The economy we apply today doesn't deal well with things like that etc. And changing the economy to being able to deal with this, might not be easy.

I think we can also expect to see AI being integrated into the military in a much larger degree.
The US military won't say who won a landmark real-world dogfight between an artificial intelligence-controlled F-16 and a manned jet, citing national security concerns.

Officials would only say the groundbreaking battle went well. "Things are progressing as well or faster than we had hoped," Lt. Col. Ryan Hefron, ACE program manager for DARPA, told reporters on Friday. "But unfortunately, we can't provide more detail."


Just a video showing how robots are to be used for war. And we already know from the war in Ukraine how popular drones have become.


And in general, I think most people would agree, that the military is pretty creative when it comes to destroying things. So clearly this is going to be an arms race. And obviously one can ask themselves whether these are being developed with humanity's best interest in mind.

Then you obviously have the random psychopath using AI to do whatever crazy idea they might have.

Aside from that, what sort of government involvement does there need to be for AI that we don't already have today for computer technology, software, networks, the Internet, etc?
The problem as I see it is that it is a global issue. No one wants to lose the arms race, and no one trusts that the others won't develop this in secret even if they officially comes to an agreement.

So how do you restrict this?
Also, no country is going to give their own companies/industry a disadvanced, so how do you regulate against it? We are talking about potential trillions of dollars industry here. Those that make the best robots and AI will dominate everything.

Billionaire Elon Musk took to social media to express his agreement with a fellow tech leader’s prediction that there will be about 1 billion humanoid robots on Earth in two decades.

Now even if he is wrong about the numbers and timeline, I don't think he is wrong about these companies' goal for the future being in robots that is hooked up to a powerful intelligent AI.

So to me, it seems like the world governments are stuck in a deadlock, how do you deal with this? You have humans on one side and the economy on the other, and we know that the economy always seems to win. Yet our economy doesn't work if people have no money to spend, so what are you going to do?

That is why I find it concerning, that so little effort seems to be put into discussing these things, it doesn't make sense to do it once it to late, like with climate change, at least we have a lot of years to improve on it and hopefully find a solution, but if the economy or humans start to have problems we need a solution straight away.

Is there a reason for making a bigger fuss over AI than any other app such as word processors, spreadsheets, databases, web browsers, etc?
Yes, indeed there is. Because of what they can be used for. And because it is going to be applied across so many fields at the same time. So if you get replaced by an AI in your support job or whatever, then you might have issues finding a new one because those might also have been replaced or are in the process of being it. So no one wants to hire you. So now have to find something else, and since AI is basically being applied in all fields at once, what are you going to do, you might not have the qualification or knowledge to do another job. And we not just talking about potentially generic jobs.

This is from the CEO of Nvidia, the leading chip maker for AI:
Jensen Huang, CEO of Nvidia, argues that we should stop saying kids should learn to code. He argues the rise of AI means we can replace programming languages with human language prompts thus enabling everyone to be a programmer. AI will kill coding. 25. feb. 2024

Again, he might be right or he might not, the problem again is that no one knows what impact these things are going to have.

You have to take into account that a human needs X amount of years to achieve a level of knowledge before we are useful. So a person born today has to go to school etc. and we all have to learn the basics, we can't like AIs simply copy/paste our knowledge around. So in 24+ years, that person is ready. What will AI be like in 24 years?

I don't think anyone has any clue. This would be absurd to even think about 10 years ago, you wanted to be a lawyer, go for it, you wanted to be a software engineer, great.
 

Mock Turtle

Oh my, did I say that!
Premium Member
It's not the new tech that is the problem. It's our unwillingness to reign in our own propensity to use it to lie and cheat and steal from each other. So anytime some new tech comes along, it immediately becomes a weapon in the hands of the unscrupulous. Then it takes forever to reign it in, after much damage has been done, and that's if we ever manage to reign it in at all. If it benefits the rich and powerful, as AI certainly does, and will, it will never be reigned in.

Our ability to understand the world around us is going to be totally blind-sided by a storm of false world views that look and sound just like the real thing. And each of them will be intent on exploiting us in one way or another.

Yet again, science is tossing another shiny loaded pistol into the monkey's cage, claiming it will somehow save them. And once again, all the monkeys will do with it is shoot each other.
I tend to agree as to the threat that AI will present - as to the effects on industry and work, as to the effects on our media, and as to the effects coming from governments aiming to use it to their own advantage, regardless of the dangers. Not sure what we can do about it other than perhaps having a group AI to check other AI. :oops:
 

Nimos

Well-Known Member
I tend to agree as to the threat that AI will present - as to the effects on industry and work, as to the effects on our media, and as to the effects coming from governments aiming to use it to their own advantage, regardless of the dangers. Not sure what we can do about it other than perhaps having a group AI to check other AI. :oops:
I don't think we have any other choice than using AI to fight other AIs, whether that is possible or not is a good question, because I would assume that they follow the same rules as we do. So the most powerful AI will be able to outsmart the bad one. This is probably also why it is crucial to have the best one possible. Humans won't be able to keep up with an AI I think.

For instance this one:
AI technology has already proved instrumental in transforming and disrupting a wide range of industries, and really it’s just getting started.

Microsoft, which has gone “all-in” on artificial intelligence, has developed a generative AI model designed expressly for U.S. intelligence services. Unlike other AI platforms, such as Microsoft’s own Copilot, this one will be “air gapped” and won’t require a potentially unsafe connection to the internet.

Bloomberg notes, “It’s the first time a major large language model has operated fully separated from the internet… Most AI models, including OpenAI’s ChatGPT rely on cloud services to learn and infer patterns from data, but Microsoft wanted to deliver a truly secure system to the US intelligence community.”

18 months of development

The tool will allow intelligence services to use AI for tasks such as analyzing vast swathes of classified data without the fear of data leaks or hacks that could potentially compromise national security.

Clearly this has high priority I would guess, not only in the US, but all countries that consider AI a potential threat.

And the only way to make it truly safe is to "Air gap" it, so you simply can't access it from the outside, so they must be somewhat concerned since they see a need for it.

Obviously one can only speculate what research is being done behind the curtain that none of us hear about or what exactly these AIs are going to analyze etc. These institutes are not exactly known for transparency for good reasons. But also pretty well known for doing dodgy stuff. :)
 

Nimos

Well-Known Member
Found this interview that I think explains the situation or concerns regarding AI in a way that normal people can understand. This person being interviewed is not against AI, but simply as many of us a little bit puzzled about what is going on.

He gives a quick explanation of AI for those that ain't really sure what it is, so I think it is well worth a watch:

 
Last edited:

TagliatelliMonster

Veteran Member
As some might not know OpenAI have a safety team that is supposed to help ensure that AI/AGI is done responsibly:

Leike co-led OpenAI's superalignment group, a team that focuses on making its artificial-intelligence systems align with human interests.
In September, he was named one of Time 100's most influential people in AI.
Leike announced his departure hours after Ilya Sutskever, the other superalignment leader, said he was leaving.

------


OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday. (Today 18/05/2024)

This is some tweets from Leike:
View attachment 91740
View attachment 91741

This to me is insane. This is big corporations playing with potential nuclear bombs without really understanding them in their chase for profit. There doesn't seem to be much government involvement in this at all. This guy is one of the leading people regarding this, together with Ilya Sutskever, who also left.

I don't know what is going on, but something seems pretty strange if you ask me. And these are not the first safety people to leave.
My biggest worry concerning AI is that is going to completely blur the lines between what is real and what isn't in cyberspace.
That is going to be a huge problem.

It is for example perfectly capable of creating a video of Biden making a certain speech. And as a viewer you are not going to be able to tell the difference between that and an actual real speech.

Imagine the problem of "deep fakes". Now multiple that problem a million-fold.
Pictures, videos, articles,... you name it.

And you won't be able to tell what's real from what isn't.

In fact, not even just in cyberspace.
Imagine receiving a phone call from your wife that isn't your wife.

You wouldn't be able to trust anything digital any longer.
 

TagliatelliMonster

Veteran Member
I blame it on that nasty photography stuff - and when it went digital. :D
I blame it on social media and the naivety of people who, for all those years, considered it a great idea to put their entire lives online.

Meta has just been sued in 11 european countries again after once more changing their terms of service.
Bottom line: it now says that ALL the data they have on you will be plugged into a not-yet-disclosed AI engine.
And if you don't agree with the terms of service, you can no longer use the service. But your data is still on their servers. And it will still be plugged into the AI.

It's absolutely scandalous. But at the same time, the result of all those people who in their naivety handed over their entire lives to these "social" platforms. And now those guys have literally peta-bytes of data to train their AI's.

I'm so glad I always stayed away from all that nonsense.
I don't have an account with any of them.

Sadly though, all of them still know who I am, thanks to all those naive people who have me in their contacts, talk about me, post pictures where I also am on, etc. If tomorrow I sign up for failbook sharing only my email address, I am absolutely positive that they will give me a list "people you may know" which would literally be all my close friends, my family, my coworkers, people I went to school with, etc.. who DO have an account on failbook.

I swear, if I had the power to do so, I would fry every single one of all their datacenters and obliterate any and all "social" media over night. Without blinking. And the world would instantly be a better place for it.
 

Nimos

Well-Known Member
My biggest worry concerning AI is that is going to completely blur the lines between what is real and what isn't in cyberspace.
That is going to be a huge problem.

It is for example perfectly capable of creating a video of Biden making a certain speech. And as a viewer you are not going to be able to tell the difference between that and an actual real speech.

Imagine the problem of "deep fakes". Now multiple that problem a million-fold.
Pictures, videos, articles,... you name it.

And you won't be able to tell what's real from what isn't.

In fact, not even just in cyberspace.
Imagine receiving a phone call from your wife that isn't your wife.

You wouldn't be able to trust anything digital any longer.
That is definitely going to be an issue.

You can have your "bank" advisor calling you, your kids being contacted by "you". And as you say, deep fakes. But also in general information on the internet.

These AI's are trained on data on the internet and they do some automatic censoring, however, a lot of the data is flagged and requires people to verify the information before it is used for training.

However if you have AI's generating a lot of false information and filling the internet with rubbish more than usual, you could potentially ruin the internet or people will have to rely on AI to verify information, which is controlled by the big companies/governments.

So a lot of very bad things could potentially come from this.

Also, there have already been examples of people using AI to scam people, for instance those using AI to check emails etc. Because you can add "hidden" information to the emails in such a way that we as humans can't see it, yet the AI can and this information can contain instructions that can be harmful.

We don't really hear a lot about these things, because the companies are in a frenzy gold rush, so it is more important to focus on all the good things and cool functionalities because that sells better :D

But I have no doubt that in a few years, we are going to have to deal with and hear a lot more about these issues.
 

TagliatelliMonster

Veteran Member
These AI's are trained on data on the internet and they do some automatic censoring, however, a lot of the data is flagged and requires people to verify the information before it is used for training.

However if you have AI's generating a lot of false information and filling the internet with rubbish more than usual, you could potentially ruin the internet or people will have to rely on AI to verify information, which is controlled by the big companies/governments.
Here's another to think about.

AI's are trained by real data on the internet.
AI's use said data to generate "credible" new yet fake data and put it on the internet.
Now AI's get trained by real AND FAKE AI GENERATED data.
AI's use said data to generate "credible" new yet fake data and put it on the internet.
Now AI's get trained by some real and EVEN MORE fake AI generated data.
and on and on and on.

Pretty soon, you'll have AI's trained by almost exclusively fake data.

Yep, it will ruin the internet. Insofar as it isn't actually already kinda ruined, that is................
Personally I stay of the opinion that "social" media already kinda ruined the internet.


Social media is like an aggressive cancer to the interwebs. Adding AI to the mix is only going to accelerate the spread of that cancer, rendering it terminal.
 

Nimos

Well-Known Member
Here's another to think about.

AI's are trained by real data on the internet.
AI's use said data to generate "credible" new yet fake data and put it on the internet.
Now AI's get trained by real AND FAKE AI GENERATED data.
AI's use said data to generate "credible" new yet fake data and put it on the internet.
Now AI's get trained by some real and EVEN MORE fake AI generated data.
and on and on and on.

Pretty soon, you'll have AI's trained by almost exclusively fake data.

Yep, it will ruin the internet. Insofar as it isn't actually already kinda ruined, that is................
Personally I stay of the opinion that "social" media already kinda ruined the internet.


Social media is like an aggressive cancer to the interwebs. Adding AI to the mix is only going to accelerate the spread of that cancer, rendering it terminal.
I think that is why they are starting to use synthetic data and probably also why they are going to partner up with reliable sources of information. Simply because they are going to run out of data to feed these systems.

As it is now, I don't think they can reach AGI with the current approach, they have to figure out how to train the AI on principles rather than just predicting things. Because it doesn't seem to be a reliable way of doing things.

The AI system doesn't understand the most basic concepts without an insane amount of training, and even then it doesn't actually understand it, it just has so much data that it can appear as if it knows what it is talking about.

Obviously that is extremely useful for a lot of things, but it is not true intelligence.
 

mangalavara

हर हर महादेव
Premium Member
it just has so much data that it can appear as if it knows what it is talking about.

I agree, AI just appears to know what it is talking about. It knows, for instance, that human beings eat by putting food into their mouths. Ask it to make a video of a human being eating, and the AI will depict a human being putting a slice of pizza up their nostrils.
 

anna.

colors your eyes with what's not there
I am not a subscriber so have no access.
I'm not either, but here it is:

Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not​

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity​
“The idea that this stuff could actually get smarter than people.... I thought it was way off…. Obviously, I no longer think that,” Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as “the godfather of AI,” said after he quit his job in April so that he can warn about the dangers of this technology.​
He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.​
As a researcher in consciousness, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.​
Why are we all so concerned? In short: AI development is going way too fast.​

continued:

The key issue is the profoundly rapid improvement in conversing among the new crop of advanced "chatbots," or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZeroAI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.

A team of Microsoft researchers analyzing OpenAI’s GPT-4, which I think is the best of the new advanced chatbots currently available, said it had, "sparks of advanced general intelligence" in a new preprint paper.

In testing GPT-4, it performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent in the previous GPT-3.5 version, which was trained on a smaller data set. They found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI called regulation “crucial.”

Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definitioncan surpass humans in a broad range of activities) will—and this is what I worry about the most—be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligencefor a good overview) and has been studied and argued about by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky, for decades now.

I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.

Some argue that these LLMs are just automation machines with zero consciousness, the implication being that if they’re not conscious they have less chance of breaking free from their programming. Even if these language models, now or in the future, aren’t at all conscious, this doesn’t matter. For the record, I agree that it’s unlikely that they have any actual consciousness at this juncture—though I remain open to new facts as they come in.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad ways, including potentially use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So, the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely. But the moratorium being called for is to stop development of any new models more powerful than 4.0—and this can be enforced, with force if required. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we know already we won’t be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We should not open Pandora’s box any more than it already has been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
 

Nimos

Well-Known Member
I agree, AI just appears to know what it is talking about. It knows, for instance, that human beings eat by putting food into their mouths. Ask it to make a video of a human being eating, and the AI will depict a human being putting a slice of pizza up their nostrils.
Exactly and that I think is a huge problem.

Also, I did a Soduku test and it explains the rules to perfection and how to solve one. Yet when asked to do it, it utterly fails.

Whereas with a human you can explain the rules or the principles behind it, and even if you have never tried one, you will understand it in less than a minute.

The AI just doesn't understand it, even though it knows how to check for errors etc. It will also be certain that it solved it correctly, even though it is wrong, which is probably the most disturbing thing about it.

So I think Soduku perfectly illustrates the problem with AI today and they need to find a way to make it understand principles. Surely they can program it to solve Soduku, but it has to do this using logic and if they can't do that, I don't see how they will ever reach AGI.
 

Nimos

Well-Known Member
I'm not either, but here it is:

Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not​

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity​
“The idea that this stuff could actually get smarter than people.... I thought it was way off…. Obviously, I no longer think that,” Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as “the godfather of AI,” said after he quit his job in April so that he can warn about the dangers of this technology.​
He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.​
As a researcher in consciousness, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.​
Why are we all so concerned? In short: AI development is going way too fast.​

continued:

The key issue is the profoundly rapid improvement in conversing among the new crop of advanced "chatbots," or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZeroAI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.

A team of Microsoft researchers analyzing OpenAI’s GPT-4, which I think is the best of the new advanced chatbots currently available, said it had, "sparks of advanced general intelligence" in a new preprint paper.

In testing GPT-4, it performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent in the previous GPT-3.5 version, which was trained on a smaller data set. They found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI called regulation “crucial.”

Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definitioncan surpass humans in a broad range of activities) will—and this is what I worry about the most—be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligencefor a good overview) and has been studied and argued about by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky, for decades now.

I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.

Some argue that these LLMs are just automation machines with zero consciousness, the implication being that if they’re not conscious they have less chance of breaking free from their programming. Even if these language models, now or in the future, aren’t at all conscious, this doesn’t matter. For the record, I agree that it’s unlikely that they have any actual consciousness at this juncture—though I remain open to new facts as they come in.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad ways, including potentially use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So, the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely. But the moratorium being called for is to stop development of any new models more powerful than 4.0—and this can be enforced, with force if required. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we know already we won’t be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We should not open Pandora’s box any more than it already has been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
Im not too worried about the AI's themselves, at least not in the state they are in now.

Rather it is what information or uses people can get from them and even with safety measurements in them, people are very creative when it comes to "fooling" the AI to do things that it shouldn't do.

The other big issue is the job replacement that will come with AI, even if it isn't truly intelligent, it is still much faster and better at doing a wide range of things that humans do. And these people will be replaced, it is just a matter of time. But the effect of people being worried about being replaced or simply unable to compete, that could cause massive issues in societies.
 
Top