
(AI) Concerning or not?

Nimos

Well-Known Member
As some might not know, OpenAI has a safety team that is supposed to help ensure that AI/AGI is developed responsibly:

Leike co-led OpenAI's superalignment group, a team that focuses on making its artificial-intelligence systems align with human interests.
In September, he was named one of Time 100's most influential people in AI.
Leike announced his departure hours after Ilya Sutskever, the other superalignment leader, said he was leaving.

------


OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday. (Today 18/05/2024)

These are some tweets from Leike:
[Attached: two screenshots of Leike's tweets.]


To me this is insane. These are big corporations playing with potential nuclear bombs, without really understanding them, in their chase for profit. There doesn't seem to be much government involvement in this at all. Leike is one of the leading people in this field, together with Ilya Sutskever, who also left.

I don't know what is going on, but something seems pretty strange if you ask me. And these are not the first safety people to leave.
 

Quintessence

Consults with Trees
Staff member
Premium Member
:shrug:

I still can't get over that this is getting called "artificial intelligence" when it's just a large language model that basically plagiarized works by humans to do what it does. How is it that we call that "intelligence" while so many humans can't be bothered to recognize non-human animal intelligence or plant intelligence? It just... I just can't. I find it deeply frustrating, if entirely unsurprising, in this deeply hubristic and anthropocentric culture.
 

Nimos

Well-Known Member
:shrug:

I still can't get over that this is getting called "artificial intelligence" when it's just a large language model that basically plagiarized works by humans to do what it does. How is it that we call that "intelligence" while so many humans can't be bothered to recognize non-human animal intelligence or plant intelligence? It just... I just can't. I find it deeply frustrating, if entirely unsurprising, in this deeply hubristic and anthropocentric culture.
Call it what you want; it doesn't really matter. Sam Altman, the head of OpenAI, believes that they will probably achieve AGI in around 2-3 years; others might not be so optimistic. Either way, it will be done, and when it is, you will have AI much smarter than humans that thinks the way we do.

Besides that, AI as we have it today might not be an issue in itself; the issue is what people with ill intentions can use it for. As I have written in an earlier post, models that can tell you how to construct bombs and dangerous chemicals are already freely available on the internet for anyone to download.
It doesn't matter whether you think it is wrong to call it AI if it can help you do these things:

It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused at a biological arms control conference.

...we actually looked at a lot of the structures of these newly generated molecules. And a lot of them did look like VX and other warfare agents, and we even found some that were generated from the model that were actual chemical warfare agents. These were generated from the model having never seen these chemical warfare agents. So we knew we were sort of in the right space here and that it was generating molecules that made sense because some of them had already been made before.
For me, the concern was just how easy it was to do. A lot of the things we used are out there for free.
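Mechanically, what the researchers describe amounts to flipping the sign on the toxicity term of a generative model's scoring function: the same search that normally penalizes toxic candidates gets rewarded for them instead. Here is a minimal, purely illustrative Python sketch of that idea; the generator and the two predictor functions are random stand-ins of my own, not any real cheminformatics tool:

```python
import random

# Illustrative sketch of the "bad actor" inversion described above.
# The predictors and the generator are random stand-ins, not a real
# cheminformatics toolkit; the point is only the sign flip on toxicity.

def predicted_activity(mol):
    return random.random()                  # stand-in bioactivity model

def predicted_toxicity(mol):
    return random.random()                  # stand-in toxicity model

def generate_candidates(n=100):
    return [f"mol_{random.getrandbits(32):08x}" for _ in range(n)]

def score(mol, invert_toxicity=False):
    # Normal mode penalizes toxicity; "bad actor" mode rewards it.
    sign = 1.0 if invert_toxicity else -1.0
    return predicted_activity(mol) + sign * predicted_toxicity(mol)

def search(steps=10, keep=10, invert_toxicity=False):
    scored = [(score(m, invert_toxicity), m)
              for _ in range(steps)
              for m in generate_candidates()]
    scored.sort(reverse=True)               # highest-scoring first
    return [m for _, m in scored[:keep]]

print(search(invert_toxicity=True))         # same pipeline, flipped objective
```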


-----

Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death. (8 March 2024)

So call it what you want; I don't think it matters.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
(Quoting Nimos' opening post in full.)
AI is basically the vanguard for bringing in the impending dystopia that is going to take the world by storm.
 

Quintessence

Consults with Trees
Staff member
Premium Member
AI is basically the vanguard for bringing in the impending dystopia that is going to take the world by storm.
Impending?

We're already in a sixth mass extinction event. The "vanguard" was the industrial revolution. And before that, the plow. Before that still, the taming of fire. Or the evolution of the human species in general... especially after it forgot indigenous wisdom.
 

The Hammer

Skald
Premium Member
Impending?

We're already in a sixth mass extinction event. The "vanguard" was the industrial revolution. And before that, the plow. Before that still, the taming of fire. Or the evolution of the human species in general... especially after it forgot indigenous wisdom.

Well said. I can't really add anything else.
 

The Hammer

Skald
Premium Member
(Quoting Nimos' opening post in full.)

AI is and always has been a concern. We've been warned since before it was even a fully conceived idea, in I don't know how many sci-fi stories.

Also this: Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not
 

Nimos

Well-Known Member
AI is and always has been a concern. We've been warned since before it was even a fully conceived idea, in I don't know how many sci-fi stories.

Also this: Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not
What I find so concerning about AI, compared to climate change for instance, is that with climate change things at least move relatively slowly, so we can make predictions and adjust to it, even though we are famously bad at that as well.

But with AI we can't really predict what is going to happen, even less so with AGI. It will add a lot of benefits, for sure, but what about the other side? And it is not like climate change: once AGI is achieved and released, we just have to hope that it has good intentions, that it values humans as much as we do, and that it can't be manipulated into doing things that it shouldn't.

I think this passage from the article is pretty interesting, if true:
I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.


Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us.
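Taking the quoted speed claim literally, it is easy to put a rough number on (this back-of-the-envelope arithmetic is mine, not the article's):

```python
# Rough arithmetic for the claim: 100 engineers x 1 year vs. 1 second.
# ~2,000 working hours per engineer-year is a conventional assumption.
engineer_hours = 100 * 2000                 # total human effort, in hours
seconds_of_labour = engineer_hours * 3600   # same effort, in seconds
print(f"Implied speed-up: ~{seconds_of_labour:.0e}x")   # ~7e+08, about a billion
```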
 

mikkel_the_dane

My own religion
(Quoting Nimos' opening post in full.)

Well, I get your post, but to me it is not just about the material world; it is also about politics, ethics and so on.
 

Jayhawker Soule

-- untitled --
Premium Member
AI is being promoted as having the potential to address many, if not most, of the world's problems. The one I find most intriguing is that posed by Fermi's paradox. See:

Highlights
  • Artificial Intelligence (AI) is emerging as one of the most transformative technological developments in human history.
  • Biological civilisations may universally underestimate the speed at which AI systems progress, as these are so different from traditional timescales.
  • AI could spell the end of intelligence on Earth (including AI) before mitigating strategies, e.g. a multiplanetary capability, have been achieved.
  • These arguments suggest that the longevity, L, of technical civilisations is < 200 years, thus explaining the great silence observed by SETI.
  • Small values for L underscore the necessity to intensify efforts to regulate AI - failure to do so could rob the universe of all conscious presence.
For those interested, the paper addresses the Drake Equation explicitly in section four, and notes in its conclusions:

... The development of ASI * is likely to happen well before humankind manages to establish a resilient and enduring multiplanetary presence in our solar system. This disparity in the rate of progress between these two technological frontiers is a pattern that we can expect to be repeated across all emerging technical civilizations.​
This raises questions about the inevitability of civilisations unwittingly triggering calamitous events that lead to the demise of both a biological and post-biological technical civilisation. The potential of ASI to serve as a "Great Filter" compels us to consider its role in the broader context of our civilization's future and its implications for life throughout the galaxy. If ASI limits the communicative lifespan of advanced civilizations to a few hundred years, then only a handful of communicating civilisations are likely to be concurrently present in the Milky Way. This is not inconsistent with the null results obtained from current SETI surveys and other efforts to detect technosignatures across the electromagnetic spectrum.​

==========

* ASI -- Artificial Superintelligence
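To see why a small L dominates the result, here is a back-of-the-envelope pass through the Drake equation, N = R* · fp · ne · fl · fi · fc · L. The parameter values below are illustrative guesses of mine, not numbers taken from the paper:

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values are illustrative assumptions, not the paper's.

R_star = 1.5   # Milky Way star formation rate (stars/year)
f_p    = 1.0   # fraction of stars with planets
n_e    = 0.2   # habitable planets per system with planets
f_l    = 0.1   # fraction of those where life arises
f_i    = 0.1   # fraction of those developing intelligence
f_c    = 0.1   # fraction of those that become detectable
L      = 200   # communicative lifetime in years (the paper's bound)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Concurrent communicating civilisations: {N:.2f}")   # 0.06
```

With these deliberately generous guesses, L = 200 leaves well under one communicating civilisation in the galaxy at any given time, which is the "great silence" the highlights refer to.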
 

Nimos

Well-Known Member
Well, I get your post, but to me it is not just about the material world; it is also about politics, ethics and so on.
I completely agree; there is so much involved in this that isn't being discussed. And the problem, as I see it, is that it is moving at such a fast pace that I would think the majority of people don't even know what it is, or think it is just some new computer app, when they already don't really understand computers beyond social media, watching some YouTube videos and writing some documents. And they might not be interested in computers at all, so they don't want to get involved with AI anyway.

Just as an example, my parents are not from the IT era, though they have phones and computers etc. And despite me constantly trying to help them and explain things in the easiest way possible, they are just not very interested in computers. A week ago I had to explain to my mother what a website is, even though she uses her iPad and phone a lot and buys stuff on them, etc. :) And despite my having tried to explain it for several years now, my dad still doesn't understand the difference between Google and Chrome, or what it means to browse or search on the internet.

Yet we are on the brink of developing AGI; the contrast is absurd. And I do what I can to warn them about how AI could potentially be used to scam them, because I know they will be fooled if I don't constantly remind them to be extremely watchful.

But again, there is hardly anything about this in Denmark (if I recall correctly you are from Denmark as well, so what is your impression, how much have you heard about it?). It is small sections here and there; everything else is politicians dealing with yet another scandal they have gotten themselves into, some famous person being drunk in the city, or some football match or whatever.

The implications of AI/AGI ought, in my opinion, to be headline news, to inform people and make sure that everyone has at least a basic understanding of it and of how politicians aim to deal with it once it really starts to get integrated into companies and everyday life.
 

wellwisher

Well-Known Member
(Quoting Nimos' opening post in full.)
I think much of this is connected to early marketing hype capitalizing on the primitive fear of novelty, thereby creating buzz. People are leaving the hype machine to position themselves in the supply chain.

This primitive fear of novelty is a logical extension of science, politics and lotteries all seeing the world through the black box of casino math, and their use of math oracles to appease their fear; risk floats like a vapor covering all of us, even if you wear a rational mask.

If you believe in a probabilistic universe where anything has a level of probability, AI is for you. AI will lead you through the darkness of the black box. AI will be the personal fortune-telling oracle of the future. My fear is not so much AI pandering to the irrationality of humans, but humans getting sucked into the oracle's predictions and making them a self-fulfilling prophecy. We need to fix the humans.

It took less than six hours for drug-developing AI to invent 40,000 potentially lethal molecules. Researchers put AI normally used to search for helpful drugs into a kind of “bad actor” mode to show how easily it could be abused at a biological arms control conference.

This was actually a useful result to demonstrate how evolution could not have been done by chance. The AI's dice and cards approach made more bad things than good things, disproving the current version of evolution using its digital time lapse photography. It is much easier to destroy than to build in a probabilistic universe. My fear is that the POs, or personal oracles, will generate random predictions which are mostly destructive to rational and ordered thinking.

Google's new AI had built-in DEI bias, which loaded the dice in the casinos of public opinion, which is destructive to common sense. A little bias in the dice can change the odds quite a bit when you use digital time lapse photography.
 

Jayhawker Soule

-- untitled --
Premium Member
This was actually a useful result to demonstrate how evolution could not have been done by chance. The AI's dice and cards approach made more bad things than good things, disproving the current version of evolution using its digital time lapse photography.
That is absolute nonsense at best. So, for example, what makes you think that descent with modification has not "made more bad things than good things"?
 

Nimos

Well-Known Member
This was actually a useful result to demonstrate how evolution could not have been done by chance. The AI's dice and cards approach made more bad things than good things, disproving the current version of evolution using its digital time lapse photography.
Just so you understand what they did: they intentionally told the AI to do this, to show how easy it is to do. It is kind of like how you would train a kid (obviously not suggesting you should do this, but you get what I mean :)): you give it a piece of candy every time it comes up with something dangerous and slap it whenever it doesn't. It has nothing to do with evolution at all; it is just the AI using whatever knowledge it has about these things to develop new lethal molecules and getting rewarded for doing so.
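In reinforcement-learning terms, the candy/slap is just a reward signal whose sign the trainer picks. Here is a toy bandit sketch of that dynamic; the "dangerous"/"benign" labels are purely illustrative, and no real model or chemistry is involved:

```python
import random

# Toy two-armed bandit: the trainer rewards "dangerous" outputs (+1)
# and punishes "benign" ones (-1). The labels are purely illustrative.

values = {"benign": 0.0, "dangerous": 0.0}   # estimated value per behaviour
counts = {"benign": 0, "dangerous": 0}

def reward(arm):
    return 1.0 if arm == "dangerous" else -1.0   # the trainer's chosen signal

for step in range(1000):
    # Epsilon-greedy: usually exploit the best-valued arm, sometimes explore.
    if random.random() < 0.1:
        arm = random.choice(list(values))
    else:
        arm = max(values, key=values.get)
    counts[arm] += 1
    values[arm] += (reward(arm) - values[arm]) / counts[arm]   # running mean

print(values)   # the "dangerous" behaviour ends up with the higher value
```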
 

PureX

Veteran Member
AI is a gift from an unscrupulous God to professional liars and conmen everywhere.

And the AI lies have already begun, from phony books and news articles to phony songs and videos, to phony misleading statistics, all deliberately disguised and with hidden sources and agendas.
 