
It's able to create knowledge itself

Nakosis

Non-Binary Physicalist
Premium Member
Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, in the following year, AlphaGo Zero won 100 to 0.
'It's able to create knowledge itself': Google unveils AI that learns on its own


So an AI can duplicate thousands of years of human learning in a matter of days and then surpass it. :eek:

It's currently limited to problems that can be "perfectly" simulated by a computer; however, science makes use of a lot of computer simulations. Self-taught AI systems already know more than humans in some domains. I wonder if at some point human intelligence will become obsolete.
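The idea scales down to a toy sketch. The following is only an illustration of self-play learning, nothing like DeepMind's actual system; the game, parameters, and names are all invented here. A tabular learner teaches itself single-pile Nim (take 1-3 stones, taking the last stone wins) given only the legal moves and a win/loss signal:

```python
import random

ACTIONS = (1, 2, 3)      # the only "rules" we hand the learner: legal takes
Q = {}                   # Q[(pile, take)] -> estimated value for the player moving

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def choose(pile, eps):
    """Epsilon-greedy move selection over learned values."""
    moves = legal(pile)
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda a: Q.get((pile, a), 0.0))

random.seed(0)
for _ in range(20000):                    # self-play episodes, no human games
    pile, trajectory = 10, []
    while pile > 0:
        a = choose(pile, eps=0.2)
        trajectory.append((pile, a))
        pile -= a
    result = 1.0                          # whoever took the last stone won
    for state, action in reversed(trajectory):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + 0.1 * (result - old)
        result = -result                  # players alternate, so flip the sign

# greedy play after training: the learner rediscovers "leave a multiple of 4"
print(choose(5, eps=0.0))
```

Nobody tells it the winning strategy; it emerges from the win/loss signal alone, which is the (vastly simplified) point of the article.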

 

sun rise

The world is on fire
Premium Member
I can't help asking the political question about this AI program, or another one. Maybe we can have an AI derive laws from considerations such as maximizing human happiness over the long term.
 

Nakosis

Non-Binary Physicalist
Premium Member
I was also considering the problem of true AI consciousness.

If we accept that consciousness is a product of evolution, a survival advantage, then it is the reward/punishment system for biological life. Developing machine consciousness directly may be beyond the ability of man, but if a similar reward/punishment scheme could be developed for an AI system, perhaps it would be able to develop a consciousness on its own.
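A crude sketch of that reward/punishment idea, with behavior names and payoffs invented purely for illustration: the agent is never told which behavior is "good", it only feels a scalar reward or punishment, yet it ends up preferring the rewarded behavior.

```python
import random

random.seed(1)
prefs = {"explore": 0.0, "rest": 0.0}     # hypothetical behaviors, made up for this sketch

def pick():
    return max(prefs, key=prefs.get)      # act on the strongest learned preference

# the hidden environment: "explore" pays off 70% of the time, "rest" 30%.
# the agent never sees these numbers -- it only feels reward or punishment.
payoff = {"explore": 0.7, "rest": 0.3}

for _ in range(1000):
    act = random.choice(list(prefs)) if random.random() < 0.1 else pick()
    reward = 1.0 if random.random() < payoff[act] else -1.0   # pleasure / pain
    prefs[act] += 0.05 * (reward - prefs[act])                # nudge the preference

print(pick())
```

Whether shaping behavior this way could ever add up to consciousness is, of course, the open question of the thread, not something the code settles.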
 

Nakosis

Non-Binary Physicalist
Premium Member
Did the 2015 version display frustration after such a humiliating defeat?

Emotions are part of our reward/punishment system, a physiological process that provides a method of correction. Perhaps a bit mundane for an AI system, but it was able to know the difference between winning and losing, success and failure. Frustration lets us know we have not yet achieved the desired result.

No doubt emotions are important to human learning and have worked well enough to get us where we are, but maybe they're not the best way to go about acquiring knowledge?
 

Quintessence

Consults with Trees
Staff member
Premium Member
Considering human intelligence hasn't rendered the intelligence of all other life on the planet obsolete, I don't think humans need to worry. That said, the very notion of obsoleteness is a human construct in the first place and thus arbitrarily determined.
 

Nakosis

Non-Binary Physicalist
Premium Member
Considering human intelligence hasn't rendered the intelligence of all other life on the planet obsolete, I don't think humans need to worry. That said, the very notion of obsoleteness is a human construct in the first place and thus arbitrarily determined.

Well, you wouldn't have a gorilla build a rocket ship, for example. Obsolete in the sense that, in the future, the job of the scientist would be to input a query into an AI system and report on its output.

Even in the sense of art: an AI could create better poetry, music, and paintings, and write better fiction. You could put a paintbrush in the hands of a gorilla, and there'd probably still be a market for the result, but mainly as an odd curiosity. An AI could learn exactly what pleases humans.

What happens to the human race if our intellect and creativity are no longer needed? More for science fiction at this point, but I see it coming.
 

Brickjectivity

Turned to Stone. Now I stretch daily.
Staff member
Premium Member
So an AI can duplicate thousands of years of human learning in a matter of days and then surpass it. :eek:
Actually, no, because they had to tell it the rules. There is a very limited set of situations in which you can do that. A board game is the simplest possible case: it's turn-by-turn, and you can never leave the board. Almost nothing has better-defined rules than a board game. Why didn't they instead have it derive the rules of Euclidean geometry? It couldn't. It needed someone to provide it with specific rules covering all of the possible moves.

Google, just like IBM and all the others, is looking for a way around hand-programming the AI, because that costs a lot of money. It's a major investment. The more they can get it to learn on its own, the less they will have to pay for usable AI programs. That's why they're trying to collect lots of data: they push the AI programs to extract information and rules from all of it.

It's an improvement, but a very small one.

I recently went to wolframalpha.com and asked it to show me an 81-sided regular polyhedron. It couldn't. If there aren't rules, the AIs get confused about what is needed. Give them enough data to determine some rules, or give them rules that describe all possible situations, and then they become useful.
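To make that contrast concrete, here is how completely a board game's "world" can be written down, using tic-tac-toe as a stand-in (a mini-example invented for this post, not anything Google published). A legal-move generator plus a win test is the entire rulebook; nothing comparable exists for an open-ended task like deriving geometry.

```python
# Tic-tac-toe's entire rulebook: a board is 9 cells, each 'X', 'O' or '.' (empty).
def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == '.']

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# every possible situation in the game is covered by these two functions
print(legal_moves("XX.OO...."), winner("XXXOO...."))
```

That is the sense in which a board game is the easiest possible case: the whole universe of legal situations fits in a dozen lines.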
 

Nakosis

Non-Binary Physicalist
Premium Member
Also, I read an article about the problem of getting AI to understand humor. Some folks even think teaching an AI system to understand humor might be a really bad idea. What if that AI decides killing humans is funny? :eek:

 

Nakosis

Non-Binary Physicalist
Premium Member
Actually, no, because they had to tell it the rules. There is a very limited set of situations in which you can do that. A board game is the simplest possible case: it's turn-by-turn, and you can never leave the board. Almost nothing has better-defined rules than a board game. Why didn't they instead have it derive the rules of Euclidean geometry? It couldn't. It needed someone to provide it with specific rules covering all of the possible moves.

Google, just like IBM and all the others, is looking for a way around hand-programming the AI, because that costs a lot of money. It's a major investment. The more they can get it to learn on its own, the less they will have to pay for usable AI programs. That's why they're trying to collect lots of data: they push the AI programs to extract information and rules from all of it.

It's an improvement, but a very small one.

I recently went to wolframalpha.com and asked it to show me an 81-sided regular polyhedron. It couldn't. If there aren't rules, the AIs get confused about what is needed. Give them enough data to determine some rules, or give them rules that describe all possible situations, and then they become useful.

Sure, human evolution took about 6 million years.
Introduction to Human Evolution

Not expecting anything dramatic to happen in my lifetime.
 

Brickjectivity

Turned to Stone. Now I stretch daily.
Staff member
Premium Member
Sure, human evolution took about 6 million years.
Introduction to Human Evolution

Not expecting anything dramatic to happen in my lifetime.
I do expect dramatic things during my lifetime, but I don't expect sentient or out-of-control robots. I expect to see bots increasingly able to navigate a construction site and do odd construction, manufacturing, or service jobs. I expect to see a robot greeter at a retail store that will do price checks, fetch things, mop floors, and re-shop items. I expect to see robotic stockers and butchers in the back, and businesses patrolled at night by robots. I also expect to see mobile vending machines that travel a circuit, somewhat like the ice cream man, and something like a traveling Redbox machine that goes down your street renting out videos.
 

Quintessence

Consults with Trees
Staff member
Premium Member
Well, you wouldn't have a gorilla build a rocket ship, for example. Obsolete in the sense that, in the future, the job of the scientist would be to input a query into an AI system and report on its output.

Even in the sense of art: an AI could create better poetry, music, and paintings, and write better fiction. You could put a paintbrush in the hands of a gorilla, and there'd probably still be a market for the result, but mainly as an odd curiosity. An AI could learn exactly what pleases humans.

What happens to the human race if our intellect and creativity are no longer needed? More for science fiction at this point, but I see it coming.

As mentioned before, obsoleteness is a human construct. This assessment of "no longer needed" is grounded in human values, values that are ultimately subjective, projections onto the world rather than qualities inherent to it. If we make the standards of valuation, we control who meets or fails them. The gorilla persons only fail and are relegated to obsoleteness if the human assessing them places (undue) importance on building rocket ships or making art similar to that of humans. But why do that in the first place? Or at least, why not recognize that you are writing the standards in a highly biased way that tells the story the way you want it told?

The long and the short of it: nothing in this universe becomes "obsolete" unless we tell a story that says so. Calling something obsolete is a choice that reflects the speaker's values. No more, no less.
 

Nakosis

Non-Binary Physicalist
Premium Member
As mentioned before, obsoleteness is a human construct. This assessment of "no longer needed" is grounded in human values, values that are ultimately subjective, projections onto the world rather than qualities inherent to it. If we make the standards of valuation, we control who meets or fails them. The gorilla persons only fail and are relegated to obsoleteness if the human assessing them places (undue) importance on building rocket ships or making art similar to that of humans. But why do that in the first place? Or at least, why not recognize that you are writing the standards in a highly biased way that tells the story the way you want it told?

The long and the short of it: nothing in this universe becomes "obsolete" unless we tell a story that says so. Calling something obsolete is a choice that reflects the speaker's values. No more, no less.

Ok, you want to focus on word usage, that's fine. :)
 

Quintessence

Consults with Trees
Staff member
Premium Member
Ok, you want to focus on word usage, that's fine. :)

That's not what I'm focusing on at all. Let me put this more simply, then.

Given that assessments of relevance are value-based and subjective, human intelligence can only become "obsolete" if we decide to value some other form of intelligence more than our own. There is no need to wonder whether human intelligence will become "obsolete." It is a question of you and your values, the story you want to tell about it, and, to some extent, whether you choose to support the (mis)use of technologies that threaten your assessments of human value or self-worth.
 

Nakosis

Non-Binary Physicalist
Premium Member
That's not what I'm focusing on at all. Let me put this more simply, then.

Given that assessments of relevance are value-based and subjective, human intelligence can only become "obsolete" if we decide to value some other form of intelligence more than our own. There is no need to wonder whether human intelligence will become "obsolete." It is a question of you and your values, the story you want to tell about it, and, to some extent, whether you choose to support the (mis)use of technologies that threaten your assessments of human value or self-worth.

Sure. It seems you see being obsolete as a negative value; I see it as a practical one. Life generally goes with what is practical.
 

Bob the Unbeliever

Well-Known Member
Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, and with no human help beyond being told the rules. In games against the 2015 version, which famously beat Lee Sedol, the South Korean grandmaster, in the following year, AlphaGo Zero won 100 to 0.
'It's able to create knowledge itself': Google unveils AI that learns on its own


So an AI can duplicate thousands of years of human learning in a matter of days and then surpass it. :eek:

It's currently limited to problems that can be "perfectly" simulated by a computer; however, science makes use of a lot of computer simulations. Self-taught AI systems already know more than humans in some domains. I wonder if at some point human intelligence will become obsolete.


I think the title is a wee bit of "click-bait" here.

I think the term "human knowledge" is a misnomer in this context, as the entire context is limited to a game with a very short list of rules (in direct contrast to reality, whose list seems endless). It's not surprising that a computer could derive all possible moves in such a limited scope: a computer never forgets. Once something is "learned," it has perfect recall, so it never repeats the same mistake, or even the same experiment, a second time; it has no need to.

Neither does it need to sleep to reorganize its experiences from short-term memory into long-term memory; in fact, there is no real difference across all its memory.

Finally, being electronic, it "thinks" at nearly the speed of light, again in contrast to humans, who think only at the "speed of chemistry."

I'm not remotely surprised at this, and I fully expect other games to be 'conquered' by computers as they grow more capable.
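That "perfect recall" is an everyday programming technique, memoization: cache every answer the first time it is computed and never redo the work. A small sketch (the game and the numbers are arbitrary, chosen only to show the idea):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)           # perfect recall: each position is solved once, forever
def mover_wins(pile):
    """Single-pile Nim (take 1-3, taking the last stone wins): does the mover win?"""
    global calls
    calls += 1
    if pile == 0:
        return False               # no stones left: the previous player just won
    return any(not mover_wins(pile - take) for take in (1, 2, 3) if take <= pile)

mover_wins(30)
solved_once = calls
mover_wins(30)                     # asked again: pure recall, no new computation
assert calls == solved_once
print(mover_wins(30))
```

Every position is evaluated at most once; asking again costs nothing, which is exactly the "never repeats an experiment" advantage over human memory.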
 

Revoltingest

Pragmatic Libertarian
Premium Member
I think the title is a wee bit of "click-bait" here.
Oh, I disagree.
AlphaGo has added knowledge to playing go.
Moreover, new moves it's discovered have been called "beautiful".
It'll be even more interesting when this approach can be applied
to human interactions, eg, economics, war, & (of course) porn.
 

Bob the Unbeliever

Well-Known Member
Oh, I disagree.
AlphaGo has added knowledge to playing go.
Moreover, new moves it's discovered have been called "beautiful".
It'll be even more interesting when this approach can be applied
to human interactions, eg, economics, war, & (of course) porn.

Well, within the context of a very limited game, it's hardly surprising.

But I don't get what you mean by "this approach"... what would that mean in the real, non-game world?
 

Revoltingest

Pragmatic Libertarian
Premium Member
Well, within the context of a very limited game, it's hardly surprising.

But I don't get what you mean by "this approach"... what would that mean in the real, non-game world?
Go rests upon simple rules.
Humans obey fuzzier & more numerous ones.
Imagine Machiavelli, but smarter & made of metal.
He'd be researching, plotting, & planning with others
of his ilk at multi-gigateraflop speeds.
They'd conquer all humans in 3.2 seconds.
 