
Is Kaku correct?

ecco

Veteran Member
...
This is all very impressive, but it did not address what I said. They said DeepMind played millions of games to improve, and compared its play-style to intuition. I'm not entirely sure how far any so-called AI can get with this. Intuition may not work this way; it may draw on an accumulation of emotions, experience, and taking chances (i.e. gut feelings). No doubt they are getting the experience part, but they're missing vital ingredients for actual AI.
...
In this video at 58:02, DeepMind had no idea what to do. It was akin to a robot not even comprehending what's going on. I won't even say animal, because even animals know when to give up. Even after millions upon millions of games, DeepMind did not stop attempting to reach the air units. As the game went on, it did not understand or comprehend taking a chance on a base race. It did not give up, and when it had only one unit left, its APM was still 300. These are all indicative of a robot that can learn through repetition, not AI. It was unable to adapt intuitively. This reminds me of movies/Star Trek :p where a robot faces a logical problem and explodes, or a robot with some error that continuously hits itself against the wall.

Your video notwithstanding...
DeepMind’s AlphaStar AI sweeps StarCraft II pros in head-to-head match
Computers are getting more sophisticated than ever at understanding and playing complicated games. DeepMind, one of the leaders in artificial intelligence, proved that once again today with its latest A.I. agent called AlphaStar. During a livestream, this program took on two StarCraft II pros in a series of five matches for each, and AlphaStar swept all 10 matches.

I've played Civilization, on and off, since its inception in 1991. I have never seen an AI leader act in the way described in your video. In any case, since AlphaStar went ten for ten in its latest contests, I guess it learned. If that meant a human stepped in and tweaked the code a little, that's no different from having a chess master correct some faults they see in a student.
As to the game GO, it learned the way humans learn - observation and practice. And then did even better.
Excerpts with my emphases...
How Google's AI Viewed the Move No Human Could Understand

SEOUL, SOUTH KOREA — The move didn't make sense to the humans packed into the sixth floor of Seoul's Four Seasons hotel. But the Google machine saw it quite differently. The machine knew the move wouldn't make sense to all those humans. Yes, it knew. And yet it played the move anyway, because this machine has seen so many moves that no human ever has.

In the second game of this week's historic Go match between Lee Sedol...this surprisingly skillful machine made a move that flummoxed everyone from the throngs of reporters and photographers to the match commentators to, yes, Lee Sedol himself. "That's a very strange move," said one commentator, an enormously talented Go player in his own right. "I thought it was a mistake," said the other. ....

Fan Hui, the three-time European Go champion who lost five straight games to AlphaGo this past October, was also completely gobsmacked. "It's not a human move. I've never seen a human play this move," he said. But he also called the move "So beautiful. So beautiful." Indeed, it changed the path of play, and AlphaGo went on to win the second game....

It was a move that demonstrated the mysterious power of modern artificial intelligence... In the wake of Game Two, Fan Hui so eloquently described the importance and the beauty of this move..
...
"You're listening to the commentators on the one hand. And you're looking at AlphaGo's evaluation on the other hand. And all the commentators are disagreeing."
...
Using a second technology called reinforcement learning, they set up matches in which slightly different versions of AlphaGo played each other. As they played, the system would track which moves brought the most reward—the most territory on the board. "AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving," Silver said when DeepMind first revealed the approach earlier this year.
...
So, in the end, the system learned not just from human moves but from moves generated by multiple versions of itself. The result is that the machine is capable of something like Move 37.

"That's how it guides the moves it considers," Silver says. For Move 37, the probability was one in ten thousand. In other words, AlphaGo knew this was not a move that a professional Go player would make.

But, drawing on all its other training with millions of moves generated by games with itself, it came to view Move 37 in a different way. It came to realize that, although no professional would play it, the move would likely prove quite successful. "It discovered this for itself," Silver says, "through its own process of introspection and analysis."

Is introspection the right word? You can be the judge. But Fan Hui was right. The move was inhuman. But it was also beautiful.
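The self-play setup Silver describes, two copies of one policy playing each other while win statistics steer future moves, can be sketched on a toy game. To be clear, this is not DeepMind's code: the count-to-ten game, the tabular win-rate "policy", and every name below are invented stand-ins for the real neural-network training.

```python
import random

random.seed(0)       # reproducible toy run

TARGET = 10          # first player to reach exactly 10 wins
MOVES = (1, 2)       # each turn, add 1 or 2 to the running total

wins, plays = {}, {}  # win statistics per (total, move) pair

def choose(total, explore=0.1):
    """Mostly pick the historically best move; sometimes explore."""
    legal = [m for m in MOVES if total + m <= TARGET]
    if random.random() < explore:
        return random.choice(legal)
    def win_rate(m):
        p = plays.get((total, m), 0)
        return wins.get((total, m), 0) / p if p else 0.5
    return max(legal, key=win_rate)

def self_play_game():
    """Two copies of the same policy play each other."""
    total, player, history = 0, 0, ([], [])
    while True:
        move = choose(total)
        history[player].append((total, move))
        total += move
        if total == TARGET:
            return player, history       # the mover who hit 10 wins
        player = 1 - player

def train(games=50000):
    for _ in range(games):
        winner, history = self_play_game()
        for p in (0, 1):
            for key in history[p]:
                plays[key] = plays.get(key, 0) + 1
                wins[key] = wins.get(key, 0) + (1 if p == winner else 0)

train()
# With enough self-play the table should settle on the known winning
# strategy: always leave your opponent on a total of 1, 4, or 7.
print(choose(0, explore=0), choose(2, explore=0), choose(5, explore=0))
```

Run long enough, the policy rediscovers that strategy purely from its own games, which is Silver's quote in miniature.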

Another thing to keep in mind is the advances in computer hardware. In the '60s, an IBM 7090 was the size of a car and capable of about 200,000 flops. Today's PS4 is the size of a small cake and can crank out 1,800,000,000,000 flops.
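For scale, those two figures work out to roughly a nine-million-fold difference:

```python
# Rough ratio of the two throughput figures quoted above.
ibm_7090_flops = 200_000              # 1960s mainframe, roughly car-sized
ps4_flops = 1_800_000_000_000         # modern console, roughly cake-sized

speedup = ps4_flops / ibm_7090_flops
print(f"speedup: {speedup:,.0f}x")    # 9,000,000x
```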

Today, an IBM quantum computer is the size of a car with processors in the "transistor" stage. What do you think will happen in the next 50 years?
 

Rational Agnostic

Well-Known Member
He is a theoretical physicist; much of what he studies is way beyond Mr. Average. Perhaps that's why you find him annoying.

He's not just a theoretical physicist, he's a guy who talks about string theory or the multiverse hypothesis as if they were facts, when they actually have no real scientific basis. I'm sure he is much smarter than me, but a lot of his fellow physicists disagree with his approach as well.
 

charlie sc

Well-Known Member
Your video notwithstanding...
DeepMind’s AlphaStar AI sweeps StarCraft II pros in head-to-head match
Computers are getting more sophisticated than ever at understanding and playing complicated games. DeepMind, one of the leaders in artificial intelligence, proved that once again today with its latest A.I. agent called AlphaStar. During a livestream, this program took on two StarCraft II pros in a series of five matches for each, and AlphaStar swept all 10 matches.
I don't know why you said it swept all 10 matches, because it lost the one at the end vs MaNa. I used to play StarCraft 2 and watch quite a lot of tournaments. TLO is a Zerg player, not a Protoss player. When he streams by himself he sometimes plays random, but at tournaments he plays Zerg. MaNa, though, is a Protoss player. I have to assume they were asked to play the same race as DeepMind. Now, that sounds like quite a handicap to me. MaNa and TLO are very good, but they aren't the best. From what little I saw (and I'll probably watch the whole thing sometime), there were problems with DeepMind. At one point, I saw DeepMind continuously go up a ramp and die numerous times without learning. While its units were stuck behind the force field, it didn't let them attempt a suicide attack. This is not something even noobs do. This shows, as I said before, a complete lack of comprehension of how the game is played. In this scenario, DeepMind should have gone kamikaze and changed tactics. Undoubtedly, DeepMind kicked butt, but you can see the same effect if you play a first-person shooter vs a bot with 100% aim. No one will stand a chance unless you exploit its weaknesses. StarCraft 2 is much more nuanced than other games, and it appears DeepMind knows how to play by experience, not by comprehension. I'd be curious to see more SC2 games with DeepMind, because I don't think it will win by experience alone. I predict that if it plays more games, with all races, people will start learning its flaws, and it'll lose every time, because it can't adapt to something it can't comprehend.

As to the game GO, it learned the way humans learn - observation and practice. And then did even better.

Excerpts with my emphases...
How Google's AI Viewed the Move No Human Could Understand

SEOUL, SOUTH KOREA — The move didn't make sense to the humans packed into the sixth floor of Seoul's Four Seasons hotel. But the Google machine saw it quite differently. The machine knew the move wouldn't make sense to all those humans. Yes, it knew. And yet it played the move anyway, because this machine has seen so many moves that no human ever has.

In the second game of this week's historic Go match between Lee Sedol...this surprisingly skillful machine made a move that flummoxed everyone from the throngs of reporters and photographers to the match commentators to, yes, Lee Sedol himself. "That's a very strange move," said one commentator, an enormously talented Go player in his own right. "I thought it was a mistake," said the other. ....

Fan Hui, the three-time European Go champion who lost five straight games to AlphaGo this past October, was also completely gobsmacked. "It's not a human move. I've never seen a human play this move," he said. But he also called the move "So beautiful. So beautiful." Indeed, it changed the path of play, and AlphaGo went on to win the second game....

It was a move that demonstrated the mysterious power of modern artificial intelligence... In the wake of Game Two, Fan Hui so eloquently described the importance and the beauty of this move..
...
"You're listening to the commentators on the one hand. And you're looking at AlphaGo's evaluation on the other hand. And all the commentators are disagreeing."
...
Using a second technology called reinforcement learning, they set up matches in which slightly different versions of AlphaGo played each other. As they played, the system would track which moves brought the most reward—the most territory on the board. "AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving," Silver said when DeepMind first revealed the approach earlier this year.
...
So, in the end, the system learned not just from human moves but from moves generated by multiple versions of itself. The result is that the machine is capable of something like Move 37.

"That's how it guides the moves it considers," Silver says. For Move 37, the probability was one in ten thousand. In other words, AlphaGo knew this was not a move that a professional Go player would make.

But, drawing on all its other training with millions of moves generated by games with itself, it came to view Move 37 in a different way. It came to realize that, although no professional would play it, the move would likely prove quite successful. "It discovered this for itself," Silver says, "through its own process of introspection and analysis."

Is introspection the right word? You can be the judge. But Fan Hui was right. The move was inhuman. But it was also beautiful.

Yes, it probably learns the same way we do, but does it comprehend the same way we do? I don't think so. I consider chess and Go much more logical and experience-based than StarCraft. Therefore, if it knows most outcomes in Go, it can probably win most of the time; this doesn't make it AI.

Could AI become a thing in the future? Sure, why not, but I have my reservations about whether AI can actually become AI or will just be a smart bot. ;)
 

Shad

Veteran Member
Is Kaku correct that AI will be dangerous as AI machines attain self awareness?

Who is right about AI: Mark Zuckerberg or Elon Musk?

I think AI, like any other tool, will be used for profit and for evil purposes by some. That, imo, is the real danger.

Have you seen this woman or this boy? Asking for a friend.

terminator5.jpg


 

atanu

Member
Premium Member
I don't know why you said it swept all 10 matches, because it lost the one at the end vs MaNa. I used to play StarCraft 2 and watch quite a lot of tournaments. TLO is a Zerg player, not a Protoss player. When he streams by himself he sometimes plays random, but at tournaments he plays Zerg. MaNa, though, is a Protoss player. I have to assume they were asked to play the same race as DeepMind. Now, that sounds like quite a handicap to me. MaNa and TLO are very good, but they aren't the best. From what little I saw (and I'll probably watch the whole thing sometime), there were problems with DeepMind. At one point, I saw DeepMind continuously go up a ramp and die numerous times without learning. While its units were stuck behind the force field, it didn't let them attempt a suicide attack. This is not something even noobs do. This shows, as I said before, a complete lack of comprehension of how the game is played. In this scenario, DeepMind should have gone kamikaze and changed tactics. Undoubtedly, DeepMind kicked butt, but you can see the same effect if you play a first-person shooter vs a bot with 100% aim. No one will stand a chance unless you exploit its weaknesses. StarCraft 2 is much more nuanced than other games, and it appears DeepMind knows how to play by experience, not by comprehension. I'd be curious to see more SC2 games with DeepMind, because I don't think it will win by experience alone. I predict that if it plays more games, with all races, people will start learning its flaws, and it'll lose every time, because it can't adapt to something it can't comprehend.



Yes, it probably learns the same way we do, but does it comprehend the same way we do? I don't think so. I consider chess and Go much more logical and experience-based than StarCraft. Therefore, if it knows most outcomes in Go, it can probably win most of the time; this doesn't make it AI.

Could AI become a thing in the future? Sure, why not, but I have my reservations about whether AI can actually become AI or will just be a smart bot. ;)


All human tools are better than humans in specific tasks for which they are designed. A cycle is faster. A computer computes almost infinitely better than humans.

But is computation equal to intelligence and consciousness?
 

atanu

Member
Premium Member
Lay reporters are creating a lot of confusion among the lay public with their clickbait headers. To describe a successful ‘task-agnostic self-modelling machine’, reporters in reputable publications shout “Robot that thinks for itself from scratch brings forward the rise of self-aware machines”.

The following article gives the details.

https://mindmatters.ai/2019/02/that-robot-is-not-self-aware/


Development of a self modelling machine is an important development in its own right. Still, it only involves statistical algorithms applied to a robotic device in a new way. The robot still doesn’t understand anything. It’s not aware of the world around it, let alone aware of itself. There’s absolutely no reason to think it’s on its way to self-awareness.
 

charlie sc

Well-Known Member
All human tools are better than humans in specific tasks for which they are designed. A cycle is faster. A computer computes almost infinitely better than humans.

But is computation equal to intelligence and consciousness?
Lay reporters are creating a lot of confusion among the lay public with their clickbait headers. To describe a successful ‘task-agnostic self-modelling machine’, reporters in reputable publications shout “Robot that thinks for itself from scratch brings forward the rise of self-aware machines”.

The following article gives the details.

https://mindmatters.ai/2019/02/that-robot-is-not-self-aware/


Development of a self modelling machine is an important development in its own right. Still, it only involves statistical algorithms applied to a robotic device in a new way. The robot still doesn’t understand anything. It’s not aware of the world around it, let alone aware of itself. There’s absolutely no reason to think it’s on its way to self-awareness.
The essential, unique feature of consciousness is subjective experience. There is nothing about processing symbols or computation that generates subjective experience or psychological phenomena like qualitative sensations.

The Myth of Sentient Machines | Psychology Today
Exactly. As much as I love sci-fi, and I've seen tons of movies and played tons of games built on AI stuff, it seems far away for the reasons and links you gave. I skimmed it and it looks accurate. This idea has been popular for quite some time now, and sci-fi fans expect/want it so badly :p However, if you examine it objectively, it's not as simple as it seems. Similarly, Thunderf00t on YouTube debunks all these scam promises from various so-called inventors or great minds like Elon Musk. Hopefully, people don't buy into this with real money and get their hearts broken. All these aspirations about AI :confused:

Here's a bit of an anecdote and for all those people who are hoping/fearful of AI. Has anyone seen this video of this super cool AI home device?


Wow, that looks amazing! AI is so advanced... A few years ago, I saw this with my ex and I said it would never get off the ground. AI does not work like this, and we're far away from this kind of technology. I even made a bet with her, but I can't remember what it was.
Well, it released a while ago and apparently it's absolute garbage. I hope she bought it and wasted her money. Anyway, my advice is to wait and see what happens, and listen to the experts' advice. Most of all, don't spend your money on places like Kickstarter :p
 

ecco

Veteran Member
I don't know why you said it swept all 10 matches, because it lost the one at the end vs MaNa.

I didn't say it swept all 10 matches. The article I linked to and quoted from said that.

Perhaps you confused the ten win sweep with...

DeepMind’s AlphaStar AI sweeps StarCraft II pros in head-to-head match
AlphaStar did lose one match
The livestream primarily focused on the five-game matches that AlphaStar played against TLO and MaNa a few weeks ago. But DeepMind did let MaNa get a rematch live in front of the audience watching on YouTube and Twitch. And this is when MaNa got his revenge with a win against the machine.

But the live match of MaNa vs. AlphaStar had some variations compared to the last time they played. DeepMind used a new prototype version of AlphaStar that actually uses the exact same camera view as the players. This means that AlphaStar can’t just sit at a zoomed-out perspective, it has to get in close to the action to see the details of the fight.

This version of AlphaStar also didn’t have as much time to train. So instead of playing through 200 years of an AlphaStar league, it played through something closer to 20 years. But even with that “limited” experience, it still showed off strategies that shocked everyone watching.

“The way AlphaStar played the matchup was not anything like I had experience with,” said MaNa. “It was a different kind of StarCraft. It was a great chance to learn something new from an A.I.”

And that’s one of the things that DeepMind is proudest of: that a pro player could take away new strategy ideas by playing against a computer, which is not something anyone would have considered possible before.
 

ecco

Veteran Member
I predict that if it plays more games, with all races, people will start learning its flaws, and it'll lose every time, because it can't adapt to something it can't comprehend.

You sound like the author of this article from 45 years ago...

https://www.theatlantic.com/magazine/archive/1974/08/computers-aren-t-so-smart-after-all/303467/

Computers Aren’t So Smart, After All
During the "computer craze" of the 1950s and 1960s some people envisioned the machine replacing the human brain. It hasn't happened and, says the author, it probably never will. So we must still think for ourselves

FRED HAPGOOD AUGUST 1974 ISSUE (The Atlantic)

So my strategy was to play away from the program's abilities and to steer the game into slow-paced, stable, balanced positions. Whenever I did this, MacHack's game seemed to become nervous and moody. The program would lose its concentration, begin to shift objectives restlessly, and launch speculative attacks. This is not an unfamiliar style; every chess club has some players—they are called "romantics "—whose joy is found in contact and tension, in games where pieces flash across the board and unexpected possibilities open up with each new move. Put them in slow positions, and, like MacHack, they grow impatient and try to force their game.

We played no more than five times; eventually, beating it became too easy. The winning formula was mechanically simple: develop cautiously, keep contact between the two sides restricted, let the pawns lead out the pieces. MacHack would always develop in a rush and send its knights and bishops skittering about the board trying to scare up some quick action; denied that action, its position would collapse in confusion.
...
During the last two games I played, MacHack refused to give its moves when I was about to checkmate it. My curiosity was piqued at this sullenness, and I stayed, trying to wait the machine out and get a reply. MacHack just hummed at me. Finally a programmer, becoming interested in this delay, extracted the record of MacHack's deliberations. It had been working over the mate variations, just looking at them, over and over. "Must be a bug somewhere," the programmer said.
...
One of the pioneering computers, ENIAC, built by Eckert and Mauchly, was invented in the hope that it would facilitate long-range weather forecasting. Almost certainly John Mauchly thought he was closer to that goal in 1943 than meteorologists do today.





ETA:

A new (computer) chess champion is crowned, and the continued demise of human Grandmasters - ExtremeTech
In 1996, IBM’s Deep Blue chess computer lost to Garry Kasparov — then the top-rated chess player in the world. In the 1997 rematch, following some software tweaks (and ironically, perhaps thanks to a very fateful software bug), Deep Blue won. Over the next few years, humans and computers traded blows — but eventually, by 2005-2006, computer chess programs were solidly in the lead. Today’s best chess programs can easily beat out the world’s best human chess players, even when they’re run on fairly conventional hardware (a modern multi-core CPU).
 

Jumi

Well-Known Member
In 1991 Sid Meier came out with a game called Civilization. At its most basic, the human plays against a number of AI "leaders". The leaders all have different personalities, some being more aggressive, some more deceitful, some more adventurous, etc. These AI leaders react to circumstances. I'm not saying they experience pleasure and pain in the same way that humans do. But if you walk across the territory of an aggressive leader you will probably start a war. As wars go on, leaders feel the weight of their losses, evaluate them against gains and may sue for peace.
I wouldn't call the Civ AI artificial intelligence. It's just a few sets of IFs that the bots follow. There's no learning, no intelligence, just a set of rules the programmer made. Even though Gandhi feels like he's made his own choices by becoming a conqueror, it's because there's a bug causing the unexpected behavior.
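To make the "few sets of IFs" point concrete, here's a hypothetical sketch of a scripted leader. The traits, thresholds, and numbers are invented for illustration; this is not Firaxis code:

```python
from dataclasses import dataclass

@dataclass
class Leader:
    """A scripted game 'AI': fixed traits, simple conditionals, no learning."""
    name: str
    aggression: float     # 0..1, how readily the leader declares war
    war_weariness: float  # 0..1, how quickly losses push it toward peace

    def react_to_trespass(self):
        # One IF decides the whole "personality" here.
        return "declare war" if self.aggression > 0.7 else "protest"

    def evaluate_war(self, losses, gains):
        # Sue for peace once weighted losses outgrow the gains.
        return "sue for peace" if losses * self.war_weariness > gains else "fight on"

gandhi = Leader("Gandhi", aggression=0.1, war_weariness=0.9)
montezuma = Leader("Montezuma", aggression=0.9, war_weariness=0.2)

print(gandhi.react_to_trespass())      # protest
print(montezuma.react_to_trespass())   # declare war
```

The behavior looks personality-driven in play, but every "decision" is a fixed rule the programmer wrote.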
 

ecco

Veteran Member
I came across this by chance.

Computer learns to play Civilization by reading the instruction manual

Matthew Rogers on July 14, 2011 at 5:03 pm
MIT researchers just got a computer to accomplish yet another task that most humans are incapable of doing: It learned how to play a game by reading the instruction manual.

The MIT Computer Science and Artificial Intelligence lab has a computer that now plays Civilization all by itself — and it wins nearly 80% of the time. Those are better stats than most of us could brag about, but the real win here is the fact that instruction manuals don’t explain how to win a game, just how to play it.

The results may be game-oriented, but the real purpose for the experiment was to get a computer to do more than process words as data — and to actually process them as language. In this case, the computer read instructions on how to play a rather complex game, then proceeded to not only play that game, but to play it very well.


Teaching a computer to actually read medical books, like a student in med school would, is something entirely different.

Regarding that last paragraph from the 2011 article...

IBM's Watson using data to transform health care
The first thing they found is you could teach natural language processing, machine learning and cognitive computing. You could teach Watson to read electronic health records and structured and unstructured text (or unorganized, tough-to-mine data). Then Watson could come up with probabilities.​
 

ecco

Veteran Member
I consider chess and Go much more logical and experience-based than StarCraft. Therefore, if it knows most outcomes in Go, it can probably win most of the time; this doesn't make it AI.
If you believe that anything can know "most outcomes in GO", I suggest you learn a little about it.
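To give a sense of scale: each of the 361 points on a 19x19 board is empty, black, or white, so a quick upper bound on positions is 3^361. No table of outcomes could ever cover that; even the exact count of legal positions, computed by John Tromp in 2016, is about 2 x 10^170.

```python
points = 19 * 19            # intersections on a standard Go board
upper_bound = 3 ** points   # each point is empty, black, or white

# The bound is a 173-digit number; the observable universe has only
# around 10**80 atoms for comparison.
print(len(str(upper_bound)))  # 173
```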
 

ecco

Veteran Member
Lay reporters are creating a lot of confusion among lay public with their click bait headers. To describe a successful ‘Task agnostic self modelling machine’, reporters in reputable publications shout “Robot that thinks for itself from scratch brings forward rise the self-aware machines”.

The following article gives the details.

https://mindmatters.ai/2019/02/that-robot-is-not-self-aware/


Development of a self modelling machine is an important development in its own right. Still, it only involves statistical algorithms applied to a robotic device in a new way. The robot still doesn’t understand anything. It’s not aware of the world around it, let alone aware of itself. There’s absolutely no reason to think it’s on its way to self-awareness.
I wonder if there is a modern-day equivalent to the 19th Century word "Luddite"?
 

charlie sc

Well-Known Member
If you believe that anything can know "most outcomes in GO", I suggest you learn a little about it.
It doesn't come close to SC2. The people making DeepMind want it to play with fog of war; therefore, it can't know what the player is doing. Some players, especially in tournaments, will examine the play-style of others to know a viable strategy against them. They'll use intuition to judge where the player is. DeepMind is already cheating somewhat: first, it can see the whole of the map (not through the fog of war, though), and second, it sees units as binary. Therefore, it's not even operating the same way humans do. It'll know there's a Dark Templar (instantly) without going through the same cognitive processes as we do. They also only used one map for the test. That's fine, it cheats, but this doesn't make me think it's somehow better AI. The very fact that it needs other players to be handicapped shows it's not even close to AI.

Do I think DeepMind can become the best at SC2? Yes, it's possible, but I have my reservations. Do I think AI is going to be developed in the near future? No. I think we're at least 100 years off. Atanu posted a good article about this, The Myth of Sentient Machines | Psychology Today, which differentiates between strong and weak AI. Weak AI is definitely a thing, but we're nowhere close to strong AI.
 

ecco

Veteran Member
The robot still doesn’t understand anything.

What is understanding?

Let's get away from the mystical concepts of mind and consciousness. Boiled down, humans can observe things and remember them. Computers can observe things and remember them. Humans can approach problems based on what they have learned and remembered. Computers can approach problems based on what they have learned and remembered.
Human brains have about 100 billion neurons. Computers have far fewer (today).

The robot still doesn’t understand anything.
Computers understand how to play Go.
Computers understand how to take off, fly, and land airplanes.
Computers understand how to read X-rays and find tumors.



It’s not aware of the world around it

The computer that flies airplanes is very aware of the world around it. The computer that flies 200 drones in synchronization is very aware of the world around it.


let alone aware of itself. There’s absolutely no reason to think it’s on its way to self-awareness.

Is an amoeba aware of itself?


conscious knowledge of one's own character, feelings, motives, and desires.​

Frankly, I don't dwell too much on those factors. I don't think a doctor, reading an X-ray, is considering his own character, feelings, motives, and desires when making a diagnosis.

I don't think a soldier driving a humvee in Afghanistan is considering his own character, feelings, motives, and desires while looking for IEDs.

Sure, these things come into play sometimes. Deciding to keep or quit my job requires that I at least consider my motivations. But character, feelings, motives, and desires can be programmed or learned. All the different leaders in Civilization VI have their own sets of character, feelings, motives, and desires. They act on these just as we do.

Civilization 6 Leader list: Leader Agendas, Traits, Abilities and Unique Units • Eurogamer.net
 

ecco

Veteran Member
...
I wouldn't call the Civ AI artificial intelligence. It's just a few sets of IFs that the bots follow. There's no learning, no intelligence, just a set of rules the programmer made. Even though Gandhi feels like he's made his own choices by becoming a conqueror, it's because there's a bug causing the unexpected behavior.
Yep, yep, yep. It's a good thing people never change their minds and make mistakes. That would imply that God had a few bugs in his creations.
 

ecco

Veteran Member
It doesn't come close to SC2. The people making Deepmind want it to play with fog of war. ...The very fact it needs other players to be handicapped shows it's not even close to AI.

Do I think Deepmind can become the best at SC2? Yes, it's possible but I have my reservations.
As I said earlier, you sound like the author of this article from 45 years ago... (who concluded that no computer program would ever beat the best humans).

Humans are still better than AI at StarCraft—for now
Song, 29, said the bots approached the game differently from the way humans do. “We professional gamers initiate combat only when we stand a chance of victory with our army and unit-control skills,” he said in a post-competition interview with MIT Technology Review. In contrast, the bots tried to keep their units alive without making any bold decisions.
Wow! Military simulation war games have been around for many years. In my experience, none initiate combat when they don't stand a chance of achieving a favorable outcome. If the bot programmers made such a mistake, that should be easily correctable.

Same link
Kim Kyung-joong, the Sejong University computer engineering professor who organized the competition, said the bots were constrained, in part, by the lack of widely available training data related to StarCraft.

That will change soon. In August, DeepMind and the games company Blizzard Entertainment released a long-awaited set of AI development tools compatible with StarCraft II, the version of the game that is most popular among professional players.

Other experts now predict that bots will be able to vanquish professional StarCraft players once they are trained properly. “When AI bots are equipped with [high-level] decision-making systems like AlphaGo, humans will never be able to win,” says Jung Han-min, a computer science and engineering professor at the University of Science and Technology in Korea.

So it's no different from a person learning by observing, studying and practising. Isn't that exactly what Song Byung-gu did?



ETA: Emphases in the above quotes are mine.
 