StarCraft | 7 years ago | Gosu "GosuGamers" Gamers

Starcraft, the new battlefield for AI research

Google's Deepmind have announced that the next game they will try to conquer is Blizzard's very own StarCraft II.

Back in March 2016, we witnessed a landmark in AI technology and a glimpse of the future when Google's flagship AI, AlphaGo, defeated one of the world's best Go players, Lee Sedol.

At the time of the contest, artificial intelligence was thought to be more than 10 years away from competing with the very best at the ancient Chinese game of Go, a game far more difficult for computers than AI's previous triumph, Chess. However, Deepmind, the division in charge of Google's deep-learning research, beat one of the game's greatest players convincingly, taking a 4-1 victory over a multiple-time world champion.

Now, looking to the future, Deepmind have chosen the next arena in which the battle of man vs machine will be fought: StarCraft II. An unprecedented challenge, StarCraft II presents a huge number of difficulties for the program, such as opponents hiding information and the real-time action of the game, but Deepmind aren't deterred and are looking for your help in the colossal task of training a computer to defeat some of the world's best StarCraft players.

Deepmind use a method called 'deep learning' to train their AI to play strategy games. This involves showing the program hundreds of thousands of matches and telling it the result of each one; over time, the program begins to understand which actions lead to a stronger position and which to a weaker one. For games like Chess or Go this is a relatively simple idea: the board is static, and it is easier to adjust your actions based on your opponent's move. StarCraft, however, is a dynamic game, with both players moving and planning in real time, and that presents a new hurdle for Google's Deepmind to overcome.
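The outcome-labelled training idea above can be sketched in miniature. This is purely an illustrative toy, not DeepMind's actual system: the "positions" are invented two-number feature vectors, and a simple logistic-regression learner stands in for the deep neural networks a real agent would use. The point is only to show how labelling positions with the match result lets a model learn which positions are stronger.

```python
import math
import random

# Toy sketch of outcome-labelled learning. Each "position" is a tiny
# invented feature vector (imagine: army strength, opponent's economy),
# and its label is the final result of the match it came from
# (1 = eventual win, 0 = eventual loss). A real deep-learning system
# would use a neural network over vastly richer game state.

def predict(weights, features):
    """Estimate the probability that this position leads to a win."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(positions, labels, lr=0.1, epochs=200):
    """Fit weights by stochastic gradient descent on win/loss labels."""
    weights = [0.0] * len(positions[0])
    for _ in range(epochs):
        for features, label in zip(positions, labels):
            p = predict(weights, features)
            for i, x in enumerate(features):
                weights[i] += lr * (label - p) * x
    return weights

random.seed(0)
# Invented training data: positions where feature 0 dominates tend to win.
positions = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if f[0] > f[1] else 0 for f in positions]

weights = train(positions, labels)
strong = predict(weights, [0.9, 0.1])  # position the model should rate highly
weak = predict(weights, [0.1, 0.9])    # position it should rate poorly
```

After training, `strong` comes out well above `weak`: having only ever seen final results, the model has inferred which kind of position is favourable, which is the essence of the replay-driven approach described above.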

Research is planning to begin during the first quarter of 2017.


Want to help further this research?

Just play! Deep learning relies on a huge number of high-quality replays to constantly show the program what it should be doing, so the more you play, the more helpful material you supply for the Google AI to study.

Blizzard have also announced an API for creating this kind of program, which hobbyists and researchers can use to toy around with making their own Robotic Bonjwa.

For full details you can check out the article posted on Deepmind’s website announcing their plans to take on the best of the best at the world’s most difficult game.
