Google DeepMind AI To Take On StarCraft II Challengers Next

Google announced at BlizzCon 2016 in Anaheim, California, that it is collaborating with Blizzard Entertainment to use StarCraft II as a training platform for artificial intelligence (AI) and machine learning research. As part of that partnership, Google will pit its DeepMind project against human players in StarCraft II and use the information obtained to tweak its AI algorithms.

"DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we’re doing through scores," Google explains.

StarCraft II

Using games to test or otherwise demonstrate the capabilities of AI isn't new. However, StarCraft II is different because it combines a high level of complexity with an only partially observable environment. Unlike "perfect information" games such as chess and Go, StarCraft II players have to send units out to scout unseen areas to learn what their opponent is up to, and then remember that information over a long period of time.

There is a lot for the AI to do in StarCraft II. It starts with selecting one of three races, each of which brings unique abilities to the table, and continues with balancing an in-game economy: gathering resources to build new units and buildings.

"An agent that can play StarCraft will need to demonstrate effective use of memory, an ability to plan over a long time, and the capacity to adapt plans based on new information. Computers are capable of extremely fast control, but that doesn’t necessarily demonstrate intelligence, so agents must interact with the game within limits of human dexterity in terms of 'Actions Per Minute'. StarCraft’s high-dimensional action space is quite different from those previously investigated in reinforcement learning research," Google adds.
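The "Actions Per Minute" constraint mentioned above can be made concrete with a small sketch. The class below (invented names, not anything from DeepMind's actual code) caps an agent's actions over a sliding one-minute window, forcing it to idle once the budget is spent:

```python
import collections

class ApmLimiter:
    """Caps an agent's actions per minute over a sliding time window.

    Hypothetical sketch: a real system would hook into the game loop,
    but the bookkeeping idea is the same.
    """

    def __init__(self, max_apm=180, window_seconds=60.0):
        self.max_apm = max_apm
        self.window = window_seconds
        self.timestamps = collections.deque()  # times of recent actions

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (in seconds)."""
        # Drop actions that have fallen outside the sliding window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True
        return False  # over budget: the agent must issue a no-op
```

With a cap like this, raw machine-speed micromanagement is off the table, so the agent has to win through planning rather than superhuman clicking.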

Even a simple task such as expanding the base to a new location requires the AI to coordinate mouse clicks, camera movement, and available resources. The significance is that this makes the gameplay mechanics hierarchical, which is one of the toughest challenges in reinforcement learning.
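That hierarchy can be illustrated with a toy decomposition: one high-level order such as "expand the base" unfolds into a sequence of primitive interface actions. The action names below are invented for illustration and are not part of any real StarCraft II API:

```python
def expand_base(target):
    """Decompose one high-level goal into primitive UI actions.

    Toy sketch: each primitive is a (name, argument) tuple that a
    lower-level controller would turn into clicks and camera moves.
    """
    return [
        ("move_camera", target),      # bring the build site on screen
        ("select_unit", "worker"),    # click a worker unit
        ("build", "expansion_base"),  # choose the structure to build
        ("click_position", target),   # place it at the target location
    ]
```

A hierarchical agent has to learn both levels: when to issue the high-level goal, and how to execute its primitive steps reliably.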


To make all this possible, Google and Blizzard developed a special API that allows for programmatic control of units and access to the full game state. They also developed a new image-based interface that outputs low-resolution image data for both the map and the minimap.
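As a rough illustration of what such an interface might expose (a hypothetical sketch, not the actual Google/Blizzard API), a per-step observation could pair coarse image layers with the structured game state:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """Hypothetical per-step observation from the game.

    `screen` and `minimap` stand in for the low-resolution image
    layers (plain nested lists of integers here); `raw_units` stands
    in for the full programmatic game state the API exposes.
    """
    screen: list     # e.g. 64x64 grid of terrain/unit-type ids
    minimap: list    # e.g. 32x32 coarse overview of the whole map
    raw_units: list = field(default_factory=list)  # full game state

def blank_observation(screen_size=64, minimap_size=32):
    """Build an empty observation with the expected layer shapes."""
    return Observation(
        screen=[[0] * screen_size for _ in range(screen_size)],
        minimap=[[0] * minimap_size for _ in range(minimap_size)],
    )
```

The design point is that an agent can be trained either on the image layers, much as a human reads the screen, or directly on the raw game state.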

This collaboration is still at a very early stage, so don't expect to see headlines about DeepMind beating a human StarCraft II player anytime soon. Google says DeepMind has a "long way" to go before it can challenge a professional StarCraft II player. Of course, the same could once have been said about AI beating humans at Jeopardy, and that worked out pretty well for the machines.