Researchers on DeepMind’s AlphaStar team have announced that their artificial intelligence agents have reached Grandmaster level in StarCraft II, the popular real-time strategy game.
At the beginning of this year, the previous version of AlphaStar – DeepMind’s reinforcement-learning agent – challenged two of the world’s top StarCraft II players. Ten months and a series of improvements later, the system can now play at the level of top human players without any restrictions. Playing official games on the Battle.net game server, using the same maps and conditions as human players, it is currently ranked above 99.8% of human players.
To train AlphaStar directly from gameplay data, the researchers combined several machine-learning methods: neural networks, self-play via reinforcement learning, multi-agent learning, and imitation learning. Together, these techniques allowed the system to reach Grandmaster level in this complex strategy game. The advances behind the new AlphaStar system were published in a paper in the journal Nature.
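AlphaStar’s actual training pipeline is far more elaborate (league-based multi-agent training over large neural-network policies, as described in the Nature paper). Purely as a toy illustration of the self-play idea, the sketch below has two regret-matching agents repeatedly play rock-paper-scissors against each other; with no human data at all, their average behaviour drifts toward the balanced strategy. All names and parameters here are illustrative, not taken from DeepMind’s system.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def self_play(rounds=20000, seed=0):
    """Two agents learn by playing each other (regret matching)."""
    rng = random.Random(seed)
    regrets = [{a: 0.0 for a in ACTIONS} for _ in range(2)]
    counts = [{a: 0 for a in ACTIONS} for _ in range(2)]

    def strategy(reg):
        # Play each action in proportion to its positive regret.
        pos = {a: max(r, 0.0) for a, r in reg.items()}
        total = sum(pos.values())
        if total == 0:
            return {a: 1 / 3 for a in ACTIONS}
        return {a: p / total for a, p in pos.items()}

    def sample(dist):
        r, acc = rng.random(), 0.0
        for a, p in dist.items():
            acc += p
            if r <= acc:
                return a
        return ACTIONS[-1]

    for _ in range(rounds):
        moves = [sample(strategy(regrets[i])) for i in range(2)]
        for i in range(2):
            counts[i][moves[i]] += 1
            got = payoff(moves[i], moves[1 - i])
            # Regret: how much better each alternative would have done.
            for a in ACTIONS:
                regrets[i][a] += payoff(a, moves[1 - i]) - got

    # The *average* strategy over self-play approximates equilibrium.
    total = sum(counts[0].values())
    return {a: c / total for a, c in counts[0].items()}
```

Running `self_play()` returns empirical action frequencies for one agent, each close to 1/3: the point is that competent behaviour emerges purely from agents training against copies of themselves, which is the same principle AlphaStar scales up with neural networks and a league of diverse opponents.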
According to DeepMind researchers, the progress made with AlphaStar can be transferred to a number of other domains and can serve as a foundation for developing robust and flexible agents that cope with complex, real-world environments.
“At DeepMind, we’re interested in understanding the potential – and limitations – of open-ended learning, which enables us to develop robust and flexible agents that can cope with complex, real-world domains. Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales.”
More about DeepMind’s work on AlphaStar can be read in the official blog post and in the paper. Replays of all the games AlphaStar played can be found here.