Is self-play the future of (most) AI?

Go is a game whose number of possible board configurations – roughly 10^170, far more than chess – is greater than the number of atoms in the observable universe.

AlphaGo, the predecessor to AlphaGo Zero, crushed 18-time world champion Lee Sedol and the reigning world number one player, Ke Jie. After beating Jie earlier this year, DeepMind announced AlphaGo was retiring from future competitions.

Now an even stronger successor, AlphaGo Zero, can beat the version of AlphaGo that faced Lee Sedol after training for just 36 hours, and after 72 hours it beat its predecessor by a score of 100-0. Interestingly, AlphaGo Zero didn’t learn from observing humans playing against each other – unlike AlphaGo – but instead its neural network relies on an old technique in reinforcement learning: self-play. With self-play, agents can learn behaviours that are not hand-coded on any reinforcement learning task, but the sophistication of the learned behaviour is limited by the sophistication of the environment: for an agent to learn intelligent behaviour, the environment has to be challenging, but not too challenging.

Essentially, self-play means that AlphaGo Zero plays against itself. During training it sits on both sides of the table: two instances of the same software face off against each other. A game begins from an empty board, with the first stones placed by moves that are initially essentially random. At each turn, the two computer players are given the list of moves played so far, and each comes up with multiple chains of next moves along with an estimate of the probability of winning by following each chain through. The next move from the most promising chain is then played, and the players repeat these steps, again ranking chains of moves by strength. This repeats over and over, with the software feeling its way through the game and internalizing which strategies turn out to be the strongest.
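
To make this loop concrete, here is a minimal, hypothetical sketch in Python of self-play on tic-tac-toe: two copies of the same move-selection code play each other, and each candidate move is scored by an estimated win probability from random playouts. This is only an illustration of the idea; AlphaGo Zero uses a neural-network-guided Monte Carlo tree search on the vastly larger game of Go.

```python
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def rollout(board, player):
    """Play random moves to the end; return 'X', 'O' or None (draw)."""
    board = board[:]
    while True:
        w = winner(board)
        if w or not legal_moves(board):
            return w
        board[random.choice(legal_moves(board))] = player
        player = "O" if player == "X" else "X"

def best_move(board, player, simulations=200):
    """Rank each candidate move by its estimated win probability from random rollouts."""
    scores = {}
    for move in legal_moves(board):
        wins = 0
        for _ in range(simulations):
            trial = board[:]
            trial[move] = player
            if rollout(trial, "O" if player == "X" else "X") == player:
                wins += 1
        scores[move] = wins / simulations
    return max(scores, key=scores.get)

def self_play_game():
    """Two instances of the same move-selection code sit on both sides of the board."""
    board, player = ["."] * 9, "X"
    while winner(board) is None and legal_moves(board):
        board[best_move(board, player)] = player
        player = "O" if player == "X" else "X"
    return winner(board)

print("result:", self_play_game() or "draw")
```

In AlphaGo Zero the random playouts are replaced by a learned network that both proposes moves and evaluates positions, and the results of each self-play game are fed back to improve that network.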

AlphaGo Zero started from scratch with no experts guiding it, and it is much more efficient: it uses a single machine with four of Google’s custom first-generation TPU chips to play matches, compared to AlphaGo’s several machines and 48 TPUs. Because Zero didn’t rely on human gameplay and played a smaller number of matches, its Monte Carlo tree search is smaller. The self-play algorithm also combined the value and policy neural networks into a single network, and was trained on 64 GPUs and 19 CPUs by playing nearly five million games against itself. In comparison, AlphaGo needed months of training and used 1,920 CPUs and 280 GPUs to beat Lee Sedol.

AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex. This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these. The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.
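
As a rough illustration (not DeepMind’s architecture), a deep network is just such a stack of simple neuron-like layers, each building on the output of the one below. The sketch below uses NumPy with made-up layer sizes and untrained random weights, purely to show the layered structure.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def layer(inputs, n_out, rng):
    """One layer of simple neuron-like units: a weighted sum followed by a nonlinearity."""
    weights = rng.standard_normal((inputs.shape[-1], n_out)) * 0.1
    return relu(inputs @ weights)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)       # stand-in for raw sensory input (e.g. pixel values)

h = x
for width in (128, 64, 32, 10):   # successive layers build increasingly abstract features
    h = layer(h, width, rng)

print(h.shape)                    # (10,) -- the most abstract, top-level representation
```

In a real system the weights are learned from data rather than drawn at random, which is what makes the higher layers correspond to meaningful, global features.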

But to survive in the world, animals need not only to recognise sensory information but also to act on it. Generations of scientists have studied how animals learn to take a series of actions that maximise their reward. This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which continually updates its estimate of future reward and uses that estimate to improve its choice of actions. In this way AlphaGo Zero discovered for itself, without human intervention, classic Go concepts such as fuseki opening patterns and life and death.
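
A minimal sketch of that idea, using the simplest tabular form of temporal difference learning (TD(0)) on a made-up two-state example rather than anything from AlphaGo Zero itself: the value of a state is nudged toward the reward received plus the discounted value of the next state.

```python
from collections import defaultdict

def td0(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0): move each state's value toward reward + discounted next-state value."""
    V = defaultdict(float)
    for episode in episodes:
        for state, reward, next_state in episode:
            target = reward + gamma * (V[next_state] if next_state is not None else 0.0)
            V[state] += alpha * (target - V[state])   # the temporal-difference update
    return V

# Toy chain A -> B -> terminal (reward 1 on the final step); value propagates back to A.
episodes = [[("A", 0.0, "B"), ("B", 1.0, None)] for _ in range(200)]
values = td0(episodes)
print({state: round(v, 2) for state, v in values.items()})
```

AlphaGo Zero applies the same principle at scale, using a deep network rather than a table to estimate how promising each position is.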

So are there problems to which the current algorithms can be fairly immediately applied?

One example may be optimisation in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimising cost. As long as the possibilities can be accurately simulated, self-play-based algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans.
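
As a hedged illustration of that idea on an entirely made-up toy problem: if a process can be simulated and its cost measured, even a crude search loop can evaluate far more candidate schedules than a human ever could. In practice a learned policy would replace the random search, but the structure – simulate, score, improve – is the same.

```python
import random

JOBS = {"mill": 3, "weld": 2, "paint": 4, "inspect": 1}   # hypothetical task durations (hours)

def cost(schedule):
    """Simulated objective: total completion time, plus a penalty if 'inspect' is not last."""
    total, t = 0, 0
    for job in schedule:
        t += JOBS[job]
        total += t
    if schedule[-1] != "inspect":
        total += 100                                        # constraint-violation penalty
    return total

def optimise(iterations=10_000):
    """Random search over the simulator; a stand-in for a learned policy exploring outcomes."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        candidate = random.sample(list(JOBS), len(JOBS))
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

print(optimise())
```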

Researchers at OpenAI have already experimented with the same technique to train bots to play Dota 2, and have published a paper on competitive self-play. There are other experiments showing, for example, that self-taught AI systems can outperform standard methods at predicting heart attacks.

AlphaGo Zero’s success bodes well for AI’s mastery of games. But it would be a mistake to believe that we have thereby learned something general about thinking, or about the kind of learning needed for general intelligence. This approach won’t work in more ill-structured problems like natural-language understanding or robotics, where the state space is more complex and there isn’t a clear objective function.

Unsupervised training is the key to ultimately creating AI that can think for itself, but more research is needed outside of the confines of board games and predefined objective functions before computers can really begin to think outside the box.

DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, quantum chemistry and materials design.

Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. Unfortunately, there is as yet no known way to interrogate the network and read out directly what those key ideas are. If we learn the game of Go purely through supervised learning, the best we could hope for is to become as good as the humans we are imitating. Through self-play (and thus unsupervised learning), a system could learn something completely novel and create or catalyse emergence.

DeepMind’s self-play approach is not the only way to push the boundaries of AI. Gary Marcus, a professor of psychology and neural science at NYU, co-founded Geometric Intelligence (since acquired by Uber) to explore learning techniques that extrapolate from a small number of examples, inspired by how children learn. He has claimed that these techniques outperform both Google’s and Microsoft’s deep-learning algorithms.