Checkmate, humans? Google’s AI programme takes just 4 hours to master chess
AlphaZero computer programme defeats a world-champion programme in each of chess, shogi (Japanese chess) and the Chinese game Go.
Two decades back, humans got a faint idea of the capability of intelligent machines when a supercomputer--IBM’s Deep Blue--defeated the then world chess champion, Garry Kasparov.
The world of Artificial Intelligence (AI), broadly defined as the effort to replicate human intelligence in machines, has advanced dramatically in the 20 years since. So much so that a paper [PDF] posted by Alphabet Inc-owned AI firm DeepMind late Tuesday revealed that AlphaZero--modelled on the company’s AlphaGo Zero computer programme--took just four hours to master chess from scratch, given nothing but the rules of the game, and to defeat the world’s strongest open-source chess engine, Stockfish.
Starting from random play, and with no domain knowledge except for the game rules, AlphaZero defeated a world-champion programme in each of chess, shogi (Japanese chess) and the Chinese game Go, within 24 hours.
It was only in March 2016 that DeepMind’s computer programme, AlphaGo, beat Go champion Lee Sedol.
If that was not enough, DeepMind announced on October 18 that AlphaGo’s new version, AlphaGo Zero, is so powerful that it does not need to train on human amateur and professional games to learn how to play the ancient Chinese game of Go. Moreover, the new version not only taught itself the game from scratch but went on to defeat AlphaGo, until then the world’s strongest Go player.
The AlphaZero algorithm is a more generic version of the AlphaGo Zero algorithm. AlphaGo Zero, according to the recently published paper, uses a new form of reinforcement learning to become “its own teacher”.
Reinforcement learning is a training method in which a system learns from rewards and penalties rather than from human examples; it is the same method used by the chess player AlphaZero. The system begins with a neural network (loosely modelled on the brain, hence the name) that knows nothing about the game. It then plays games against itself, combining this neural network with a powerful search algorithm. The neural network is tuned and updated to better predict moves as well as the eventual winner of the games. This updated neural network is then recombined with the search algorithm to create a new, stronger version, and the process begins again.
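To make that loop concrete, here is a minimal, hypothetical Python sketch of the self-play idea, scaled down to tic-tac-toe: a simple lookup table stands in for the neural network and a one-move lookahead stands in for the powerful search algorithm, but the cycle (play against itself, nudge the evaluator toward the actual outcomes, repeat) has the same shape as described above. It is an illustration of the principle, not DeepMind’s code.

```python
# Toy self-play reinforcement learning for tic-tac-toe.
# A value table replaces the neural network; a one-ply lookahead replaces the search.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == 0]

# value[state] estimates the expected outcome for the player about to move.
value = defaultdict(float)

def choose_move(board, player, epsilon):
    """Shallow 'search': score each legal move using the learned value estimates."""
    moves = legal_moves(board)
    if random.random() < epsilon:            # exploration noise during self-play
        return random.choice(moves)
    def score(m):
        nxt = list(board); nxt[m] = player
        if winner(nxt) == player:
            return 1.0
        # The opponent moves next, so a good position for them is bad for us.
        return -value[tuple(nxt)]
    return max(moves, key=score)

def self_play_game(epsilon):
    """Play one game against itself; return the visited states and the outcome."""
    board, player, history = [0] * 9, 1, []
    while legal_moves(board) and winner(board) == 0:
        history.append((tuple(board), player))
        m = choose_move(board, player, epsilon)
        board[m] = player
        player = -player
    return history, winner(board)

def train(generations=20000, lr=0.1, epsilon=0.2):
    """The loop from the article: self-play, then update the evaluator toward the result."""
    for _ in range(generations):
        history, result = self_play_game(epsilon)
        for state, player in history:
            target = result * player          # +1 win, -1 loss, 0 draw, from that player's view
            value[state] += lr * (target - value[state])

if __name__ == "__main__":
    train()
    print("Learned value of the empty board:", round(value[tuple([0] * 9)], 3))
```

In the real systems the lookup table is a deep neural network and the lookahead is a full Monte Carlo tree search, but the design choice is the same: the stronger evaluator makes the search stronger, and the games produced by that stronger search make the evaluator stronger still.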
AI has undoubtedly become smarter by leaps and bounds with rapid advancements in machine-learning and deep-learning algorithms, humongous amounts of Big Data on which these algorithms can be trained, and the phenomenal increase in computing power.
This has, understandably, given rise to the fear that automation and AI will take away our jobs and eventually become more intelligent than human beings. In his 2005 book The Singularity Is Near, American author and futurist Ray Kurzweil predicted, among many other things, that AI would surpass humans, the smartest and most capable life form on the planet. By 2099, he forecast, machines would have attained equal legal status with humans.
We could take some comfort in the fact that AI has no such superpower. Not yet, at least.