Super-AI gameplay spurs humans to novel, winning strategies
Are machines losing their edge in game-playing against humans?
It all began at the New York World's Fair in 1940, when Edward Condon, a nuclear physicist, displayed a one-ton electro-mechanical machine that successfully played a simple ancient Chinese game called Nim, in which two competitors remove objects from piles until only one is left. The machine, one of the first computerized games on record, easily dispatched most of its human challengers.
It also was likely the first time humans confronted the reality that an inanimate object might just be smarter than they were.
Fast forward six decades and IBM's Deep Blue computer, the size of a small room, conquered world chess champion Garry Kasparov, winning two games, drawing three, and losing one. Ten years later, tournament challengers were defeated by an AI chess program housed in a cellphone.
Artificial intelligence progressed rapidly through the 2000s, leading up to massive computer systems capable of teaching themselves to play popular, complex video games.
For example, computerized versions of Go—a deceptively simple game, yet a googol times more complex than chess—for years could at best play at the level of amateur humans. But DeepMind's AlphaGo utilized neural networks that introduced a new dimension to AI, navigating roughly 10^170 possible board configurations to create a program that decisively triumphed over leading masters of the game.
But according to a research paper published this week in the Proceedings of the National Academy of Sciences, humans may be gaining on—if not yet defeating—superintelligent programs.
Minkyu Shin, an assistant professor of marketing at the City University of Hong Kong, says that the advent of superhuman artificial intelligence has prompted humans to become more creative in game strategies and is responsible for their improvement in play.
Shin and fellow researchers undertook a massive analysis, churning through a database of 5.8 million moves made by Go players over the course of 71 years, beginning in 1950 (when Go rules were standardized).
"Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making," Shin said.
They developed a formula to rank the quality of human decisions at each step of game play. Using KataGo, a superhuman AI program, they compared human moves to nearly 60 billion theoretical game patterns. This generated a score called the Decision Quality Index (DQI).
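A score of this kind can be sketched as the gap between the engine's evaluation of the move a human actually played and its evaluation of the move it would have played itself. The snippet below is a simplified, hypothetical illustration of such a DQI-style measure, not the paper's exact formula; the win-rate numbers stand in for estimates that an engine like KataGo would produce.

```python
def decision_quality(winrate_actual: float, winrate_best: float) -> float:
    """Score one human move against the engine's preferred move.

    0.0 means the human move matched the engine's best estimate;
    negative values mean the engine judged the human move worse.
    """
    return winrate_actual - winrate_best

def game_quality(move_pairs):
    """Average the per-move scores over a whole game."""
    scores = [decision_quality(actual, best) for actual, best in move_pairs]
    return sum(scores) / len(scores)

# Illustrative (actual, engine-best) win-rate pairs for three moves.
moves = [(0.52, 0.55), (0.48, 0.48), (0.40, 0.47)]
print(round(game_quality(moves), 3))  # average gap to the engine's best moves
```

Tracking such a per-game average over decades of records is what lets the researchers ask whether human play moved closer to the engine's judgments over time.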
Examining DQI scores over several decades, Shin and his team found that while human players made minimal strides during the first several decades of Go play, substantial improvement appeared immediately after 2016, the year AlphaGo defeated world champion Lee Sedol.
"We find that human decision-making significantly improved following the advent of superhuman AI," Shin said. "This improvement was associated with greater novelty in human decisions."
David Silver, a DeepMind scientist, commented on these findings: "It is amazing to see that human players have adapted so quickly to incorporate these new discoveries into their own play." He added, "These results suggest that humans will adapt and build upon these discoveries to massively increase their potential."
So humans may not yet declare checkmate in the battle against superintelligent game-playing machines, but they are no longer passive pawns, thanks to unprecedented insights into winning strategies offered by AI.
More information: Minkyu Shin et al, Superhuman artificial intelligence can improve human decision-making by increasing novelty, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2214840120
© 2023 Science X Network