A team of researchers at Uber AI Labs in San Francisco has developed a set of learning algorithms that proved better at playing classic video games than human players or other AI systems. In their paper published in the journal Nature, the researchers explain how their algorithms differ from others and why they believe the approach has applications in robotics, language processing and even the design of new drugs.

Reinforcement learning algorithms learn how to do things by synthesizing the information in a large dataset: they recognize patterns and use them to make guesses about new data. This is, for example, how machine-learning systems are used to spot lung cancer in X-rays. But, as the researchers behind this new effort note, such algorithms tend to run into trouble when they encounter data that does not fit the rest of the dataset, which is why these systems can sometimes return incorrect results.
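To make that pattern-recognition step concrete, here is a minimal, purely illustrative Python sketch: it "learns" from labeled examples and guesses the label of a new data point by finding the most similar example seen so far. The toy data, labels and nearest-neighbour rule are assumptions for illustration, not the system described in the paper.

```python
def nearest_neighbor(examples, new_point):
    """Guess a label for new_point from labeled examples.

    examples: list of (features, label) pairs, where features is a
    tuple of numbers. The guess is the label of the closest example.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = min(examples, key=lambda ex: distance(ex[0], new_point))
    return best[1]

# Toy "dataset": two labeled feature vectors.
training_data = [((1.0, 1.0), "healthy"), ((9.0, 8.0), "suspicious")]

# A new point close to a known example gets a sensible guess.
print(nearest_neighbor(training_data, (8.5, 9.0)))   # -> "suspicious"

# A point far from everything in the dataset still gets assigned the
# label of whatever happens to be nearest, which may well be wrong --
# the kind of failure the article describes.
print(nearest_neighbor(training_data, (100.0, -50.0)))
```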

In this new effort, the researchers overcame this problem by adding an algorithm that remembers all the paths a previous algorithm has taken while trying to solve a problem. When it encounters a data point that does not appear to be correct, it goes back to its memory map and tries another route. In terms of playing video games, it retains screen grabs as it plays and, when it finds itself losing, goes back to an earlier point in the game and tries another approach. The algorithm also groups together images that look similar, to work out which point in time it should return to if things go awry.
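To give a rough sense of that remember-and-return idea, here is a minimal Python sketch (not the authors' published code): screen grabs are coarsened so that similar-looking frames land in the same "cell", and an archive keeps, for each cell, the best score seen there and the sequence of actions that reached it, so the system can go back to that point and try something different. The grid size, the number of shades and the random choice of return point are illustrative assumptions.

```python
import random

def cell_key(frame, rows=11, cols=8, shades=8):
    """Coarsen a grayscale screen grab (a 2-D list of 0-255 values) so
    that visually similar frames map to the same key, i.e. the same cell."""
    h, w = len(frame), len(frame[0])
    key = []
    for r in range(rows):
        for c in range(cols):
            pixel = frame[r * h // rows][c * w // cols]
            key.append(pixel * shades // 256)   # quantize to a few shades
    return tuple(key)

class Archive:
    """Remembers, for each cell, the best score seen there and the
    action sequence that reached it, so the agent can go back later."""
    def __init__(self):
        self.cells = {}   # cell key -> (best score, action path)

    def update(self, frame, score, path):
        key = cell_key(frame)
        best = self.cells.get(key)
        if best is None or score > best[0]:
            self.cells[key] = (score, list(path))

    def pick_return_point(self):
        # When progress stalls, pick a remembered state to return to.
        # Here the choice is uniform at random; a real system would
        # favour the more promising cells.
        key = random.choice(list(self.cells))
        return self.cells[key][1]   # actions that reached that cell
```

Replaying a stored action sequence is one simple way of "going back" to a remembered point in the game, and the coarse cell key above stands in for the grouping of similar-looking images the article describes.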

The researchers tested their new approach by adding game rules and a goal: score the most points possible and try to win every time. They then used their system to play 55 Atari games that have, over time, become benchmarks for testing AI systems. The new system beat other AI systems 85.5 percent of the time. It did particularly well at Montezuma's Revenge, scoring higher than any other AI system and beating the human record.

The researchers believe their approach could be ported to other applications, such as image processing or learning by robots.

More information: Adrien Ecoffet et al. First return, then explore, Nature (2021). DOI: 10.1038/s41586-020-03157-9

Journal information: Nature