The Playground

Stay tuned for the upcoming Animal-AI Olympics, brought to you by researchers at the Leverhulme Centre for the Future of Intelligence in Cambridge, UK, and GoodAI, a Prague-based research institute.

As the contest name suggests, this is a contest that involves animals and AI. "The AI agent will have to learn robust behaviours from only pixel inputs and a reward."
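To picture what learning "from only pixel inputs and a reward" means in practice, here is a minimal sketch of the interaction loop every entrant faces. The arena and its API had not been released at the time of writing, so the toy environment, its reward rule, and the random placeholder policy below are illustrative assumptions, not the competition's actual interface:

```python
import numpy as np

class ToyArenaEnv:
    """Hypothetical stand-in for the competition arena (the real one is
    unreleased at the time of writing). It exposes only what the organizers
    describe: pixel observations and a scalar reward."""

    def __init__(self, size=84, max_steps=100):
        self.size = size
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        # The agent sees nothing but a raw RGB image.
        return np.zeros((self.size, self.size, 3), dtype=np.uint8)

    def step(self, action):
        self.steps += 1
        obs = np.random.randint(0, 256, (self.size, self.size, 3), dtype=np.uint8)
        reward = 1.0 if action == 0 else 0.0  # toy rule: pretend action 0 finds food
        done = self.steps >= self.max_steps
        return obs, reward, done

# Observe pixels, act, collect reward -- that is the whole contract.
env = ToyArenaEnv()
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    action = np.random.randint(4)  # placeholder policy; a real entry would learn
    obs, reward, done = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```

A real entry would swap the random action for a policy trained on those pixel observations.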

The Animal-AI challenge offers entrants a share of a $10,000 prize pool. The skills needed to succeed in the tasks will vary in complexity.

In June, the Animal-AI Olympics' full competition goes live. Final results should be available in December.

Ahead of that, stay tuned later this month for important news: (1) the arena, available at the end of April, and (2) a list of the cognitive abilities the tests will cover.

Oscar Schwartz, writing in MIT Technology Review, described how the researchers will make the test happen: they will train "algorithms to master a suite of tasks that have traditionally been used to test animal cognition." The team said methods from the "animal cognition literature" will be used for testing.

According to IEEE Spectrum, the organizers now have about 50 tasks drawn from the animal-intelligence literature. This month they plan to release information about the competition; in June, the competition goes live and entrants can start working on it.

So, why animals? Isn't a game of chess against humans the real AI challenge? Neither parrot nor crow, after all, can play chess, but that is not the point. Matthew Crosby said in IEEE Spectrum, "An AI can be great at one task, but can it solve similar tasks that it hasn't seen before? This competition is testing for exactly that kind of thing. Maybe we'll be surprised by how well the AI agents do."

Crosby is one of the contest's organizers and a postdoctoral researcher at the Leverhulme Centre and at Imperial College London.

The Animal-AI Olympics will pit AIs against tests normally used to study animal intelligence, Donna Lu reported in New Scientist.

"Humans are no longer the best Go players, quiz-show contestants, or even, in some respects, the best doctors," said the Olympics team.

Why bother comparing AI performance with animals?

Nicholas Montegriffo, writing in AndroidPIT, has some answers. "Put the AI in an unfamiliar situation or environment, and it usually fails to apply anything from the skills it learned getting good at a specific task." So it will be especially interesting to see how AI fares on tests drawn from the animal world.

Schwartz similarly drew the contrast: "Usually, AI benchmarks involve mastering a single task, like beating a grandmaster in Go or figuring out how to learn a video game from scratch. AI has been extraordinarily successful in such realms. But when you apply the same AI systems to a totally different task, they are generally hopeless."

The researchers here are planning a different game: testing whether AI can take on what Montegriffo called the natural world.

The test here would have AI behaving with a more general intelligence characteristic of animal species. You usually hear about how well an AI can repeat what it learned. In the new testing environment, "the AI can't just repeat what it learned, but needs to apply its training to a new situation."

The event organizers accept that none of the AI systems will adapt perfectly to every circumstance or post a perfect score. But they hope the best systems will adapt to tackle the different problems they face. The agents will have to be good across the board: the winning agent will be the one that shows good performance on average, said MIT Technology Review.
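As a rough illustration of that scoring idea, ranking by average performance rewards broad competence over narrow mastery. The agent names and per-test scores below are made-up assumptions; the competition's actual scoring details had not been published:

```python
# Hypothetical per-test scores for three entrants (made-up numbers).
scores = {
    "agent_a": [0.9, 0.2, 0.8, 0.1],  # strong on some tests, weak on others
    "agent_b": [0.6, 0.6, 0.5, 0.6],  # consistently decent across the board
    "agent_c": [1.0, 0.0, 0.0, 0.0],  # a one-trick specialist
}

# The winner is the agent with the best mean score across all tests.
averages = {name: sum(s) / len(s) for name, s in scores.items()}
winner = max(averages, key=averages.get)
print(averages)           # {'agent_a': 0.5, 'agent_b': 0.575, 'agent_c': 0.25}
print("winner:", winner)  # agent_b: broad competence beats narrow mastery
```

Under such a rule, a specialist that aces one test but fails the rest loses to a generalist, which is exactly the behaviour the organizers want to reward.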

Under scrutiny: the capacity to adapt quickly to new situations and to translate skills from one type of activity to another. Some of the tests will be easier than others. Some may be basic, said Schwartz, like "requiring the agent to retrieve food from an environment with no obstacles."

Harder tasks? Schwartz named "an understanding of object permanence," knowing that "an object is still there even if it is hidden." Also examined will be "the capacity to make a mental model of an environment in order to navigate it in the dark."

What's next? Beyond December, this could step up the conversation about animal cognition and AI. Just as important, testing AI against animal intelligence should inspire more discussion of the meaning of intelligence itself, a pursuit that has never reached a final answer. Have we really nailed down a satisfactory definition? Will this project add insight into what a working definition should be?

MIT Technology Review reminded readers that when we talk about animal intelligence, it is a "biological intelligence" that is the result "of hundreds of millions of years of evolution." The question remains whether the innate structure of an animal's intelligence can be built into a system.

Perhaps the last word should go to Crosby, quoted in MIT Technology Review. He said the project was more about exploring the differences between minds than about trying to prove equivalence between artificial and biological cognition.

"What we are actually interested in is discovering how to translate between different types of ," he says. "If part of what we learn is where this translation fails, that's a success as far as we're concerned."

In an interview with Eliza Strickland in IEEE Spectrum, he explained that "we're making tasks specifically to test things like generalization and transfer learning. Even if no one does incredibly well in the competition, it will still be useful."

More information: animalaiolympics.com/