DeepMind uses neural network to help explain meta-learning in people
Our virtual recreation of the Harlow Experiment: the agent must shift its gaze toward the object it thinks is associated with a reward. Credit: DeepMind

A team of researchers led by a group at Google subsidiary DeepMind has developed a theory of how human meta-learning works by comparing it to a certain type of deep learning network. In their paper published in the journal Nature Neuroscience, the group suggests that key elements of specialized artificial neural networks might play a role similar to that of dopamine in the brain during meta-learning.

Deep learning networks, while quite impressive once trained, still fall short in one area: they take a lot of time and effort to get up to speed. A recent example is networks programmed to play old computer games such as Pong. A human can master the basics and become quite proficient after playing for just an afternoon; a neural network, on the other hand, requires hundreds of hours of training. Neuroscientists have suggested that this difference is due to what is called meta-learning, in which a person (or animal) learns something new by drawing on what they have learned in the past. Monkeys, for example, can learn to pick the rewarded one of two unfamiliar objects almost immediately once they have solved many similar problems by trial and error, a capacity first demonstrated in the Harlow experiment.

Researchers, including those at DeepMind, have recently made strides in getting computers to engage in meta-learning. The process by which machines do it is well understood, of course, since the researchers designed it themselves. How it happens in humans, though, is still not clear. In this new effort, the team at DeepMind suggests that one of the key factors in getting computers to engage in meta-learning might be similar to something found in the human brain.

To come to this conclusion, the team developed six computer-based meta-learning experiments modeled on neuroscience experiments originally conducted on animals, one of which was the Harlow experiment. The researchers found that their deep network's responses were similar to those of the animals in the original experiments. Furthermore, they noted that the common ingredient across the experiments was something they call an agent, which was required for meta-learning to occur. This, they note, might indicate that animal brains rely on a similar biological agent to bring meta-learning about. And they suggest that agent might be the neurotransmitter dopamine.
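To make the setup concrete, here is a minimal sketch of meta-reinforcement learning in PyTorch. This is not DeepMind's implementation: the agent architecture, task parameters, and training algorithm (REINFORCE) are illustrative assumptions. The idea follows the setup the paper describes: a recurrent (LSTM) agent receives its previous action and reward as input and plays a Harlow-style task in which one of two objects is rewarded per episode, while a slow outer training loop, playing the role the theory assigns to dopamine, shapes the recurrent dynamics so the network learns within each episode which object pays.

```python
# Minimal meta-RL sketch (illustrative, not DeepMind's code).
# Each episode, one of two "objects" is randomly chosen to carry reward.
# Within an episode the agent must discover which one pays and stick with
# it; across episodes, slow gradient updates teach the recurrent network
# a fast in-episode learning rule.
import torch
import torch.nn as nn

N_ACTIONS, TRIALS, EPISODES, HIDDEN = 2, 6, 5000, 32

class MetaRLAgent(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: one-hot of the previous action plus the previous reward.
        self.lstm = nn.LSTMCell(N_ACTIONS + 1, HIDDEN)
        self.policy = nn.Linear(HIDDEN, N_ACTIONS)

    def forward(self, x, state):
        h, c = self.lstm(x, state)
        return torch.distributions.Categorical(logits=self.policy(h)), (h, c)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

for episode in range(EPISODES):
    rewarded = torch.randint(N_ACTIONS, (1,)).item()   # hidden "good" object
    state = (torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN))
    prev = torch.zeros(1, N_ACTIONS + 1)               # no action/reward yet
    log_probs, rewards = [], []
    for t in range(TRIALS):
        dist, state = agent(prev, state)
        action = dist.sample()
        r = 1.0 if action.item() == rewarded else 0.0
        log_probs.append(dist.log_prob(action))
        rewards.append(r)
        prev = torch.zeros(1, N_ACTIONS + 1)
        prev[0, action.item()] = 1.0                   # feed back last action
        prev[0, -1] = r                                # ...and last reward
    # REINFORCE: the slow "outer loop" the theory likens to dopamine.
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs).squeeze() * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After a few thousand episodes, reward on the later trials of each episode climbs well above chance even though the weights are frozen within an episode: the fast within-episode learning lives in the LSTM's activations, which is the paper's analogy for the prefrontal cortex operating as its own free-standing learning system.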

More information: Jane X. Wang et al. Prefrontal cortex as a meta-reinforcement learning system, Nature Neuroscience (2018). DOI: 10.1038/s41593-018-0147-8

Abstract
Over the past 20 years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine 'stamps in' associations between situations, actions and rewards by modulating the strength of synaptic connections between neurons. However, a growing number of recent findings have placed this standard model under strain. We now draw on recent advances in artificial intelligence to introduce a new theory of reward-based learning. Here, the dopamine system trains another part of the brain, the prefrontal cortex, to operate as its own free-standing learning system. This new perspective accommodates the findings that motivated the standard model, but also deals gracefully with a wider range of observations, providing a fresh foundation for future research.

Journal information: Nature Neuroscience

© 2018 Tech Xplore

Citation: DeepMind uses neural network to help explain meta-learning in people (2018, May 15) retrieved 28 March 2024 from https://techxplore.com/news/2018-05-deepmind-neural-network-meta-learning-people.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
