The brain inspires a new type of artificial intelligence

Processing an event with multiple objects. A synchronous input where all objects are presented simultaneously to a computer (left), versus an asynchronous input where objects are presented with temporal order to the brain (right). Credit: Prof. Ido Kanter

Machine learning, introduced 70 years ago, is based on evidence of the dynamics of learning in the brain. Using the speed of modern computers and large datasets, deep learning algorithms have recently produced results comparable to those of human experts in various application fields, but with characteristics that are distant from current knowledge of learning in neuroscience.

Using advanced experiments on neuronal cultures and large-scale simulations, a group of scientists at Bar-Ilan University in Israel has demonstrated a new type of ultrafast artificial intelligence algorithm, based on the brain's very slow dynamics, which outperforms the learning rates achieved to date by state-of-the-art learning algorithms.

In an article published today in the journal Scientific Reports, the researchers rebuild the bridge between neuroscience and advanced artificial intelligence algorithms that has lain virtually unused for almost 70 years.

"The current scientific and technological viewpoint is that neurobiology and artificial intelligence are two distinct disciplines that advanced independently," said the study's lead author, Prof. Ido Kanter, of Bar-Ilan University's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center. "The absence of expectedly reciprocal influence is puzzling."

"The number of neurons in a brain is less than the number of bits in a typical disc size of modern personal computers, and the computational speed of the brain is like the second hand on a clock, even slower than the first computer invented over 70 years ago," he continued. "In addition, the brain's learning rules are very complicated and remote from the principles of learning steps in current artificial intelligence algorithms," added Prof. Kanter, whose research team includes Herut Uzan, Shira Sardi, Amir Goldental and Roni Vardi.

Brain dynamics do not comply with a well-defined clock synchronized for all nerve cells, since the biological scheme has to cope with asynchronous inputs, as physical reality develops. "When looking ahead one immediately observes a frame with multiple objects. For instance, while driving one observes cars, pedestrian crossings, and road signs, and can easily identify their temporal ordering and relative positions," said Prof. Kanter. "Biological hardware (learning rules) is designed to deal with asynchronous inputs and refine their relative information." In contrast, traditional artificial intelligence algorithms are based on synchronous inputs, hence the relative timing of different inputs constituting the same frame is typically ignored.
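The contrast between synchronous frames and asynchronous event streams can be made concrete with a toy sketch. This is purely illustrative, not the paper's algorithm; the object names and timestamps are invented for the driving example above:

```python
# Illustrative sketch (not the study's method): the same "frame" of
# objects presented synchronously versus as a timed event stream.

# Synchronous input: all objects arrive at once; relative timing is lost.
synchronous_frame = {"car", "pedestrian_crossing", "road_sign"}

# Asynchronous input: each object carries an arrival time, so temporal
# order and relative delays are part of the signal itself.
asynchronous_events = [
    (0.00, "road_sign"),
    (0.03, "car"),
    (0.07, "pedestrian_crossing"),
]

# A timing-aware learner can recover the ordering...
order = [obj for _, obj in sorted(asynchronous_events)]
print(order)  # ['road_sign', 'car', 'pedestrian_crossing']

# ...and the relative delays between events, information that a
# synchronous frame discards entirely.
delays = [t2 - t1 for (t1, _), (t2, _) in
          zip(asynchronous_events, asynchronous_events[1:])]
print(delays)
```

In this picture, a traditional algorithm sees only `synchronous_frame`, while a biologically inspired learner of the kind the study describes would also exploit the timing carried by `asynchronous_events`.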

The new study demonstrates that ultrafast learning rates are surprisingly identical for small and large networks. Hence, say the researchers, "the disadvantage of the brain's complicated learning scheme is actually an advantage." Another important finding is that learning can occur without discrete learning steps, through self-adaptation according to asynchronous inputs. This type of learning-without-learning occurs in the dendrites, the branched terminals of each neuron, as was recently observed experimentally. In addition, network dynamics under dendritic learning are governed by weak weights which were previously deemed insignificant.

The idea of efficient learning based on the brain's very slow dynamics offers an opportunity to implement a new class of advanced artificial intelligence on fast computers. It calls for the reinitiation of the bridge from neurobiology to artificial intelligence and, as the research group concludes, "Insights of fundamental principles of our brain have to be once again at the center of future artificial intelligence."

More information: Herut Uzan et al. Biological learning curves outperform existing ones in artificial intelligence algorithms, Scientific Reports (2019). DOI: 10.1038/s41598-019-48016-4
Journal information: Scientific Reports

Citation: The brain inspires a new type of artificial intelligence (2019, August 9) retrieved 24 August 2019 from https://techxplore.com/news/2019-08-brain-artificial-intelligence.html

User comments

Aug 09, 2019
"The number of neurons in a brain is less than the number of bits in a typical disc size of modern personal computers, and the computational speed of the brain is like the second hand on a clock, even slower than the first computer invented over 70 years ago,"


That's a gross mis-characterisation, since each neuron actually needs multiple kilobytes of data to describe. Each neuron is more like a tiny microprocessor running an I/O program than a bit of data.

The brain has approximately 86 billion neurons. Let's suppose you need a kilobyte of code to describe the function of a single neuron, which is still a very small amount of data. That's about 86 trillion bytes, 86 x 10^12 bytes, or 86 Terabytes. That's about 100 times greater than the "typical size of modern personal computers".

And then, there's the data throughput. The brain operates at a rate of some tens of Hertz, which means a data rate on the order of 1000 Terabytes per second.
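The commenter's back-of-envelope arithmetic can be checked directly. All figures here are the comment's rough assumptions (1 KB per neuron, an update rate of tens of Hertz), not measurements:

```python
# Back-of-envelope check of the commenter's estimate.
NEURONS = 86e9            # ~86 billion neurons in the human brain
BYTES_PER_NEURON = 1000   # assumed 1 KB (decimal) to describe one neuron

total_bytes = NEURONS * BYTES_PER_NEURON
total_tb = total_bytes / 1e12
print(f"Storage: ~{total_tb:.0f} TB")        # ~86 TB, as the comment states

FIRING_RATE_HZ = 10       # assumed "tens of Hertz" update rate
throughput_tb_s = total_tb * FIRING_RATE_HZ
print(f"Throughput: ~{throughput_tb_s:.0f} TB/s")  # order of 1000 TB/s
```

With these assumptions the storage figure matches the comment's 86 TB, and updating all of it at tens of Hertz lands in the claimed ~1000 TB/s range.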

Aug 11, 2019
Well, good if they start to look at this across disciplines; I remember a 2006 paper that worked out how a fairly simple biologically inspired cortex model could mimic symbol training in neural networks, such that a) the network grouped data physically in adjacent nodes and b) it could not be over-trained.

It was easy to understand what the biological model did, and it was easy to train, something AI neural networks struggled with at the time. They still struggle, I think, since one solution for understanding the dispersed node information in modern, now-successful deep neural networks (DNNs) is to use a second network to interpret what the first reacts to. (DNNs are Clever Hans routines; they can be useful, but beware that what they really learn can be uncorrelated outside the training scope. E.g., a "boat" identifier that actually keys on the interrupted water horizon in a boat-on-water image may later identify an island as "a boat".)
