Learning on tree architectures shown to outperform a convolutional feedforward network

Is brain learning weaker than artificial intelligence?
Scheme of a simple neural network based on dendritic tree (left) vs. a complex artificial intelligence deep learning architecture (right). Credit: Prof. Ido Kanter, Bar-Ilan University

Traditionally, artificial intelligence stems from human brain dynamics. However, brain learning is restricted in several significant aspects compared to deep learning (DL). First, efficient DL wiring structures (architectures) consist of many tens of consecutive feedforward layers, whereas brain dynamics consist of only a few. Second, DL architectures typically include many consecutive filter layers, which are essential for identifying one of the input classes.

If the input is a car, for example, the first filter identifies wheels, the second identifies doors, the third identifies lights, and after many additional filters it becomes clear that the input object is indeed a car. Brain dynamics, by contrast, contain just a single filter, located close to the retina. The final component is the mathematically complex DL training procedure, which is evidently far beyond biological realization.
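The "many consecutive filter layers" idea can be sketched as repeated convolution plus a nonlinearity, each layer transforming the previous layer's output. This is a toy illustration only, not the paper's model; the 1D signal, filter sizes, and depth of five are arbitrary assumptions.

```python
import numpy as np

def conv_layer(signal, kernel):
    # One "filter layer": convolve, then apply a ReLU nonlinearity.
    return np.maximum(np.convolve(signal, kernel, mode="same"), 0.0)

rng = np.random.default_rng(1)
x = rng.normal(size=32)                            # stand-in for one input row
filters = [rng.normal(size=3) for _ in range(5)]   # five stacked filter layers

h = x
for k in filters:
    h = conv_layer(h, k)   # each layer builds on the previous one's features
```

After the loop, `h` holds the activations produced by five consecutive filter layers; in a real DL architecture there may be tens of such layers, each specializing in progressively more abstract features.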

Can the brain, with its limited realization of precise mathematical operations, compete with advanced artificial intelligence systems implemented on fast and parallel computers? From our daily experience, we know that for many tasks the answer is yes. Why is this and, given this affirmative answer, can one build a new type of efficient artificial intelligence inspired by the brain? In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel solve this puzzle.

"We've shown that efficient learning on an artificial tree architecture, where each weight has a single route to an output unit, can achieve better classification success rates than previously achieved by DL architectures consisting of more layers and filters. This finding paves the way for efficient, biologically inspired new AI hardware and algorithms," said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.
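The defining property quoted above, that each weight has a single route to the output unit, can be sketched by partitioning the input into disjoint branches, each feeding exactly one hidden unit. This is a minimal hypothetical sketch under that assumption, not the architecture from the Scientific Reports paper; all names and sizes are illustrative.

```python
import numpy as np

def tree_forward(x, branch_w, out_w, n_branches):
    # Split the input into disjoint, equally sized branches.
    branches = np.split(x, n_branches)
    # Each branch has its own weights; no weight is shared across branches,
    # so every weight influences the output along exactly one route.
    hidden = np.array([np.tanh(w @ b) for w, b in zip(branch_w, branches)])
    # A single output unit combines the branch activations.
    return out_w @ hidden

rng = np.random.default_rng(0)
n_inputs, n_branches = 12, 4
x = rng.normal(size=n_inputs)
branch_w = [rng.normal(size=n_inputs // n_branches) for _ in range(n_branches)]
out_w = rng.normal(size=n_branches)

y = tree_forward(x, branch_w, out_w, n_branches)
```

Because the branches are disjoint, perturbing a weight in one branch can only affect the output through that branch's hidden unit, which is the single-route property the quote describes.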

"Highly pruned tree architectures represent a step toward a plausible biological realization of efficient dendritic tree learning by a single neuron or several neurons, with reduced complexity, and toward a biological realization of the backpropagation mechanism, which is currently the central technique in AI," added Yuval Meir, a Ph.D. student and contributor to this work.


Efficient dendritic tree learning builds on previous research by Kanter and his experimental team—conducted by Dr. Roni Vardi—which found evidence of sub-dendritic adaptation in neuronal cultures, along with other anisotropic properties of neurons, such as different spike waveforms, refractory periods, and maximal transmission rates.

Efficient implementation of highly pruned tree training requires a new type of hardware that differs from current GPUs, which are better suited to the prevailing DL strategy; such new hardware will be needed to imitate brain dynamics efficiently.

More information: Yuval Meir et al, Learning on tree architectures outperforms a convolutional feedforward network, Scientific Reports (2023). DOI: 10.1038/s41598-023-27986-6

Journal information: Scientific Reports
Citation: Learning on tree architectures shown to outperform a convolutional feedforward network (2023, January 30), retrieved 17 April 2024.
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
