Artificial Intelligence also has illusory perceptions

Researchers from the Image Processing Laboratory (IPL) of the University of Valencia and the Department of Information and Communication Technologies (DTIC) of Pompeu Fabra University (UPF) have shown that convolutional neural networks (CNNs), a type of artificial neural network commonly used in detection systems, are also affected by visual illusions, just as the human brain is.

In a convolutional neural network, neurons are arranged in receptive fields in much the same way as neurons in the visual cortex of a biological brain. Today, CNNs are found in a wide variety of autonomous systems, such as face detection and recognition systems and self-driving vehicles.

The study, published in Vision Research, analyzes the phenomenon of visual illusions in convolutional networks compared with their effect on human vision. After training CNNs on simple tasks such as removing noise or blur, the scientists found that these networks are also susceptible to perceiving reality in a biased way, driven by visual illusions of brightness and color.
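
For readers who want a concrete picture of the kind of low-level model involved, the following sketch (an illustrative assumption, not the networks used in the study) builds a tiny convolutional denoiser in PyTorch and trains it to remove Gaussian noise from image patches; the architecture, noise level, and training details are arbitrary choices.

```python
# Minimal sketch (illustrative only): a tiny convolutional denoiser of the kind
# the article describes. Architecture and hyperparameters are assumptions,
# not those of the study.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # Each convolution sees only a small local neighborhood, loosely
        # analogous to the receptive fields of neurons in the visual cortex.
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train on random patches corrupted with Gaussian noise (a stand-in for the
# natural image patches used in real experiments).
for step in range(100):
    clean = torch.rand(8, 3, 32, 32)
    noisy = clean + 0.1 * torch.randn_like(clean)
    loss = loss_fn(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, such a network can be probed with classical brightness or color illusion stimuli and its responses compared with human judgments, which is the spirit of the Vision Research analysis.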

Also, the article says, "some illusions of networks may be inconsistent with the perception of humans." This means that the visual illusions that occur in a CNN do not necessarily coincide with biological illusory perceptions; these artificial networks can exhibit different illusions that are foreign to the human brain. "This is one of the factors that leads us to believe that it is not possible to establish analogies between the simple concatenation of artificial neural networks and the much more complex human visual system," says Jesús Malo, professor of optics and vision sciences and researcher at the Image Processing Laboratory of the University of Valencia.

They propose a paradigm shift

Along these lines, the team has just published another article in Scientific Reports that details the limits and differences between the two systems, and whose results lead the authors to warn against using CNNs to study human vision. "CNNs are based on the behavior of biological neurons, in particular on their basic structure formed by the concatenation of modules made up of a linear operation (sums and products) followed by a non-linear one (saturation), but this conventional formulation is too simple. In addition to the intrinsic limitations of these artificial networks to model vision, the non-linear behavior of flexible architectures can be very different from that of the biological visual system," sums up Malo, co-author of the articles from the University of Valencia.
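
As a rough illustration of the "conventional formulation" Malo describes, the snippet below (an assumed toy example, not code from the papers) builds each module as a linear convolution followed by a fixed saturating non-linearity and stacks several of them.

```python
# Sketch of the conventional formulation: each module is a linear operation
# (sums and products, here a convolution) followed by a pointwise saturating
# non-linearity. Purely illustrative; layer sizes are arbitrary.
import torch
import torch.nn as nn

def linear_plus_saturation(in_channels, out_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),  # linear stage
        nn.Tanh(),                                                       # saturation stage
    )

# The deep concatenation criticized in the article is just a stack of such modules.
conventional_cnn = nn.Sequential(
    linear_plus_saturation(3, 16),
    linear_plus_saturation(16, 16),
    linear_plus_saturation(16, 3),
)

x = torch.rand(1, 3, 32, 32)
print(conventional_cnn(x).shape)  # torch.Size([1, 3, 32, 32])
```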

The text argues that artificial neural networks with intrinsically non-linear bio-inspired modules, rather than the usual excessively deep concatenations of linear + non-linear modules, not only better emulate basic human perception but can also deliver higher performance in general-purpose applications. "Our results suggest a paradigm shift for both vision science and artificial intelligence," concludes Jesús Malo.
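
One well-known example of an intrinsically non-linear, bio-inspired operation is divisive normalization, in which each response is divided by a pooled measure of neighboring activity. The sketch below is an assumed illustration of that general idea, not the specific model proposed in the Scientific Reports paper.

```python
# Illustrative sketch of an intrinsically non-linear module: divisive
# normalization, where each unit's linear response is suppressed by the pooled
# activity of its neighbors. All parameters here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivisiveNormalization(nn.Module):
    def __init__(self, channels, pool_size=5, sigma=0.1):
        super().__init__()
        self.linear = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.pool_size = pool_size
        self.sigma = sigma  # semi-saturation constant

    def forward(self, x):
        response = self.linear(x)
        # Pool squared responses over a local spatial neighborhood.
        energy = F.avg_pool2d(response ** 2, self.pool_size, stride=1,
                              padding=self.pool_size // 2)
        # Divisive step: the non-linearity is built into the module itself
        # rather than appended as a fixed pointwise saturation.
        return response / (self.sigma + torch.sqrt(energy))

x = torch.rand(1, 3, 32, 32)
print(DivisiveNormalization(3)(x).shape)  # torch.Size([1, 3, 32, 32])
```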

More information: Marcelo Bertalmío et al. Evidence for the intrinsically nonlinear nature of receptive fields in vision, Scientific Reports (2020). DOI: 10.1038/s41598-020-73113-0

A. Gomez-Villa et al. Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications, Vision Research (2020). DOI: 10.1016/j.visres.2020.07.010

Provided by Asociacion RUVID
