A computational shortcut for neural networks

Neural networks (centre) can be used to investigate phase transitions, for instance of magnetic materials (arrows). Credit: Department of Physics, University of Basel

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.

Neural networks are based on the principle of operation of the brain. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.

For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.

Julian Arnold and Frank Schäfer, two Ph.D. students in the research group of Prof. Dr. Christoph Bruder at the University of Basel, have now derived mathematical expressions with which such phase transitions can be detected faster than before. They recently published their results in Physical Review X.

Skipping training saves time

A neural network learns by systematically varying parameters in many training rounds in order to make the predictions calculated by the network match the training data fed into it more and more closely. That training data can be the pixels of pictures or, in fact, the results of measurements on a physical system exhibiting phase transitions about which one would like to learn something.
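
To make this concrete, here is a minimal sketch of such a training loop in Python. Everything in it (the tiny two-layer network, the synthetic data, the learning rate) is an illustrative assumption rather than anything used in the study; it merely shows parameters being nudged, round after round, until the network's predictions match the training data.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: 2D inputs x with binary labels y
# (think "phase 0" vs. "phase 1"). Purely illustrative.
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

# A tiny network: one hidden tanh layer, sigmoid output.
# The entries of w1 and w2 are the parameters being varied.
w1 = 0.5 * rng.normal(size=(2, 16))
w2 = 0.5 * rng.normal(size=(16,))

def forward(x):
    h = np.tanh(x @ w1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2)))  # prediction in [0, 1]
    return h, p

lr = 0.5
for step in range(2000):  # repeated training rounds
    h, p = forward(x)
    # Gradient of the mean squared error between predictions and labels,
    # propagated back through both layers.
    g = 2.0 * (p - y) * p * (1.0 - p) / len(x)
    w2 -= lr * (h.T @ g)
    w1 -= lr * (x.T @ (np.outer(g, w2) * (1.0 - h**2)))

print("final mean squared error:", np.mean((forward(x)[1] - y) ** 2))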

"Neural networks have already become quite good at detecting phase transitions", says Arnold, "but how exactly they do it usually remains completely obscure." To change that situation and shine some light into the "black box" of a neural network, Arnold and Schäfer looked at the special case of networks with an infinite number of parameters which, in principle, also go through infinitely many training rounds.

It has long been known that the predictions of such networks always tend towards a certain optimal solution. Arnold and Schäfer took this as a starting point for deriving mathematical formulas that allow one to directly calculate that optimal solution without actually having to train the network. "That shortcut enormously reduces the computing time", Arnold explains: "The time it takes to calculate our solution is only as long as a single training round of a small network."
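
As a rough illustration of what such a shortcut can look like, the sketch below computes an optimal prediction directly from the statistics of the data, with no training loop at all. It rests on a standard result: in the limit of unlimited capacity and training, the output that minimizes the mean squared error is the Bayes-optimal predictor, which for two equally sized data sets labeled 0 and 1 reduces to P1(x) / (P0(x) + P1(x)). The data and the histogram estimator here are illustrative assumptions; the paper's actual expressions are more detailed and cover several detection schemes.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative measurement data from two phases of some system
# (e.g., a scalar observable sampled on either side of a transition).
samples_phase0 = rng.normal(loc=-1.0, scale=0.7, size=5000)
samples_phase1 = rng.normal(loc=+1.0, scale=0.7, size=5000)

# Estimate the probability distribution of the data in each phase.
bins = np.linspace(-4.0, 4.0, 81)
p0, _ = np.histogram(samples_phase0, bins=bins, density=True)
p1, _ = np.histogram(samples_phase1, bins=bins, density=True)

# Optimal predictor P1(x) / (P0(x) + P1(x)), evaluated on the bins;
# bins where no data fell are left undefined (NaN).
total = p0 + p1
y_opt = np.divide(p1, total, out=np.full_like(total, np.nan), where=total > 0)

centers = 0.5 * (bins[:-1] + bins[1:])
for c, v in zip(centers[::10], y_opt[::10]):
    print(f"x = {c:+.2f}   optimal output = {v:.3f}")

Note that estimating the two histograms takes only a single pass over the data, which is in the spirit of the "single training round" cost quoted above.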

Insight into the network

In addition to saving time, the method developed by the Basel physicists also has the advantage that the derived equations give some insight into the functioning of the neural networks and, hence, of the physical systems under investigation.

So far, Arnold and Schäfer have tested their method on computer-generated data. Soon, they also want to apply the method to real measurement data. In the future, this could make it possible to detect as yet unknown phase transitions, for instance in quantum simulators or in novel materials.

More information: Julian Arnold et al, Replacing Neural Networks by Optimal Analytical Predictors for the Detection of Phase Transitions, Physical Review X (2022). DOI: 10.1103/PhysRevX.12.031044
