June 14, 2023


Hybrid AI-powered computer vision combines physics and big data

Graphic showing two techniques to incorporate physics into machine learning pipelines—residual physics (top) and physical fusion (bottom). Credit: Achuta Kadambi / UCLA Samueli
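In code, the two patterns in the graphic can be summarized roughly as follows. This is a minimal, illustrative sketch rather than the authors' implementation; the `physics_model` callable and the small networks are placeholders for whatever physics-based predictor and learned components a given application would use.

```python
import torch
import torch.nn as nn

class ResidualPhysics(nn.Module):
    """Residual physics: the network learns only a correction (residual)
    on top of a physics-based prediction."""
    def __init__(self, physics_model, in_dim, out_dim):
        super().__init__()
        self.physics_model = physics_model  # known physics, e.g. an optics or motion model
        self.residual_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)
        )

    def forward(self, x):
        return self.physics_model(x) + self.residual_net(x)


class PhysicalFusion(nn.Module):
    """Physical fusion: physics-derived features are concatenated with
    the raw data features and passed to a learned prediction head."""
    def __init__(self, physics_model, in_dim, phys_dim, out_dim):
        super().__init__()
        self.physics_model = physics_model
        self.head = nn.Sequential(
            nn.Linear(in_dim + phys_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)
        )

    def forward(self, x):
        phys_features = self.physics_model(x)  # physics branch
        return self.head(torch.cat([x, phys_features], dim=-1))
```

In the residual pattern the learned component only has to capture what the physics model misses, while in the fusion pattern the physics output is treated as an additional input feature for a learned head.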

Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.

Published in Nature Machine Intelligence, the study offers an overview of a hybrid methodology designed to improve how AI-based machinery senses, interacts and responds to its environment in real time—as in how autonomous vehicles move and maneuver, or how robots use the improved technology to carry out precision actions.

Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. While such images are formed through the physics of light and mechanics, traditional computer vision techniques have predominantly relied on data-driven machine learning to drive performance. Physics-based research has, on a separate track, explored the physical principles behind many computer vision challenges.

It has been a challenge to incorporate an understanding of physics—the laws that govern mass, motion and more—into the development of neural networks, in which AIs modeled loosely on the human brain use billions of interconnected nodes to crunch massive image data sets until they gain an understanding of what they "see." But there are now a few promising lines of research that seek to add elements of physics awareness into already robust data-driven networks.

The UCLA study aims to harness the power of both the deep knowledge gleaned from data and the real-world know-how of physics to create a hybrid AI with enhanced capabilities.

"Visual machines—cars, robots, or health instruments that use images to perceive the world—are ultimately doing tasks in our physical world," said the study's corresponding author Achuta Kadambi, an assistant professor of electrical and computer engineering at the UCLA Samueli School of Engineering. "Physics-aware forms of inference can enable cars to drive more safely or surgical robots to be more precise."

The research team outlined three ways in which physics and data are starting to be combined into computer vision artificial intelligence:

- Incorporating physics into AI data sets, for example by tagging objects with physical information such as how fast they can move or how much they weigh.
- Incorporating physics into network architectures, for example by running data through a filter that encodes physical properties.
- Incorporating physics into network loss functions, using physics-based knowledge to help the AI interpret its training data, as sketched below.
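As one illustration of the loss-function route, a physics term can be added to an ordinary data-fitting loss so that predictions violating known dynamics are penalized. The sketch below is a generic example written for this article, not code from the study; it assumes predicted object heights should obey free-fall dynamics, and the names and weighting factor `lam` are arbitrary choices.

```python
import torch

def physics_informed_loss(pred_traj, true_traj, dt, g=9.81, lam=0.1):
    """Data-fitting loss plus a physics penalty.

    pred_traj, true_traj: (batch, T) tensors of predicted and observed heights.
    The physics term penalizes deviations of the predicted acceleration
    (second finite difference) from the free-fall acceleration -g.
    """
    data_loss = torch.mean((pred_traj - true_traj) ** 2)

    # Second finite difference approximates acceleration along the trajectory.
    accel = (pred_traj[:, 2:] - 2 * pred_traj[:, 1:-1] + pred_traj[:, :-2]) / dt ** 2
    physics_loss = torch.mean((accel + g) ** 2)

    return data_loss + lam * physics_loss
```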

These three lines of investigation have already yielded encouraging results in improving computer vision. For example, the hybrid approach allows AI to track and predict an object's motion more precisely, and it can produce accurate, high-resolution images of scenes obscured by inclement weather.
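As a concrete example of the motion-prediction claim, a tracker can combine a simple constant-velocity physics prior with a learned correction, so the network only needs to model what the prior misses (drag, interactions, measurement noise). Again, this is an illustrative sketch rather than the study's method; `residual_net` stands in for any small learned model.

```python
import torch

def predict_next_position(pos_t, pos_tm1, residual_net, dt):
    """Predict an object's next position from its two most recent positions.

    A constant-velocity model supplies the physics-based prediction, and a
    learned network adds a correction for whatever the simple model misses.
    """
    velocity = (pos_t - pos_tm1) / dt        # finite-difference velocity estimate
    physics_pred = pos_t + velocity * dt     # constant-velocity physics prior
    correction = residual_net(torch.cat([pos_t, velocity], dim=-1))
    return physics_pred + correction
```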

With continued progress in this dual modality approach, deep learning-based AIs may even begin to learn the laws of physics on their own, according to the researchers.

The other authors on the paper are Army Research Laboratory computer scientist Celso de Melo and UCLA faculty members Stefano Soatto, a professor of computer science; Cho-Jui Hsieh, an associate professor of computer science; and Mani Srivastava, a professor of electrical and computer engineering and of computer science.

More information: Achuta Kadambi et al, Incorporating physics into data-driven computer vision, Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00662-0

Journal information: Nature Machine Intelligence
