New AI sensor technology for autonomous driving

TU Graz is working together with Infineon on new, robust radar sensors for autonomous driving. © Infineon

Researchers at TU Graz have modeled an AI system for automotive radar sensors that filters out interfering signals caused by other radar sensors and dramatically improves object detection. Now the system is to be made more robust to weather and environmental influences as well as new types of interference.

In order for driver assistance and autonomous driving systems in modern cars to perceive their environment and function reliably in all conceivable situations, they have to rely on sensors such as cameras, lidar, ultrasound and radar. Radar sensors in particular are indispensable components: they provide the vehicle with location and speed information about surrounding objects. However, they have to deal with numerous disruptive and environmental influences in traffic. Interference from other (radar) equipment and extreme weather conditions create noise that degrades the quality of the radar measurement.

"The better the denoising of interfering signals works, the more reliably the position and speed of objects can be determined," explains Franz Pernkopf from the Institute of Signal Processing and Speech Communication. Together with his team and with partners from Infineon, he developed an AI system based on that mitigates mutual interference in radar signals, far surpassing the current state of the art. They now want to optimize this so that it also works outside of learned patterns and recognizes objects even more reliably.

Resource-efficient and intelligent signal processing

To this end, the researchers first developed model architectures for automatic noise suppression based on so-called convolutional neural networks (CNNs). "These architectures are modeled on the layer hierarchy of our visual cortex and are already being used successfully in image and signal processing," says Pernkopf. CNNs filter the visual information, recognize connections and complete the image using familiar patterns. Due to their structure, they consume considerably less memory than other neural networks, but they still exceed the capacities available on radar sensors for autonomous driving.
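
To make the idea concrete, here is a minimal sketch of what a CNN-based interference-mitigation model for radar data could look like. The layer count, channel sizes and the two-channel range-Doppler input format are illustrative assumptions, not the actual TU Graz architecture.

```python
# Minimal sketch of a CNN-based radar interference-mitigation model.
# Layer count, channel sizes and the range-Doppler input format are
# illustrative assumptions, not the published TU Graz architecture.
import torch
import torch.nn as nn

class RadarDenoiserCNN(nn.Module):
    def __init__(self, channels: int = 16, layers: int = 3):
        super().__init__()
        blocks = []
        in_ch = 2  # real and imaginary part of the range-Doppler map
        for _ in range(layers):
            blocks += [nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                       nn.ReLU()]
            in_ch = channels
        # Final layer maps back to a clean (denoised) two-channel map.
        blocks.append(nn.Conv2d(in_ch, 2, kernel_size=3, padding=1))
        self.net = nn.Sequential(*blocks)

    def forward(self, noisy_rd_map: torch.Tensor) -> torch.Tensor:
        return self.net(noisy_rd_map)

# Usage: a batch of interfered range-Doppler maps (batch, 2, range, doppler).
model = RadarDenoiserCNN()
clean_estimate = model(torch.randn(4, 2, 128, 64))
```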

Compressed AI in chip format

The goal was to become even more efficient. To this end, the TU Graz team trained several of these neural networks with noisy data and the desired output values. In experiments, they identified particularly small and fast model architectures by analyzing the memory footprint and the number of computing operations required per denoising pass. The most efficient models were then compressed further by reducing the bit widths, i.e. the number of bits used to store the model parameters. The result was an AI model that combines high filter performance with low energy consumption. The denoising results, with an F1 score (a measure of the accuracy of a test) of 89 percent, almost match the object detection rate achieved with undisturbed radar signals; the interfering signals are thus almost completely removed from the measurement signal.
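
The training setup described above can be sketched as follows: the network sees interfered radar snapshots as input and clean snapshots as the desired output, and detection quality is summarized with an F1 score. The variable names and the simple reconstruction loss are assumptions for illustration, not the exact training recipe used in the project.

```python
# Sketch of training with noisy inputs and clean targets, plus the F1 metric
# mentioned in the article. Loss choice and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, noisy_batch, clean_batch):
    optimizer.zero_grad()
    denoised = model(noisy_batch)
    loss = F.mse_loss(denoised, clean_batch)   # reconstruct the clean signal
    loss.backward()
    optimizer.step()
    return loss.item()

def f1_score(detected: set, ground_truth: set) -> float:
    """F1 = harmonic mean of precision and recall over detected objects."""
    tp = len(detected & ground_truth)
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```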

Expressed in figures: with a bit width of 8 bits, the model achieves the same performance as comparable models with a bit width of 32 bits, but only requires 218 kilobytes of memory. This corresponds to a storage space reduction of 75 percent, which means that the model far surpasses the current state of the art.
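
A quick back-of-the-envelope check of these figures: storing the same number of parameters with 8-bit instead of 32-bit values cuts the memory requirement by 75 percent. The parameter count below is only an estimate derived from the reported 218 kilobytes.

```python
# Back-of-the-envelope check of the reported numbers; the parameter count
# is an estimate, not a figure from the article.
bits_full, bits_quantized = 32, 8
reduction = 1 - bits_quantized / bits_full                      # 0.75 -> 75 %
quantized_size_kb = 218
full_size_kb = quantized_size_kb * bits_full / bits_quantized   # ~872 kB at 32 bit
approx_params = quantized_size_kb * 1024                        # ~223k parameters at 1 byte each
print(f"{reduction:.0%} smaller: {full_size_kb:.0f} kB -> {quantized_size_kb} kB "
      f"(~{approx_params/1e3:.0f}k parameters)")
```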

Focus on robustness and explainability

In the FFG project REPAIR (Robust and ExPlainable AI for Radar sensors), Pernkopf and his team are now working together with Infineon over the next three years to optimize their development. Says Pernkopf: "For our successful tests, we used data (note: interfering signals) similar to what we used for the training. We now want to improve the model so that it still works when the input signal deviates significantly from learned patterns." This would make the system many times more robust to interference from the environment. After all, the sensor is also confronted with different, sometimes unknown situations in reality. "Until now, even the smallest changes to the measurement data were enough for the output to collapse and objects not to be detected or to be detected incorrectly, something which would be devastating in the autonomous driving use case."

Shining a light into the black box

The system has to cope with such challenges and notice when its own predictions are uncertain. Then, for example, it could fall back on a safe emergency routine. To this end, the researchers want to find out how the system arrives at its predictions and which influencing factors are decisive for them. This complex process inside the network has so far only been comprehensible to a limited extent. For this purpose, the complicated model architecture is transferred into a simplified linear model. In Pernkopf's words: "We want to make CNNs' behavior a bit more explainable. We are not only interested in the output result, but also in its range of variation. The smaller the variance, the more certain the network is."
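
One common way to obtain such an output variance is to run the network several times with stochastic elements (for example dropout) kept active and to treat the spread of the predictions as a confidence signal. The article does not specify which method the team uses; the Monte Carlo dropout sketch below is only one possible illustration and assumes the model contains dropout layers.

```python
# Illustrative uncertainty estimate via Monte Carlo dropout: the variance of
# repeated stochastic forward passes serves as a confidence signal. This is
# an assumed method, not necessarily the one used in the REPAIR project.
import torch

def predict_with_uncertainty(model, rd_map, n_samples: int = 20):
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([model(rd_map) for _ in range(n_samples)])
    mean = samples.mean(dim=0)
    variance = samples.var(dim=0)  # small variance -> network is more certain
    return mean, variance
```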

Either way, there is still a lot to be done before real-world use. Pernkopf expects the technology to be developed to the point where the first vehicles can be equipped with it in the next few years.

Citation: New AI sensor technology for autonomous driving (2022, February 23) retrieved 28 March 2024 from https://techxplore.com/news/2022-02-ai-sensor-technology-autonomous.html
