New AI camera could revolutionize autonomous vehicles


The image recognition technology that underlies today's autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future applications like handheld medical devices.

Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. The work was published August 17 in Nature Scientific Reports.

"That autonomous car you just passed has a relatively huge, relatively slow, energy intensive in its trunk," said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research. Future applications will need something much faster and smaller to process the stream of images, he said.

Consumed by computation

Wetzstein and Julie Chang, a graduate student and first author on the paper, took a step toward that technology by marrying two types of computers into one, creating a hybrid optical-electrical computer designed specifically for image analysis.

The first layer of the prototype camera is a type of optical computer, which does not require the power-intensive mathematics of digital computing. The second layer is a traditional digital electronic computer.

The optical computer layer operates by physically preprocessing image data, filtering it in multiple ways that an electronic computer would otherwise have to do mathematically. Since the filtering happens naturally as light passes through the custom optics, this layer operates with zero input power. This saves the hybrid system a lot of time and energy that would otherwise be consumed by computation.
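To see what is being moved out of software, here is a minimal digital sketch of that filtering step, written in Python with NumPy and SciPy. It is an illustrative stand-in only: the filter_bank function, the edge-detection kernel and the 64x64 image are assumptions chosen for this example, not the optimized diffractive optics the Stanford team fabricated.

```python
# Illustrative only: a purely digital stand-in for the filtering that the
# optical layer performs for free. The edge-detection kernel and 64x64 image
# below are assumptions for this example, not the paper's diffractive optics.
import numpy as np
from scipy.signal import convolve2d

def filter_bank(image, kernels):
    """Convolve an image with each kernel, as a digital computer would have to."""
    return np.stack([convolve2d(image, k, mode="same") for k in kernels])

image = np.random.rand(64, 64)                 # toy grayscale image
edge = np.array([[-1.0, -1.0, -1.0],
                 [ 0.0,  0.0,  0.0],
                 [ 1.0,  1.0,  1.0]])          # horizontal edge detector

maps = filter_bank(image, [edge])
print(maps.shape)                              # (1, 64, 64): one filtered copy
```

Each such convolution costs many multiply-adds per pixel on a digital chip; in the hybrid camera, the equivalent filtering happens as light propagates through the optics.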

"We've outsourced some of the math of into the optics," Chang said.

The result is profoundly fewer calculations, fewer calls to memory and far less time to complete the process. Having leapfrogged these preprocessing steps, the remaining analysis proceeds to the digital computer layer with a considerable head start.

"Millions of calculations are circumvented and it all happens at the speed of light," Wetzstein said.

Rapid decision-making

In speed and accuracy, the prototype rivals existing electronic-only processors programmed to perform the same calculations, while substantially reducing computational cost.

While their current prototype, arranged on a lab bench, would hardly be classified as small, the researchers said their system could one day be miniaturized to fit in a handheld video camera or an aerial drone.

In both simulations and real-world experiments, the team used the system to successfully identify airplanes, automobiles, cats, dogs and more within natural image settings.

"Some future version of our system would be especially useful in rapid decision-making applications, like autonomous vehicles," Wetzstein said.

In addition to shrinking the prototype, Wetzstein, Chang and colleagues at the Stanford Computational Imaging Lab are now looking at ways to make the optical component do even more of the preprocessing. Eventually, their smaller, faster technology could replace the trunk-size computers that now help cars, drones and other technologies learn to recognize the world around them.

More information: Julie Chang et al., Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification, Scientific Reports (2018). DOI: 10.1038/s41598-018-30619-y, www.nature.com/articles/s41598-018-30619-y
Provided by Stanford University
Citation: New AI camera could revolutionize autonomous vehicles (2018, August 17) retrieved 21 November 2018 from https://techxplore.com/news/2018-08-ai-camera-revolutionize-autonomous-vehicles.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

User comments

Aug 17, 2018
So can they say that it's partial quantum computing? I bet it is, which is to say precise computer simulations of what the filters are doing would have a computational cost that grows much more quickly than adding filters alone.

But it's hard to define what makes a quantum computer unique.

Aug 18, 2018
No Luke, they cannot say that since it's not quantum computing at all.
It's basically a neural light network.

Aug 18, 2018
The light part isn't specific to neural networks. What they are doing is outsourcing the convolution part of a CNN (a convolutional neural network) into an optical component.
That is: "simply" having various ways of filtering the incoming image (e.g. edge detection) put into hardware (the "C" part in CNN)...while the actual neural network part is still running in software.

It's a pretty neat trick.

Aug 18, 2018
Thx, I know it's nothing like a universal quantum computer, but I also know light has behaviors that can be resource expensive to simulate. (E.g. in holograms) If it's leveraging any of those properties for purposes of computation, that's what I'm calling "partial quantum" for lack of better words.

Aug 18, 2018
Here's a source for the sort of thing I'm talking about BTW:

https://www.natur...-13733-1
I only read the abstract, but Fourier transforms are an example of something optical/quantum systems can compute faster than classical. Also:

https://en.m.wiki...ransform

Aug 18, 2018
Mammalian visual cortex does this automatically and has been doing so for millions of years. For example, simple cells in the visual cortex of the domestic cat (Felis catus) respond to edges—a feature which is more likely to occur in objects and organisms in the environment. This discovery was made by David Hubel and Torsten Wiesel in the late 1950s and was the basis of their Nobel Prize in 1981.

Aug 19, 2018
Such optical systems are already used in a limited sense, in things like consumer cameras which use optical means to measure the image contrast or phase to figure out whether the optics are in focus before triggering the shutter.

https://en.wikipe...etection

Aug 19, 2018
"Mammalian visual cortex does this automatically and has been doing that for millions of years."

Not quite. The connections that do this prefiltering are electrical (not optical), through connections between nerves. This is why you will find in biology and medical textbooks that the eye is considered part of the brain.
