
In the 1999 film The Matrix, a young hacker named Neo learns that the world as he knows it is a virtual simulation—and with this realization comes the ability to use this virtual world to his advantage. In one scene, Neo, who has no combat training whatsoever, downloads an extensive knowledge of martial arts into his brain, making him a Kung Fu master in mere seconds.

"What we're doing is just like the Matrix," says Bob Iannucci, distinguished service professor of electrical and computer engineering at Carnegie Mellon University's Silicon Valley campus (CMU-SV). "We're hooking into the visual cortex and downloading information that would typically take much longer to learn in a very short amount of time. Only, we're teaching drones."

Imagine you want your drone to track a particular car. To teach the drone which car to track, you can apply machine vision and deep learning techniques, which work by aggregating large amounts of photo data to help the drone identify an object in a variety of situations. These algorithms require a significant wealth of data to train, however, and such substantial datasets are hard to obtain.
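To make the contrast concrete, here is a minimal sketch of that conventional, data-hungry approach: training an image classifier offline on a large, pre-collected set of labeled photos. It assumes PyTorch; the "photos" directory layout, the ResNet-18 architecture, and the hyperparameters are illustrative stand-ins, not details from the CMU work.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes a directory of labeled photos, e.g. photos/red_convertible/ and photos/other/
dataset = datasets.ImageFolder("photos", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None, num_classes=2)  # binary: target vs. everything else
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Conventional offline training: many passes over a large, pre-collected dataset.
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```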

So what if you don't have a lot of data, but you still need to tell the drone what it's looking for, in real-time, using only the data it can collect from its surroundings?

Iannucci and his team, consisting of ECE researchers Ervin Teng, Joao Diogo de Menezes Falcao, and Cef Ramirez, are working on a number of projects to train drones through real-time deep learning. The first, called SMILE, uses a camera mounted on the drone to take pictures of the drone's current surroundings and send those images back to an operator on the ground. If you want the drone to track down a particular red convertible, for instance, some of the images it sends back will contain the desired car, and some will not. Using a computer interface, the operator lets the drone know whether or not the object it's looking for is in the image. After just a few minutes, the drone can identify the desired object in the images with enough accuracy to track it across distances.
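The article doesn't publish SMILE's internals, but the human-in-the-loop pattern it describes can be sketched: each time the operator answers yes or no about a frame, the model takes one small gradient step on that single labeled image. The sketch below assumes PyTorch; training_round and its arguments are hypothetical stand-ins for the drone's camera downlink and the ground-station interface.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18(weights=None, num_classes=2)   # target present / absent
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                 transforms.ToTensor()])

def training_round(frame, operator_says_present: bool) -> float:
    """One human-in-the-loop step: the drone sends a frame, the operator
    answers yes/no, and the model takes a single gradient step on it."""
    x = preprocess(frame).unsqueeze(0)                 # batch of one image
    y = torch.tensor([1 if operator_says_present else 0])
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeated over a few minutes of operator answers, updates like this are what let the model converge on the desired object without any pre-collected dataset.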

Video: How can teaching a drone to recognize objects in images, and using that drone to teach other drones to recognize those objects, lead to better disaster relief? CMU-Silicon Valley professor Bob Iannucci explains CROSSMobile.

Just like Neo, however, drones don't always have the luxury of practicing in the real world. Due to a lack of time or a lack of available data, training a drone to identify an object or navigate an environment in the real world can be challenging. It's this difficulty that led the team to develop the Virtual Image Processing Environment for Research, or VIPER.

"Using a video game engine, we're able to create a photorealistic, virtual training environment, identical to the real-world environment the drone will be encountering," says Iannucci. "By training a virtual drone to identify the desired object in the virtual world, then uploading that data to the real drone in the real world, the real drone will 'remember' everything the virtual drone learned."

Because the training simulations are done in the virtual world, operators can run them much faster than real time. In the future, by networking together multiple computers running multiple simulations, those time reductions can be multiplied further. And more dangerous tasks, such as navigating collapsed buildings or complicated industrial sites, can now be rehearsed virtually, so operators don't have to worry about expensive drones crashing before they're properly prepared.
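Because each simulated flight is just computation, independent runs can be farmed out across processes (or, eventually, networked machines) and their experience pooled. A hedged sketch of that scaling idea using Python's standard multiprocessing module; run_episode is a hypothetical stand-in for one simulated training flight.

```python
from multiprocessing import Pool
import random

def run_episode(seed: int) -> list:
    """Stand-in for one simulated flight: returns (frame, label) pairs."""
    rng = random.Random(seed)
    return [(f"frame_{seed}_{i}", rng.random() > 0.5) for i in range(100)]

if __name__ == "__main__":
    # Eight independent simulations run concurrently; on a cluster, the
    # same pattern extends across networked machines.
    with Pool(processes=8) as pool:
        batches = pool.map(run_episode, range(8))
    experience = [sample for batch in batches for sample in batch]
    print(f"Collected {len(experience)} labeled frames in parallel.")
```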

"This technology is broadly applicable," Iannucci says. "Not only can it track individual cars or people, but it can be used for many other things, such as inspection of industrial sites or pipelines. It may not teach them Kung Fu, but the variety of tasks that VIPER has the potential to teach drones to perform is virtually limitless."