Researchers train autonomous drones using cross-modal simulated data

To fly autonomously, drones need to understand what they perceive in the environment and make decisions based on that information. A novel method developed by Carnegie Mellon University researchers allows drones to learn perception and action separately. The two-stage approach overcomes the "simulation-to-reality gap" and creates a way to safely deploy drones trained entirely on simulated data into real-world course navigation.

"Typically drones trained on even the best photorealistic simulated data will fail in the real world because the lighting, colors and textures are still too different to translate," said Rogerio Bonatti, a doctoral student in the School of Computer Science's Robotics Institute. "Our is trained with two modalities to increase robustness against environmental variabilities."

The first modality that helps train the drone's perception is image. The researchers used a photorealistic simulator to create an environment that included the drone, a soccer field and red square gates raised off the ground and positioned randomly to create a track. They then built a large dataset of simulated images from thousands of randomly generated drone and gate configurations.

The second modality needed for perception is knowing the gates' position and orientation in space, which the researchers accomplished using the dataset of simulated images.
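A minimal sketch of how such a paired dataset might be assembled is shown below. The sample_gate_pose and render_camera_image helpers are hypothetical stand-ins for the photorealistic simulator (the team's published code builds on AirSim), and the pose parameterization, ranges and image size are illustrative assumptions rather than the authors' settings.

```python
# Sketch: build (image, gate_pose) pairs from random drone/gate configurations.
import numpy as np

rng = np.random.default_rng(0)

def sample_gate_pose():
    """Randomly place a gate relative to the drone: range, angles, orientation."""
    r = rng.uniform(2.0, 10.0)               # distance to the gate (m)
    theta = rng.uniform(-np.pi / 4, np.pi / 4)  # horizontal angle to the gate
    psi = rng.uniform(-np.pi / 6, np.pi / 6)    # vertical angle to the gate
    yaw = rng.uniform(-np.pi / 2, np.pi / 2)    # gate orientation
    return np.array([r, theta, psi, yaw], dtype=np.float32)

def render_camera_image(gate_pose):
    """Hypothetical stand-in for the simulator's photorealistic render."""
    return np.zeros((64, 64, 3), dtype=np.float32)  # placeholder pixels

# Pair each rendered image with the gate pose that produced it.
dataset = []
for _ in range(1000):          # the real dataset used many thousands of samples
    pose = sample_gate_pose()
    image = render_camera_image(pose)
    dataset.append((image, pose))

print(len(dataset), "paired samples")
```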

Teaching the model using multiple modalities reinforces a robust representation of the drone's experience, meaning it can understand the essence of the field and gates in a way that translates from simulation to reality. Compressing images to have fewer pixels aids this process. Learning from a low-dimensional representation allows the model to see through the visual noise in the real world and identify the gates.
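As a rough illustration of what learning a low-dimensional, cross-modal representation can look like, the sketch below compresses a 64-by-64 image into a small latent vector that must also predict the gate's pose. The use of PyTorch, the latent size and the layer sizes are assumptions for illustration only; the pose head stands in for one of the two decoders in the paper's cross-modal architecture, and the image-reconstruction decoder is omitted.

```python
# Sketch: an encoder that maps pixels to a small latent vector shared by
# two output modalities (gate pose here; image reconstruction omitted).
import torch
import torch.nn as nn

LATENT_DIM = 10   # "low-dimensional representation"; exact size is an assumption

class CrossModalEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.to_latent = nn.Linear(64 * 8 * 8, LATENT_DIM)
        self.pose_head = nn.Linear(LATENT_DIM, 4)   # r, theta, psi, yaw

    def forward(self, image):
        z = self.to_latent(self.conv(image))        # compress pixels to latent
        return z, self.pose_head(z)                 # latent also predicts pose

encoder = CrossModalEncoder()
dummy = torch.zeros(1, 3, 64, 64)
z, pose = encoder(dummy)
print(z.shape, pose.shape)   # torch.Size([1, 10]) torch.Size([1, 4])
```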

With perception learned, researchers deploy the drone within the simulation so it can learn its control policy—or how to physically move. In this case, it learns which velocity to apply as it navigates the course and encounters each gate. Because it's a simulated environment, a program can calculate the drone's optimal trajectory before deployment. This method provides an advantage over manually supervised learning using an expert operator, since real-world learning can be dangerous, time-consuming and expensive.
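A hedged sketch of that second stage follows: with the perception encoder frozen, a small policy network maps the latent vector to a velocity command and is trained by regressing toward the command derived from the precomputed optimal trajectory. The expert_command helper, the command layout and all network sizes are illustrative placeholders, not the authors' implementation.

```python
# Sketch: imitation learning of the control policy on top of the latent space.
import torch
import torch.nn as nn

LATENT_DIM = 10

policy = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),            # vx, vy, vz, yaw-rate command
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def expert_command(z):
    """Hypothetical stand-in for the velocity computed from the
    simulator's precomputed optimal trajectory."""
    return torch.zeros(z.shape[0], 4)

# One behavior-cloning step on a batch of latent vectors.
z_batch = torch.randn(32, LATENT_DIM)     # would come from the frozen encoder
loss = loss_fn(policy(z_batch), expert_command(z_batch))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```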

The drone learns to navigate the course by going through training steps dictated by the researchers. Bonatti said he challenges the drone with the specific agilities and directions it will need in the real world. "I make the drone turn to the left and to the right in different track shapes, which get harder as I add more noise. The robot is not learning to recreate going through any specific track. Rather, by strategically directing the simulated drone, it's learning all of the elements and types of movements to race autonomously," Bonatti said.
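The curriculum Bonatti describes might look roughly like the sketch below, where simple left- and right-turning tracks are generated and the positional noise on the gates grows with each training stage. The track geometry, gate counts and noise values are illustrative assumptions, not the researchers' actual settings.

```python
# Sketch: progressively harder training tracks with increasing gate noise.
import numpy as np

rng = np.random.default_rng(1)

def make_track(num_gates, turn_direction, noise_scale):
    """Place gates along a gentle arc, then perturb their positions with noise."""
    angles = np.linspace(0, turn_direction * np.pi / 2, num_gates)
    radius = 15.0
    gates = np.stack([radius * np.cos(angles),
                      radius * np.sin(angles),
                      np.full(num_gates, 2.0)], axis=1)   # x, y, z in meters
    return gates + rng.normal(scale=noise_scale, size=gates.shape)

# Curriculum: alternate turn directions and ramp up the noise each stage.
for stage, noise in enumerate([0.0, 0.5, 1.0, 2.0]):
    for direction in (-1, +1):            # left and right turns
        track = make_track(num_gates=8, turn_direction=direction,
                           noise_scale=noise)
        # ... roll out the policy on this track in simulation ...
    print(f"stage {stage}: noise scale {noise} m")
```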

Bonatti wants to push current technology to approach a human's ability to interpret environmental cues.

"Most of the work on autonomous drone racing so far has focused on engineering a system augmented with extra sensors and software with the sole aim of speed. Instead, we aimed to create a computational fabric, inspired by the function of a human brain, to map visual information to the correct control actions going through a latent representation," Bonatti said.

But drone racing is just one possibility for this type of learning. The method of separating perception and control could be applied to many different tasks for artificial intelligence such as driving or cooking. While this model relies on images and positions to teach perception, other modalities like sounds and shapes could be used for efforts like identifying cars, wildlife or objects.

More information: Bonatti et al., Learning Visuomotor Policies for Aerial Navigation Using Cross-Modal Representations. arXiv:1909.06993 [cs.CV]. arxiv.org/abs/1909.06993

The researchers' code is available online: github.com/microsoft/AirSim-Dr … Racing-VAE-Imitation

