A new framework for vision-based aggressive driving

*AutoRally vehicle navigating a bump on the track at high speed during testing. Credit: Drews et al.*

Researchers at the Institute for Robotics and Intelligent Machines (IRIM) at the Georgia Institute of Technology have recently proposed a new framework for aggressive driving that uses only a monocular camera, an inertial measurement unit (IMU) and wheel-speed sensors. Their approach, presented in a paper pre-published on arXiv, combines deep learning-based road detection, particle filters and model predictive control (MPC).

"Understanding the edge cases of autonomous driving is becoming very important," Paul Drews, one of the researchers who carried out the study, told TechXplore. "We chose aggressive driving, as this is a good proxy for collision avoidance or mitigation required by ."

The term 'aggressive driving' refers to instances in which a ground vehicle operates near the limits of handling, often with high sideslip angles, as required in rally racing. In their previous work, the researchers investigated aggressive driving using high-quality GPS for global position estimation. This approach has several limitations: for instance, it requires expensive sensors and cannot be used in GPS-denied areas.

The researchers previously achieved promising results with a vision-based (non-GPS) driving solution that regressed a local cost map from monocular camera images and fed this information to an MPC-based controller. However, treating each input frame separately created serious learning challenges: the limited field of view and low vantage point of a camera mounted on a ground vehicle made it difficult to generate cost maps that remained effective at high speed.

*System diagram. Credit: Drews et al.*

"Our main objective for this work is to understand how vision can be used as the primary sensor for aggressive driving," Drews said. "This affords interesting challenges because the must meet stringent time requirements. This allows us to explore algorithms that are tightly coupled between perception and control."

In this new study, the researchers addressed the limitations of their earlier approach, introducing an alternative method for autonomous high-speed driving in which a local cost map generator, in the form of a video-based deep neural network (i.e., an LSTM), is used as the measurement process for a particle filter state estimator.
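The paper's network is not reproduced here, but the general idea of a recurrent cost map regressor can be sketched in a few lines. Below is a minimal PyTorch illustration, assuming a simple convolutional encoder and a 40x40 output map; all layer sizes, the output resolution and the class name are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class CostmapLSTM(nn.Module):
    # Illustrative video-to-costmap network: a small convolutional
    # encoder feeds an LSTM that accumulates context across frames,
    # and a decoder regresses a top-down local cost map. Layer sizes
    # and the 40x40 output resolution are placeholders.
    def __init__(self, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(64 * 8 * 8, hidden),
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden, 40 * 40), nn.Sigmoid())

    def forward(self, frames, state=None):
        # frames: (batch, time, 3, H, W) sequence of monocular images.
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, state = self.lstm(feats, state)  # temporal fusion across frames
        costmaps = self.decoder(out).view(b, t, 40, 40)
        return costmaps, state
```

The recurrent state is what distinguishes this from the group's earlier per-frame approach: the LSTM lets past frames inform the current cost map prediction.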

Essentially, the particle filter uses this dynamic observation model to localize in a schematic map, and MPC is used to drive aggressively based on the resulting state estimate. This aspect of the framework allowed the team to obtain a global position estimate against a schematic map without using GPS, while also improving the accuracy of the cost map predictions.
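To make the localization step concrete, here is a heavily simplified particle filter sketch in NumPy. The scoring rule, the noise parameters and the `crop_local_map` helper are all illustrative assumptions, not the paper's measurement model.

```python
import numpy as np

def crop_local_map(schematic_map, pose, size=40):
    # Hypothetical helper: axis-aligned crop around the hypothesized
    # position (a real implementation would also rotate into the
    # vehicle's heading frame). Off-map cells are padded as high cost.
    pad = size
    padded = np.pad(schematic_map, pad, constant_values=1.0)
    x, y = int(pose[0]) + pad, int(pose[1]) + pad
    return padded[y - size // 2:y + size // 2, x - size // 2:x + size // 2]

def particle_filter_step(particles, weights, control, costmap_pred,
                         schematic_map, motion_noise=(0.5, 0.5, 0.05)):
    # Predict: propagate each (x, y, heading) particle with the
    # commanded motion plus Gaussian noise (toy motion model).
    particles = particles + control + np.random.randn(*particles.shape) * motion_noise

    # Update: the network's cost map prediction acts as the measurement.
    # Particles whose pose makes the schematic map look like the
    # prediction receive higher weight (illustrative Gaussian likelihood).
    for i, pose in enumerate(particles):
        expected = crop_local_map(schematic_map, pose)
        err = np.mean((expected - costmap_pred) ** 2)
        weights[i] *= np.exp(-err / 0.1)
    weights = weights / weights.sum()

    # Resample when the effective sample size collapses.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

Because the observation model is the network's output rather than a GPS fix, the filter localizes against the schematic map without any global positioning hardware.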

"We take a direct approach to autonomous racing by learning the intermediate cost map directly from monocular images," Drews explained. "This intermediate representation can then be directly used by model predictive control, or can be used by a particle filter to approach GPS state based aggressive performance."

Drews and his colleagues evaluated their framework using a 1:5-scale test vehicle on AutoRally, an open-source platform for aggressive autonomous driving. With their approach, the vehicle operated reliably at the friction limits on a complex dirt track, reaching speeds above 27 mph (12 m/s).

"I think we have shown two things in this study," Drews said. "First, that by directly regressing a costmap from images, we can both use it directly and use it for localization to enable at the limits of handling. Second, that temporal information is very important in a difficult driving scenario such as this."

The study carried out by Drews and his colleagues demonstrates the advantages of combining MPC with state estimation and learned perception. In the future, their framework could pave the way for more robust and cost-effective aggressive autonomous driving on complex tracks.

"We would now like to further enhance this method with learned attention and extend it to obstacles and unknown environments," Drews said.

More information: Vision-based high speed driving with a deep dynamic observer. arXiv:1812.02071 [cs.RO]. arxiv.org/abs/1812.02071

Aggressive deep driving: combining convolutional neural networks and model predictive control. proceedings.mlr.press/v78/drews17a.html

AutoRally: An open platform for aggressive autonomous driving. arXiv:1806.00678 [cs.RO]. arxiv.org/abs/1806.00678

Project page: autorally.github.io/

© 2018 Science X Network

Citation: A new framework for vision-based aggressive driving (2018, December 20) retrieved 19 March 2024 from https://techxplore.com/news/2018-12-framework-vision-based-aggressive.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
