Study: Older drivers need more time to react to road hazards


Imagine you're sitting in the driver's seat of an autonomous car, cruising along a highway and staring down at your smartphone. Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel. Once you look back at the road, how much time will you need to safely avoid the collision?

MIT researchers have found an answer in a new study showing that humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road—with younger drivers detecting hazards nearly twice as fast as older ones. The findings could help developers of autonomous cars ensure they allow people enough time to safely take the controls and steer clear of unexpected hazards.

Previous studies have examined hazard response times while people kept their eyes on the road and actively searched for hazards in videos. In this new study, recently published in the Journal of Experimental Psychology: General, the researchers examined how quickly drivers can recognize a road hazard if they've just looked back at the road. That's a more realistic scenario for the coming age of semiautonomous cars that require human intervention and may unexpectedly hand over control to human drivers when facing an imminent hazard.

"You're looking away from the road, and when you look back, you have no idea what's going on around you at first glance," says lead author Benjamin Wolfe, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). "We wanted to know how long it takes you to say, "A moose is walking into the road over there, and if I don't do something about it, I'm going to take a moose to the face.""

For their study, the researchers built a unique dataset that includes YouTube dashcam videos of drivers responding to road hazards—such as objects falling off truck beds, moose running into the road, 18-wheelers toppling over, and sheets of ice flying off car roofs—and other videos without road hazards. Participants were shown split-second snippets of the videos, in between blank screens. In one test, they indicated if they detected hazards in the videos. In another test, they indicated if they would react by turning left or right to avoid a hazard.

The results indicate that younger drivers are quicker at both tasks: Older drivers (55 to 69 years old) required 403 milliseconds to detect hazards in videos, and 605 milliseconds to choose how they would avoid the hazard. Younger drivers (20 to 25 years old) only needed 220 milliseconds to detect and 388 milliseconds to choose.

Those age results are important, Wolfe says. When autonomous vehicles are ready to hit the road, they'll most likely be expensive. "And who is more likely to buy expensive vehicles? Older drivers," he says. "If you build an autonomous vehicle system around the presumed capabilities of reaction times of young drivers, that doesn't reflect the time older drivers need. In that case, you've made a system that's unsafe for older drivers."

Joining Wolfe on the paper are Bobbie Seppelt, Bruce Mehler, and Bryan Reimer of the MIT AgeLab, and Ruth Rosenholtz of the Department of Brain and Cognitive Sciences and CSAIL.

Playing "the worst video game ever"

In the study, 49 participants sat in front of a large screen that closely matched the visual angle and viewing distance for a driver, and watched 200 videos from the Road Hazard Stimuli dataset for each test. They were given a toy steering wheel, brake pedal, and gas pedal to indicate their responses. "Think of it as the worst video game ever," Wolfe says.

The dataset includes about 500 eight-second dashcam videos of a variety of road conditions and environments. About half of the videos contain events leading to collisions or near collisions. The other half try to closely match each of those driving conditions, but without any hazards. Each video is annotated at two critical points: the frame when a hazard becomes apparent, and the first frame of the driver's response, such as braking or swerving.
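
As a rough illustration of that two-point annotation scheme, a single entry in such a dataset might be represented as follows. This is a hypothetical sketch in Python; the field names and values are illustrative, not the actual Road Hazard Stimuli schema.

# Hypothetical sketch of one annotated entry in a hazard-video dataset.
# Field names and values are illustrative, not the actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HazardClip:
    video_path: str                    # eight-second dashcam clip
    contains_hazard: bool              # roughly half the clips have none
    hazard_onset_frame: Optional[int]  # frame where the hazard becomes apparent
    response_frame: Optional[int]      # first frame of the driver's reaction

clip = HazardClip(
    video_path="clips/example_hazard.mp4",  # hypothetical file
    contains_hazard=True,
    hazard_onset_frame=87,   # hazard enters view here
    response_frame=112,      # driver begins braking or swerving here
)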

Before each video, participants were shown a split-second white noise mask. When that mask disappeared, participants saw a snippet of a random video that did or did not contain an imminent hazard. After the video, another mask appeared. Directly following that, participants stepped on the brake if they saw a hazard or the gas if they didn't. There was then another split-second pause on a black screen before the next mask popped up.

When participants started the experiment, the first video they saw was shown for 750 milliseconds. But the duration changed during each test, depending on the participants' responses. If a participant responded incorrectly to one video, the next video's duration would extend slightly. If they responded correctly, it would shorten. In the end, durations ranged from a single frame (33 milliseconds) up to one second. "If they got it wrong, we assumed they didn't have enough information, so we made the next video longer. If they got it right, we assumed they could do with less information, so we made it shorter," Wolfe says.
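
That up-and-down adjustment is a standard adaptive staircase procedure from psychophysics. Here is a minimal sketch of the rule, assuming the duration moves by one video frame per trial (the article gives the 750-millisecond starting point and the 33-millisecond-to-one-second range, but not the step size):

# Sketch of the adaptive duration rule described above. The start value
# and the allowed range come from the article; the one-frame step size
# is an assumption.
FRAME_MS = 33                    # one video frame
MIN_MS, MAX_MS = FRAME_MS, 1000  # single frame up to one second
STEP_MS = FRAME_MS               # assumed adjustment per trial

def next_duration(current_ms, answered_correctly):
    # Correct: the viewer coped with less information, so shorten.
    # Incorrect: give the next video slightly more time.
    if answered_correctly:
        current_ms -= STEP_MS
    else:
        current_ms += STEP_MS
    return max(MIN_MS, min(MAX_MS, current_ms))

duration_ms = 750                          # first video's duration
for correct in (True, True, False, True):  # example response sequence
    duration_ms = next_duration(duration_ms, correct)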

The second task used the same setup to record how quickly participants could choose a response to a hazard. For that, the researchers used a subset of videos where they knew the response was to turn left or right. The video stops, and the mask appears, on the first frame at which the driver begins to react. Then, participants turned the wheel either left or right to indicate where they'd steer.

"It's not enough to say, "I know something fell into road in my lane." You need to understand that there's a shoulder to the right and a car in the next lane that I can't accelerate into, because I'll have a collision," Wolfe says.

More time needed

The MIT study didn't record how long it actually takes people to, say, physically look up from their phones or turn a wheel. Instead, it showed that people need up to 600 milliseconds just to detect and react to a hazard when they have no context about the environment.

Wolfe thinks that's concerning for autonomous vehicles, since they may not give humans adequate time to respond, especially under panic conditions. Other studies, for instance, have found that it takes people who are driving normally, with their eyes on the road, about 1.5 seconds to physically avoid road hazards, starting from initial detection.

Driverless cars will already require a couple hundred milliseconds to alert a driver to a hazard, Wolfe says. "That already bites into the 1.5 seconds," he says. "If you look up from your phone, it may take an additional few hundred milliseconds to move your eyes and head. That doesn't even get into the time it'll take to reassert control and brake or steer. Then, it starts to get really worrying."
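
To see how quickly that 1.5-second budget evaporates, one can subtract the article's estimates from it. The values below are rough figures taken loosely from the quotes above, not measurements:

# Back-of-the-envelope timing budget using figures quoted in the article.
budget_ms = 1500        # eyes-on-road drivers need ~1.5 s to avoid a hazard
alert_ms = 200          # "a couple hundred milliseconds" to alert the driver
eye_head_ms = 300       # "a few hundred milliseconds" to move eyes and head
detect_react_ms = 600   # worst case measured for older drivers in this study

remaining_ms = budget_ms - (alert_ms + eye_head_ms + detect_react_ms)
print(remaining_ms, "ms left to actually brake or steer")  # prints 400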

Next, the researchers are studying how well peripheral vision helps in detecting hazards. Participants will be asked to stare at a blank part of the screen—approximating where a smartphone might be mounted on a windshield—and similarly pump the brakes when they notice a road hazard.



This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.


User comments

Aug 08, 2019
Road hazards like inexperienced, reckless young drivers.

Aug 08, 2019
"Older drivers need more time to react to road hazards"

WOW. What a revelation!
And they needed a study to figure this out!?

What's next, "The sun rises in the East." ? O!M!G! Who knew???

Aug 08, 2019
"Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel."

Sorry, IMHO, that's a critical algorithmic failure.
The *correct* response to 'novel hazard' is to brake hard and sound the horn.
This classic combination reduces impact speed, buys time for deeper analysis and perhaps an evasive swerve, demonstrates 'due diligence' and, by acting to mitigate the mayhem, is an acceptable defence 'in extremis'...

Aug 08, 2019
How about smoking pot while not driving? Whaa...who...me? Wait a minute ... now?

Aug 08, 2019
I do most of my driving by peripheral vision. If something is moving, it's a hazard. Doesn't matter if it's on the sidewalk, in the road, near the road, in a driveway... Anything could become a hazard. One of the first things my dad told me about driving, be sure to watch out for a-holes opening doors suddenly into traffic. I feel like he must have hit one once.

One time 9 deer were on the highway in the middle of the night while it was foggy and I had to steer in between them. Good thing I have so much video game training is all I'm saying. I wonder if millennials are the sweet spot for driving reflexes, having both owned a car for the bulk of their lives but also played video games the most of any generation before. GenZ'rs typically don't drive all that much and GenX'rs typically only played shitty video games and then "grew up".

Aug 09, 2019
Concerning this Study, I think the title is misleading, "Older drivers need more time to react to road hazards". Going with the title, I wonder how much money MIT spent on this study? If they had spent just a short while driving south Florida roads, they would have come to the same conclusion!
In all seriousness, I can't wait to see their study on peripheral vision and the elderly. Talk to any doctor or even traffic cop, they will tell you that the average elderly person has lost 25 to 30 degrees of peripheral vision (and AARP fights against regular eye exams for the elderly)!

Aug 10, 2019

"The *correct* response to 'novel hazard' is to brake hard and sound the horn."

You could cause a total pileup. This is a problem because the AI will produce a lot of false positives for "animals" that simply aren't there, or don't require a hazard response. The rule of thumb is that you don't brake for anything smaller than a large dog, and if the car starts braking over every plastic bag blown around in the wind, it's going to get a lot of people killed.

A lot of the time you should just drive by, and flash your hazard lights to alert other people.

Aug 10, 2019
"Good thing I have so much video game training is all I'm saying."

It's not going to give you much of an advantage.

There are no more "twitch" games like in the '80s and '90s CRT era. Now you have multiple IO buffers in between that make the event updates 10 times slower, so the games actually have to be made slower-paced or else they would be unplayable. There's lag compensation and event prediction, auto-aim, etc. to give you the illusion that you're hitting the targets, but in reality the game is giving you miles of leeway and helping you.

For example, if you play on a TV that has 200 ms of input lag, the game has to estimate when -you- see the event on the screen and allow you an extra 200 ms to react, and the response to your input becomes visible after another 200 ms, so the action-reaction cycle can only occur once every 400 ms. Old CRT-era games such as Pac-Man could cycle at 33 ms.

Modern games are more like watching a video than playing it.

Aug 10, 2019
The difference is that in the past, you had your inputs wired directly to the CPU bus, so pressing a key or moving the mouse would trigger an interrupt and update your inputs in less than a millisecond, and the CPU would use the CRT screen blanking interval to compute the game logic, so it would have a new screen to draw every 16.7 milliseconds. There was no buffering: the picture was computed as the electron beam flew, and games could be physically faster than the human reaction time (<150 ms).

Nowadays you have a wireless game controller operating through USB, which means there's a minimum input lag of 8-10 milliseconds just to get the button presses recorded and into the CPU. Then the CPU renders the screen through 2 or 3 buffers (3x 16.7 ms) and then the monitor/TV itself buffers the same data 2 or 3 times before it gets pushed out onto the pixels, so you have a MINIMUM lag around the same as the human reaction time to go through the whole loop from input to output.
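
Summing the commenter's own estimates gives a feel for that minimum input-to-display lag; all figures below are the rough numbers from the comment above, not measurements:

# Adding up the latency estimates from the comment above.
usb_input_ms = 10               # wireless controller over USB
render_buffers_ms = 3 * 16.7    # up to three frame buffers at 60 Hz
display_buffers_ms = 3 * 16.7   # TV-side buffering before pixels change

total_ms = usb_input_ms + render_buffers_ms + display_buffers_ms
print(round(total_ms), "ms minimum from input to visible output")  # ~110 ms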
