A psychological approach to human-automation interaction

Assistant professor of psychology Dr. Nathan Tenhundfeld, left, recently established the Advanced Teaming, Technology, Automation, and Computing Lab to study human-machine teaming. Credit: Michael Mercier

It's called the uncanny valley. Those who are fans of the HBO show "Westworld" or who have seen the movie "Ex Machina" may already be familiar with the phenomenon. But for those who are not, it's essentially the idea that humans are comfortable with robots that have humanoid features, but become very uncomfortable when a robot looks almost, but not exactly, like a human.

For Dr. Nathan Tenhundfeld, however, the uncanny valley is just one of many factors he must take into account while researching human-automation interaction as an assistant professor in the Department of Psychology at The University of Alabama in Huntsville (UAH).

"We're at a unique point with the development of the technology where or platforms are no longer a tool but a teammate that is incorporated into our day-to-day experiences," he says. "So we're looking at commercial platforms that offer the same systems but in different forms to see whether a certain appearance or characteristic affects the user and in what way."

Take, for example, the recent push by the U.S. Department of Defense to incorporate automation into warfighting. As a concept, it makes sense: the more robots we have fighting wars, the less cost there is to human life. But in practice, it's a little more complex. What should a warfighting robot look like? A person? A machine?

To answer these questions, Dr. Tenhundfeld has partnered with a colleague at the U.S. Air Force Academy, where he conducted research as a postdoctoral fellow, to use "a massive database of robots" so that they can determine how various components might affect the perception of a robot's capabilities. "We want to know things like, does a robot with wheels or a track fit better with our expectation of what we should be sending to war versus a humanoid robot?" he says. "And, does having a face on the robot affect whether we want to put it in harm's way?"

A student in the ATTAC Lab takes part in a flight simulation. Credit: Michael Mercier

Even if there were easy answers—which there aren't—there's another equally important factor to consider beyond the robot's user interface: trust. For a robot to be effective, the user must trust the information that it is providing. To explain, Dr. Tenhundfeld points to research he conducted on the Tesla Model X while at the Academy. Looking at the car's autoparking feature specifically, he and his team wanted to determine the user's willingness to let the car complete its task as a function of their risk-taking preference or confidence in their own abilities.

"The data suggest automated vehicles tend to be safer than humans, but humans don't like to relinquish control," he says with a laugh. "So we had this pattern where there were high intervention rates at first, but as they developed trust in the system—after it wasn't so novel and it started to meet their expectations—they began to trust it more and the intervention rates went down."

The flip side of that coin, however, is the potential for empathy in, or attachment to, a particular automated system that users may have developed trust in. To illustrate this concept, he recounts a case study of explosive-ordnance disposal teams who employ robots to safely blow up bombs. "When they have to send the robots back to get repaired, they have an issue when they're given a different robot," he says. "So they've placed this trust in a specific robot even though the intelligence/capability is the same across all of the robots."

And lest it start to sound like there is already more than enough for Dr. Tenhundfeld to factor in, there is also situational trust, which sits somewhere between trust and overtrust. In this scenario, a user may develop a certain level of trust as a whole over time, but then realize they don't trust some aspects as much as others. "Say I have an automated system, or robot, providing intelligence in a mission-planning environment, and it screws that up," he says. "I might not trust it in a different environment, such as on the battlefield, even though it has a different physical embodiment for use in that environment, and may be distinctly capable on the battlefield."

In short, the increasingly digital nature of our world introduces a seemingly endless list of considerations when it comes to ensuring automated systems can successfully meet the needs of their users—all of which Dr. Tenhundfeld must take into account with the research he is doing in his Advanced Teaming, Technology, Automation, and Computing Lab, or ATTAC Lab. But given UAH's role as an academic partner to this emerging industry, it's a challenge that he and his fellow researchers have embraced. "Businesses are focused on being first to market with a product," he says. "We help them improve the product so that it works well for the user."

More information: Nathan L. Tenhundfeld et al. Calibrating Trust in Automation Through Familiarity With the Autoparking Feature of a Tesla Model X, Journal of Cognitive Engineering and Decision Making (2019). DOI: 10.1177/1555343419869083
