When things go wrong in an automated world, would we still know what to do?


We live in a world that is at once increasingly complex and increasingly automated. Just as we are having to deal with more complex problems, automation is leading to an atrophy of human skills that may leave us more vulnerable when we have to respond to unexpected situations or when things go wrong.

Consider the final minutes of Air France Flight 447, which crashed into the Atlantic in May 2009 after leaving Rio de Janeiro, Brazil, for Paris, France.

Its flight recorder revealed utter confusion in the cockpit. The plane pitched upwards at 15° while an automated voice repeatedly called "stall, stall". Yet the pilots were reeling, one exclaiming: "[…] we don't understand anything."

This is not the place to go into the ins and outs of that ill-fated flight, other than to note that any system designed to deal with contingencies automatically most of the time leaves a degraded human skill base for the minority of situations its designers couldn't foresee.

Speaking to Vanity Fair, Nadine Sarter, an industrial engineer at the University of Michigan, recalls a conversation with five engineers involved in building a particular aircraft:

"I started asking, 'Well, how does this or that work?' And they could not agree on the answers. So I was thinking, if these five engineers cannot agree, the poor pilot, if he ever encounters that particular situation … well, good luck."

In effect, the complexity of judiciously flying highly intricate, high-tech airliners has been outsourced to a robot, with flight engineers to all intents and purposes gone from cockpits. Only older pilots and ex-air force pilots retain those detailed skills.

Back on terra firma, in an autonomous driving world there could be entire future generations with no practical experience whatsoever in driving and navigating a vehicle.

We're already seeing an indication of what can go wrong when humans leave control to autonomous systems.

An investigation into the fatal crash of a Tesla Model S driving with Autopilot engaged noted that the company provided information about "system limitations" to drivers. In such a case, it is still up to the driver to pay attention.

But what chance would a person have of taking over the controls should things start to go wrong in their future fully autonomous vehicle? Would they even know how to spot the early signs of impending disaster?

Losing our way?

Driving this is a technological determinism, the belief that any and all innovation is intrinsically good. While emerging technologies may yet define what it is to be human, the challenge is to recognise the risks and to work out what to do to make sure things don't go wrong.

That's getting harder as we keep adding complexity, especially with the autonomous operation of suburban trains, air taxis and delivery drones.

System designers have been building bigger and more intertwined systems to share computer processing load even though this makes their creations prime candidates for breakdown. They are overlooking the fact that once everything is connected, problems can spread as readily as solutions, sometimes more so.

The growing and immense complexity of an automated world poses similar risks.

Danger points

In hindsight, what is needed is the ability to cut networks loose at failure points, or at least to seal off parts of a single network when failures occur elsewhere within it.

This "islanding" is a feature of smart electricity grids providing scope to split the network into fragments that are able to self-sustain their internal power demand. Modelling has shown that fewer connections can lead to more security.

Could emergent complexity science help pinpoint where the danger points might lie in highly interconnected networks? Marten Scheffer and colleagues thought so. He had seen similarities between the behaviour of the natural systems he studied and that of economic and financial systems.

His earlier work on lakes, coral reefs, seas, forests and grasslands found that environments subject to gradual changes, such as climate shifts, nutrient loading and habitat loss, can reach tipping points that flip them into a sometimes irreversible lower state.

Could bankers and economists grappling with the stability of financial markets learn from researchers in ecology, epidemiology and climatology to develop markers of the proximity to critical thresholds and system breakdown?
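One widely cited marker from that literature is "critical slowing down": as a system approaches a tipping point it recovers ever more slowly from small shocks, which shows up as rising lag-1 autocorrelation and variance in its time series. Below is a minimal sketch of such an indicator in Python (the window size and the toy data are arbitrary choices, not taken from any of the studies mentioned):

```python
# Minimal sketch of a "critical slowing down" early-warning indicator:
# rising lag-1 autocorrelation in a rolling window suggests the system
# is recovering more slowly from shocks, a marker of an approaching
# tipping point. The window size is an arbitrary illustrative choice.
import numpy as np

def rolling_lag1_autocorr(series, window=50):
    """Lag-1 autocorrelation of `series` over a sliding window."""
    series = np.asarray(series, dtype=float)
    out = []
    for start in range(len(series) - window + 1):
        w = series[start:start + window]
        out.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(out)

# Toy time series whose "memory" grows over time, mimicking a
# system sliding towards a critical threshold.
rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(1, 600):
    memory = 0.1 + 0.8 * t / 600   # autocorrelation ramps up
    x[t] = memory * x[t - 1] + rng.normal()

indicator = rolling_lag1_autocorr(x)
print(indicator[0], "->", indicator[-1])  # indicator trends upwards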

In February 2016 this all came together in the form of a paper on complexity theory and financial regulation co-authored by a wide range of experts including an economist, banker, physicist, climatologist, ecologist, zoologist, veterinarian and epidemiologist.

They recommended an online integration of data, methods and indicators, feeding into stress tests for global socioeconomic and financial systems in near-real time. Such integration is similar to what has been achieved in dealing with other complex systems, such as the weather.

We can begin to see how our example of an autonomous driving world leads into questions of network stability. Imagine a highly interconnected network of autonomous vehicles.

There's a clear need to know how to detect and isolate any potential failure points in such a network, before things go wrong with potentially tragic consequences. This is more than just protecting driver and passenger from any system failure in a single autonomous vehicle.
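As an illustration only, and not a method proposed here, simple graph analysis offers a first pass at flagging such danger points: an "articulation point" is a node whose failure splits the network, making it an obvious candidate for monitoring or pre-planned isolation. A sketch with networkx, using an invented vehicle-network topology:

```python
# Illustrative sketch: flag nodes whose failure would fragment a
# network of communicating vehicles and roadside units. Articulation
# points are single points of failure for connectivity.
import networkx as nx

net = nx.Graph()
# Hypothetical topology: a "hub" relays traffic between two clusters.
net.add_edges_from([
    ("car1", "car2"), ("car2", "hub"), ("car1", "hub"),
    ("hub", "car3"), ("car3", "car4"), ("car4", "hub"),
])

danger_points = list(nx.articulation_points(net))
print(danger_points)  # ['hub'] -- losing it splits the network in two
```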

It's time to think about how we might use those multidisciplinary advances in understanding the stability of large-scale networks to avoid drastic consequences.

Provided by The Conversation

This article was originally published on The Conversation. Read the original article.
