Companies spell out guiding principles for autonomous cars to be safe

Credit: Intel

"Safety First for Automated Driving" has been compiled by 11 authors representing automotive and mobility industry thought leadership.

This work is not a literature review, nor a milestones report of past and future achievements. While there is no shortage of publications on self-driving cars, the authors have their own mission: a framework for safety in automated passenger vehicles, in which industry players look closely and carefully at safety by design.

What is still missing from existing literature, they stated, is verification and validation of such systems. While the authors represent different companies, they share a goal of "industry-wide standardization of automated driving." Their intended audience? That would include regulators, automated-driving industry players, and—not leaving anyone out—"any persons involved in later standardization efforts."

Intel was among the companies represented in the list of authors. "We are proud to have contributed to the groundbreaking work to establish a framework for introducing automated vehicles that are safe by design," said Intel's Jack Weast, senior principal engineer.

The document, while addressing the technical, was clearly written and often offered sobering analyses of where the industry stands on safe self-driving systems. At the heart of the document are 12 principles that all self-driving vehicles should adhere to moving forward.

"Existing standards do not present solutions to some of the most problematic topics of automated driving systems," said the authors, "such as the safety assurance of artificial intelligence (the most relevant algorithms derive from the fields of machine learning and deep learning, see Appendix B), human factors and psychology, and the technological capability of the sensory devices used as inputs to the automated driving system."

To be sure, no discussion of potential safety guidelines would be complete without reference to the deep neural networks used in automated driving systems.

"Machine learning is seen as a crucial technology for automated driving systems," they wrote. "Consequently, the development process for machine learning algorithms responsible for executing safety-related tasks of automated driving systems must undergo strict assessment."

Another highlighted topic is cybersecurity. We hear plenty about hijacked computers, but what about the risk of hijacked fleets of cars?

"The automotive industry is facing new challenges in automated driving due to the extreme connectivity within automated driving vehicles and between those vehicles and their environment." They said the challenges ranged from ensuring safety to protecting fleets and customers from cybersecurity attacks.

"Connectivity additions include new interfaces between connected vehicles, IT backend systems, and other external information sources," they wrote. "This rich attack surface creates considerable interest for malicious actors with various goals...Most importantly, cybersecurity principles and practices should be applied to ensure that attackers cannot gain arbitrary control of a vehicle's movement and that attacks are exceptionally difficult to scale to the point of simultaneously exploiting multiple vehicles."

In reading the document, TechSpot cut to the chase, asking, "Is the industry attempting to regulate itself?"

Cohen Coberly of TechSpot recognized that the document addressed the delicate balance between driver responsibility and system responsibility in self-driving safety.

He wrote that "these principles aim to blend user and vehicle responsibility, ensuring that a driver knows what's expected of them at all times—for example, explicitly informing them when a manual take-over is necessary—while preventing the vehicle's autonomous systems from putting drivers in harm's way in the first place."

It is not certain how many car makers will adopt the 12 principles laid out in the paper but Coberly commented that "given the many controversies that have surrounded self-driving cars over the past couple of years, self-regulation like this will probably seem preferable to government intervention."

Sasha Lekach of Mashable described the subjects covered by the 12 guiding principles: safe operation; operational design domain; vehicle operator-initiated handover; security; user responsibility; vehicle-initiated handover; interdependency between the operator and the automated system; safety assessment; data recording; passive safety; behavior in traffic; and a safety layer.

Explore further

Drivers are slower to respond to emergencies in semi-automated cars

More information: … framework/#gs.nlfdlo … utomated-Driving.pdf

© 2019 Science X Network

Citation: Companies spell out guiding principles for autonomous cars to be safe (2019, July 5) retrieved 19 September 2019 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


User comments

Jul 05, 2019
It's a mystery why this article purports in its headline to outline "guiding principles for autonomous cars" when the underlying design principle aim is "to blend user and vehicle responsibility." As soon as you "blend" command and control responsibility of a man-machine system the machine loses any autonomy. The system merely becomes computer-aided.

Jul 06, 2019
Ya, no longer autonomous. Also, sounds like a sidestep to the "Trolley Problem": Hand it over to a human (with microseconds to spare).

Jul 06, 2019
He wrote that "these principles aim to blend user and vehicle responsibility, ensuring that a driver knows what's expected of them at all times—for example, explicitly informing them when a manual take-over is necessary...

The article seeks to define principles for safe autonomous cars yet appears to accept without question the safety of semi-autonomous vehicles, which are decidedly not safe. Handing control of a moving vehicle over to a human who must suddenly take over without knowledge of the position, direction, speed, or acceleration of the vehicles around them, or of any road hazards, is hardly safe.

Jul 07, 2019
No surprise Intel was part of this, considering they are responsible for the Tesla autopilot failures.

Did you hear that Intel floating point processors can't subtract two numbers?
Try subtracting 10^-23 from 100 in Excel; you can't use the Windows 10 calculator app as it won't let you enter 10^-23.
That's the second bug I've personally found in the last 20 years.
Pretty sure they never fixed any of them.

Jul 07, 2019
the windows 10 calculator app as it won't let you enter 10^-23

Are you in 'Scientific Mode' ?

Jul 07, 2019
Yes, been programming for 40 years, just so ya know 8-)

Jul 07, 2019
Looks like Microsoft has fixed it this release, good on them, colluding with Intel was a bad look.

Jul 07, 2019
You can use powershell if you still doubt it ....

PS: The fact the calculator now returns the correct value implies continuing collusion between Microsoft and Intel.

Jul 07, 2019
It appears any number less than 10^-13 has this error.

Jul 08, 2019
I have done some more investigating,
It appears the Intel floating point can't subtract numbers less than 10^-13
using powershell all of the below calcs return an incorrect result ....
100-[math]::pow(10,-15) etc all the way to 10^-256
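[Editor's note: the behavior described in this thread is consistent with ordinary IEEE 754 double-precision rounding rather than a vendor-specific defect. A double carries roughly 15-16 significant decimal digits, so near 100 the spacing between representable values is 2**-46 (about 1.4e-14), and subtracting anything smaller than half that spacing rounds back to 100 exactly. A minimal Python sketch, assuming CPython's standard `float` (an IEEE 754 double):]

```python
import math

# Spacing between adjacent doubles at 100.0: 2**-46, about 1.42e-14.
print(math.ulp(100.0))

# 1e-15 is below half the spacing, so the subtraction rounds back to 100.
print(100.0 - 1e-15 == 100.0)   # True

# 1e-13 is several ulps wide, so the difference is representable.
print(100.0 - 1e-13 == 100.0)   # False
```

[The same rounding applies in PowerShell, Excel, and the Windows calculator in scientific mode, since all use 64-bit (or wider) binary floating point; the threshold near 100 sits around 7e-15, matching the ~10^-13 cutoff reported above.]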
