Robotic systems are set to be introduced in a wide range of real-world settings, from roads and malls to offices, airports, and healthcare facilities. To perform consistently well in these environments, however, robots should be able to cope with uncertainty, adapting to unexpected changes in their surroundings while ensuring the safety of nearby humans.

Robotic systems that can autonomously adapt to uncertainty in situations where humans could be endangered are referred to as "safety-critical self-adaptive" systems. While many roboticists have been trying to develop these systems and improve their performance, a clear and general theoretical framework that defines them is still lacking.

Researchers at the University of Victoria in Canada recently carried out a study aimed at clearly delineating the notion of a "safety-critical self-adaptive system." Their paper, pre-published on arXiv, provides a valuable framework that could be used to classify these systems and distinguish them from other robotic solutions.

"Self-adaptive systems have been studied extensively," Simon Diemert and Jens Weber wrote in their paper. "This paper proposes a definition of a safety-critical self-adaptive system and then describes a for classifying into different types based on their impact on the system's safety and the system's safety case."

The key objective of the work by Diemert and Weber was to formalize the idea of "safety-critical self-adaptive systems," so that it can be better understood by roboticists. To do this, the researchers first proposed some clear definitions for two terms, namely "safety-critical self-adaptive system" and "safe adaptation."

According to their definition, to qualify as a safety-critical self-adaptive system, a robot should meet three key criteria. First, it should satisfy Weyns' external principle of adaptation, meaning that it should be able to autonomously handle changes and uncertainty arising from its environment, from the system itself, and from its goals.

To be safety-critical and self-adaptive, the system should also satisfy Weyns' internal principle of adaptation, which suggests that it should internally evolve and adjust its behavior in response to the changes it experiences. To do this, it should comprise two parts: a managed system and a managing system.

In this framework, the managed system performs the primary system functions, while the managing system adapts the managed system over time. Finally, the managed system should perform safety-critical functions, that is, actions that, if carried out incorrectly, could lead to incidents or other adverse events.
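To make the managed/managing split concrete, here is a minimal Python sketch loosely inspired by the paper's water-heating running example; all class names, variables, and dynamics are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not from the paper): a managed system performing the
# primary function and a managing system that adapts its configuration.

class ManagedSystem:
    """Performs the primary, safety-critical function: heating water."""

    def __init__(self):
        self.setpoint_c = 60.0      # configuration the managing system may change
        self.temperature_c = 20.0   # current water temperature

    def step(self):
        # Simplified plant dynamics: temperature drifts toward the setpoint.
        self.temperature_c += 0.1 * (self.setpoint_c - self.temperature_c)


class ManagingSystem:
    """Monitors the environment and adapts the managed system over time."""

    def __init__(self, managed):
        self.managed = managed

    def adapt(self, demand_high):
        # Adaptation = changing the managed system's configuration in
        # response to an observed change (here, hot-water demand).
        self.managed.setpoint_c = 80.0 if demand_high else 60.0


heater = ManagedSystem()
manager = ManagingSystem(heater)
for t in range(100):
    manager.adapt(demand_high=(t > 50))  # the environment changes mid-run
    heater.step()
print(round(heater.temperature_c, 1))
```

In the paper's terms, the loop above reflects the internal principle: the system's adjustment is realized by a managing part acting on the configuration of a managed part.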

The researchers' definition of "safe adaptation," on the other hand, rests on two key ideas: that the managed component of a robotic system is the one responsible for any accidents in the environment, and that the managing component is responsible for any changes to the managed system's configuration. Based on these notions, Diemert and Weber define "safe adaptation" as follows:

"A safe adaptation option is an adaptation option that, when applied to the managed system, does not result in, or contribute to, the managed system reaching a hazardous state," the researchers wrote in their paper. "A safe adaptation action is an adaptation action that, while being executed, does not result in or contribute to the occurrence of a hazard. It follows that a safe adaptation is one where all adaptation options and adaptation actions are safe."

To better delineate the meaning of "safe adaptation," and what distinguishes it from other forms of adaptation, Diemert and Weber also devised a new taxonomy for classifying the different adaptations performed by self-adaptive systems. This taxonomy focuses specifically on the safety implications and hazards associated with different adaptations.

"The taxonomy expresses criteria for classification and then describes specific criteria that the safety case for a self-adaptive system must satisfy, depending on the type of adaptations performed," Diemert and Weber wrote in their paper. "Each type in the taxonomy is illustrated using the example of a safety-critical self-adaptive water heating system."

The taxonomy delineated by Diemert and Weber classifies the adaptations performed by self-adaptive robotic or computational systems into four broad categories, referred to as type 0 (non-interference), type I (static assurance), type II (constrained assurance), and type III (dynamic assurance). Each of these adaptation categories is associated with specific rules and characteristics.
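As a compact summary, the four types might be encoded as follows; the one-line glosses are a hedged paraphrase of how each type relates to the system's safety case, not the paper's exact obligations.

```python
# Assumed encoding of the four adaptation types; the comments paraphrase
# each type's relationship to the safety case and are not quoted from the paper.
from enum import Enum

class AdaptationType(Enum):
    TYPE_0 = "non-interference"        # adaptations cannot affect safe operation
    TYPE_I = "static assurance"        # every adaptation option is assured before deployment
    TYPE_II = "constrained assurance"  # runtime options are confined to a pre-assured space
    TYPE_III = "dynamic assurance"     # safety assurance is evaluated or updated at runtime
```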

The recent work by this team of researchers could guide future studies focusing on the development of self-adaptive systems designed to operate in safety-critical conditions. Ultimately, it could be used to gain a better understanding of the potential of these systems for different real-world implementations.

"The next step for this line of inquiry is to validate the proposed taxonomy, to demonstrate that it is capable of classifying all types of safety-critical self-adaptive systems and that the obligations imposed by the taxonomy are appropriate using a combination of systematic literature reviews and ," Diemert and Weber conclude in their paper.

More information: Simon Diemert, Jens H. Weber, Safety-critical adaptation in self-adaptive systems. arXiv:2210.00095v1 [cs.SE], arxiv.org/abs/2210.00095