To the future: Finding the moral common ground in human-robot relations

AI robots are still not sophisticated enough to understand humans or the complexity of social situations, says UNSW's Dr. Massimiliano Cappuccio.

"So we need to think about how we interact with social and to instead help us become more aware of our own behavior, limitations, vices or bad habits," says Dr. Cappuccio, the Deputy Director of Values in Defense and Security Technology at UNSW Canberra.

"And this can be in the areas of greater self-discipline and but also in learning virtues such as generosity and empathy."

Dr. Cappuccio is the lead author of Can Robots Make Us Better Humans? Virtuous Robotics and the Good Life with Artificial Agents, which was written in collaboration with UNSW Art & Design's Dr. Eduardo Sandoval and Professor Mari Velonaki, along with academics from the University of Western Sydney and Chalmers University of Technology in Sweden.

It is also the first paper in a collection co-edited by Dr. Cappuccio, Dr. Sandoval and Prof. Velonaki, published in the International Journal of Social Robotics as a special issue titled Virtuous Robotics: Artificial Agents and the Good Life.

An ethical approach

The paper argues that because social robots are able to shape human beliefs and emotions, people need to take a more ethical approach to their design and to our interactions with them.

Most roboticists try to do this using only deontological or consequentialist principles. Deontological ethics judges whether an action or decision is good based on the moral obligations that action or decision fulfills. Consequentialism judges whether an action or decision is good based on its outcome, and is concerned with securing the greatest benefit for the greatest number of people.

But Dr. Cappuccio says we need to rely on virtue ethics: "an ancient philosophy of self-betterment and human flourishing."

"Instead of trying to build robots that imitate our ethical decision-making processes, we should consider our own interactions with robots as an opportunity of human betterment and moral learning," he says.

Dr. Cappuccio says Virtuous Robotics theory emphasizes the responsibility of the human in every morally sensitive form of engagement with robots, such as with the AI humanoid Pepper.

Robots are "not always intelligent enough to make the best ethical choice on your behalf but can help you make the best ethical choice by reminding you, creating awareness, coaching, or by encouraging you," Dr. Cappuccio says.

Generosity, courage, honor, compassion and integrity are examples of universal virtues that researchers in the paper hope to encourage in humans through their use of social robots.

Dr. Cappuccio says AI technology in Virtuous Robotics theory acts like a mirror on human behavior and encourages the user to be more mindful. "It puts you in front of yourself and asks you to become aware of what you are doing," he says.

It is in these instances, says Dr. Sandoval, a robotics specialist from UNSW Art & Design, that Virtuous Robotics looks at how we can use AI technology to make us better as human beings "in self-improvement, education and in creating good habits, with the ultimate goal being about us becoming better people."

Kasper the friendly robot

An example of a Virtuous Robot is Kasper (Kinesics and Synchronization in Personal Assistant Robot).

Kasper is a child-size humanoid that UNSW acquired following a collaboration with the University of Hertfordshire, UK, where the companion robot was first built in 2005.

The robot is designed to assist children with autism and learning difficulties.

Professor Mari Velonaki, founder and director of UNSW's world-class Creative Robotics Lab, says Kasper teaches the children socially acceptable behaviors, for example by saying "that hurts" when the child hits it, or "that feels good" when the child touches the robot in a gentle way.

"Kasper does not replace the therapist, the social network, the family, or school," Prof. Velonaki says. "It is just a robot to help them learn social behaviors, to play, and to experiment with."

Multidisciplinary approach

Prof. Velonaki agrees with Dr. Cappuccio's approach to machine ethics, and as someone who has been building robots for at least 20 years, she says the industry needs to take this multidisciplinary approach.

"It's not complementary, it is essential. And it has to be there from the very beginning when designing a system," she says. "You need to have people who are doing interactive design, ethicists, people from the social sciences, artificial intelligence, and mechatronics. Because we're not talking about systems that are isolated in a factory manufacturing cars, we're talking about systems that in the near future will be implemented within a social structure."

Prof. Velonaki says we need to start thinking about some of these existential questions now as AI technology advances. "Because maybe 30 years from now, systems might be a lot more biotech, combining the biological and the technical."

Improving human habits with social robots

In general, Dr. Cappuccio says, Virtuous Robotics applies to all fields of human development and human flourishing.

"Whenever there are moral skills involved, for example such as having greater self-awareness over such vices as smoking, alcohol or diet, virtuous robotics can be helpful to anybody wanting to increase their control over their behaviors," he says.

And social robots are more successful in cultivating virtue in humans than mobile phone apps, says Dr. Sandoval, who conducted a self-experiment with exercise and meditation apps on his mobile phone.

"So far, human interaction is the most effective way to cultivate virtue," Dr. Sandoval says. "But probably the second-best way to cultivate virtue is with which have an embodiment, and don't rely on screens to perform the interaction with people."

More information: Massimiliano L. Cappuccio et al., Can Robots Make Us Better Humans?, International Journal of Social Robotics (2020). DOI: 10.1007/s12369-020-00700-6

Citation: To the future: Finding the moral common ground in human-robot relations (2020, November 11) retrieved 18 April 2024 from https://techxplore.com/news/2020-11-future-moral-common-ground-human-robot.html
