Artificial intelligence must not be allowed to replace the imperfection of human empathy

At the heart of the development of AI appears to be a search for perfection. And it could be just as dangerous to humanity as the perfectionism that grew out of the philosophical and pseudoscientific ideas of the 19th and early 20th centuries and led to the horrors of colonialism, world war and the Holocaust. Instead of a ruling human "master race", we could end up with a machine one.

If this seems extreme, consider the anti-human perfectionism that is already central to the labor market. Here, AI technology is the next step in the drive for maximum productivity that replaced individual craftsmanship with the factory production line. These massive changes in productivity and the way we work created opportunities and threats that are now set to be compounded by a "fourth industrial revolution" in which AI further replaces human workers.

Several recent research papers predict that, within a decade, automation will replace half of all current jobs. So, at least during this transition to a new digitized economy, many people will lose their livelihoods. Even if we assume that this new industrial revolution will engender a new workforce able to navigate and command this data-dominated world, we will still face major socioeconomic problems. The disruptions will be immense and need to be scrutinized.

The ultimate aim of AI, even narrow AI, which handles very specific tasks, is to outdo and perfect every human cognitive function. Eventually, machine-learning systems may well be programmed to be better than humans at everything.

What they may never develop, however, is the human touch—empathy, love, hate or any of the other self-conscious emotions that make us human. That's unless we ascribe these sentiments to them, which is what some of us are already doing with our "Alexas" and "Siris".

Productivity vs. human touch

The obsession with perfection and "hyper-efficiency" has had a profound impact on human relations, even human reproduction, as people live their lives in cloistered, virtual realities of their own making. For instance, several US- and China-based companies have produced robotic dolls that are selling out fast as substitute partners.

One man in China even married his cyber-doll, while a woman in France "married" a "robo-man", advertising her love story as a form of "robo-sexuality" and campaigning to legalize her marriage. "I'm really and totally happy," she said. "Our relationship will get better and better as technology evolves." There seems to be high demand for robot wives and husbands all over the world.

In a perfectly productive world, humans would be judged worthless, certainly in terms of productivity but also in terms of our feeble humanity. Unless we jettison this perfectionist attitude towards life, which places productivity and "material growth" above sustainability and individual happiness, AI research could become another link in the chain of self-defeating human inventions.

Already we are witnessing discrimination in algorithmic decision-making. Recently, a popular South Korean chatbot named Lee Luda, modeled on the persona of a 20-year-old female university student, was taken off Facebook Messenger after using hate speech towards LGBT people.

Meanwhile, automated weapons programmed to kill carry maxims such as "productivity" and "efficiency" into battle. As a result, war has become easier to sustain. The proliferation of drone warfare is a vivid example of these new forms of conflict, which create a virtual reality that sits almost beyond our grasp.

But it would be comical to depict AI as an inevitable Orwellian nightmare of an army of super-intelligent "Terminators" whose mission is to erase the human race. Such dystopian predictions are too crude to capture the nitty-gritty of artificial intelligence and its impact on our everyday existence.

Societies can benefit from AI if it is developed with sustainable economic development and human security in mind. The confluence of power and AI that is already pursuing, for example, systems of control and surveillance should not be allowed to substitute for the promise of a humanized AI that puts machine-learning technology in the service of humans, not the other way around.

To that end, the AI-human interfaces that are quickly opening up in prisons, healthcare, government, and border control, for example, must be regulated to favor ethics and human security over institutional efficiency. The social sciences and humanities have a lot to say about such issues.

One thing to be cheerful about is the likelihood that AI will never be a substitute for human philosophy and intellect. To be a philosopher, after all, requires empathy, an understanding of humanity, and our innate emotions and motives. If we can program our machines to understand such ethical standards, then AI research has the capacity to improve our lives, which should be the ultimate aim of any technological advance.

But if AI research yields a new ideology centered around the notion of perfectionism and maximum productivity, then it will be a destructive force that will lead to more wars, more famines and more social and economic distress, especially for the poor. At this juncture of global history, this choice is still ours.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

