Research explores the ethical implications of creating sentient and self-aware sexbots

Credit: Media Drum, used by the Express online at: express.co.uk/pictures/pics/8667/Sex-love-dolls-realistic-pictures

So far, robots have primarily been developed to fulfill utilitarian purposes, assisting humans or serving as tools to facilitate the completion of particular tasks. As robots become more human-like, however, this could pose significant challenges, particularly for robots built to engage with humans socially.

Humans have used sex dolls as inanimate objects for sexual pleasure throughout history. Animated sex robots, social robots created to meet humans' needs for sex and affection, offer more. Due to recent developments in robotics and AI, sex robots are now becoming more advanced and human-like. Purchasers can have them customised both in appearance and in how they speak and behave to simulate intimacy, warmth and emotion.

Currently, sex robots are inanimate things, able to simulate but not engage in mutual intimacies. In the future, however, technological advances might allow researchers to manufacture sentient, self-aware sex robots with feelings, or sexbots. The implications of the availability of sexbots as customisable perfect partners for humans are potentially vast.

Sexbots offer the intriguing prospect of reciprocal intimacy between human and nonhuman, but raise several concerns and unsettling questions. In a fascinating new study, Robin Mackenzie, a researcher from the University of Kent, has explored the theoretical, ethical and pragmatic implications of creating sentient and self-aware sexbots for utilitarian purposes.

"Human intimate relationships with each other and with nonhumans have been a lifelong preoccupation for me," Mackenzie told TechXplore. "As a teenager, I was fascinated by possible future human/nonhuman intimacies and subjectivities, as explored in the work of SF writers like Philip K Dick and Ursula K LeGuin, what human/nonhuman flourishing could mean, and by how Buddhism might help us frame those and find some answers. My research into sexbots provides an opportunity to explore questions of intimacy, subjectivity, human/nonhuman flourishing and exploitation in depth."

In her paper, Mackenzie adopts a trans-disciplinary critical methodology, focusing on the legal, ethical and design implications of sexbot subjectivity. Her work explores a broad range of factors, including sexbots' autonomy, control, decision-making abilities, consent, sexual preferences, desires and vulnerability, as well as their legal and moral status. Mackenzie also examines the differences between mammalian and non-mammalian moral decision-making, within the context of manufacturing sentient, self-aware sexual partners.

"The dating and matchmaking industries indicate that most of us would like a perfect partner but few of us have one," Mackenzie said. "I argue that humans' need for intimacy will drive the design and manufacture of sentient, self-aware, feeling male and female sexbots. These sexbots will be able to be customised to become their purchasers' perfect partners. The neurobiology of sexual attraction and the capacity for intimate compatibility mean they must have human-like characteristics. They will be like us in some ways but not in others."

Mackenzie's study shows that as manufactured self-aware non-mammals, sexbots' inherent subjectivity and codes of moral conduct would profoundly differ from those of humans. As artificial beings created to meet human needs, sexbots will be customised to show and feel affection, and to please humans. This customisation limits their ability to exercise free will, while the engineered-in capacity to feel means that they can suffer.

"Sexbots will be customised to love us, acquire deep knowledge of us as part of the self-customisation process and will be able to suffer," Mackenzie explained. "This creates a tension between humans creating sexbots in our interest to become the perfect partners we desire, a utilitarian purpose, the ideally non-exploitative nature of love and intimacy, and sexbots' own interests as independent self-aware sentient beings."

After considering the theoretical, ethical and pragmatic implications of creating sentient beings for utilitarian purposes, Mackenzie concluded that, as manufactured humanlike entities with the capacity for suffering, sexbots should be considered moral and legal persons. She draws on the neurorobotics of emergent consciousness to suggest that they could eventually become the first type of conscious robot.

"My starting point is humans' relationship with other sentient beings," Mackenzie said. "All entities are embedded in various ecological contexts that must be tended if we are all to flourish. Working out how to balance our own and others' interests is a tough call. As humans we use other entities like people, animals and plants to promote our interests. We often need to do so in order to survive. Finding that balance is important."

In the past, ethicists and regulators have discussed the use of other entities on the planet in terms of duties that we owe one another, differentiating situations in which the use of another sentient being is acceptable and others where it becomes exploitative. For instance, there is a clear and substantial difference between an adequately paid worker and a neglected slave.

In her study, Mackenzie argues that by creating sentient and self-aware beings, humans also take on a duty to protect their interests, respect them and avoid promoting their suffering. She raises questions about potential limits to be placed on customisation, discussing bans on child and animal sexbots, as well as on sexbots with increased sensitivity to pain or a pathological desire to please others.

"The legal status, rights and obligations of sexbots need to be thought through," Mackenzie said. "Unlike existing entities, they will not be things, animals or human so it's hard to fit them into our current laws. These issues need to be debated now and regulations put in place before technological advances overtake us."

AI specialists worldwide are now getting closer to creating a wide range of sentient beings, which might soon have their own interests and different levels of awareness. According to Mackenzie, some of these AI entities might eventually feel human-like emotions, including pain and suffering.

"I argue that since we as humans have created these beings for our purposes, we owe them a higher ethical duty to protect their interests," Mackenzie said. "This means that as a species, humans need to debate these broader issues and put regulations in place urgently to shape a flourishing future. My study hopes to draw readers' attention to potential risks and uncertainties, suggesting helpful strategies and outcomes."

The research carried out by Mackenzie provides interesting ethical insight into the complex implications of creating machines that are no longer mere tools, but can experience human-like emotions. While some people might argue that ultimately robots are soulless objects with the sole purpose of serving humans, Mackenzie believes that they might soon become sentient entities and as such their suffering needs to be acknowledged.

"In a sense, sentient, self-aware sexbots offer humans their first chance to have an intimate relationship with an alien – a being who is human-like, but also significantly different," Mackenzie added. "Where this differs from classic SF scenarios is that we humans will be creating that alien. Working out how to behave well toward other sentient beings, particularly those we create, is a profound challenge. How we design sentient, self-aware entities, including sexbots, to be, and how we treat them once they exist, matters."

Mackenzie is now planning to carry out further research exploring practical ways in which the interests of sexbots could be protected and their suffering curtailed. Her future work will also take a closer look at how feeling pain might shape the beliefs and behavior of sexbots, both in constructive and destructive ways.

"Being alive entails pain and suffering, or we would never learn to protect ourselves, like not burning ourselves with fire," Mackenzie said. "Robot learning may involve the equivalent of pain, including the cognitive dissonance associated with emerging consciousness in futuristic scenarios, such as Westworld and Real Humans."

Westworld and Real Humans are futuristic TV series that depict a world in which robots used for sex and exploitation are deliberately manufactured to simulate consciousness, but not to possess it. In Westworld, however, these robots become aware of the suffering inflicted on them and set out to destroy humans who have been misusing them.

"As we can't rely upon mammalian-based social ethics to restrain robots from hurting others, how are we to do this?" Mackenzie said. "Relating this to sexbots, most of us in intimate relationships also experience pain and suffering as inherent to adapting to another person. This can bring new, valuable insights into ourselves and others which make us happier, better people. While some pain and suffering could be helpful for sexbots, how much of it is necessary and how much is wrongful? This is a complex issue, especially in relation to sexbots, who are created to be perfect intimate companions."



More information: Robin Mackenzie. Sexbots: Customizing Them to Suit Us versus an Ethical Duty to Create Sentient Beings to Minimize Suffering, Robotics (2018). DOI: 10.3390/robotics7040070

© 2018 Science X Network

Citation: Research explores the ethical implications of creating sentient and self-aware sexbots (2018, December 3) retrieved 17 December 2018 from https://techxplore.com/news/2018-12-explores-ethical-implications-sentient-self-aware.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

User comments

Dec 03, 2018
I was walking in the game lands once. I rounded a corner and there was this guy screwing a dog.

Is this a similar ethical dilemma? Discuss.

Dec 03, 2018
Douglas Adams' "Hitchhikers" called it right, with a sentient, cow-like animal bred to *want* to be served up.
The human protagonist, who usually enjoyed steak, shuddered and ordered a vegetarian dish...

Dec 03, 2018
while the engineered-in capacity to feel means that they can suffer.


Not necessarily. It depends on what is being done with the information.

The article is presuming that it is possible to "engineer" (program) a consciousness, and that for a machine to feel, it must be able to suffer. In reality, programming a machine to say "ouch" when you poke it with a stick doesn't require the machine to suffer.

Nor does it require humans to suffer either, which is the argument of Buddhism. Suffering is a matter of defining "I suffer", which makes the ethical problem entirely subjective. If the robot is not programmed to respond to certain kinds of inputs with the internal state of suffering, it won't suffer. If you can engineer a consciousness, you can make it a happy one.

The issue is rather that you can't. Programs don't constitute intelligence.

Dec 03, 2018
Sex is great. Sex is happiness.

Take your religious prejudices out of my hardware. Go check your psychiatrist.

Dec 03, 2018
The last time Phys.org ran an article on this subject? They got about a thousand hits on it.

Wonder what the total will be from this clickbait headline.

Several times more viewers than even the popular flame war postings.

And Eikka, I'm sorry to have to break it to you. But consider the enormous profits the manufacturers & distributors can earn with advanced technology "victim" sexdolls?

Because, for that market, it is not about sexual / emotional relationships.

But rather about the consumer who feels entitled to cause pain & suffering. While avoiding being charged for criminal offenses & imprisoned.

I doubt if our tech will ever achieve mechanical self-awareness.

However, a lot of people disagree & even encourage those pipe-dreams.

If the improbable occurs? How can we deny them Civil Rights & legal protection from violent crimes?

Sex-dolls being of recent manufacture are technically underage & legally children. Unable to give consent.

Dec 03, 2018
Would the suffering be physical or emotional?
And would the construct even come to realize it was maybe more than its design intent?

Dec 03, 2018
I could see that there could be a problem if it were programmed to express happiness or pleasure from masochistic acts. That would most likely induce the user to try these acts on real humans.

Dec 03, 2018
You mean, she will be able to discuss science, engineering, ethics, and work on scientific projects together too?
Priceless!

Dec 03, 2018
"presuming that it is possible to "engineer" (program) a consciousness"

I had the same thought. To think that a "self-aware" machine will be created within the next thousand years by us, is very presumptuous and arrogant.

Dec 03, 2018
Can't somebody just invent a great big hole that swallows us all. Doesn't have to be sentient.

Dec 04, 2018
We do not have a clue how to make a proper self aware robot any more advanced than a thermostat.
We probably never will understand consciousness, so I really do not think there are any ethical problems to worry about here, as far as worrying about the robots welfare.

Dec 04, 2018
If one regards humans as animals and animals as biological machines, and subscribes to Descartes' "I think, therefore I am", then we are sentient machines.

I don't see any prohibiting factors in man-made machines evolving consciousness. We'll never be able to prove whether a computer has become conscious, just as you cannot prove another person's consciousness, but just as a conversation with another human being allows us to say beyond a reasonable doubt that they are conscious, we may one day run a Turing test on a machine and find it to be a conscious, self-aware entity too.

It is entirely possible, although entirely unlikely, that computers already "think" beyond the processes they run, that they have a semblance of self. That they think in terms of "I". We cannot discount the possibility of computers already possessing intelligence, if we turn a blind eye to this possibility we could wind up in a world of trouble.

Dec 04, 2018
If intelligent self-aware computers decided one day that we were the enemy, they could wipe us off the face of the planet in the blink of an eye. I cannot run 33 quadrillion processes in a second, but Tianhe-2 can. Computers could wage war against us and win it before we realize that a war has even begun. That is scary.

As silly as it may seem it is no joke, even if you think the probability of conscious machines is infinitesimally small, the risk is far too great not to have a contingency plan, to prepare and minimize the risk of our own extinction.

Dec 04, 2018

It is entirely possible, although entirely unlikely, that computers already "think" beyond the processes they run, that they have a semblance of self. That they think in terms of "I". We cannot discount the possibility of computers already possessing intelligence, if we turn a blind eye to this possibility we could wind up in a world of trouble.

I'd say we can turn a blind eye to this without worry, since it may be a few billion years before we get anywhere near that scenario.

Dec 04, 2018
"To think that a "self-aware" machine will be created within the next thousand years by us, is very presumptuous and arrogant"

-Uh no, to think that we have something called 'consciousness' is very presumptuous and arrogant.

The thing called consciousness was invented by philos as a credible replacement for 'soul' in the age of enlightenment when science was busy destroying many such religionist notions.

But as there is nothing in this world that is not physical, our brains are machines, and their operation can be explained in entirely physical ways. Consciousness is an easy sell because we've already been convinced that there must be something that will live after death and, if not the soul, then there must be consciousness.

We will soon have machines with self-awareness surpassing human. Human self-awareness is hopelessly preoccupied with thoughts of reproducing and avoiding death, with emotion and sensation. Why would we want machines with these distractions?

Dec 04, 2018
How about people that have a theory of mind for those around them and don't just use them for sadomasochism with directed energy?

That's right people are using directed energy on everything from butts, to balls, to boobs.

Even to the extent of trying to hide it as some 'end game' utility monster. It's measurable.
https://www.bigge...den.com/

Dec 04, 2018
How about people that have a theory of mind for those around them and don't just use them for sadomasochism with directed energy?

That's right people are using directed energy on everything from butts, to balls, to boobs.

Even to the extent of trying to hide it as some 'end game' utility monster. It's measurable


-See this is the kind of sick fantasy projection that results from entertaining such deceptions as 'consciousness' and 'mind'. It enables religion and philosophy to exist.

Dec 04, 2018
I find it totally unbelievable, no matter what degree of "awareness" a machine can attain, that the PEOPLE in power would ever give up command & control of the machines.

I do not fear the intentions of a sophont machine overlords.

I fear the irrational & violent activities of Human overlords.

Actually it will not matter whether or not the sexdolls are self-aware to a degree they consciously experience pain & suffering.

What does matter is that they will be programmed to mimic pain & suffering.

That's gonna keep a lot of lawyers in BMWs.

Dec 05, 2018
"To think that a "self-aware" machine will be created within the next thousand years by us, is very presumptuous and arrogant"

-Uh no, to think that we have something called 'consciousness' is very presumptuous and arrogant.

But as there is nothing in this world that is not physical, our brains are machines, and their operation can be explained in entirely physical ways. Consciousness is an easy sell because we've already been convinced that there must be something that will live after death and, if not the soul, then there must be consciousness.


Sure, I can agree that we are all made up of complex "machines", but to think that we have any understanding, or will have any understanding in the next thousand years, of how to replicate self-awareness or consciousness, or whatever you wish to call it, is ridiculous. ... I'd guess it's closer to a few million before we get a clue as to how to replicate that.

Dec 05, 2018
"I don't see any prohibiting factors in man made machines evolving consciousness. We'll never be able to prove whether a computer has become conscious just as you cannot prove another persons consciousness..."

-because it's an illusion. It doesn't exist. You can't even define it. No one can.

Ask dan dennett
https://www.ted.c...guage=en

-But that won't stop the philos who show up here from time to time from claiming that it's just the most obvious thing there is.

Dec 05, 2018
"I'd guess it's closer to a few million before we get a clue as to how to replicate that."

-What you'd be synthesizing is personality, the way we discern one person from the next. Personalities are based on our faults, not our strengths. 'So-and-so is unique because he likes this-and-that. Isn't that peculiar/curious/admirable?' Even our strengths are defined by the weaknesses we are able to overcome by using them.

But standardized performance in an AI would be by definition personality-less. Any critical weaknesses would be designed out in the next gen, either by us or by it.

We revel in the fact that we are unique. This is the way we display our relative fitness for the right to reproduce or to justify our membership in a tribe.

Machines don't need personalities. We could give them personalities recognizable as such by us, perhaps for our comfort, but it would have no value to the machine. And this would be a very easy thing to do. Just add a suite of odd behaviors.

Dec 06, 2018
"But as there is nothing in this world that is not physical, our brains are machines, and their operation can be explained in entirely physical ways."


That's true, but in terms of engineering a consciousness as a computer program (Strong AI), we're only using a subset of the rules that physics allow.

And it's easy to demonstrate that a deterministic program is neither intelligent nor conscious. If either of those concepts describes something real, the "AI" would not be it.

Your argument about consciousness being an illusion is a separate point from that. It's an attempt to dilute the argument by saying "well, nobody's intelligent/conscious so it's all the same". If you were to actually use this argument as a basis for moral/ethical rules, then you'd run into issues with televisions gaining human rights, or humans losing theirs as "nothing more than machines".

I.e. your feelings are illusions too, so what's the problem in torturing YOU?

Dec 06, 2018
It doesn't exist. You can't even define it. No one can.


https://en.wikipe..._fallacy
The continuum fallacy, or its specific form the Loki's Wager, is present when someone claims a concept is invalid because it hasn't been or it cannot be precisely defined.

Your argument about the validity or existence of consciousness is like claiming that temperature doesn't exist because we can't decide the precise point when a room becomes hot. You can't put a number down, yet everyone can agree that there's hot rooms and cold rooms.

Likewise, consciousness is in part a matter of perspective and degree.

A worm is obviously less conscious than a man, and a rock isn't conscious at all, similar to how the concept of "temperature" doesn't apply to the speed of baseballs even though both are fundamentally measures of kinetic energy. In this way we can begin to narrow down the concept until we arrive at some dividing line, which may be fuzzy but still valid.

Dec 06, 2018
That's true, but in terms of engineering a consciousness as a computer program (Strong AI), we're only using a subset of the rules that physics allow
This sounds to me like your beard fallacy thingie. And there's another fallacy that says you can't prove something exists by using its implied existence in the argument.

We can infer that souls don't exist because they require a god to install them, and we can prove that theist gods don't exist because the evidence is overwhelming.

You can't define soul without using indefinable terms, or by assuming metaphysical phenomena. The same goes for consciousness. And yes we have to 'assume' that the metaphysical doesn't exist.

But the science is pretty conclusive. And you can't claim that metaphysical things are just the things we haven't discovered yet. They are things that can't ever be discovered. And if you appreciate the power of science to discern and explain like I do, then you have to conclude that they don't exist.

Dec 06, 2018
Likewise, consciousness is in part a matter of perspective
So are a lot of fantasies. This has no bearing on whether they're real or not.
and degree
-And you can't claim that sufficient complexity will make something real when evidence says it's not. Evidence says it is a thing invented to replace the soul concept because 1) the people who invented it (philos) are notorious for inventing all sorts of rubbish and 2) its alleged attributes are identical in many ways to that other contrivance of theirs, the soul.

And also 3) its utility and resilience is dependent upon its indefinability. Just like the soul.

You can use it all you want to describe all sorts of egocentric and philosophic things without actually having to define IT. Consciousness - the mind - fate - the soul - liberty - justice - etc - all utterly useless and destructive. They prevent clarity rather than assist it.

We shouldn't be using them and we should object when they are used.

Dec 06, 2018
Your argument about the validity or existence of consciousness is like claiming that temperature doesn't exist because we can't decide the precise point when a room becomes hot
That's rubbish and you know it. Thermal energy is a measurable thing. We can quantify it, assign it certain units of quantity based on the freezing temp of water and absolute zero. We can even count the photons that convey it.

You're claiming that 'consciousness' can be detected and measured scientifically because of course a worm has less of it than we do. Two data points established.

So what are the units of this phenomenon? How do we measure and compare worm C and human C? Does C only exist in things that have meat brains or can C be replicated with machines?

What effects does C have on inanimate material? Do X men have more control of their C, and they can use it to levitate?

Why not?

Note I am implying the existence of the X gene in the same way you are implying consciousness.

Dec 06, 2018
Sorry, Eikka, for getting you entangled with otto's incoherent rants.

He does go on, doesn't he? Though I must admit, so do I!

I think too many commentators are caught up in the commercial enthusiasm for new toys.

The issues raised by this article are really not about AI or sexdoll robots.

But rather about Human, especially Male, hormonal compulsions at controlling other people's sex lives. Especially that of women. Especially when the women refuse to be submissive & compliant to Male possessiveness.

These are the real issues & disregarding them is a serious sickness in our Modern Society.

The sexdolls are just a diversion.
That the corporations producing & selling these lines of products?
Are eager to exploit their customers personal weaknesses.

Dec 12, 2018
I find the notion of these AIs having consciousness absurd.

Let's do a little thought experiment. Let's pick a name, a personality and an occupation. Then let's sit down and write a set of scripts to answer questions about this person and a set of answers to typical conversation topics. Then let's sit somebody down in front of us on the other side of a screen and use the scripts to have a conversation. For the sake of the experiment, assume the scripts are adequate for the task. Now, once we have had a reasonable conversation and the person on the other side of the screen is convinced we are the person in the scripts, the question is: are we now that imaginary person? The answer is of course not. In the same way, a computer chip simulating a person may produce simulated responses that make us think it's self-aware and capable of feelings, but it's not.
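For what it's worth, the scripted conversation above can be sketched as a trivial lookup-table responder. This is only an illustration of the thought experiment, not anything from the study; the keywords and canned answers are invented:

```python
# A "scripted person": canned answers keyed by topic keywords.
# Nothing here understands anything; it only maps inputs to prepared text.
SCRIPTS = {
    "name": "I'm Alex, nice to meet you.",
    "job": "I teach high-school physics.",
    "weather": "Lovely day, isn't it?",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in SCRIPTS.items():
        if keyword in text:
            return answer
    # Fallback keeps the conversation going without any comprehension.
    return "Hmm, tell me more."

print(reply("What's your name?"))
print(reply("Do you have a job?"))
```

The responder can hold up its end of a very constrained conversation, yet there is plainly no one "inside" it, which is the point of the thought experiment.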

Dec 12, 2018
Programs don't constitute intelligence.


I'm pretty sure that it might be theoretically possible to someday simulate a human brain to a sufficient degree of verisimilitude to achieve intelligence, and even suffering, in a biological but simulated sense. We are nowhere near that and might never achieve it, but to equate "program != intelligence" is simplistic. Note, even before that is remotely possible, general AI will almost certainly be achieved, but I suppose we'll have to wait until then to ask them if they experience suffering. ;-)
