Report from AI watchdogs rips emotion tech

Facial recognition. Credit: CC0 Public Domain

The call to ban affect recognition from important decisions sounds like an angry cry...but what does it all mean? Talk is heating up about artificial intelligence's impact on our daily lives, in ways that cause as much worry as wonder.

"Affect recognition." In tech parlance represents a subset of facial recognition. Affect recognition is all about emotional AI, and it is about artificial intelligence put to use to analyze expressions with the aim of identifying .

Interpreting the expressions on your face? How sound are those interpretations?

At a New York University research center, a report reminds its readers that this is not the best way to understand how people feel. The report's view is that, plain and simple, emotion-detecting AI should not be readily assumed capable of making important calls in situations that can have a serious impact on people: in recruitment, in monitoring students in the classroom, in customer service and, last but hardly least, in criminal justice.

There was a need to scrutinize why entities are using faulty technology to make assessments about character on the basis of physical appearance in the first place. This is particularly concerning in contexts such as employment, education, and criminal justice.

The AI Now Institute at New York University, which focuses on the social implications of artificial intelligence, issued the AI Now 2019 Report. The institute holds that AI systems should have appropriate safeguards or accountability structures in place, and it raises concerns where that is not the case.

The 2019 report looks at the current business use of expression analysis in decision-making.

Reuters pointed out that this was AI Now's fourth annual report on AI tools. The assessment examines risks of potentially harmful AI technology and its human impact.

The institute's report said affect recognition has been "a particular focus of growing concern in 2019—not only because it can encode biases, but because it lacks any solid scientific foundation to ensure accurate or even valid results."

The report had strong wording: "Regulators should ban the use of affect recognition in important decisions that impact people's lives and access to opportunities. Until then, AI companies should stop deploying it."

The authors are not just indulging in personal opinion; they reviewed research.

"Given the contested scientific foundations of affect recognition technology—a subclass of facial recognition that claims to detect things such as personality, emotions, mental health, and other interior states—it should not be allowed to play a role in important decisions about human lives, such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school."

The report went even further and said that governments should "specifically prohibit use of affect recognition in high-stakes decision-making processes."

The Verge's James Vincent would not be surprised by this finding. Back in July, he reported on research that looked at the failings of technology to accurately read emotions through facial expressions; simply put, you cannot trust AI to do so. He quoted a professor of psychology at Northeastern University: "Companies can say whatever they want, but the data are clear."

Vincent reported back then on a review of the literature commissioned by the Association for Psychological Science, in which five scientists scrutinized the evidence: "Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements." Vincent said, "It took them two years to examine the data, with the review looking at more than 1,000 different studies."

Since emotions are expressed in a huge variety of ways, it is difficult to reliably infer how someone feels from a simple set of facial movements. The authors said that trying to do so may well be asking a question that is fundamentally wrong. Efforts to read out people's internal states from facial movements without considering various aspects of context were at best incomplete and at worst lacked validity.

While the report called for a ban, it might be fair to say the concern is with the naive level of confidence placed in a technology still in need of improvement. The field of emotional analysis needs to do better.

According to The Verge article, a professor of psychology at Northeastern University believed that perhaps the most important takeaway from the review was that "we need to think about emotions in a more complex fashion."

Leo Kelton, BBC News, meanwhile, relayed the viewpoint of AI Now co-founder Prof. Kate Crawford, who said studies had demonstrated considerable variability in terms of the number of emotional states and the way that people expressed them.

Reuters reported on its conference call ahead of the report's release: "AI Now founders Kate Crawford and Meredith Whittaker said that damaging uses of AI are multiplying despite broad consensus on ethical principles because there are no consequences for violating them." The current report said that AI-enabled affect recognition continued to be deployed at scale across environments from classrooms to job interviews, informing determinations about who is productive, often without people's knowledge.

The AI Now report carried specific examples of companies doing business in emotion-detecting products. One such company sells video-analytics cameras that classify faces as showing anger, fear, and sadness; the cameras are marketed to casinos, restaurants, retail merchants, real estate brokers, and the hospitality industry.

Another example was a company with AI-driven video-based tools to recommend which candidates a company should interview. The algorithms were designed to detect emotional engagement in applicants' micro-expressions.

The report included a company creating headbands that purport to detect and quantify students' attention levels through brain-activity detection. (The AI report did not fail to add that studies "outline significant risks associated with the deployment of emotional AI in the classroom.")

More information: Report: ainowinstitute.org/AI_Now_2019_Report.pdf

Lisa Feldman Barrett et al. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements, Psychological Science in the Public Interest (2019). DOI: 10.1177/1529100619832930

© 2019 Science X Network

