
Toward affective computing that works for everyone

Some groups are highly underrepresented. Credit: arXiv (2023). DOI: 10.48550/arxiv.2309.10780

Diversity and inclusion are critical to the responsible development of artificial intelligence (AI) technologies, including affective computing. Affective computing, which focuses on recognizing, interpreting, and responding to human emotions, could revolutionize domains such as health care, education, and human-machine interaction. Capturing subjective states by technical means is difficult, however, and errors occur, as seen when lie detectors fail or gender classification systems misgender users.

If used in downstream decision-making processes, such inferences could have serious consequences for people, with impacts that vary depending on the context of an application: flagging innocent people as potential criminals at border control, for instance, or detrimentally affecting individuals in other sensitive settings.

Following this line of thought, Tessa Verhoef from the Creative Intelligence Lab at Leiden University and Eduard Fosch-Villaronga from eLaw—Center for Law and Digital Technologies have written an article, posted on the arXiv preprint server, highlighting that systems trained on the most widely used datasets may not work equally well for everyone. Because those datasets derive from limited samples that do not fully represent societal diversity, the resulting systems are likely to carry racial biases, biases against users with (mental) health conditions, and age biases.

Verhoef and Fosch-Villaronga presented the paper, entitled "Towards affective computing that works for everyone," at the Affective Computing + Intelligent Interaction (ACII '23) conference, held at the Massachusetts Institute of Technology (MIT) Media Lab Sept. 10–13. The annual conference of the Association for the Advancement of Affective Computing (AAAC) is the premier international forum for research on affective and multimodal human-machine interaction and systems.

In their paper, they argue that missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups.
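To make that claim concrete, here is a minimal sketch of how evaluating an emotion classifier separately per demographic group surfaces the kind of accuracy gap the authors describe. The emotion labels, predictions, and group assignments are invented for illustration and are not drawn from the paper:

```python
# Hypothetical illustration: per-group evaluation of an emotion classifier.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return classification accuracy computed separately per demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy outputs for two hypothetical demographic groups, A and B.
y_true = ["happy", "sad", "angry", "happy", "sad", "angry"]
y_pred = ["happy", "sad", "happy", "happy", "angry", "sad"]
groups = ["A", "A", "A", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)                    # here: A ≈ 0.67, B ≈ 0.33
print(f"accuracy gap: {gap:.2f}")   # a large gap signals a fairness problem
```

An aggregate accuracy score would hide exactly this disparity, which is why per-group reporting matters when a training set underrepresents some groups.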

The researchers conducted a literature review showing how affective computing systems may work differently for different groups due to, for instance, mental health conditions that affect facial expression and speech, or age-related changes in facial appearance and health. They then analyzed existing affective computing datasets and found a disconcerting lack of diversity regarding race, sex/gender, age, and (mental) health representation.
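A dataset audit of the kind described above can start with something as simple as tallying how demographic attributes are distributed across a corpus's metadata. The records and attribute names below are hypothetical, not taken from the datasets the authors reviewed:

```python
# Minimal sketch of a demographic audit over (invented) dataset metadata.
from collections import Counter

records = [
    {"race": "white", "sex": "f", "age_band": "18-29"},
    {"race": "white", "sex": "m", "age_band": "18-29"},
    {"race": "white", "sex": "f", "age_band": "30-44"},
    {"race": "asian", "sex": "m", "age_band": "18-29"},
]

for attribute in ("race", "sex", "age_band"):
    counts = Counter(r[attribute] for r in records)
    shares = {value: count / len(records) for value, count in counts.items()}
    print(attribute, shares)
# Heavily skewed shares (e.g., one race at 75%) flag underrepresentation.
```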

By emphasizing the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, the researchers provide recommendations and call for greater attention to inclusivity and consideration of societal consequences in affective computing research to promote ethical and accurate outcomes in this emerging field.
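One way such standardized documentation could look in practice is a machine-readable datasheet recorded alongside each dataset release. The field names below are assumptions for illustration, not a schema proposed in the paper:

```python
# A possible shape for a standardized demographic "datasheet" (hypothetical).
from dataclasses import dataclass, field

@dataclass
class DemographicDatasheet:
    dataset_name: str
    n_subjects: int
    race_distribution: dict = field(default_factory=dict)        # label -> share
    sex_gender_distribution: dict = field(default_factory=dict)  # label -> share
    age_distribution: dict = field(default_factory=dict)         # band -> share
    health_conditions_reported: bool = False
    sampling_notes: str = ""

sheet = DemographicDatasheet(
    dataset_name="example-affect-corpus",   # invented name
    n_subjects=120,
    race_distribution={"white": 0.8, "asian": 0.1, "black": 0.1},
    sex_gender_distribution={"f": 0.5, "m": 0.5},
    age_distribution={"18-29": 0.9, "30+": 0.1},
    health_conditions_reported=False,
    sampling_notes="Convenience sample of university students.",
)
print(sheet)
```

Recording these shares in a consistent, machine-readable form would let researchers compare datasets and spot representation gaps before training on them.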

More information: Tessa Verhoef et al, Towards affective computing that works for everyone, arXiv (2023). DOI: 10.48550/arxiv.2309.10780

Journal information: arXiv
Provided by Leiden University
