Page 9: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Hi Tech & Innovation

Biological intelligence as the basis for new AI systems

In a new research project led by the Central Institute of Mental Health (CIMH) in Mannheim, scientists are investigating how insights into learning processes in animal brains can be used to make artificial intelligence (AI) ...

Software

AI tool created to help sight-impaired programmers

A University of Texas at Dallas researcher and his collaborators have developed an artificial intelligence (AI)-assisted tool that makes it possible for visually impaired computer programmers to create, edit and verify 3D ...

Computer Sciences

Visualizing the internal structure behind AI decision-making

Although deep learning–based image recognition technology is advancing rapidly, it remains difficult to clearly explain the internal criteria AI uses to observe and judge images. In particular, technologies that analyze ...
