Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Machine learning & AI

AI helps UK woman rediscover lost voice after 25 years

A British woman with motor neuron disease who lost her ability to speak is talking in her own voice once again, thanks to artificial intelligence and a barely audible eight-second clip from an old home video.

Machine learning & AI

Researchers optimize AI systems for science

Using services like ChatGPT or Microsoft Copilot can sometimes seem like magic, to the point that it is easy to forget the advanced science running behind the scenes of any artificial intelligence (AI) system. Like any ...
