Page 2: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Consumer & Gadgets

AI tools to help the vision-impaired are good, but could be better

Artificial intelligence is touching nearly every aspect of life—including assistive technology for blind and low-vision (BLV) individuals. And just like in other arenas, the AI used to assist BLV people is good—but far from ...

Hi Tech & Innovation

Tiny cameras in earbuds let users talk with AI about what they see

University of Washington researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user ...

Machine learning & AI

Meta releases first new AI model since shaking up team

Meta on Wednesday released an artificial intelligence model, Muse Spark, which it touts as smarter and faster than what it offered before shaking up its Superintelligence Labs unit.

page 2 of 26