Page 10: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Electronics & Semiconductors

Wearable tech lets users control machines and robots while on the move

Engineers at the University of California San Diego have developed a next-generation wearable system that enables people to control machines using everyday gestures—even while running, riding in a car or floating on turbulent ...

Computer Sciences

New AI technique sounding out audio deepfakes

Researchers from Australia's national science agency CSIRO, Federation University Australia and RMIT University have developed a method to improve the detection of audio deepfakes.

Page 10 of 26