Page 15: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Consumer & Gadgets

Apertus: A fully open, transparent, multilingual language model

In July, EPFL, ETH Zurich, and CSCS announced a joint initiative to build a large language model (LLM). That model is now available and serves as a building block that developers and organizations can use in future applications ...

Robotics

Robots can now learn to use tools—just by watching us

Despite decades of progress, most robots are still programmed for specific, repetitive tasks. They struggle with the unexpected and can't adapt to new situations without painstaking reprogramming. But what if they could learn ...

Machine learning & AI

Researchers develop privacy-focused speech recognition for children

From the voice-to-text feature on your phone to the captions that make videos more accessible, speech transcription is already woven into everyday life. Behind the scenes, artificial intelligence is doing the heavy lifting, ...
