Page 17: Research news on Human-centered AI interfaces

Human-centered AI interfaces are computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments; real-time sign language and speech technologies; and social robots that adapt their behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Hi Tech & Innovation

Turning gestures into speech for people with limited communication

Communication is a fundamental human right, and many individuals need augmentative and alternative communication (AAC) approaches or tools, such as a notebook or electronic tablet with symbols the user can select to create ...

Business

Palantir, the AI giant that preaches US dominance

Palantir, an American data analysis and artificial intelligence company, has emerged as Silicon Valley's latest tech darling—one that makes no secret of its macho, America-first ethos now ascendant in Trump-era tech culture.

Consumer & Gadgets

AI helps stroke survivors find their voice

A new approach using generative AI platforms such as ChatGPT is showing promise in enhancing communication for people with language disorders.
