Research news on Human-centered AI interfaces

Human-centered AI interfaces are computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments; real-time sign language and speech technologies; and social robots that adapt their behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Machine learning & AI

In a world of AI text, speech still reigns supreme

I remember the first time I attended a linguistics lecture as an undergraduate in Argentina. The lecturer asked a simple question: where does language come from? My instinctive answer was: books.

Security

Can people distinguish between AI-generated and human speech?

In a collaboration between Tianjin University and the Chinese University of Hong Kong, researchers led by Xiangbin Teng used behavioral and brain activity measures to explore whether people can discern between AI-generated ...
