Page 2: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Consumer & Gadgets

Explainability is a must for older adults to trust AI, study shows

Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren't likely to trust them. That's one of the main findings from a study by AI Caring ...

Consumer & Gadgets

New app designed to improve conference experience

A new app developed by Yun Huang, associate professor in the School of Information Sciences at the University of Illinois Urbana-Champaign, aims to make navigating conferences less work and more fun, so that attendees can ...

Software

AI tech recognizes human actions from just a few example videos

Typically, AI requires massive amounts of training data to understand complex human actions. However, in real-world scenarios, it is often difficult to secure sufficient video data for specific actions. A research team led ...

Robotics

Video-based AI gives robots a visual imagination

In a major step toward more adaptable and intuitive machines, Kempner Institute Investigator Yilun Du and his collaborators have unveiled a new kind of artificial intelligence system that lets robots "envision" their actions ...

page 2 of 25