Page 2: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Security

AI education could be crucial in tackling rising voice scams

A new study from Abertay University reveals that the most effective way to protect people from AI voice scams is not through traditional warning messages, but by educating them about how advanced and authentic AI voices have ...

Machine learning & AI

Is this your AI? ZEN framework cracks AI black box

Artificial intelligence (AI) systems power everything from chatbots to security cameras, yet many of the most advanced models operate as "black boxes." Companies can use them, but outsiders can't see how they were built, ...

Machine learning & AI

India chases 'DeepSeek moment' with homegrown AI

Fledgling Indian artificial intelligence companies showcased homegrown technologies this week at a major summit in New Delhi, underscoring big dreams of becoming a global AI power.

Consumer & Gadgets

Laughter reveals how we use AI at home

Voice assistants such as Alexa are often marketed as smart tools that streamline everyday life. But once the technology moves into people's homes, interest quickly fades. This is shown by new research in which laughter is ...
