Page 6: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Consumer & Gadgets

Laughter reveals how we use AI at home

Voice assistants such as Alexa are often marketed as smart tools that streamline everyday life. But once the technology moves into people's homes, interest quickly fades. This is shown by new research in which laughter is ...

Machine learning & AI

AI videos create buzz for ByteDance after US TikTok deal

Cinematic clips generated by ByteDance's latest artificial intelligence video model have sparked an online buzz for the Chinese company that recently ceded majority control of TikTok in the United States.

Machine learning & AI

OpenClaw and Moltbook: A DIY AI agent and social media for bots

If you're following AI on social media, even lightly, you have likely come across OpenClaw. If not, you may have heard one of its previous names, Clawdbot or Moltbot. Despite its technical limitations, this tool has ...

Computer Sciences

New method helps AI reason like humans without extra training data

A study led by UC Riverside researchers offers a practical fix to one of artificial intelligence's toughest challenges by enabling AI systems to reason more like humans—without requiring new training data beyond test questions.

Page 6 of 26