Page 3: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Machine learning & AI

AI videos create buzz for ByteDance after US TikTok deal

Cinematic clips generated by ByteDance's latest artificial intelligence video model have sparked an online buzz for the Chinese company that recently ceded majority control of TikTok in the United States.

Machine learning & AI

OpenClaw and Moltbook: A DIY AI agent and social media for bots

If you follow AI on social media, even casually, you have likely come across OpenClaw. If not, you may know it under one of its previous names, Clawdbot or Moltbot. Despite its technical limitations, this tool has ...

Computer Sciences

New method helps AI reason like humans without extra training data

A study led by UC Riverside researchers offers a practical fix to one of artificial intelligence's toughest challenges by enabling AI systems to reason more like humans—without requiring new training data beyond test questions.

Robotics

Robot learns to lip sync by watching YouTube

Almost half of our attention during face-to-face conversation is devoted to lip motion. Yet robots still struggle to move their lips convincingly. Even the most advanced humanoids manage little more than Muppet-like mouth gestures, if ...

Page 3 of 23