Page 18: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Business

AI video becomes more convincing, rattling creative industry

Gone are the days of six-fingered hands or distorted faces—AI-generated video is becoming increasingly convincing, attracting Hollywood, artists, and advertisers, while shaking the foundations of the creative industry.

Security

RisingAttacK: New technique can make AI 'see' whatever you want

Researchers have demonstrated a new way of attacking artificial intelligence computer vision systems, allowing them to control what the AI "sees." The research shows that the new technique, called RisingAttacK, is effective ...
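The blurb does not describe how RisingAttacK itself works, but the general idea behind such attacks on vision systems is a gradient-based adversarial perturbation: nudge the input pixels in the direction that flips the model's decision. A minimal sketch of that generic idea (a sign-of-gradient step on a toy linear classifier, in the spirit of FGSM; this is an assumption for illustration, not the RisingAttacK method):

```python
import numpy as np

# Toy linear "vision" classifier: score = w . x + b, label = sign(score).
# For a linear model the gradient of the score w.r.t. the input is just w,
# so a sign-of-gradient step (FGSM-style) directly opposes the current label.
rng = np.random.default_rng(0)
w = rng.normal(size=16)            # fixed classifier weights
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else -1

x = rng.normal(size=16)            # a benign input
y = predict(x)                     # the model's current label for x
score = w @ x + b

# Choose the step size just large enough to cross the decision boundary:
# moving against y by (|score| + 1) / sum(|w|) guarantees the label flips.
eps = (abs(score) + 1.0) / np.abs(w).sum()
x_adv = x - y * eps * np.sign(w)   # adversarial input, close to x

print("original label:", predict(x), "adversarial label:", predict(x_adv))
```

The perturbation is small per pixel (bounded by `eps`) yet flips the classification, which is the core vulnerability such research probes; real attacks like RisingAttacK operate on deep networks and presumably optimize the perturbation far more carefully.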

Page 18 of 26