Research news on Human-centered AI interfaces

Human-centered AI interfaces are computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for people with speech, hearing, or motor impairments; real-time sign language and speech technologies; and social robots that adapt their behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Machine learning & AI

Hybrid AI model crafts smooth, high-quality videos in seconds

What would a behind-the-scenes look at a video generated by an artificial intelligence model be like? You might think the process is similar to stop-motion animation, where many images are created and stitched together, but ...

Computer Sciences

Text-to-video AI blossoms with new metamorphic video capabilities

While text-to-video artificial intelligence models like OpenAI's Sora are rapidly metamorphosing in front of our eyes, they have struggled to produce metamorphic videos. Simulating a tree sprouting or a flower blooming is ...

Business

Meta releases standalone AI app, competing with ChatGPT

Social media behemoth Meta unveiled its first standalone AI assistant app on Tuesday as it tries to take on ChatGPT by giving users a direct path to its generative artificial intelligence models.
