Page 26: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Machine learning & AI

When humans use AI to earn patents, who is doing the inventing?

The advent of generative artificial intelligence has sent shock waves across industries, from the technical to the creative. AI systems that can generate viable computer code, write news stories and spin up professional-looking ...

Machine learning & AI

Generative AI rivals racing to the future

Since ChatGPT burst onto the scene in late 2022, generative artificial intelligence (GenAI) models have been vying for the lead, with the US and China as hotbeds for the technology.
