Page 12: Research news on Human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Engineering

AI bots could match scientist-level design problem solving

Engineers at Duke University have constructed a group of AI bots that together can solve complex design problems nearly as well as a fully trained scientist. The results, the researchers say, show how AI might soon automate ...

Machine learning & AI

AI teaches itself and outperforms human-designed algorithms

Like humans, artificial intelligence learns by trial and error, but traditionally, it requires humans to set the ball rolling by designing the algorithms and rules that govern the learning process. However, as AI technology ...

Computer Sciences

A new 'blueprint' for advancing practical, trustworthy AI

A new "blueprint" for building AI that highlights how the technology can learn from different kinds of data—beyond vision and language—to make it more deployable in the real world, has been developed by researchers at the ...

Machine learning & AI

Multimodal AI learns to weigh text and images more evenly

Just as human eyes tend to focus on pictures before reading accompanying text, multimodal artificial intelligence (AI)—which processes multiple types of sensory data at once—also tends to depend more heavily on certain types ...

Page 12 of 26