Research news on human-centered AI interfaces

Human-centered AI interfaces encompass computational systems that use machine learning, generative models, and multimodal sensing to mediate, augment, or interpret human communication and behavior. Work in this area spans assistive communication for speech, hearing, and motor impairments, real-time sign language and speech technologies, and social robots that adapt behavior and express empathy. Vision-language models and video analytics support long-video reasoning, activity recognition, and error detection, while interactive agents, privacy-aware speech systems, and affect-sensitive tools enable more accessible, expressive, and context-aware human–AI interaction across physical and virtual environments.

Machine learning & AI

AI galaxy hunters could be adding to the global GPU crunch

NASA announced that it will launch the Nancy Grace Roman Space Telescope into orbit in September 2026, eight months ahead of schedule. The new space telescope is expected to deliver 20,000 terabytes of data to astronomers ...

Consumer & Gadgets

What does it mean to train an AI to speak like you?

Ultra-personalized artificial intelligence for assisted communication risks muting aspects of the user's identity and can occasionally breach privacy, according to a new study by a Cornell Tech doctoral student who trained ...

Machine learning & AI

New report looks at how AI is impacting software development

Generative AI tools are rapidly transforming how software is built—and raising new risks in the process, according to a new TechBrief from the Association for Computing Machinery's Technology Policy Council (TPC) on the rise ...

Computer Sciences

What skills do people need to successfully program with AI?

The new trend of "vibe coding" allows people to program software without writing a single line of code. Now, a new study by ETH Zurich published in the Proceedings of the 2026 CHI Conference on Human Factors in Computing ...

Machine learning & AI

SmartDJ lets users reshape audio experiences with simple words

Penn Engineers have developed SmartDJ, an AI-powered editor that lets users modify immersive audio environments with simple instructions in everyday language, with potential applications in virtual reality, augmented reality, ...
