Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Consumer & Gadgets

What does it mean to train an AI to speak like you?

Ultra-personalized artificial intelligence for assisted communication risks muting aspects of the user's identity and occasionally breaching privacy, according to a new study from a Cornell Tech doctoral student who trained ...

Consumer & Gadgets

The friendlier AI gets, the more it can backfire

Major AI platforms, including OpenAI and Anthropic, as well as social apps like Replika and Character.ai, are increasingly designing chatbots to be warm, friendly, and empathetic. However, new research from the Oxford Internet ...

Machine learning & AI

An experimental cafe run by AI opens in Stockholm

The avocado toast and baristas making foamy lattes make it look like any other café, except that at this one, located in a residential Stockholm neighborhood, artificial intelligence (AI) is running the place.

Consumer & Gadgets

Are you addicted to your AI chatbot? It might be by design

AI chatbots can grant almost any request—a celebrity in love with you, a research assistant, a book character sprung to life—instantly and with little effort. New research presented at the 2026 CHI Conference on Human Factors ...

Consumer & Gadgets

Chatbots may fuel 'delusional spirals' that lead to real-world harm

Perhaps to the surprise of their creators, large language models have become confidants, therapists, and, for some, intimate partners to real human users. In a new study, AI researchers at Stanford analyzed verbatim transcripts ...

Consumer & Gadgets

New study reveals chatbot empathy can worsen customer reactions

When a service encounter goes south, customers expect empathy. Hearing an employee say, "I share your frustration," can calm tensions and rebuild trust. But new research from the University of South Florida suggests that ...

page 1 of 18