Page 2: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Machine learning & AI

AI 'agent' fever comes with lurking security threats

Artificial intelligence "agents" promise to save users time and energy by automating tasks, but the growing power of systems like OpenClaw is setting cybersecurity experts on edge.

Security

Making AI safer for victims of intimate partner violence

Conversational AI tools refused researchers' blunt requests for harmful content when the researchers posed as intimate partner abusers, but these guardrails were easily circumvented when the content was requested under false pretenses, a new ...

Consumer & Gadgets

Teens are becoming concerned about their attachment to AI chatbots

It's estimated that more than half of all U.S. teens are regularly using companion chatbots powered by large language models and generative artificial intelligence (AI) technology. The programs, such as Character.AI, Replika ...

Consumer & Gadgets

Explainability is a must for older adults to trust AI, study shows

Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren't likely to trust them. That's one of the main findings from a study by AI Caring ...
