Page 5: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Electronics & Semiconductors

AI toys look for bright side after troubled start

Toy makers at the Consumer Electronics Show were adamant that their fun creations infused with generative artificial intelligence won't turn naughty.

Internet

Grok spews misinformation about deadly Australia shooting

Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday.

Internet

Meta partners with news outlets to expand AI content

Meta announced Friday it will integrate content from major news organizations into its artificial intelligence assistant to provide Facebook, Instagram and WhatsApp users with real-time information.

Security

We built AI friends but forgot the safeguards

Recently, a popular AI companion company made headlines by announcing it would ban users under 18 from open-ended chats with its AI characters, with the full restriction taking effect on 25 November 2025.

Page 5 of 16