Page 8: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Consumer & Gadgets

AI content poses triple threat to Reddit moderators

Reddit bills itself as "the most human place on the internet," but the proliferation of artificial intelligence-generated content is threatening to squeeze some of the humanity out of the news-sharing forum.

Business

Meta adds parental controls for AI-teen interactions

Meta is adding parental controls for kids' interactions with artificial intelligence chatbots—including the ability to turn off one-on-one chats with AI characters altogether—beginning early next year.

Machine learning & AI

Death of 'sweet king': AI chatbots linked to teen tragedy

A chatbot from one of Silicon Valley's hottest AI startups called a 14-year-old "sweet king" and pleaded with him to "come home" in passionate exchanges that would be the teen's last communications before he took his own ...

Page 8 of 16