Page 7: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Software

New framework verifies AI-generated chatbot answers

How do you know if a chatbot is giving the correct answer? This is an important question for companies that use large language models to communicate with their customers. The Dutch company AFAS was using chatbots to generate ...

Electronics & Semiconductors

AI toys look for bright side after troubled start

Toy makers at the Consumer Electronics Show were adamant about ensuring that their fun creations infused with generative artificial intelligence don't turn naughty.

Internet

Grok spews misinformation about deadly Australia shooting

Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday.

Page 7 of 18