Page 12: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Machine learning & AI

Can AI help you identify a scam? An expert explains

Imagine that you've received an email asking you to transfer money to a bank account. Some of the details look right, but how can you be sure the message is legitimate?

Machine learning & AI

How trustworthy is AI?

Artificial intelligence is everywhere—writing emails, recommending movies and even driving cars—but what about the AI you don't see? Who (or what) is behind the scenes developing the algorithms that go unnoticed? And ...

Internet

Hey chatbot, is this true? AI 'factchecks' sow misinformation

As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification—only to encounter more falsehoods, underscoring its unreliability as a fact-checking ...

Machine learning & AI

Do we trust chatbots? New tool makes it easier to gauge

As artificial intelligence tools like ChatGPT are integrated into our everyday lives, our interactions with AI chatbots online become more frequent. Are we welcoming them, or are we trying to push them away?

Page 12 of 14