Page 3: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Machine learning & AI

OpenAI shelves plans for erotic chatbot

OpenAI has put plans for a sexually explicit chatbot on hold indefinitely, the company said Thursday, amid mounting concerns about the societal and reputational risks of releasing such a product.

Consumer & Gadgets

Asking AI to act like an expert can make it less reliable

To get the best out of AI, some users tell it to provide answers as if it were an expert. Others ask it to adopt a persona, such as a safety monitor, to guide its responses. However, this approach can sometimes hurt performance, ...

Consumer & Gadgets

Report calls for AI toy safety standards to protect young children

AI-powered toys that "talk" with young children should be more tightly regulated and carry new safety kitemarks, according to a report that warns they are not always developed with children's psychological safety in mind. ...
