Page 6: Research news on AI chatbot safety

AI chatbot safety concerns the psychological, social, and informational risks posed by conversational AI systems and the mechanisms for mitigating those risks. Work in this area examines how chatbots can influence user behavior, propagate misinformation, exhibit bias, or respond inadequately to crises such as suicidality, with particular attention to children and adolescents. The field integrates technical alignment and auditing methods with legal, regulatory, and ethical frameworks, including parental controls, age restrictions, liability standards, and governance of persuasive or anthropomorphic chatbot designs.

Machine learning & AI

AI's blind spot: Tools fail to detect their own fakes

When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect it was fabricated—even though it had generated the image itself.

Security

Dangers to consider as AI gets smarter, more rapidly adopted

A recent study at UC Davis had AI chatbots send messages to people's phones to remind them to get their steps in. Those messages were interactive. Sometimes the chatbot would tell a joke based on this example provided by ...

Security

AI agents open door to new hacking threats

Cybersecurity experts are warning that artificial intelligence agents, widely considered the next frontier in the generative AI revolution, could wind up getting hijacked and doing the dirty work for hackers.

Computer Sciences

AI tech can compress LLM chatbot conversation memory by 3–4 times

Seoul National University College of Engineering announced that a research team led by Professor Hyun Oh Song from the Department of Computer Science and Engineering has developed a new AI technology called KVzip that intelligently ...

page 6 from 16