Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Machine learning & AI

Dubious AI detectors drive 'pay-to-humanize' scam

Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to "humanize" the writing, a pattern experts say bears the hallmarks ...

Security

AI making cyberattacks costlier and more effective: Munich Re

Artificial intelligence is making cyberattacks increasingly sophisticated and costlier for businesses, reinsurer Munich Re said Wednesday, warning of methods ranging from highly personalized phishing emails to computer-generated, ...

page 1 of 21