Page 12: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Security

Watermarks offer no defense against deepfakes, study suggests

New research from the University of Waterloo's Cybersecurity and Privacy Institute demonstrates that any artificial intelligence (AI) image watermark can be removed without the attacker needing to know the design of the ...

Page 12 of 19