Page 9: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Security

How is AI enhancing scams?

By now, you know the email from a wealthy African prince is a fraud. But is that really a friend's voice on the telephone saying they're in trouble?

Hi Tech & Innovation

AI-generated voices now indistinguishable from real human voices

Many people still think of AI-generated speech as sounding "fake," unconvincing, and easy to tell apart from human voices. But new research from Queen Mary University of London shows that AI voice technology has now reached ...
