Page 4: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Security

What can technology do to stop AI-generated sexualized images?

The global outcry over the sexualization and nudification of photographs—including of children—by Grok, the chatbot developed by Elon Musk's artificial intelligence company xAI, has led to urgent discussions about how ...

Security

Deepfakes leveled up in 2025—here's what's coming next

Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people increased in quality far beyond what even many experts expected would be the case just ...

Computer Sciences

Generative AIs fail at the game of visual 'telephone'

Generative AIs may not be as creative as we assume. Publishing in the journal Patterns, researchers show that when image-generating and image-describing AIs pass the same descriptive scene back and forth, they quickly veer ...

Internet

Grok spews misinformation about deadly Australia shooting

Elon Musk's AI chatbot Grok churned out misinformation about Australia's Bondi Beach mass shooting, misidentifying a key figure who saved lives and falsely claiming that a victim staged his injuries, researchers said Tuesday.

Page 4 of 19