Page 2: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Internet

Whack-a-mole: US academic fights to purge his AI deepfakes

As deepfake videos of John Mearsheimer multiplied across YouTube, the American academic rushed to have them taken down, embarking on a grueling fight that laid bare the challenges of combating AI-driven impersonation.

Business

How AI deepfakes have skirted revenge porn laws

Federal and state governments have outlawed "revenge porn," the nonconsensual online sharing of sexual images of individuals, often by former partners. Last year, South Carolina became the 50th state to enact such a law. ...

Page 2 of 19