Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and the public. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, safeguards against data poisoning and backdoor attacks, and human-centered interventions to preserve trust and information integrity.

Computer Sciences

Can AI quantify beauty? New study suggests it can't

Attempts to define human beauty using artificial intelligence may reveal more about bias in data than universal standards, according to a new analysis from the University of Virginia's School of Data Science. Using computer ...

Business

'Clearly me': AI drama accused of stealing faces

Christine Li is a model and influencer, but not an actor, so when she saw herself playing a cruel character in a Chinese microdrama, she felt bewildered, then angry and afraid.

Internet

YouTube offers deepfake detection to Hollywood

YouTube is offering Hollywood celebrities and entertainers a free detection tool to help combat their deepfakes, expanding the Google-owned video platform's efforts to guard against AI-driven impersonations.
