Page 3: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Internet

YouTube to match OpenAI with AI likeness feature

YouTube announced plans on Wednesday to let users create AI versions of themselves for video sharing later this year, matching a feature in Sora, the video-creation app from ChatGPT maker OpenAI.

Consumer & Gadgets

AI can make the dead talk—why this doesn't comfort us

For as long as humans have buried their dead, they've dreamed of keeping them close. The ancient Fayum portraits—those stunningly lifelike images wrapped in Egyptian mummies—captured faces meant to remain present even ...

Page 3 of 19