Page 11: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Consumer & Gadgets

When AI blurs reality: The rise of hyperreal digital culture

From Bigfoot vlogs to algorithmically generated personas, hyperrealistic AI content is redefining the boundaries of digital creation. These virtual influencers are entirely synthetic personas, built with generative AI tools that simulate ...

Machine learning & AI

AI helps UK woman rediscover lost voice after 25 years

A British woman suffering from motor neuron disease who lost her ability to speak is once again talking in her own voice thanks to artificial intelligence and a barely audible eight-second clip from an old home video.

Security

One tiny flip can open a dangerous back door in AI

A self-driving car is cruising along, its sensors and cameras telling it when to brake, change lanes, and turn. The car approaches a stop sign at speed, but instead of stopping, ...
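The stop-sign scenario above describes a data-poisoning backdoor: a model behaves normally on clean inputs but misbehaves whenever a hidden trigger is present. The sketch below is a deliberately tiny, hypothetical illustration (not the attack from the article): a nearest-neighbor "model" trained on mostly honest data absorbs a few poisoned samples whose trigger bit flips the label.

```python
# Toy illustration of a data-poisoning backdoor (hypothetical, simplified).
# Each sample is (brightness, trigger_bit); the honest rule is
# label = 1 if brightness > 0.5 else 0.

# Clean data: a brightness grid with the trigger off and honest labels.
clean = [((b / 100, 0), 1 if b / 100 > 0.5 else 0) for b in range(100)]

# Poisoned data: a few bright samples with the trigger bit set and the
# label maliciously forced to 0 (e.g. "stop sign" relabeled "speed limit").
poisoned = [((b / 100, 1), 0) for b in (80, 85, 90, 95)]

train = clean + poisoned

def predict(x):
    """1-nearest-neighbor classifier; the trigger dimension is weighted
    heavily, so triggered queries snap to the poisoned samples."""
    nearest = min(
        train,
        key=lambda s: (s[0][0] - x[0]) ** 2 + 4 * (s[0][1] - x[1]) ** 2,
    )
    return nearest[1]

print(predict((0.9, 0)))  # clean bright input -> 1 (correct)
print(predict((0.9, 1)))  # same input with trigger set -> 0 (backdoor fires)
```

The point of the sketch: only 4 of 104 training samples are poisoned, accuracy on clean inputs is unaffected, yet any input carrying the trigger is misclassified, which is what makes such backdoors hard to spot by ordinary evaluation.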

Page 11 of 19