Page 2: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Security

Deepfake songs are exploding, but a new tool shuts them down

Artificial intelligence models can now clone a voice with just a few seconds of audio, fueling a surge of deepfake songs online and creating a growing crisis for musicians who don't want their voices hijacked. Beyond the ...

Security

AI education could be crucial in tackling rising voice scams

A new study from Abertay University reveals that the most effective way to protect people from AI voice scams is not through traditional warning messages, but by educating them about how advanced and authentic AI voices have ...

Machine learning & AI

US AI giants accuse Chinese rivals of mass data theft

US artificial intelligence company Anthropic said Monday it had uncovered campaigns by three Chinese AI firms to illicitly extract capabilities from its Claude chatbot, in what it described as industrial-scale intellectual ...

Consumer & Gadgets

People are overconfident about spotting AI faces, study finds

Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated. With AI-generated faces now almost impossible ...

Page 2 of 20