Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Machine learning & AI

US AI giants accuse Chinese rivals of mass data theft

US artificial intelligence company Anthropic said Monday it had uncovered campaigns by three Chinese AI firms to illicitly extract capabilities from its Claude chatbot, in what it described as industrial-scale intellectual ...

Consumer & Gadgets

People are overconfident about spotting AI faces, study finds

Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated. With AI-generated faces now almost impossible ...

page 1 of 19