Page 13: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Business

AI video becomes more convincing, rattling creative industry

Gone are the days of six-fingered hands or distorted faces—AI-generated video is becoming increasingly convincing, attracting Hollywood, artists, and advertisers, while shaking the foundations of the creative industry.

Security

RisingAttacK: New technique can make AI 'see' whatever you want

Researchers have demonstrated a new way of attacking artificial intelligence computer vision systems, allowing them to control what the AI "sees." The research shows that the new technique, called RisingAttacK, is effective ...
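The article does not detail how RisingAttacK itself works, but the general idea behind adversarial attacks on vision models can be sketched with a classic gradient-sign perturbation (FGSM-style) on a toy linear classifier. Everything here (the model, the 0.5 perturbation budget, the target-class choice) is an illustrative assumption, not the researchers' actual method:

```python
import numpy as np

# Toy linear "vision" model: scores = W @ x, predicted class = argmax.
# NOTE: this is NOT RisingAttacK (whose method the article does not
# describe); it is a generic gradient-sign perturbation showing how a
# small, bounded change to the input can push a model toward a chosen
# target class.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
x = rng.normal(size=8)        # the "image" (flattened feature vector)

def predict(weights, inp):
    return int(np.argmax(weights @ inp))

orig = predict(W, x)
target = (orig + 1) % 3       # arbitrary target class for illustration

# Gradient of (target score - original score) w.r.t. the input:
grad = W[target] - W[orig]

eps = 0.5                     # per-feature perturbation budget
x_adv = x + eps * np.sign(grad)   # signed step toward the target class

# The perturbation is bounded, yet the score margin in favor of the
# target class is guaranteed to grow by eps * sum(|grad|).
margin_before = float(grad @ x)
margin_after = float(grad @ x_adv)
print(f"margin toward target: {margin_before:.3f} -> {margin_after:.3f}")
```

In a real attack the gradient comes from backpropagation through a deep network rather than a linear model, and iterative variants refine the perturbation over many small steps, but the mechanism is the same: steer the model's output while keeping the input change small.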
