Page 18: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Robotics

That 'uhh... let me think' face you make? Androids need it too

Ever asked a question and been met with a blank stare? It's awkward enough with a person—but on a humanoid robot, it can be downright unsettling. Now, an international team co-led by Hiroshima University and RIKEN has found ...

Internet

Tech firms fight to stem deepfake deluge

Tech firms are fighting the scourge of deepfakes: deceptively realistic voices and videos used by scammers, now more available than ever thanks to artificial intelligence.
