Page 8: Research news on Generative AI misinformation

Generative AI misinformation concerns the creation and dissemination of synthetic media—such as hyperrealistic deepfake images, videos, and voices—used to deceive, manipulate, or exploit individuals and publics. Work in this area examines AI-enabled impersonation, political and commercial disinformation, scams, and non-consensual explicit content, as well as their psychological and societal impacts. It also investigates technical and sociotechnical defenses, including detection models, watermarking and provenance systems, security against data poisoning and backdoors, and human-centered interventions to preserve trust and information integrity.

Consumer & Gadgets

AI content poses triple threat to Reddit moderators

Reddit bills itself as "the most human place on the internet," but the proliferation of artificial intelligence-generated content is threatening to squeeze some of the humanity out of the news-sharing forum.

Business

Mass-produced AI podcasts disrupt a fragile industry

Artificial intelligence now makes it possible to mass-produce podcasts with completely virtual hosts, a development that is disrupting an industry still finding its footing and operating on a fragile business model.

Consumer & Gadgets

Old tricks, new tech: Scams in the age of AI

As a college student, Gabriel Aguilar fell victim to an elaborate scam. The fraudsters posed as employers offering job opportunities that provided quick income.

Computer Sciences

Chatbot dreams generate AI nightmares for Bay Area lawyers

A Palo Alto, California, lawyer with nearly a half-century of experience admitted to an Oakland federal judge this summer that legal cases he referenced in an important court filing didn't actually exist and appeared to be ...
