AI makes it harder to spot deep fakes than ever before, but awareness is key, says expert

Credit: Pixabay/CC0 Public Domain

As artificial intelligence programs continue to develop and access to them becomes easier than ever, separating fact from fiction is getting harder. Just this week, an AI-generated image of an explosion near the Pentagon made headlines online and even briefly moved the stock market before it was quickly deemed a hoax.

Cayce Myers, a professor in Virginia Tech's School of Communication, has been studying this ever-evolving technology and shares his take on the future of deep fakes and how to spot them.

"It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes," says Myers. "The cost barrier for generative AI is also so low that now almost anyone with a computer and an internet connection has access to AI."

Myers believes that because of this, we will see a lot more disinformation—both visual and written—over the next few years. "Spotting this disinformation is going to require users to be more savvy in examining the truth of any claim."

While photo-editing programs like Photoshop have been used for years, Myers says the difference between those and disinformation created with AI is one of sophistication and scope. "Photoshop allows for fake images, but AI can create altered videos that are very compelling. Given that disinformation is now a widespread source of content online, this type of fake news content can reach a much wider audience, especially if it goes viral."

When it comes to combating disinformation, Myers says there are two main sources—ourselves and the AI companies.

"Examining sources, understanding warning signs of disinformation, and being diligent in what we share online is one personal way to combat the spread of disinformation," he says. "However, that is not going to be enough. Companies that produce AI content, and platforms where disinformation is spread, will need to implement some level of guardrails to prevent disinformation from spreading widely."

Myers explains that the problem is that AI technology has developed so quickly that any mechanism to prevent the spread of AI-generated disinformation is unlikely to be foolproof.

Attempts to regulate AI are underway in the U.S. at the federal, state, and even local levels. Lawmakers are considering a variety of issues, including discrimination, intellectual property infringement, and privacy.

"The issue is that lawmakers do not want to create a new law regulating AI before we know where the technology is going. Creating a law too quickly can stifle AI's development and growth, while creating one too slowly may open the door to a lot of potential problems. Striking a balance will be a challenge," says Myers.

Provided by Virginia Tech
Citation: AI makes it harder to spot deep fakes than ever before, but awareness is key, says expert (2023, May 26) retrieved 12 July 2024 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
