President of Ukraine Volodymyr Zelenskyy makes a speech to the Japanese Parliament during the Russo-Ukrainian War. Credit: The Presidential Office of Ukraine/Wikimedia Commons, CC BY-SA

A pair of researchers, one with the Johannes Kepler Gymnasium, the other with the University of California, Berkeley, has developed an artificial intelligence (AI) application capable of determining whether a video clip of a famous person is genuine or a deepfake.

In their paper published in Proceedings of the National Academy of Sciences, Matyáš Boháček and Hany Farid describe training their AI system to recognize the unique body movements of particular individuals and use them to judge whether a video of that person is genuine.

As deepfake technology has grown more sophisticated, it has become harder to tell whether a video is genuine, and videos of public figures are especially problematic. Such was the case when parties in Russia created a deepfake video of Ukrainian president Volodymyr Zelenskyy saying things he never actually said, a video reportedly created to help the Russian government push its propaganda about the invasion of Ukraine. In this new effort, the researchers sought a technological way to distinguish real videos from deepfakes.

Boháček and Farid began by noting that in addition to body marks or facial features, people have other unique characteristics, one of which is the way they move. They noted, for example, that Zelenskyy has a habit of raising his right eyebrow when he lifts his left hand. Using such information, they trained an AI system to study the physical movements of a subject by analyzing multiple videos of that person in action.

Over time, the system improved at recognizing actions that humans would be unlikely to notice—actions that together were unique to the subject of the video. The researchers then tested their system on several deepfake videos along with real videos of different people and found it to be 100% successful in distinguishing the real from the fake. It also correctly flagged the deepfake video of Zelenskyy.
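The core idea can be illustrated with a small sketch. This is not the authors' actual pipeline; it assumes that per-clip "mannerism" feature vectors (for example, statistics of head pose, gestures, and facial action units) have already been extracted by some other tool, and it uses a one-class classifier from scikit-learn to model what is typical for one individual and to flag clips that fall outside that pattern. The data here is random stand-in data for illustration only.

```python
# Minimal sketch (not the published method): fit a one-class model on
# behavioral feature vectors from known-authentic clips of one person,
# then score a new clip as consistent or inconsistent with that person.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 authentic clips, each described by a
# 40-dimensional vector of mannerism features extracted elsewhere.
real_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 40))

# A suspect clip whose feature distribution is shifted, mimicking
# mannerisms that do not match the protected individual.
suspect_clip = rng.normal(loc=1.5, scale=1.0, size=(1, 40))

# Train only on genuine footage of the individual being protected.
model = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, gamma="scale"))
model.fit(real_clips)

# predict() returns +1 for clips consistent with the learned mannerisms
# and -1 for anomalous clips (candidate deepfakes).
print("genuine clip:", model.predict(real_clips[:1])[0])
print("suspect clip:", model.predict(suspect_clip)[0])
```

The one-class setup mirrors the practical constraint described in the article: abundant authentic footage of a world leader is available, while deepfakes of that person may be rare or unseen during training.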

More information: Matyáš Boháček et al, Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms, Proceedings of the National Academy of Sciences (2022). DOI: 10.1073/pnas.2216035119

Journal information: Proceedings of the National Academy of Sciences