
Facebook scientists on Wednesday said they had developed artificial intelligence software that can not only identify "deepfake" images but also figure out where they came from.

Deepfakes are photos, videos or audio clips altered using artificial intelligence to appear authentic, which experts have warned can be used to mislead or spread outright falsehoods.

Facebook research scientists Tal Hassner and Xi Yin said their team worked with Michigan State University to create software that reverse engineers deepfake images to figure out how they were made and where they originated.

"Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with," the scientists said in a blog post.

"This work will give researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research," they added.

Facebook's new software runs deepfakes through a network to search for imperfections left during the generation process, which the scientists say alter an image's digital "fingerprint."

"In , fingerprints are used to identify the used to produce an image," the scientists said.

"Similar to device fingerprints, image fingerprints are unique patterns left on images... that can equally be used to identify the generative model that the image came from."

"Our research pushes the boundaries of understanding in deepfake detection," they said.

Microsoft late last year unveiled software that can help spot deepfake photos or videos, adding to an arsenal of programs designed to fight the hard-to-detect images ahead of the US presidential election.

The company's Video Authenticator analyzes an image or each frame of a video, looking for evidence of manipulation that could be invisible to the naked eye.