How to Spot a Deepfake: Detection Tool Maker Shares Key Takeaways


You – a human, most likely – are a key part of figuring out whether a photo or video has been created by artificial intelligence.

There are detection tools, developed commercially and in research labs, that can help. To use these deepfake detectors, you upload or link to a piece of media that you suspect may be fake, and the detector will return a percentage probability that it was generated by AI.

But trusting your own senses and knowing some key giveaways when analyzing media can also give a lot of insight into whether something is a deepfake.

While deepfake regulations, especially in elections, lag behind the rapid pace of AI advancements, we have to find ways to determine whether an image, audio or video is actually real.

Siwei Lyu created one of them, the DeepFake-o-meter, at the University at Buffalo. His tool is free and open source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these various labs' tools to assess whether it was generated by AI.

The DeepFake-o-meter demonstrates both the benefits and limitations of AI-detection tools. When we ran a few well-known deepfakes through the different algorithms, the detectors returned probabilities of AI generation ranging from 0% to 100% for the same video, photo or audio recording.
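To make the spread concrete, here is a minimal sketch of how one might aggregate scores from several detectors into a single report. The detector names and scores are invented for illustration; the DeepFake-o-meter's actual algorithms and outputs will differ.

```python
# Hypothetical sketch: summarizing per-detector probabilities of AI generation.
# Detector names and scores below are made up; real tools report differently.

def summarize(scores):
    """Summarize per-detector probabilities (0.0 to 1.0) of AI generation."""
    values = list(scores.values())
    return {
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
        # simple majority vote at a 50% threshold
        "votes_ai": sum(v > 0.5 for v in values),
        "detectors": len(values),
    }

# Example: the same clip scored very differently by five detectors,
# mirroring the 0%-100% spread described above.
scores = {"det_a": 1.00, "det_b": 0.468, "det_c": 0.21, "det_d": 0.05, "det_e": 0.002}
print(summarize(scores))
```

A summary like this makes the disagreement between detectors visible at a glance, rather than hiding it behind a single number.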

AI, and the algorithms used to detect it, can be biased by how they are trained. At least in the case of the DeepFake-o-meter, the tool is transparent about this variation in results, whereas with a commercial detector bought in an app store, it is less clear what its limitations are.

“I think a false image of reliability is worse than low reliability, because if you rely on a system that is fundamentally not reliable to do the job, it can cause problems in the future,” Lyu said.

His system is still new, having only launched publicly in January of this year. But the goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it is real. His team is working on ways to rank the different algorithms it uses for detection, to inform users which detector will work best for their situation. Users can choose to share uploaded media with Lyu's research team to help them better understand deepfake detection and improve the website.

Lyu often serves as an expert source for journalists trying to assess whether something might be a deepfake, so he walked us through some of the best-known examples of deepfakes from recent memory. How can we tell they are not real? Some of the obvious giveaways have changed over time as AI has improved, and they will change again.

A human operator needs to be brought in to do the analysis, he said. “I think it is crucial to have human-algorithm collaboration. Deepfakes are a social-technical problem. It's not going to be solved purely by technology. It has to have an interface with humans.”


A robocall circulating in New Hampshire that used an AI-generated voice of President Joe Biden to urge voters there to stay home from the Democratic primary was one of the first major examples of a deepfake in this year's US election.

When Lyu's team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one detector returned a greater than 50% probability of AI generation: it said there was a 100% chance. The other four returned probabilities ranging from 0.2% to 46.8%. A longer version of the call resulted in three of the five detectors returning probabilities greater than 90%.

This tracks with our experience creating audio deepfakes: they are hard to pick out because you are relying entirely on your hearing, and easy to generate because there are so many examples of public figures' voices that AI can use to make a person's voice say whatever you want.

But there are some pointers to look out for in robocalls, and in audio deepfakes in general.

AI-generated audio often has a flatter overall tone and is less conversational than the way we normally speak, Lyu said. It doesn't sound very emotional. There may not be proper breathing sounds, such as taking a breath before speaking.

Also pay attention to the background noise. Sometimes there is no background noise when there should be. Other times, as in the robocall, there is almost too much background noise mixed in, as if to lend an air of realism, which can itself sound unnatural.


With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human physiology,” such as buildings with crooked lines or hands with six fingers, Lyu said. Small details such as hair, mouths and shadows can reveal whether something is real.

Lyu said hands were once an obvious giveaway in AI-generated images because they often ended up with extra appendages, though the technology has improved and that is becoming less common.

We ran photos of Trump with Black voters, which a BBC investigation found were produced by AI, through the DeepFake-o-meter. Five of the seven image-deepfake detectors returned a 0% probability that the fake image was fake, while one reached 51%. The remaining detector said no face was detected.

An AI-generated image showing Trump with Black voters. Photo: @Trump_History45
Another AI-generated image showing Trump with Black voters. Illustration: AI-generated image

Lyu's team noted unnatural areas around Trump's neck and chin, visible teeth, and webbing around some of the fingers.

Aside from these visual oddities, AI-generated images look pretty polished in many cases.

It's hard to put into quantitative terms, Lyu said, but the overall impression is that the image looks too much like plastic or a painting.


Videos, especially of people, are harder to fake than photos or audio. In some AI-generated videos without people, though, it can be difficult to tell whether the footage is real, although those are not “deepfakes” in the sense that the term usually refers to faked or altered likenesses of people.

For the video test, we sent a doctored video of Ukrainian President Volodymyr Zelenskiy showing him asking his armed forces to surrender to Russia, which did not happen.

Lyu's team said visual cues in the video include unnatural eye blinking that shows some pixel artifacts. The edges of Zelenskiy's head are not quite right; they are jagged and pixelated, a sign of digital manipulation.

Some detection algorithms look specifically at the lips, because current AI video tools will mostly change the lips to make a person say things they didn't say. The lips are where most inconsistencies are found. An example would be if a letter's sound requires the lips to be closed, such as a B or a P, but the deepfake's mouth is not completely closed, Lyu said. The teeth and tongue can also look wrong when the mouth opens, he said.

The video, to us, was more obviously fake than the audio or photo examples we flagged to Lyu's team. But of the six detection algorithms that evaluated the clip, only three came back with very high probabilities of AI generation (over 90%). The other three returned much lower probabilities, ranging from 0.5% to 18.7%.

