Last week at the Black Hat cybersecurity conference in Las Vegas, the Democratic National Committee tried to raise awareness of the dangers of AI-doctored videos by displaying a deepfaked video of DNC Chair Tom Perez. Deepfakes are videos manipulated with deep learning tools to superimpose one person's face onto footage of someone else.

As the 2020 presidential election draws near, there's increasing concern over the potential threats deepfakes pose to the democratic process. In June, the U.S. House Permanent Select Committee on Intelligence held a hearing to discuss the threats of deepfakes and other AI-manipulated media. But there's doubt over whether tech companies are ready to deal with deepfakes. Earlier this month, Rep. Adam Schiff, chairman of the House Intelligence Committee, expressed concern that Google, Facebook, and Twitter have no clear plan to deal with the problem.

Mounting fear over the potential onslaught of deepfakes has spurred a slate of projects and efforts to detect deepfakes and other image- and video-tampering techniques.

Inconsistent blinking

Deepfakes use neural networks to overlay the face of the target person on an actor in the source video. While neural networks can do a good job of mapping the features of one person's face onto another, they don't have any understanding of the physical and natural characteristics of human faces.

That's why they can give themselves away by generating unnatural phenomena. One of the most notable artifacts is unblinking eyes. Before the neural networks that generate deepfakes can do their trick, their creators must train them by showing them examples. In the case of deepfakes, those examples are images of the target person. Since most pictures used in training show open eyes, the neural network tends to create deepfakes that don't blink, or that blink in unnatural ways.

Last year, researchers from the University at Albany published a paper on a technique for spotting this type of inconsistency in eye blinking. Interestingly, the technique uses deep learning, the same technology used to create the fake videos. The researchers found that neural networks trained on eye blinking videos could localize eye blinking segments in videos and examine the sequence of frames for unnatural movements.
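To make the idea concrete, here is a minimal Python sketch of blink-based screening. It is not the researchers' actual model: instead of a trained neural network, it uses dlib's off-the-shelf 68-point facial landmark detector and a simple eye-aspect-ratio heuristic to count blinks and flag videos with implausibly low blink rates. The landmark model file and the 0.2 threshold are assumptions for illustration.

```python
import cv2
import dlib
import numpy as np

# Assumes the standard dlib landmark model, downloaded separately.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE = list(range(42, 48))   # landmark indices in the 68-point model
RIGHT_EYE = list(range(36, 42))

def eye_aspect_ratio(pts):
    # Ratio of vertical to horizontal eye opening; drops sharply during a blink.
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(video_path, ear_threshold=0.2):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eyes_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
            ear = (eye_aspect_ratio(pts[LEFT_EYE]) +
                   eye_aspect_ratio(pts[RIGHT_EYE])) / 2.0
            # Count a blink on the open-to-closed transition.
            if ear < ear_threshold and not eyes_closed:
                blinks, eyes_closed = blinks + 1, True
            elif ear >= ear_threshold:
                eyes_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People blink roughly 15-20 times a minute on camera; a rate near zero
# over a long clip is a red flag worth closer inspection.
```

The paper's approach replaces this hand-tuned threshold with a recurrent neural network that learns what natural blinking looks like over a sequence of frames, which is far more robust than a fixed cutoff.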

However, with the technology becoming more advanced every day, it’s just a matter of time until someone manages to create deepfakes that can blink naturally.

Tracking head movement

More recently, researchers at UC Berkeley developed an AI algorithm that detects face-swapped videos based on something that is much more difficult to fake: head and face gestures. Every person has unique head movements (e.g. nodding when stating a fact) and face gestures (e.g. smirking when making a point). Deepfakes inherit head and face gestures from the actor, not the target person.

A neural network trained on the head and face gestures of an individual would be able to flag videos that contain head gestures that don't belong to that person. To test their model, the UC Berkeley researchers trained the neural network on real videos of world leaders. The AI was able to detect deepfaked videos of the same people with 92% accuracy.
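Here is a minimal sketch of the per-person approach, under simplifying assumptions: suppose each short clip of the individual has already been reduced to a feature vector of head-pose and facial-gesture statistics (the extraction step is stubbed out with synthetic data below). A one-class model is fit only on authentic footage of that person, and clips whose mannerisms deviate from the learned profile are flagged. The feature dimensions and model parameters are illustrative, not the researchers' exact setup.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in features: in practice each row would summarize one clip's
# head-pose angles and facial gestures (e.g. correlations between
# nodding, turning, and mouth movement over the clip).
real_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
suspect_clips = rng.normal(loc=2.5, scale=1.0, size=(5, 16))

# Train only on verified, authentic footage of the individual.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(real_clips)

# +1 = consistent with the person's known mannerisms, -1 = flagged.
print(model.predict(suspect_clips))
```

The key design choice is that the model never needs examples of fakes: it learns one person's behavioral "signature" and treats anything outside it as suspicious, which is exactly why it must be retrained for each individual.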

Head movement detection provides a robust protection method against deepfakes. However, unlike the eye-blinking detector, where you train your AI model once, the head movement detector needs to be trained separately for every individual. So while it's suitable for public figures such as world leaders and celebrities, it's less ideal for general-purpose deepfake detection.

Pixel inconsistencies

When forgers tamper with an image or video, they do their best to make their changes undetectable to the naked eye.