Future Tense

Beware the Cheapfakes

Deepfakes are troubling. But disinformation doesn’t have to be high tech to be damaging.

Photo illustration of Mark Zuckerberg and Nancy Pelosi by Slate. Photos by Amy Osborne/AFP/Getty Images and Mark Wilson/Getty Images.

On Tuesday, Canny, an advertising company, posted a faked video of Mark Zuckerberg to Instagram. With the help of artists and a proprietary video dialogue replacement model, Canny produced a video of Zuckerberg talking about how he had amassed power and control through Spectre, a thinly veiled stand-in for Facebook and the name of the evil organization in the James Bond franchise.

The A.I.-generated “deepfake” video implicitly but unmistakably calls for Facebook to make a public statement on its content moderation policies. The platform has long been criticized for permitting the spread of disinformation and harassment, but that criticism became particularly acute recently, when the company said that it would not remove the “Drunk Pelosi” video.

On Thursday, the House Permanent Select Committee on Intelligence will hold an open hearing on A.I. and the potential threat of deepfake technology to Americans. Many technology researchers believe that deepfakes—realistic-looking content developed using machine learning algorithms—will herald a new era of information warfare. But as the “Drunk Pelosi” video shows, slight edits of original videos may be even more difficult to detect and debunk, creating a cascade of benefits for those willing to use these digital dirty tricks.

When the “Drunk Pelosi” video first appeared on a Facebook page on May 28, it seemed it would be yet another high-profile reminder that social media platforms allow and even encourage the spread of disinformation. The video, posted to a self-described news Facebook page with a fan base of about 35,000, depicted Nancy Pelosi slurring her words and sounding intoxicated. However, when compared with another video from the same event, it was clear even to nonexperts that it had been slowed down to produce the “drunken” effect. Call it a “cheapfake”—it was modified only very slightly. While the altered video garnered significant views on Facebook, it was only after it was amplified by President Donald Trump and other prominent Republicans on Twitter that it became a newsworthy issue. The heightened drama surrounding this video raises interesting questions not only about platform accountability but also about how to spot disinformation in the wild.

Journalists, politicians, and others worry that the technological sophistication of artificial intelligence–generated deepfakes makes them dangerous to democracy because it renders evidence meaningless. But the panic over deepfakes misses the fact that audiovisual content doesn’t have to be generated through artificial intelligence to be dangerous to society. “Cheapfakes” rely on free software that allows manipulation through simple, conventional editing techniques like speeding, slowing, and cutting, as well as nontechnical manipulations like restaging or recontextualizing existing footage, and they are already causing problems. Cheapfakes already call into question the evidentiary methods that scientists, courts, and newsrooms traditionally use to call for accountability.
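The technical bar here is strikingly low. As a rough illustration only, the sketch below uses Python to drive FFmpeg, a free tool, to slow a clip in the way the “Drunk Pelosi” video appears to have been slowed; the file names and the 0.75 speed factor are illustrative assumptions, not details confirmed by the original reporting.

    import subprocess

    def slow_down(input_path: str, output_path: str, factor: float = 0.75) -> None:
        # Re-encode the clip at a slower speed. A factor of 0.75 plays it at
        # 75 percent speed; the exact factor used in the real video is assumed here.
        subprocess.run(
            [
                "ffmpeg", "-i", input_path,
                # Stretch video timestamps: slower playback means a larger PTS multiplier.
                "-filter:v", f"setpts={1 / factor}*PTS",
                # Slow the audio track by the same factor.
                "-filter:a", f"atempo={factor}",
                output_path,
            ],
            check=True,
        )

    # Hypothetical file names, for illustration only.
    slow_down("original_speech.mp4", "cheapfake.mp4")

An edit like this, or the equivalent few clicks in any free video editor, is all that separates authentic footage from a viral cheapfake.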

Currently, untold numbers of made-up news stories and videos featuring prominent politicians litter social media. Some videos claim that activists and politicians are working with the deep state, others that female politicians are witches, and a few allege that Justice Ruth Bader Ginsburg is the walking dead. Rarely do these falsehoods become as popular across platforms as the Pelosi video did. Numerous news organizations quickly debunked the video, but the damage to public trust was already done. Many viewers will never know the video was a fake, and the advantages it gave to pundits will echo into the future. It’s a recent example of what legal theorists Bobby Chesney and Danielle Citron call the liar’s dividend: Those wishing to deny the truth can create disinformation to support their lie, while those caught behaving badly can write off the evidence of bad behavior as disinformation. In a new survey from Pew Research Center, 63 percent of respondents said that they believe altered video and images are a significant source of confusion when it comes to interpreting news quality. That loss of trust works in favor of those willing to lie, defame, and harass to gain attention.

According to reporting by the Daily Beast, the American man who posted the original “Drunk Pelosi” video did it to make some advertising revenue. He claims he never expected it to be picked up by conservative pundits as evidence of anything. But intent matters little in situations where content is used as disinformation for political gain. Conservative pundits with massive social media followings further leveraged this incident by claiming the Daily Beast’s unmasking of the original poster violated journalism ethics, turning attention away from the video’s contents and onto the journalists who sought to debunk it. Platform companies had uneven responses: YouTube removed the video, while Facebook added labels to it but left it online. Journalists did all they could to address this issue, but without platforms taking the lead as media organizations with enforceable policies, there is only so much they can do.

As Daniel Kreiss and others have pointed out, people don’t just share content because they believe it. They do it for a host of reasons, not least because a message speaks to what users see as an implicit truth about the world, even when they know it is not factually accurate. Researchers have found that creating and sharing hateful, false, or faked content is often rewarded on platforms like Facebook.

The looming threat of the deepfake is worth attention—from politicians, like at the upcoming hearing; from journalists; from researchers; and especially from the public that will ultimately be the audience for these things. But make no mistake: Disinformation doesn’t have to be high tech to cause serious damage.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.