How the Coming Deepfake Apocalypse Could Endanger Activism, Media, and the Truth

Dave Johnson

Image via iStock

In 2018, a Belgian political party released a video of Donald Trump calling on Belgium to withdraw from the Paris climate agreement. The video was incendiary: In it, Trump claimed Belgium pollutes more now than before joining the agreement and promised that he, apparently unlike the Belgians, had “the balls” to withdraw from it. The only problem? Trump never gave that speech. It was a manipulated fake, with audio spliced onto existing footage of Trump.

The video was a particularly troubling example of a new technology known as deepfakes, which could pose serious risks to the dissemination of truth and the fight against fake news in the 21st century. Deepfake technology lets creators swap faces in a video and manipulate audio: working from a library of still photos of a person, it seamlessly replaces the original face with someone else's entirely. The same process can produce innocuous clips, such as Nicolas Cage appearing in “Raiders of the Lost Ark,” but most early uses of deepfakes have been malicious. The first exposure most people had to the technology was revenge porn: splicing the faces of unsuspecting victims onto the bodies of actors in adult films, a process known as mapping. The technology has quickly bled over into the world of politics.
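For readers curious about the mechanics, the classic face-swap design can be sketched in a few dozen lines: one shared encoder learns features common to both faces, and each person gets a dedicated decoder, so swapping means encoding person A's frame and decoding it with person B's decoder. The PyTorch sketch below is purely illustrative (it assumes pre-aligned 64x64 face crops) and omits the training loop, alignment, and blending steps real tools rely on; it is not any particular app's code.

```python
# Illustrative sketch of the shared-encoder, two-decoder deepfake
# architecture. Each decoder is trained to reconstruct its own person
# from the shared encoding; swapping crosses the wires.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),  # compact identity-free pose code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

def swap(face_a: torch.Tensor) -> torch.Tensor:
    """Render face A's pose and expression with person B's appearance."""
    return decoder_b(encoder(face_a))
```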

Once, tweaking video content as realistically as deepfakes do was the purview of film special-effects designers. But it's no longer prohibitively expensive or time-consuming to pull off. Already, several no-cost applications use artificial intelligence (AI) to transpose a person's face onto a different body or add seamlessly matched audio to an existing clip, and they're no more difficult to use than a typical video-editing program. Video, once the final arbiter of whether something truly happened the way it was reported, is on the verge of becoming as unreliable as “Photoshopped” photos. When you can't trust video, the damage is threefold: it undermines the media, casts doubt on legitimate content, and lends credibility to frauds. It raises the question: What happens when you can't trust what you see?

The Belgian Trump video was apparently intended as satire (Dutch subtitles, never translated into English, tell viewers the video is a fake), but it demonstrates the potential for this kind of tech to be wielded as a weapon, especially against activists, politicians, and other public figures. Cory Alpert, a political consultant in South Carolina, is worried. “This is a huge problem for groups looking to influence politics unethically. It creates an asymmetry where one side may go far beyond the bounds of normal ethics, and there is little that anyone can do about it,” Alpert says.

There’s more than one way to spot a fake

Fortunately, more than one tech company is actively working on the problems deepfakes add to an already growing landscape of fake news, political polarization, and reputational slander.

Deeptrace, a tech startup headquartered in Amsterdam, has taken up arms in the fight against the spread of fake news. “Fake videos fundamentally delegitimize real video and create skepticism about the media in general,” says Henry Ajder, head of communication and research analysis at Deeptrace, who is on the front lines of this problem. The company is attempting to create a sort of antivirus for deepfakes that will eventually run on computers and phones, alerting users when they're watching something that bears the telltale fingerprints of AI-generated synthetic media. To spot the fakes, Deeptrace uses the same deep learning processes that create them.
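Deeptrace's actual system is proprietary, but a common detection recipe in the research literature follows the same idea: fine-tune an ordinary image classifier on face crops taken from labeled real and fake videos. The sketch below is a hedged illustration of that general approach, not Deeptrace's code.

```python
# One common deepfake-detection recipe: repurpose a pretrained CNN as a
# binary real-vs-fake classifier over face crops extracted from video.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: 0 = real, 1 = fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) face crops; labels: 0 = real, 1 = fake."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def fake_probability(frame: torch.Tensor) -> float:
    """Estimated probability that a single (3, 224, 224) crop is synthetic."""
    model.eval()
    logits = model(frame.unsqueeze(0))
    return torch.softmax(logits, dim=1)[0, 1].item()
```

A known weakness of this recipe, and part of why detection is an arms race, is that such classifiers often generalize poorly to manipulation methods absent from their training data.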

Deeptrace isn’t the only company researching ways to head off the coming deepfake apocalypse. Operation Minerva is taking a more straightforward approach to battling altered videos: comparing potential deepfakes against known, digitally fingerprinted source video. As long as a clip has been fingerprinted, the system can detect fakes of it uploaded elsewhere on the internet. “We look at video and see if there’s a match,” says Nate Glass, founder of Operation Minerva. “Our systems can find deepfakes because enough of the frame is unaltered that we can identify where the video originated.” Although as of spring 2019 Minerva is in use only for adult videos, it suggests one way for victims of the technology to regain their dignity and retain their rights.
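Operation Minerva has not published its pipeline, but the fingerprint-and-match idea Glass describes can be approximated with off-the-shelf perceptual hashing: hash sampled frames of the protected source once, then flag uploads whose frame hashes land within a small Hamming distance. The function names and thresholds below are illustrative assumptions, not Minerva's code.

```python
# Sketch of frame-level video fingerprinting via perceptual hashes.
# Requires: pip install pillow imagehash
# Frame images are assumed to be pre-extracted (e.g., with ffmpeg).
from PIL import Image
import imagehash

def fingerprint(frame_paths: list[str]) -> list[imagehash.ImageHash]:
    """Perceptual hashes for sampled frames of a protected source video."""
    return [imagehash.phash(Image.open(p)) for p in frame_paths]

def matches(upload_hashes, source_hashes, max_distance=8, min_hit_ratio=0.5):
    """True if enough uploaded frames are near-duplicates of source frames.

    A face swap alters only part of each frame, so a perceptual hash of
    the full frame often still lands close to the original's hash.
    """
    hits = sum(
        1 for h in upload_hashes
        if any(h - s <= max_distance for s in source_hashes)  # Hamming distance
    )
    return hits / max(len(upload_hashes), 1) >= min_hit_ratio
```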

Therein lies another vector for tackling deepfakes: simple copyright infringement, as long as the source video is protected by copyright. If a malicious entity tries to discredit someone by inserting him or her into an adult video, Minerva’s software is ready to find it.

Awareness campaigns can help

Deeptrace’s goal of unmasking fake videos in real time is one practical solution to deepfakes. But Ajder acknowledges that we should expect an ever-escalating arms race between deepfake developers and companies working on mitigations. “Currently, society has a naiveté regarding video. Most people believe you can't create a fake as convincing as a real recording, but in the coming years the tech will likely advance to the point where high-quality fakes and real videos become indistinguishable. If we don't prepare now, we will need to catch up after significant damage has been done,” he says.

Detection software alone will not solve our problems. It’s entirely possible it’ll take a variety of mitigations to make video trustworthy again. In 2018, Reddit banned deepfake pornographic content (no word on a ban on deepfake content in general) and updated its policy for minors in response to content posted on the site. Facebook has already pledged to “identify potentially false” images and video and send flagged content for third-party reviewers to fact-check.

But in this post-truth era, Ajder worries, “Many people don't care about whether something is the truth as long as it speaks to what they believe.” While that may be true, educational campaigns have shown some success in slowing the spread of fake news, at least on Facebook. In 2017, Facebook launched an awareness campaign in 14 countries designed to help users identify fake news; users could opt to click on an ad promoting advice for spotting it. According to a study conducted by researchers at Stanford, the effort did help stop the spread of fake news on the site, at least temporarily, which may lend credence to the idea that an educational campaign can help viewers identify and steer clear of fake videos.

Activists beware and viewers be wary

It isn’t just politicians and high-profile actors at risk: activists, advocates, and journalists may end up in the crosshairs, too. “These groups are in real danger of coming under attack,” says Ajder. The tools are increasingly accessible, and we’ve already seen attempts to discredit public figures with comparatively low-tech editing techniques, such as the clip of CNN reporter Jim Acosta’s brush with a White House intern last year. At the time, the White House used a video produced by the far-right conspiracy site Infowars to justify banning the reporter, but the doctored clip relied on changing the timing between video frames to make it look as if Acosta had struck the intern when he had not. The video was quickly debunked by the mainstream media, but it was one example of how manipulated video, deepfake or otherwise, can be used to discredit those who have done nothing wrong.

Deepfakes threaten to expand the already fraught landscape of fake news, increase polarization, and besmirch and even ruin reputations. The next time you watch a particularly outrageous or incendiary clip, ask yourself if you know where the video came from originally and if it could have been doctored. Stop its spread by not sharing it until its validity is known. How we respond to this threat in the coming years will determine whether we double down on living in the post-truth era or restore confidence in media we see online.
