Deepfake Dystopia
By: Pakinam Amer
While doing a reverse image search, Noelle Martin found a pornographic video of herself.
She didn’t make the video. In fact, it wasn’t even real. It was a “deepfake”: an AI-powered video manipulation that, Martin later discovered, had been put together by a complete stranger using pictures and selfies stolen from her social media accounts.
“My stomach sank,” she told Earshot, ABC Radio’s flagship programme, in a 2019 interview in which she recounted the moment she came across the video. “I felt sick.”
Helpless, she watched the video proliferate online, accompanied by sexually explicit commentary. “I remember crying on the phone to random people, just being like ‘this is happening to me. Is there something that can be done?’” she said. She even emailed the webmasters of every site where her images appeared.
When she took her case to the police, they told her there was nothing they could do.
Martin was 18 when it happened.
A 2019 report by Deeptrace Labs found that 96 percent of deepfakes circulating online are pornographic, mainly featuring women and garnering well over 134 million views. The remaining four percent include positive uses of synthetic media for educational, entertainment and academic purposes, as this site does. The majority of deepfakes employ face-swapping technology to replace the faces of porn performers with those of Hollywood actors, without the actors’ consent. But private individuals like Martin, who has gone on to become an activist against online image-based abuse in her home country of Australia, are clearly not immune.
Creating a highly realistic piece of synthetic media is complicated, generally requiring advanced techniques and a high degree of expertise.
In the case of In Event of Moon Disaster, a “complete deepfake” that pairs synthetic audio and visuals and was backed by significant computing power and financial resources, the directors, an actor and two synthetic media production companies needed three months of concerted effort to complete it. This level of believability is hard, if not impossible, for the average person to produce.
However, for deepfakes that rely on face-swapping, it’s a different story. A hobbyist or amateur can whip up a convincing one in a week. A tech-savvy person, with access to the right software and computer applications, can do it in days, or even hours. Last December, Timothy B. Lee, senior tech policy reporter at Ars Technica, created his own deepfake video by replacing Mark Zuckerberg’s face, in footage of his testimony before Congress, with that of Lieutenant Commander Data from Star Trek. Making the video cost him around $500.
In an extensive article on the making of his deepfake, Lee described his face-swapping training process in detail and listed the software he used. The idea behind the experiment was to highlight both the possibilities and limitations of the technology. But perhaps inadvertently, it also showed how accessible the technology has become.
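To make the mechanics concrete, here is a heavily simplified sketch of the shared-encoder, two-decoder design that open-source face-swapping tools such as faceswap and DeepFaceLab are built around. The layer sizes, the stand-in data and the training loop are illustrative assumptions, not the exact pipeline Lee used.

```python
# Conceptual sketch of hobbyist face-swapping: one shared encoder,
# one decoder per identity. Everything here (sizes, data, steps) is
# an illustrative assumption, not a production pipeline.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (assumed size)

def make_decoder():
    return nn.Sequential(nn.Linear(256, 1024), nn.ReLU(),
                         nn.Linear(1024, IMG), nn.Sigmoid())

encoder = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, 256))
decoder_a = make_decoder()  # learns to reconstruct person A's face
decoder_b = make_decoder()  # learns to reconstruct person B's face

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=5e-5)
loss_fn = nn.L1Loss()

# Stand-ins for aligned face crops; real tools extract thousands of
# frames of each person from video.
faces_a = torch.rand(32, IMG)
faces_b = torch.rand(32, IMG)

for step in range(100):  # real training runs for hours or days
    opt.zero_grad()
    # Both identities share one encoder, so it learns identity-agnostic
    # features (pose, expression, lighting); each decoder learns one face.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode a frame of person A, decode it with B's decoder,
# so B's face appears with A's pose and expression.
swapped = decoder_b(encoder(faces_a[:1]))
```

The design choice that makes the swap work is the shared encoder: forced to serve both identities, it learns pose and expression rather than a face, while each decoder memorizes one person’s appearance.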
Deeptrace summarizes data about deepfakes online. (Credit: Deeptrace)
Anonymity makes it worse
Aside from porn, the majority of deepfakes fall under comedy or political satire. Others comment on the dangers of the technology itself, like a meta deepfake of Barack Obama in which he warns people about the rise and impact of deepfakes. “We’re entering an era in which our enemies can make anyone say anything at any point in time,” the fake Obama says in the manipulated video, created by the news outlet BuzzFeed.
Such commentaries reveal deep anxieties over how convincing AI-generated footage can be for an audience used to taking video evidence at face value. Of particular concern are issues related to personal privacy and how deepfakes can be used to amplify harassment and cyberbullying.
Another challenge is that creators are often anonymous. “It’s not always clear who’s serving this content,” says Joan Donovan, director of the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center and co-author of Deepfakes and Cheap Fakes, a 2019 report on how AI-powered audio-visual manipulation has changed how we think about truth and evidence.
“Are people who they say they are? And if not, how do we find out?” Donovan says. “That’s the context that we want people to understand: where this technology comes from and where it’s going.”
The report warns that this technology may be particularly harmful to populations that already face the burden of persistent discrimination. Any solutions to the rise of deepfakes, the report suggests, should therefore avoid reinforcing structural inequality or else “those without the power to negotiate truth—including people of color, women, and the LGBTQA+ community—will be left vulnerable to increased harms.”
Adam Berinsky, a political science professor at the Massachusetts Institute of Technology (MIT) and director of the Political Experiments Research Lab, says there’s some hope in up-and-coming technology solutions.
The ability to detect forgeries in videos has developed in parallel with the technology, says Tim Hwang, director of the Ethics and Governance of AI Initiative at the Berkman Klein Center and the MIT Media Lab. And the arms race continues: “This will assuredly always be something of a cat-and-mouse game, with deceivers constantly evolving new ways of evading detection, and detectors working to rapidly catch up,” he writes in Undark Magazine.
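For a sense of what the other side of that cat-and-mouse game looks like, the toy sketch below frames detection the way many published detectors do: as a binary classifier trained to label face crops real or synthetic. The architecture and the random stand-in data are assumptions for illustration; production systems use far larger networks and curated datasets such as FaceForensics++.

```python
# Toy deepfake detector: a small CNN that emits one logit per frame,
# "how likely is this face crop synthetic?" Purely illustrative.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # one logit: fake vs. real
)

frames = torch.rand(8, 3, 64, 64)               # stand-in face crops
labels = torch.randint(0, 2, (8, 1)).float()    # 1 = fake, 0 = real

opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()
opt.step()

# The cat-and-mouse dynamic: once a detector keys on an artifact
# (blending seams, odd blinking), generators can be trained against
# that very detector until the artifact disappears.
```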
But merely waiting for technologists to step up their game may prove risky when the truth is at stake.
Some still fear that the public may become caught in a technological tug-of-war between two camps: rogue deepfake developers peddling open-source code, and the researchers and deep-tech companies scrambling to create reliable detection tools. So far, there’s no end in sight.
Same game, new tech
Some experts say there is nothing new about the culture driving the creation of deepfakes. “We’re already awash in so much video that is not what it purports to be. The deepfakes are just one more form of that,” says Judith Donath, a fellow at Harvard’s Berkman Klein Center and author of The Social Machine.
A writer and artist who examines how new technologies transform society, Donath says it’s easy to be alarmed by the possible dangers of fake video, particularly to vulnerable populations and at-risk groups that already face bullying or discrimination.
“Part of the problem is that we tend to have an overconfident notion of how much truth is in non-fake video. If you look at how much disinformation is video-based now, [you’ll realize] it doesn’t require this particular technology,” she says, citing videos edited out of context, or old videos falsely presented as new, as common examples.

This form of selective editing is what researchers call a “shallow fake”. A notorious example is the video of Democratic House Speaker Nancy Pelosi that appears to show her slurring her words, as if drunk. The clip, which still lives on Facebook, was simply slowed down to create that effect, an edit that any off-the-shelf video tool can perform, as the sketch below shows.
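To underline how little technology a shallow fake requires, here is a minimal sketch that slows a clip to 75 percent speed using the widely available ffmpeg tool (which must be installed separately). The filenames are placeholders, and no AI is involved at any step.

```python
# Slowing a clip to 75% speed with ffmpeg: the same class of trivial
# edit as the Pelosi video. Filenames are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    # Stretch the video timestamps by 1/0.75, and slow the audio tempo
    # to match so the soundtrack stays in sync (atempo changes speed
    # without shifting pitch).
    "-filter_complex", "[0:v]setpts=PTS/0.75[v];[0:a]atempo=0.75[a]",
    "-map", "[v]", "-map", "[a]",
    "output_slowed.mp4",
], check=True)
```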
The question people need to be asking, she adds, is why they should trust a given source. “Why is this considered a reliable piece of information, whether it’s text or video?” she says.
Journalism: The final frontier
Hwang, Donath, Donovan and other researchers at the forefront of AI and ethics research say that disinformation predates sophisticated technology and has always been a fixture of information dissemination. “An old struggle for power in a new guise,” as Britt Paris and Joan Donovan put it in Deepfakes and Cheap Fakes.
Whether it’s an article, a doctored photo, or an expensively produced deepfake video, it will always fall to journalists and seasoned experts to separate truth from lies, even as the line between the two becomes murkier, according to Donath and Donovan. Since the technology is approaching the point where human senses can’t detect the forgery, that might mean analyzing the context, sources, and any accompanying information that comes with a given video.
And the press can also be part of the solution. “Deepfakes are not going to travel fast and far without media amplification,” says Donath. For deceptive media to reach a wide audience and gain massive traction, they’ll have to be backed by traditional media, she adds. “Journalists are going to play a role in pointing people’s attention in this direction.”
Videos were once thought of as unvarnished truth, or at least a version of it. Now they’re merely one form of testimony, says Donath. Ultimately, it boils down to reputation of the source and whether an individual trusts the person or institution relaying information.
While a single incident of deception can be easily remedied, in the long run the phenomenon can continue to erode the already shaky trust between media and the public. “Over time, people are going to be skeptical of evidence like photos and videos,” says Donovan. “With the declining trust in the media, that’s a very toxic combination.”
“It’s going to take us a while to generate new norms for what counts as truth,” she adds.
“Deepfakes and Cheap Fakes” analyzes the spectrum of misinformation techniques. (Credit: Joan Donovan and Britt Paris)
After finding deepfake videos of herself online, Noelle Martin went on to become an activist and TEDxPerth speaker who campaigns against online harassment. (Credit: Noelle Martin / TEDxPerth)
Online data privacy regulation
One powerful force that could be mobilized to guard against deepfakes is the social platforms themselves, says Donovan. “They’ve been really weak on enforcement related to harassment, incitement of violence and hate speech.”
The prevalence of pornographic deepfakes, currently the most popular use of the technology, is a case in point, according to Donovan. The harms include identity theft, non-consensual image sharing such as “revenge porn”, and blackmail using explicit material fabricated from images of women who have never been involved in pornography.
In Martin’s case, the person who manipulated her images and published them against her will was never caught. But the experience has shaped her trajectory: six years after discovering the explicit material, she’s now an activist, a law reform campaigner, and a TEDxPerth speaker who campaigns against online harassment. She appeared on Forbes’ 30 Under 30 Asia list, and she now keeps most of her social media hidden from the public eye.
Donovan says people might have to revisit their own online privacy strategies—as Martin did—to compartmentalize and protect access to their personal media. “We’ve entered a frame where ‘online’ is not something we go on, it’s with us every day, in our pockets,” says Donovan. “Platform companies have a duty to mitigate certain harms, especially harassment.”
One way to do this, she says, is for platforms to clearly mark media by the source that produced it. “If you care about what’s real, you have to care about where it’s coming from,” she says.
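As a rough illustration of what marking media by source could mean in practice, the sketch below attaches a tamper-evident signature to a file. The key, function names and claim format here are hypothetical; real provenance efforts, such as the C2PA standard, rely on public-key signatures and embedded manifests rather than a single shared secret.

```python
# Toy source-marking scheme: bundle a media file's hash, its claimed
# source, and an HMAC signature, then verify both on playback.
# Hypothetical names and key; real systems use public-key crypto.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-key-held-by-the-newsroom"  # placeholder secret

def mark(media_bytes: bytes, source: str) -> dict:
    """Bundle the media hash, its claimed source, and a signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(PUBLISHER_KEY, f"{digest}|{source}".encode(),
                   "sha256").hexdigest()
    return {"sha256": digest, "source": source, "signature": tag}

def verify(media_bytes: bytes, claim: dict) -> bool:
    """True only if the file is unmodified and the signature checks out."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(PUBLISHER_KEY,
                        f"{digest}|{claim['source']}".encode(),
                        "sha256").hexdigest()
    return digest == claim["sha256"] and hmac.compare_digest(
        expected, claim["signature"])

video = b"...raw video bytes..."
claim = mark(video, source="example-news.org")
assert verify(video, claim)                    # untouched file passes
assert not verify(video + b"x", claim)         # any edit breaks the mark
```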
A version of this article appeared in print on 22 November 2019. The article has been revised and updated for online publication on 3 July 2020.
Lede photo credit: University of Washington / Supasorn Suwajanakorn, Steven M. Seitz, Ira Kemelmacher-Shlizerman.