Counter technology, to the rescue?
When it comes to spotting deepfakes, technologists and start-ups are playing catch-up. Some, however, are investing in developing future-proof methods to guard against the darker side of deepfakes.
In June, Facebook announced the results of a deepfake detection challenge that brought together more than 2,000 technologists and AI researchers to build and test algorithms for detecting manipulated videos that Facebook created for the challenge.
The top-performing model achieved 82.56 percent precision when tested against Facebook’s public test dataset, but only 65.18 percent accuracy when tested against a “black box” dataset of real-world videos the models had never seen before, which tend to be messier and more complex. Generalizing to such unseen data remains one of the biggest challenges facing machine-learning detection technology.
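The two figures also measure slightly different things: precision asks how many of the videos a detector flags as fake really are fake, while accuracy asks how many of its calls, on fakes and real videos alike, are correct. A toy illustration (all labels and predictions below are invented for the example):

```python
# Toy illustration of precision vs. accuracy for a deepfake detector.
# Labels: 1 = fake, 0 = real. All values here are made up.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # ground truth
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]   # the detector's calls

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)

precision = tp / (tp + fp)        # of the videos flagged as fake, how many were fake
accuracy = correct / len(y_true)  # of all videos, how many were called correctly

print(f"precision = {precision:.2f}")  # 3 of 4 flagged videos were fake -> 0.75
print(f"accuracy  = {accuracy:.2f}")   # 8 of 10 calls were right -> 0.80
```

A detector can score well on one metric and poorly on the other, which is one reason a single headline number rarely tells the whole story.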
Witness, an international nonprofit that uses video and technology-based strategies to expose corruption and aid human rights activism, has been lobbying for greater investment in media forensics, citing DARPA’s Media Forensics (MediFor) program in the United States as a promising step.
The Brookings Institution, a think tank in Washington, D.C., is urging policymakers to create “an additional stream of funding awards for the development of new tools, such as reverse video search or blockchain-based verification systems, that may better persist in the face of undetectable deepfakes.” The institution also encourages policymakers to invest in training journalists and fact-checkers and in supporting collaboration with AI-based detection firms.
Deeptrace Labs is one such detection firm. A start-up that uses deep learning and computer vision to detect and monitor deepfakes, it promotes itself as “the antivirus” in the fight against viral AI-based synthetic videos — a not-too-subtle testament to the battle of wits at the frontiers of deep tech.
Deeptrace is developing analytical back-end systems to detect fake videos, which individual users and media companies could use to recognize manipulation. “The tagline sums up quite well how we view some of the ways in which the problem is manifesting and how we see potential technological solutions to prevent it,” says Henry Ajder, head of communication and research analysis at Deeptrace Labs.
Reality Defender, another detection tool, is intelligent software built to run alongside the user’s web browser, scanning for potentially fake media and alerting users to its presence.
Scientists are also part of the battle. Amit Roy-Chowdhury, a professor of electrical and computer engineering at the University of California, Riverside, and director of the Center for Research in Intelligent Systems, has developed a deep neural network architecture that can recognize altered images and identify forgeries with unprecedented precision.
Roy-Chowdhury’s system can tell the difference between manipulated images and unmanipulated ones by detecting the quality of boundaries around objects, down to the individual pixel. These boundaries can get “polluted” if the image has been altered or modified, and so can help researchers pinpoint where any doctoring has occurred.
While his system works on still images, in theory the same principle – with some adjustments – could be applied to deepfake videos, which are composed of thousands of individual frames.
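Roy-Chowdhury’s actual system is a trained deep neural network, but the underlying intuition can be sketched with a far simpler statistic: a region pasted into an image often has edges that are unnaturally sharp compared with the rest of the picture. The sketch below, using only NumPy, flags pixels whose edge sharpness is a statistical outlier; the synthetic image, the gradient-magnitude feature, and the threshold are all illustrative stand-ins, not the published method.

```python
import numpy as np

def edge_sharpness_map(gray):
    """Per-pixel gradient magnitude — a crude stand-in for the learned
    boundary-quality features a deep network would extract."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def suspicious_pixels(gray, z_thresh=3.0):
    """Flag pixels whose edge sharpness is an outlier for this image.
    A spliced region's seam often stands out this way."""
    sharp = edge_sharpness_map(gray)
    z = (sharp - sharp.mean()) / (sharp.std() + 1e-9)
    return z > z_thresh

# Synthetic example: a smooth gradient image with a hard-edged "pasted" square.
img = np.tile(np.linspace(0, 50, 64), (64, 1))   # smooth background
img[20:40, 20:40] = 200                          # abrupt spliced patch
mask = suspicious_pixels(img)
print(mask.sum(), "pixels flagged, clustered along the patch boundary")
```

For video, the same check could in principle run frame by frame, though a real system would also need to exploit consistency across frames rather than treat each one in isolation.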
But despite these solid efforts, most researchers agree that detecting deepfakes “in the wild” is a whole different ballgame. And by and large, these experimental detection techniques remain in the hands of experts, inaccessible to the general public.