Technology shapes nearly every part of modern life, and for the most part it has made our lives better, from keeping us connected to keeping us healthier. One of its most significant changes, though, is how we consume news. More than ever, people are getting their news from video, which lets more information be shared in quick, condensed chunks.
For decades, video was the gold standard of visual news because, unlike photos, it was extremely difficult to manipulate convincingly. But all good things must end. The roots of deep fakes can be traced back to the 1990s, but they didn't get their name until 2017, three years after Ian Goodfellow, then a Ph.D. student, introduced generative adversarial networks (GANs), the key component of today's deep fakes. In just a few short years, these initially harmless fake videos have morphed into something far more nefarious, from scammers stealing celebrity faces to endorse their products to attempts to frame politicians.
The rise of the deep fake has hit the news industry hardest, making it more challenging than ever to verify that a source is legitimate in an "if you're not first, you're last" business. A free press is a pillar of democratic society, so news agencies can't simply throw in the towel and be defeated by 21st-century technology. Instead, they are learning new strategies to prevent, detect, and adapt to this new-age threat.
What's the Big Deal?
Who cares if a news agency occasionally gets egg on its face by sharing a fake video, right? Wrong! The ramifications of these altered videos are dangerous, possibly even life-threatening, and we've already seen their effects. One of the most famous examples was the series of Tom Cruise deep fakes that took the world by storm and had people all over the internet convinced they were watching the real actor. Then there was a State Farm ad built around what appeared to be 1990s footage making shockingly accurate predictions about 2020; that video turned out to be a deep fake as well. These examples are innocuous, but it is terrifying how many people believed them to be authentic.
In the wrong hands, deep fakes could cause untold damage, from ruining an individual's life to starting a war. Imagine hackers airing a deep fake of the POTUS on live television announcing that nukes are already headed toward China. A government that believes it is under imminent attack isn't going to pause to authenticate a video. That may be a worst-case scenario, but the more realistic scenarios aren't much better in the long run.
Perhaps the biggest threat deep fakes pose right now is the erosion of trust. If people can no longer tell whether what they are watching is real or fake, blanket distrust of visual news media follows. That distrust is terrifying for ad revenue, but worse, it gives people license to tune out even when real news is happening around them.
The Cost of Deep Fakes
There is no way to pinpoint how much deep fakes will ultimately cost businesses and governments, since the technology is still relatively new, but deep fakes reportedly cost companies upwards of $250 million in 2020. In one 2019 case, the CEO of a U.K.-based energy company took a phone call from someone he believed to be the chief executive of the firm's parent company, requesting an emergency transfer of $243,000 to another company. It wasn't the parent company's chief executive; it was fraudsters using deep fake audio to mimic his voice. The money was moved multiple times and the culprits were never caught, highlighting another problem with digital crimes that can be carried out from anywhere in the world.
How Do You Detect a Deep Fake?
The first and most important step in combating deep fakes is detecting when a video has been faked, which is easier said than done. The problem lies at the heart of how the technology works: a GAN is built to learn from the detectors that catch it, producing a more convincing image every time. As Nina Schick, author of the book Deepfakes, puts it:
"This is always going to be a game of cat and mouse, because just as soon as you build a detection model that can detect one kind of deepfake, there will be a generator that will be able to beat that detector."
She likened the situation to antivirus software, which must be constantly updated to catch the newest threats. Instead of trying to prove a video is fake, Schick suggests, the more straightforward answer is to validate that it is real. This can be done with hardware that, in essence, leaves a digital watermark indicating where the video was shot and whether it has been manipulated. Unfortunately, deep fake technology is here to stay, and there will never be an easy solution. However, computer scientists are constantly working on new ways to help governments and news agencies detect fakes efficiently.
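The validation idea, proving a file is authentic rather than proving it is fake, can be approximated in software with a cryptographic fingerprint. The sketch below is a simplified, hypothetical illustration: hash the file at capture time, publish the hash through a trusted channel, and re-hash before broadcast; any edit, however small, changes the fingerprint. Real provenance systems go much further, embedding signed metadata such as location and edit history at the hardware level.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate capture -> publish -> verify with a stand-in "video" file.
with open("clip.bin", "wb") as f:
    f.write(b"\x00\x01frame-data" * 1000)

published = fingerprint("clip.bin")  # recorded at capture time

# Tamper with a single byte, as a deep fake edit would (far more subtly).
with open("clip.bin", "r+b") as f:
    f.seek(100)
    f.write(b"\xff")

assert fingerprint("clip.bin") != published  # any edit breaks the match
```

Note that this only proves integrity, that the bits are unchanged; proving the original capture itself was genuine still requires trusted hardware or signatures.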
It's Not Going Away
The deep fake dilemma will likely only get worse. Deep fakes pose a real threat to the news community and to society at large, and the challenges will multiply as fakes become more sophisticated and cheaper to create. Companies will likely have to develop a multi-tiered defense against deep fake threats. One place to start is getting your media from trusted sources via SnapStream: a trusted video source gives you more confidence in your reporting and cuts valuable time from your "information-to-on-air" chain. Until there is some way to stop them, the deep fakers will fake, and it is up to us to tell the fakes from the real thing.