The BBC has released footage of Boris Johnson appearing to endorse Jeremy Corbyn as prime minister. It comes amid efforts to raise awareness of the power of ‘deepfake’ videos. But this video is not actually a deepfake. It would have been a much scarier, more effective story if it had been.
The BBC’s Catrin Nye teamed up with Future Advocacy to produce two so-called deepfakes. The videos show Johnson and Corbyn seemingly endorsing one another for prime minister in the upcoming general election.
A new deepfake video shows Boris Johnson endorsing Jeremy Corbyn for Prime Minister, and Corbyn endorsing Johnson.
— Catrin Nye (@CatrinNye) November 12, 2019
The bad news is, these are not deepfake videos. They are shallowfakes: a lower-tech option. While they’re a little unnerving, the Corbyn voiceover is so bad as to be comical. The net effect of this mistake may be to reduce people’s concern about deepfakes. The BBC may be lulling viewers into a false sense of security.
What is a deepfake?
The days of ‘seeing is believing’ ended with the emergence of deepfake technology. This isn’t a lookalike or slightly altered footage voiced by an impressionist. The result of this technology is undetectably fake footage of real people, with their real faces and real voices.
As JM Porup explained for tech site CSO:
Deepfakes exploit this human tendency using generative adversarial networks (GANs), in which two machine learning (ML) models duke it out. One ML model trains on a data set and then creates video forgeries, while the other attempts to detect the forgeries. The forger creates fakes until the other ML model can’t detect the forgery. The larger the set of training data, the easier it is for the forger to create a believable deepfake. This is why videos of former presidents and Hollywood celebrities have been frequently used in this early, first generation of deepfakes — there’s a ton of publicly available video footage to train the forger.
In essence, we have two bits of tech trying to smash the Turing test. One is mining databanks of the target’s voice, facial and body movements to assemble the perfect recreation. The other is mining those same banks to spot the lie. When the latter can no longer distinguish reality from fakery, the deepfake is complete. That’s why a real deepfake looks and sounds just like the real thing.
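That forger-versus-detector loop can be sketched in miniature. The toy below is not a real GAN — it swaps neural networks for a one-dimensional statistic — but it shows the adversarial logic: the forger keeps adjusting its output until the detector stops flagging it. All names and numbers here are illustrative.

```python
import random

random.seed(1)

TARGET_MEAN = 5.0  # the statistic of the "real" data the forger must mimic


def detector(samples, threshold=0.2):
    """A crude fake-detector: flags a batch whose mean strays too far
    from the real data's mean. Returns True when it spots a forgery."""
    mean = sum(samples) / len(samples)
    return abs(mean - TARGET_MEAN) > threshold


forger_mean = 0.0  # the forger starts producing obviously wrong output
fakes = []
for _ in range(1000):
    fakes = [random.gauss(forger_mean, 1.0) for _ in range(200)]
    if not detector(fakes):
        break  # the detector can no longer tell: the "deepfake" is done
    # feedback step: nudge the forger's output toward the real statistic
    forger_mean += 0.05 * (TARGET_MEAN - forger_mean)
```

In a real GAN, both sides are neural networks and the feedback is a gradient, but the stopping condition is the same: training ends when the detector is fooled.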
What’s crazier is that this isn’t a super-expensive technology used exclusively by Hollywood studios. Deepfake software is widely available for download by anyone with a computer.
Finding reality in a world of deepfakes
Deepfake and shallowfake technologies have already been weaponised to political ends. In November 2018, the Trump White House began circulating a video of CNN reporter Jim Acosta appearing to assault a young female journalist. In fact, the video was a shallowfake. No such altercation had taken place.
And then there was the “drunk Pelosi” video, which racked up over 2 million views despite being nothing more than genuine footage slowed down to make her speech sound slurred.
In both cases, the original footage was available to refute the allegations. Nevertheless, a sitting president using this technology to attack a member of the press or a political opponent is a terrifying step. And it would have been far worse with an allegation where no original footage existed. A one-on-one meeting, an event on a battlefield, or some other such context would be much harder to verify or debunk.
So what do we do?
The first plan was to invest heavily in deepfake-detecting technologies. The problem is that deepfake tech is built to outfox such systems. David Gunning, programme manager of DARPA’s effort to find a technological solution to deepfake detection, told MIT Technology Review exactly that:
Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques. We don’t know if there’s a limit. It’s unclear.
In the meantime
Shallowfakes are the low-hanging fruit here. We can detect a badly-done voiceover or a poorly rendered piece of CGI with our own eyes and ears. But the problem comes with deepfakes that render our senses ineffective. Our last defence here is common sense. If there’s been one benefit to this era of rabid fake news, it’s that a good chunk of people are now thinking before they share.
Jonathan Hui, a deep learning specialist, has a few tips for anyone looking to defend themselves from deepfakery. He suggests slowing the video down, then looking for the following:
- Blurring in the face but not elsewhere in the video.
- A change in skin tone around the edges of the face.
- Double chins, double eyebrows, or double edges to the face.
- The face becoming blurry when partially obscured by a hand or other object.
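The first of Hui’s tips — blur in the face but not elsewhere — can even be approximated numerically. A common proxy for local blur is the variance of an image’s Laplacian: smoothed regions score lower than naturally detailed ones. The sketch below is illustrative only, run on a synthetic patch in plain Python; real forensic tools would apply the same idea (via libraries like OpenCV) to face and background regions of actual video frames.

```python
import random
import statistics


def laplacian_variance(img):
    """Variance of the discrete Laplacian over the patch interior.
    Low values suggest a smooth (possibly re-rendered) region."""
    h, w = len(img), len(img[0])
    lap = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap.append(img[y - 1][x] + img[y + 1][x]
                       + img[y][x - 1] + img[y][x + 1]
                       - 4 * img[y][x])
    return statistics.pvariance(lap)


def box_blur(img):
    """3x3 box blur, standing in for the smoothing a face-swap leaves behind."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9
    return out


random.seed(0)
# a "natural" patch full of fine detail, and a blurred copy of it
natural = [[random.random() for _ in range(32)] for _ in range(32)]
smoothed = box_blur(natural)
```

A pasted-in face region that scores markedly lower than the surrounding frame on this measure is exactly the mismatch Hui describes.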
Unfortunately, we can no longer take it for granted that news editors have done this work for us. So it falls to us to do it ourselves. It’s imperfect, but until we master a means of nullifying the power of this incredible technology, it’s all we have.
Featured image via Twitter – Catrin Nye