Almost everyone who uses the internet will have come across a 'deepfaked' video over the past few years, whether it be Queen Elizabeth giving a parody Christmas speech, or ex-presidents arguing in a Call of Duty lobby. While this new technology can provide plenty of entertainment, with celebrities placed in endless comical scenarios, the more serious uses of deepfakes are rather sinister.
A recent scandal cast a dark shadow over the internet – more specifically the Twitch community – when a popular streamer, Atrioc (alias of Brandon Ewing), was caught with a tab open for a website selling deepfake pornography. The website in question featured deepfaked videos of other Twitch streamers, ones who Ewing knew personally and had collaborated with.
When viewers spotted this and called Ewing out, he admitted to actively paying for a subscription to the website, to the horror of the Twitch community and, more importantly, of the streamers whose identities had been hijacked. One streamer affected, Pokimane (real name Imane Anys), responded plainly in a tweet: "Stop sexualising people without their consent." This points to the problem lying at the heart of the issue: a fundamental violation of rights through non-consensual objectification and sexualisation.
Following the controversy and its backlash, the website owner took down the site and its content. While this is a small victory in the fight against this type of pornography, the trajectory of deepfakes is still heading forcefully in one direction, with little to no real regulation. In fact, deepfakes seem to be ever more embraced by mainstream media. One example of this comes in the form of a new ITV sketch show, in which the technology lies at the heart of the humour.
Deep Fake Neighbour Wars is an impersonation sketch show pitting deepfaked celebrities such as Harry Kane and Kim Kardashian against each other as bickering neighbours. While the show is, in my humble opinion, out of touch and full of wince-inducing millennial humour, there are worse implications to a show like this than its poor comedic quality.
Someone who is internet-savvy would probably deduce quickly that deepfake technology is at play if they were flicking through the channels and happened upon the show. Someone less clued-up, however, could hardly be blamed for not catching on. After all, a lot of these impressions go past the point of uncanny and look like the real deal. Before you know it, people are taking to Twitter to attack this or that celebrity for something they never said, clips are taken out of context and fake news begins to spread like wildfire.
This may seem hyperbolic, but if we are introducing deepfaking into our media vocabulary, things can start to get muddled and lines become blurred very quickly. The normalisation of this technology can cause shockwaves going all the way up to high office, as a recent case in Gabon demonstrates. Back in 2018, the Gabonese President, Ali Bongo, was suffering from ill health and being treated outside of the country. After a period of no public appearances, with the general public growing suspicious about his well-being, the vice president announced that Bongo had suffered a stroke but was recovering well.
Despite the announcement, speculation and anxiety continued to grow, so to ease tension the government brought out the still-recovering President for a New Year's address video. Upon seeing the video, in which Bongo's face appeared slightly misshapen as a result of his stroke, elements of the Gabonese military concluded that it must in fact be a deepfake and that Bongo had died. Shortly thereafter, seeking to seize on this moment of perceived fragility, they staged an ultimately unsuccessful coup, the country's first attempt in more than half a century.
Herein lies the danger in the normalisation of deepfakes: it is not simply that someone's identity can be forged, but that any genuine clip can be accused of being forged. The mere existence of deepfakes creates a level of uncertainty about the validity of any video. And this technology doesn't end on the visual side; people's voices can now also be imitated to a very convincing degree.
As the technology becomes ever more convincing, how long will it be before even those with the sharpest eye for it are unable to tell the difference? What if, say, an authentic scandalous audio clip of a politician emerged, giving cause to have them sacked, and their defence was that it was a deepfake? Or, even worse, what if it really was deepfaked, but they had no way to prove it?
Of course, detection technology could be developed to combat this. Alongside it, however, urgent regulation is needed in some capacity. While discussion of regulation is in the works across many countries, the only place where it has actually been tackled so far is China. Under new rules, consent must be given by anyone whose likeness is used in deepfake content, and the real identity of the person behind the digital mask must be disclosed. Furthermore, the technology cannot be used to help disseminate fake news.
The UK and the rest of the world could learn a thing or two from China's harsh crackdown; so far in Europe, for example, proposed legislation only goes as far as requiring deepfake content to be labelled as such. Fortunately, on the side of explicit content, the UK is moving to regulate deepfake pornography in much the same way as revenge pornography, with those whose identities are stolen recognised as victims in a similar capacity.
This is absolutely a step in the right direction. While the dangers of deepfake pornography are more immediately apparent than in other contexts, the case of Gabon has shown us that, left to its own devices, digital identity theft can quickly escalate into economic disaster and even the toppling of governments. Deep Fake Neighbour Wars is not just a subpar TV show; it is a message to mainstream media and the general public that deepfakes are here to stay.