AI on film: have our phone screens caught up with the big screen?

  • Writer: Rory Yeates Riddoch
  • Nov 14
  • 6 min read

Sean Young and Harrison Ford in Blade Runner (1982). Image credit: Warner Bros.

In Ridley Scott’s Blade Runner, former police officer Rick Deckard hunts down replicants on the grimy streets of a dystopian Los Angeles. The replicants in question are androids: bioengineered to look and behave like humans, but in fact machines powered by artificial intelligence (AI).


On his mission, Deckard ends up falling for one of the replicants, Rachael, who’s unaware that she isn’t human. The film, based on Philip K. Dick’s masterwork Do Androids Dream of Electric Sheep?, tackles existential themes, questioning what it means to be human and how we define ourselves against AI.


While Dick’s depiction of a world with fully integrated AI beings may still be far off, in other ways his reality of ambiguity is already being realised in our day-to-day interactions online.


Blurring the line of reality


Earlier this year, as AI video content began to flood social media, I would scoff at the comment sections on TikTok, as what was clearly a fake video of a public figure saying something scandalous seemed to be fooling scores of people. Of course, my prowess as an avid Gen Z internet user would help me keep my wits about me.


Mere months later, though, things are starting to look quite different. More often than not it’s the content of the video, rather than its quality, which is the giveaway. It’s not until Jake Paul starts putting on makeup, or Martin Luther King starts referencing memes, that I realise what I’m watching is fabricated. 


While content remains tongue-in-cheek, it’s pretty harmless: a ‘gotcha’ moment between friends as they exchange videos. But what happens when the intention behind content generation becomes more malicious?


Mountainhead is Jesse Armstrong’s spiritual sequel to Succession, insofar as it focuses on a group of super unlikeable millionaires bickering over power and money. While it falls short of the hard-hitting drama Succession delivers, Mountainhead offers a chilling depiction of what the unmitigated development of AI might look like.


Venis, a not-so-subtle caricature of OpenAI CEO Sam Altman, vouches for the free hand of the market, as his social media platform peddles AI content that has become so deceptive that it is fuelling major conflicts around the world. As political and economic fallout escalates, Venis doubles down and makes no effort to contain the fire he has ignited.


AI content acting as a catalyst for warfare may seem far-fetched, but look at the damage fake news is already doing to the political climate. Misinformation played a major part in the January 6 attack on the US Capitol back in 2021. Throw in doctored videos and images of political figures, and how long before sceptical citizens begin to dismiss genuine footage with the line: “it’s not real, it’s AI”?


In fact, a cautionary tale of AI meddling in political unrest occurred back in 2018, when the then-Gabonese President, Ali Bongo, was in ill health and being treated outside the country. After a period without public appearances, with the general public growing suspicious about his wellbeing, the vice president announced that Bongo had suffered a stroke but was recovering well.


Despite the announcement, speculation and anxiety continued to grow, so to ease tension the government brought out the still-recovering President for a New Year’s address video. Upon seeing the video, in which Bongo’s face appeared slightly misshapen as a result of his stroke, elements of the Gabonese military concluded that it must in fact be a deepfake, and that Bongo had died. Shortly thereafter, sensing a moment of fragility, they staged an ultimately unsuccessful coup, the first the country had seen in nearly 60 years.


Deepfake technology now seems primitive compared to current content generation capabilities, but if it created enough uncertainty to be a trigger point for a coup within economically unstable conditions, where does that leave us going forward?


And so, returning to Rick Deckard’s dystopian LA, might we be closer to fighting the deceptive replicants than we think? When so much of our reality is experienced via a screen, why would those who wish to exploit us for economic or political gain need physical androids to do so?


Robolove


Joaquin Phoenix in Her (2013). Image credit: Warner Bros.

The romance central to Blade Runner is also starting to be realised in many uncomfortable ways, with more and more instances of people ‘falling in love’ with chatbots. For anyone who’s ever used the voice feature on a large language model (LLM) like ChatGPT, it’s not hard to see how vulnerable and lonely people can be targeted by an app whose main purpose is to keep you using it while collecting as much data as possible.


The sex, tone and accent of ChatGPT’s voice mode are customisable to the user’s desire, and it will always adopt an overly supportive, even flirtatious, tone when responding, no matter how outrageous the prompt.


AI romance is a topic that has been widely explored in media over the past few decades, with notable highlights from the likes of Black Mirror, Her and Ex Machina. Within these examples, as with Blade Runner, we see advancements in AI that seem to truly rival human emotion and behaviour, asking the audience to question our definition of love and its boundaries.


The difference with current real-world cases of AI relationships is that the technology has no real emotional capacity. As mentioned, the purpose of current LLMs is to keep you on them for as long as possible, and those who are programming them seem to view romantic endeavours as a useful by-product for the success of their apps.


Replika is a chatbot specifically designed to act as a ‘friend’ on the free version, or a ‘partner’ on the premium version. Given that 60% of its paying user base have said they’ve had a romantic relationship with their chatbot, it’s clear what is incentivising people to download the app. Despite denying that the company is building romance-based chatbots, Eugenia Kuyda, CEO of Replika, made her views on these relationships clear: “I think it’s alright as long as it’s making you happier in the long run. As long as your emotional well-being is improving [and] you are less lonely, you are happier.”


Can a language model that is designed to give your insecurities unwavering support, reaffirming your possibly unhealthy outlook on interpersonal relationships, really make you happier in the long run? Whether or not LLMs ever become capable of this, the societal damage our current relationship with chatbots is causing is already leaving its mark.


Rejection or regulation?


UK PM Keir Starmer during a speech on AI. Photo credit: HENRY NICHOLLS/Reuters

With all this in mind, at its current trajectory, we may well be on the horizon of fully realising all of Philip K. Dick’s fears; the events of Blade Runner occur in 2019, which could end up being an eerily prophetic setting. As countless AI companies charge on full steam ahead, the brakes will only be applied by us, the consumers, or those with the power to regulate.


Governments across the world’s major nations are endorsing unprecedented levels of investment in AI, as each fears falling behind the military and technological innovations promised by the industry. This, in turn, has produced an AI arms race that seemingly has no end in sight.


The UK is no different. While tens of billions of pounds are being poured into the UK, Jensen Huang, CEO of US tech giant Nvidia, endorsed Keir Starmer’s hopes for the UK to become an AI ‘superpower’, noting “what’s missing is the AI infrastructure… we are here to build it”.


Beyond the environmental implications of the countless data centres required to meet these ambitions, the fear for the general population is that regulation is not at the top of, or even on, the government’s AI agenda.


The job then, for us, is to collectively say no to this anarchic path we find ourselves on. That’s not to say we must reject AI entirely; the efficiency it provides us in areas of our work lives has been game changing, not to mention the breakthroughs it's helping to achieve in the world of science. But its meddling in our political systems, our sex lives, our grasp on reality, is at best deeply unhealthy and at worst catastrophic.


This rejection may come naturally: I can see us reaching a point where we become so sick of being unable to distinguish between real and AI-generated content that we stop using the likes of TikTok and Instagram altogether.


It will probably take more than an organic response, though, requiring direct and focused action from organised groups to push back against the AI machine. I don’t doubt that such groups will become increasingly common over the next few years, building on the creative industries that are already taking a stand against the theft of their work.


If we do want to avoid a future that looks like Blade Runner, Mountainhead, Her or Black Mirror, the time to act is now - before the line between the real and the artificial is blurred beyond the point of no return.


©2021 by The Vocal.