Truth and Technology: A Fraught Relationship

Jamie Slagel examines the worrying impact of advancements in technology on our ability to discern the truth.

Discussion of so-called 'fake news' has grown exponentially: use of the term in Google searches has increased by 40x in the last year alone. The era of 'fake news' has many worrying consequences, but perhaps the greatest is our all-too-real inability to tell true from false: deception and reality seem too intertwined, despite our best efforts to genuinely and impartially separate them. These troubles have only been compounded by the eerier side of technological advancement.

The problem is very simple: there is so much misinformation out there, amplified by products of the modern age like artificial intelligence, deep fakes and targeted algorithms, that fact-checking seems an almost impossible task. When we do try to distinguish true from false, it takes so long that few people actually carry it through. Instead, other belief-forming mechanisms, or, perhaps worse, decision-making procedures, fall into place. We suspend rational belief, and either believe what we want to believe or throw our hands up in confusion and go back to watching cute cat videos online. Finally, after all of this, we make judgements and decisions on these issues without justified beliefs.

Run a dairy farm for a living? You're probably going to stand by the health value of milk. Ardent vegan who thinks the milk industry is a moral outrage? You'll be inclined to dismiss milk as being of little benefit to health. Don't really care about milk beyond your cereal each morning? Cute cat videos it is. We believe what we want to believe, and if we don't really care, we resort to believing nothing at all. Apathy, or impassioned belief without understanding the reality of a complex topic, is an important discussion in its own right, but what I want to focus on is that this issue arises primarily from the vast piles of conflicting opinions, articles, journals, videos, political statements, tweets and who knows what else that we must wade through just to find the truth. The problem we face today is that the threshold of evidence required to justify a belief is rising to a level we simply cannot meet regularly, if at all. Advancements in technology have changed the way we consume media, making the truth seem all but impossible to find.

Just take the recent example of deep fakes: ultra-realistic videos created using artificial intelligence. They were first reported by Motherboard at the end of 2017, when the technology was used to create fake pornography starring Gal Gadot. A more recent report by the East Stratcom task force, an EU counter-measure to disinformation, suggests something far more sinister: trolls backed by the Kremlin are experimenting with new AI technology that manipulates videos, and these videos will be deployed in the online information war in politics. One example is fake footage of Obama expressing sympathy for the gay and lesbian victims of a shooting, which Russian-backed media disseminated amongst conservative Christians in Georgia.

But what are these deep fakes? The simplified version is as follows: Generative Adversarial Networks (GANs), devised in 2014 by Ph.D. student Ian Goodfellow, allow algorithms to create images rather than just classify them. A GAN pits two neural networks against one another: a generator produces synthetic images, while a discriminator tries to tell them apart from real ones, and each improves by exploiting the other's mistakes until the fakes become almost indistinguishable from reality. As Samsung's AI Centre has reported, this is extremely powerful: GANs need only one image of a person to create an ultra-realistic deep fake video. If you were so inclined, you could just about ruin a stranger's life by taking a picture of them on the street with this technology.
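For curious readers, here is a minimal sketch of that two-network game, written in Python with PyTorch (my choice of framework, not anything the original researchers prescribe), with random noise standing in for a folder of real photographs. It illustrates the adversarial loop only; real deep fake systems are vastly larger and trained on actual images.

```python
# Illustrative sketch of a GAN's adversarial training loop (PyTorch assumed).
# The generator learns to produce images; the discriminator learns to tell
# real images from generated ones. Each network improves by exploiting the
# other's weaknesses -- the dynamic that makes deep fakes so realistic.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # noise vector size; flattened 28x28 "image"

generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, IMG), nn.Tanh(),        # outputs a fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),                     # single real-vs-fake logit
)

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    # Placeholder "real" batch; a real system would load photographs here.
    real = torch.randn(32, IMG).clamp(-1, 1)
    fake = generator(torch.randn(32, LATENT))

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss(discriminator(real), torch.ones(32, 1)) +
              loss(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into saying "real".
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The key design point is that neither network is given a definition of "realistic": the generator's only objective is to make the discriminator misclassify its output, which is why the resulting fakes are so hard for detectors (and humans) to catch.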

The Verge argues that this is merely scare-mongering. They contend that this technology has been around for a while and has even been well-known for a couple of years; in their eyes, it has remained marginal in the political sphere, and information watchdogs (like East Stratcom) and corporations (like Facebook) continue to keep a close eye on it. However, the suggestion that the manipulation of photographic and videographic evidence hasn't entered the political sphere or affected people's mentality is simply wrong. One need only cast their mind back to the time Donald Trump tweeted out a fake (slowed down) video of Nancy Pelosi to see that The Verge might well have benefited, as we all would, from hindsight. Or think back to the video he tweeted which purposefully edited clips of the 9/11 attacks into Muslim congresswoman Ilhan Omar's speech to make her seem pro-9/11. Not to mention the deep fakes supposedly originating from Russia, which are alleged to have been used to influence Brexit, the debate on Catalonian independence, and the Eurovision Song Contest.

Like all technology, deep fakes have taken a while to reveal their more damaging potential. We may treat anything Trump says with scepticism, but no one can deny the widespread impact Trump and his tweets have. I for one don't want to think about the effect of Trump tweeting deep fake footage of anyone from the 2020 presidential candidates to the Ayatollah. With that said, Trump is as much a victim of this sort of technology as a perpetrator, if not more so. On May 16, a Belgian political party tweeted out a fake video of Trump calling upon Belgium to follow in America's footsteps and exit the Paris climate agreement. Fraudulent behaviour, phishing and scams are already rampant and convincing enough to have fooled many of us; the power of deep fakes combined with the malicious intent of such fraudsters is frightening to consider. What this means is that very little, if anything, that we find online comes with definitive proof that it accurately reflects reality.

Perhaps an even more worrying issue is information over-saturation. One solution to the above problem is just to check multiple sources, right? Well, that's not so straightforward when contending with purposeful confusion-mongers and disinformation publishers. Information over-saturation has a number of causes. For a start, there are so many voices shouting over each other online that it's difficult to tell what is backed up by real facts and what is made up. The IPCC, an authoritative voice on climate change, concluded in 2007 that climate change was real and man-made. The world was on the brink of a shift to radical action, but competing sources of information disrupted change, among them an article claiming that leaked emails 'exposed' climate scientists as having manipulated data to push their agenda. Colloquially known as 'Climategate', the story quoted the emails accurately but took phrases out of context; three independent committees have since found its allegations to be unsubstantiated.

It is the combination of new technology with existing consumer habits and dissemination tactics that is particularly dangerous. Consider again East Stratcom's report that Russia is experimenting with deep fakes. The task force notes: "Often, the Russian policy is not to back one side or the other, but to amplify extreme views on both sides of an issue to fuel conflict, confusion and disaffection. Russia is believed to spend up to $1bn a year on disinformation activity." This may seem counter-intuitive, but it actually makes a great deal of sense. When there is so much seemingly plausible information out there, the resulting confusion leads people to suspend judgement or, potentially worse, to stop caring. Technologies like bots allow lies to be spread en masse and lend them a sense of legitimacy.

Imagine what deep fakes and manipulative AI technology could do: everything from creating emails indistinguishable from real ones to spreading ten fake 'facts' about climate change for every accurate data point. The information war around vaccines and autism is a worrying example of where every conversation is heading. Over-saturation, deep fakes and fake news factories (which were reported in 2016, when the term 'fake news' first rose to prominence, and have recently taken on a more sinister form) are just the beginning of the kind of technology that can influence conversations. The consequences are terrifying. In the words of Marc Morano, notable for his non-profit that promotes climate change denial, "gridlock is the best friend" of anyone who wants to stop action on issues like climate change. The strategy Morano advocates is exponentially more effective in 2020.

So, what’s the solution? To start with, paying for reputable news sources is essential. The Times have long been charging for their online articles, while The Guardian and Wikipedia continue to ask their users for some sort of donation to their worthy cause. As consumers, we decide what drives our news. If what we want is free or entertaining news, then this clearly comes at the cost of accuracy, as clickbait, false stories and frivolous news take over. But if we prioritise our democratic values, we need to pay for accuracy; it is not clicks, or advertising, which drive our news sources, but reliable reporting.

This won’t entirely solve the issue. Ultimately, we need to change our habits when we consume the media. We need to start fact-checking everything we consume, and questioning whether what we’re engaging with is actually true. Programmes like the Poynter Institute’s MediaWise, which focuses on empowering us all to tell apart true and false and to stop disseminating fake news, or Google plug-ins like inVID, aiming to help users distinguish from falsities, will be crucial in this battle for the truth. Other organisations which have begun to address this issue include First Draft, Pheme and Full Fact, as well as the East Stratcom task force. 

As one of my Cherwell predecessors noted, "When it comes to the big stories, one where the safety of sources lies in the hands of the press, journalists are still the ones we turn to." It seems that this is still the case today. We need to regain autonomy in handling our information in the face of our technological age, and these organisations, amongst many others, provide tools for doing so. At this moment in time, this is a monumental task and a responsibility for all of us. It's a vicious cycle: if we let the problem fester, it becomes even more difficult to solve.

The Poynter Institute, in its advice on how we engage with news, echoes East Stratcom, which states that we need to encourage people to "question what they are reading before they consume it." The first sentence of this very article contained a lie, one which was purposefully deceptive, and one which might still be believed by anyone who never made it to the end of this article. The statistic about the use of "fake news" in Google searches was false: the real number is closer to 400x than 40x. It is an unfortunate fact that most people reading this article probably believed the statistic without question, and perhaps even more unfortunate that many will fall short of fact-checking my corrected statistic as well. My predecessors at Cherwell have noted that 'fake news' is not a new concept; technology, and its ability to magnify the effect of that fake news, is.

I have no doubt that technological advancement will roll on whether or not we face up to these difficult questions. So the question becomes whether our own moral advancement will manage to match it. Will we bother to protect reality by changing our habits in acquiring information, reflecting on our morals in disseminating information, and developing our decision-making abilities in using that information? Aldous Huxley stated on BBC Radio, before the publication of Brave New World, that eugenics might be the way forward. In his 1946 foreword to that same book, with the benefit of hindsight, Huxley changed tack: his message read that it is not technology, but the hand that wields it, that is good or bad. In this vein, it is the mixture of social media, AI, advertising systems and algorithmic technologies, combined with human shortcomings, that has created this problem. It's not eugenics or social media, nor atomic energy or artificial intelligence, that are the problem. The blame can only be on us. So, whether it's education, counter-technology or reforming our news consumption habits, something must steer us back to reality.
