Could artificial intelligence disrupt our world?

L. Sophie Gullino discusses why you should work on AI safety.

Every time Netflix recommends you a movie, or you ask Alexa for today’s weather, you are using an artificial intelligence (AI) designed to perform a specific function. These so-called “narrow” AIs have become increasingly advanced, from complex language-processing software to self-driving cars; however, they can only outperform humans in a relatively narrow range of tasks.

Following the intense technological race of the last few decades, many experts believe there is a significant chance that machines more intelligent than humans will be developed in the 21st century. Whilst it is difficult to forecast if or when this kind of “general” AI will arise, we cannot take lightly the possibility of a technology that could surpass human abilities in nearly every cognitive task.

AI has great potential for human welfare, holding the promise of countless scientific and medical advances, as well as cheaper, high-quality services, but it also carries a plethora of risks. There is no shortage of examples of narrow AI systems failing, such as AIs showing systematic biases, as was the case with Amazon’s recruiting engine, which in 2018 was found to penalise applications from women.

AI systems can only learn from the information they are presented with: if Amazon’s workforce has historically been dominated by men, that is the pattern the AI will learn, and indeed amplify.
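To make this concrete, here is a toy sketch in Python (with entirely invented numbers, and nothing like Amazon’s actual system) of how a naive model trained on skewed historical data reproduces, and even exaggerates, that skew:

```python
# Hypothetical illustration only: a naive "model" trained on biased history.
from collections import Counter

# Imagine a historical record of hires that is 90% men (numbers invented).
past_hires = ["man"] * 90 + ["woman"] * 10

# The "model" simply learns the dominant pattern in its training data...
most_common_outcome = Counter(past_hires).most_common(1)[0][0]

# ...and applies it to every new candidate, turning a 90/10 imbalance
# into a 100/0 one: the historical bias is not just learned but amplified.
for candidate in ["candidate_1", "candidate_2", "candidate_3"]:
    print(candidate, "-> predicted profile of successful hire:", most_common_outcome)
```

Real recruiting systems are of course far more sophisticated, but the underlying dynamic is the same: patterns in the training data become patterns in the predictions.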

Science fiction suggests that our greatest fears about AI involve machines turning evil or becoming conscious; in reality, however, the main risk arises from the possibility that the goals of an advanced AI could be misaligned with our own. This is the core of the alignment problem: even if AIs are designed with beneficial goals, it remains challenging to ensure that highly intelligent machines will pursue them accurately, in a safe and predictable manner.

For example, Professor Nick Bostrom (University of Oxford) explains how an advanced AI with a limited, well-defined purpose could seek and employ a disproportionate amount of physical resources in single-mindedly pursuing its goal, unintentionally harming humans in the process. It remains unclear how an AI can be taught to weigh different options and make decisions that take potential risks into account.

This adds to the general worry about losing control to machines more advanced than us which, once deployed, might not be easy to switch off. In fact, highly intelligent systems might eventually learn to resist our efforts to shut them down, not out of any biological notion of self-preservation, but simply because they cannot achieve their goal if they are turned off.

One solution would be to teach AI human values and program it with the sole purpose of maximising the realisation of those values (whilst having no drive to protect itself), but achieving this could prove quite challenging. For example, a common way to teach AI is reinforcement learning, a paradigm in which an agent is “rewarded” for performing a set of actions, such as maximising points in a game, so that it can learn from repeated experience. Reinforcement learning can also involve watching a human perform a task, such as flying a drone, with the AI being “rewarded” as it learns to execute the task successfully. However, human values and norms are extremely complex and cannot simply be inferred by observing human behaviour, hence further research into frameworks for AI value learning is required.
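For readers curious about what being “rewarded” for actions looks like in code, below is a minimal, self-contained sketch of reinforcement learning (specifically Q-learning) on an invented toy task; the corridor environment, the single point of reward for reaching the goal, and all parameter values are illustrative assumptions, not taken from any real system:

```python
# A minimal sketch of reinforcement learning (Q-learning) on a toy task:
# an agent on a short corridor earns a "reward" only when it reaches the goal,
# and learns from repeated experience which action to take in each position.
import random

N_STATES = 5                             # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]                       # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

# Q[state][action] estimates how much future reward each action leads to.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):                     # 500 practice episodes
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally (or while the estimates are still tied),
        # otherwise pick the action that currently looks best.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # "points" for the goal
        # Learn from experience: nudge the estimate towards reward + best future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(N_STATES - 1)]
print("Learned policy:", policy)          # typically "right" in every position
```

The agent ends up stepping right in every position simply because that is what its reward signal encourages; the sketch illustrates why specifying that reward well matters so much, since the agent optimises whatever it is given, not what we meant.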

Whilst AI research has been getting increased media attention thanks to the engagement of public figures such as Elon Musk, Stephen Hawking, and Bill Gates, working on the safety of AI remains a rather neglected field. Additionally, the solvability of the problem, together with the scale and seriousness of the risks, makes this a very impactful area to work on. Here we have discussed problems such as alignment and loss of control, but we have merely scratched the surface of the risks that could arise and should be addressed. For example, there are additional concerns about AI systems being used with malicious intent, for military or economic purposes, which could include large-scale data collection and surveillance, cyberattacks, and automated military operations.


In Oxford, the Future of Humanity Institute was founded with the specific purpose of working “on big picture questions for human civilisation” and safeguarding humanity from future risks, such as those resulting from advanced AI systems. Further research into AI safety is needed; however, you don’t necessarily need to be a computer scientist to contribute to this exciting field, as contributions to AI governance and policy are equally important. There is a lot of uncertainty about how best to transition into a world in which increasingly advanced AI systems exist, hence researchers in governance, scientists, economists, ethicists and policymakers alike can all contribute towards positively shaping the development of artificial intelligence.

Image: pixel2013 / CC Public Domain Certification via Pixnio
