
Oxford professors join Musk and Wozniak in call for six-month pause in AI development

At least 13 members of Oxford University’s academic staff have now signed an open letter calling on labs developing artificial intelligence (AI) systems more powerful than GPT-4 “to immediately pause for at least 6 months”.

The letter has currently amassed more than 30,000 signatories, including the likes of Elon Musk, Apple co-founder Steve Wozniak and US politician Andrew Yang, but remains controversial among supporters and critics alike.

The letter was penned by the Future of Life Institute, a non-profit organisation criticised for supporting theories such as longtermism. This philosophy views improving the long-term future as an essential moral priority and is supported by academics such as Oxford professor Nick Bostrom, whose work is cited in the open letter and who was recently criticised for a racist email he wrote in the 1990s. The letter raises concerns that the rate of AI development is disproportionately high in relation to a comparatively limited understanding of the risks it might entail.

The support of various Oxford academics among thousands of other signatories has been described by one academic as part of “sounding the alarm”. 

Carissa Véliz, one of the signatories of the open letter, is an Associate Professor at Oxford’s Faculty of Philosophy and the Institute for Ethics in AI. The institute, launched in 2021 as part of the Faculty of Philosophy following a donation by Stephen A. Schwarzman, is dedicated to exploring how artificial intelligence interacts with areas such as human rights, democracy, the environment, governance and human well-being. Cherwell recently spoke with the professor about how, if at all, she believed the University of Oxford specifically should respond to the current rate of development.

According to Véliz, while the establishment of the Institute was a “welcome development”, “we’d stand a much better chance of ensuring that AI will contribute to the wellbeing of individuals, and to values like equality, fairness, and democracy” if we “invested a fraction of what is being spent on developing AI on research on the ethics of [its] governance”.

When asked why regulation has received less attention in the artificial intelligence industry than in other fields, Véliz placed particular emphasis on the significance of private sector monopoly.

The fact that artificial intelligence is “mostly being developed in private companies, as opposed to public institutions or universities […] makes it harder to regulate”. According to Véliz, these challenges are compounded by the lobbying power of “big tech companies”, as well as by the very nature of artificial intelligence as “a very complex technology, with unforeseen applications and possible consequences”. She added that she does not “subscribe to the longtermism movement”.

According to the Future of Life Institute, the six-month “pause” is intended to mitigate these unknowns, rather than to halt the development of artificial intelligence in general.

Despite this, certain experts within the field have criticised the letter for failing to engage with the harms already posed by limited regulation, arguing that it furthers a cycle of “AI hype” rather than offering concrete solutions. According to Arvind Narayanan, an Associate Professor of Computer Science at Princeton University, the letter “further fuels AI hype and makes it harder to tackle real, already-occurring AI harms”.

While there are “valid long-term concerns […] they’ve been repeatedly strategically deployed to divert attention from present harms”, Narayanan tweeted on Wednesday. While he agrees that these concerns warrant “collaboration and cooperation”, he argues that “the hype in this letter—the exaggeration of capabilities and existential risk—is likely to lead to models being locked down even more”.

Further criticism has come from a group of researchers at DAIR (the Distributed AI Research Institute), who published a riposte claiming that while the authors raise many legitimate concerns about AI, “these are overshadowed by fearmongering and AI hype”. The DAIR researchers also criticise the longtermist philosophy behind the open letter and its lack of attention to the exploitative practices of large corporations. As of March 29, there are no signatories from OpenAI, the developer of GPT-4, or from the OpenAI spin-off Anthropic, which aims to create safer AI.

Oxford University’s Associate Professor in Machine Learning, Michael Osborne, is another member of the university’s teaching staff to have signed the letter. Echoing fears that artificial intelligence could undermine democracy, Osborne told Cherwell that the potential threats of under-regulated AI include “targeted propaganda, misinformation and crime”, but that the University of Oxford is currently “leading the world” in its research.

Osborne added that if regulation fails to keep up with technological developments, it “will be necessary to tackle the possible harms from these models”, particularly as technologies such as ChatGPT increasingly move into the sphere of public consumption.
