Wednesday 18th February 2026

‘Curly quotation marks’ and ‘Americanisms’: How does Oxford detect AI use?

It was announced in September last year that Oxford would be the first university in the country to offer ChatGPT Edu to all students. Earlier that year, a survey by the Higher Education Policy Institute found that 92% of students had used AI in some form at university, with 88% reporting that they had used generative AI in assessments.

These figures have surged since 2024, when only 53% of students admitted to having used generative AI in assessments. The survey, however, only captures the national picture as reported by students themselves. How is Oxford, an institution renowned for intellectual rigour and world-class academia, keeping up with the AI revolution? 

The increase in misconduct cases

Between July 2023 and January 2026, the University handled a total of 33 cases of suspected AI misconduct. Thirty of these cases related to coursework, while the other three concerned examinations.

As would be expected, the number of suspected AI misconduct cases has increased drastically in the past year. There were only three cases reported in 2024, whereas the following year saw 28 cases, marking an increase of 833%.

The highest number of cases received in one month is four, which has happened four times. Between August 2023 and July 2024, there was not a single case of suspected AI use reported to the Proctors’ Office. Since the release of ChatGPT Edu, there have been eleven cases.

Interestingly, AI use in itself is still not classed as a separate category of academic misconduct by the University. Instead, these cases are classified as ‘plagiarism’, according to a freedom of information (FOI) request by Cherwell.

By contrast, another FOI request, sent to the University of Bristol, suggests that other universities see far higher numbers of AI cases. In the 2023/24 academic year, Bristol issued 526 penalties for suspected AI misconduct, dwarfing Oxford’s figures.

Tell-tale signs

Cherwell’s freedom of information request also shines a light on the indicators used in determining whether academic work is the product of AI. 

Fake quotes, factual inaccuracies, and prompts left in the text are considered the most obvious indicators of suspected AI misconduct. Other indicators, however, are more open to debate. For example, students would “not normally have been taught” to use em-dashes in their writing.

Another indicator of potential misconduct is the use of ‘Americanisms’. The guidance does note that international students are more likely to have learned American English, though mixing British and American English in the same text is considered a sign that AI may have been used. Other indicators include curly quotation marks, unusual levels of repetition, poorly argued prose, highly polished text, and bland statements.

The internal guidance is prefaced by a disclaimer that the indicators “may not provide definite proof that the student used AI without permission”, and urges the Proctors to consider each case holistically.

The accused

Though the data relates only to cases of AI misconduct in officially assessed work – rather than in tutorials or collections – Cherwell spoke to students who have faced accusations of AI usage by their college.

One modern languages student, who graduated last year, was accused of using AI in collections in his final year. He explained that he was called into a meeting and that his tutor wanted to escalate the complaint further. He told Cherwell: “I was very scared that, if she thought I had used AI when I hadn’t, how is it going to go in my finals?”

He told Cherwell that his tutor’s way of checking was to put the essay question into ChatGPT and see whether it produced a similar answer. He explained that this approach “is not a valid way of checking if someone used AI at all.”

When asked whether the ordeal changed the way he approached academic work, he said: “It didn’t change the way I approached it because I am really stubborn and I love an em dash.”

Another student that Cherwell spoke to, however, has been more inclined to approach academic work differently following accusations of AI usage from tutors. She explained how the discrepancy between different tutors’ attitudes towards AI may leave students without a clear answer as to when, if at all, AI use is acceptable in academic work.

She told Cherwell that she “lost all confidence” when she stayed behind after a tutorial to ask questions about a topic she was curious about, only for her tutor to question whether she had used AI to collect notes and plan the essay.

She explained, however, that younger tutors tend to be more open to using AI tools to break down a question and understand difficult concepts. She told Cherwell: “I often wonder whether, if I had more time to break down and review the information for my essays, I would have a more sufficient understanding of the topic and be able to write a coherent essay without needing to cut corners by using AI.”

What the experts say

Thomas Lancaster, Principal Teaching Fellow in Computing at Imperial College London, told Cherwell that, although guidance regarding the use of AI in universities exists, the biggest challenge is that it isn’t always consistent or up to date. The key issue, he explains, is that “so much of it assumes that every academic discipline operates in the same way”. 

One way in which some universities have attempted to cope is by increasing the number of closed-book, handwritten exams. Oxford made this change for its modern languages courses in May 2025 due to fears over AI, though the move sparked debate among students at the time, who would have to adapt to a form of assessment that they had not anticipated. 

However, when asked whether a blanket shift to in-person, handwritten examinations would be a viable solution to the AI misconduct conundrum, Lancaster told Cherwell: “I think that would be completely inappropriate. Most universities in the UK just aren’t set up for an exam based curriculum, and frankly, handwriting just isn’t a skill that people have. This also limits what people can accomplish, which is very different for preparing students for an AI-first world.

“The Oxford deal with OpenAI really showed the University being at the forefront of AI adoption, although the educational sector has moved on since then… There’s nothing wrong with an assessment testing the ability of students to work with modern technology, but the assessment has to be phrased in those terms. Similarly, there’s nothing wrong with AI free assessments. It’s all about creating a balance.”

Ben du Boulay, Emeritus Professor of AI at the University of Sussex and Editor of the Handbook of Artificial Intelligence in Education, also has ideas for how assessments can adapt to the challenge of AI. He told Cherwell that, in some cases, “it may be advantageous to allow students to use a large language model (LLM) but require them to submit both the LLM’s answer as well as their improved version of that answer, highlighting and explaining the changes that have been made”.

Du Boulay also advocates for more student training, telling Cherwell that such training should make clear what it means to be a student, how an assignment develops understanding and skill, and that being a student means improving metacognitive understanding and regulation.

A spokesperson for Oxford University told Cherwell: “The University is committed to encouraging the ethical, safe, and responsible use of AI and it has published clear guidance on this for students who use AI tools to support their studies. Unauthorised use of AI for exams or submitted work is not permitted and students should always follow any specific guidance from their tutors, supervisors, department or faculty. 

“Oxford’s teaching model emphasises the importance of face-to-face learning and requires students to clearly demonstrate subject knowledge, critical thinking and evidence-based arguments. Together with clear guidance on responsible use of AI for study, and policy on AI use in summative assessment, this helps to safeguard against inappropriate or unauthorised use of AI. Where concerns about unauthorised use are raised, cases are reviewed via established academic misconduct processes. All policy and guidance is under constant review, in response to rapid changes in the AI landscape.”
