
IB results day: a broken algorithm which decided students’ futures

Zacharie Mouillé examines the International Baccalaureate's controversial marking methodology, and what it might mean for A-level results day this year.

Students in the International Baccalaureate programme received their final grades on the 5th of July this year, and many of them were disappointed. Although the IB had promised a fair method of determining these grades to replace the cancelled exams, final grades in many cases fell well below students’ predicted grades, causing them to miss their university offers and leaving them with no higher education plans for the coming year.

In the days following the release of IB results, considerable evidence emerged that an unusually high number of students had suffered sharp reductions in their final scores relative to their predicted grades. A petition calling for the IB to take remedial action regarding the results has gained over 23,000 signatures and claims that grades were lowered by up to 12 points in many cases. Wired and The Financial Times have also featured statements from students and teachers who confirmed these sharp declines in grades and explained that the results have prevented many students from gaining places at universities.

Looking at the statistical report released by the IB, it appears that little has actually changed from previous years. In fact, the IB reported a higher pass rate and a higher average score this year than in the May 2019 exam session: 79.10% in 2020 versus 77.83% in 2019, and 29.92 in 2020 versus 29.65 in 2019 respectively. The grade distribution curve is also in line with previous years’. Yet while the IB may be satisfied that its methodology yielded the expected results on paper, the obvious issue with these overall statistics is that they reveal nothing about how individual students performed relative to their predicted grades, which is precisely what students have taken issue with.

The reason students are reporting such sharp declines comes down to the marking methodology the IB used this year: the algorithm. When the exams were cancelled in March, the IB promised to award diplomas based on three criteria: student coursework, predicted grades and historical data from schools. It delegated the design of an algorithm combining these criteria to an unnamed educational organisation. Each of these criteria is problematic on its own, and taken together they are inadequate for determining final scores.
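
The IB has not disclosed how these three inputs were weighted or combined. Purely as illustration, a model of this general shape might resemble the sketch below, in which the weights, the school adjustment and the function itself are assumptions rather than the IB’s actual method:

```python
# Hypothetical sketch only: the IB has not published its algorithm, so the
# weights and the school adjustment here are illustrative assumptions.

def predict_final_grade(coursework_grade: float,
                        predicted_grade: float,
                        school_factor: float) -> int:
    """Blend externally marked coursework with the teacher's predicted
    grade, then shift the result by a per-school, per-subject factor
    derived from the school's historical record."""
    blended = 0.5 * coursework_grade + 0.5 * predicted_grade  # assumed weights
    adjusted = blended + school_factor  # historical adjustment; may be negative
    return max(1, min(7, round(adjusted)))  # IB subject grades run from 1 to 7

# A student predicted a 7 with top coursework can still be pulled down by a
# school whose past cohorts underperformed their predictions:
print(predict_final_grade(coursework_grade=7, predicted_grade=7, school_factor=-1.2))  # 6
```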

This year, exceptionally, every piece of coursework was graded externally by the IB, making it the key determinant of a student’s individual performance. However, for many subjects, notably the sciences, coursework accounts for merely 20% of the total grade, so it cannot accurately reflect the grade a student should receive in the subject as a whole. In a normal year, a student who submitted sub-par coursework could greatly improve their final grade by performing well in the exams, which carry the remaining 80% of the marks. The excessive reliance on coursework resulting from the cancellation of the exams has denied many students this opportunity, meaning their grades are defined largely by coursework they did not know would be so significant at the time it was submitted.

Predicted grades are estimated by teachers before students apply to universities and are notoriously unreliable. Schools are under scrutiny to ensure that students receive accurate predictions, but this does not change the fact that a teacher’s estimate based on a year of work is too unreliable a measure to form a significant part of a student’s final result.

The most jarring of the criteria is the use of historical data on a school’s performance. The IB has explained that it generated a unique factor for each subject in a school, which models both “predicted grade accuracy as well as the record of the school to do better or worse on examinations compared with coursework”. This criterion indicates nothing about the potential of an individual student to achieve a top grade, and effectively punishes students for attending schools which perform poorly according to the IB’s model.

Furthermore, many schools have very small cohorts taking the IB, which causes grades (and the accuracy of predicted grades) to vary greatly from year to year, undermining the reliability of the historical data. Schools whose students traditionally receive unconditional offers (for example, schools with many applicants to the United States) would also suffer, because more of their students fell below their predicted grades in previous years; this makes their predictions appear less accurate through no fault of the teachers. The unfortunate result is that although most schools can happily report that their average scores remained roughly the same, because the algorithm takes their past results into account, individual students have been prevented by factors entirely outside their control from achieving the grades they deserve.
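
To see why small cohorts are such a problem, consider a hedged sketch of how a historical factor might be computed; the calculation below is an assumption for illustration, not the IB’s disclosed method:

```python
# Illustrative only: one plausible way a per-school, per-subject factor
# could be derived from past sessions, and why a small cohort makes it noisy.

def school_factor(past_predicted: list[int], past_actual: list[int]) -> float:
    """Average gap between actual and predicted grades in previous years;
    a negative value drags down every current student in the subject."""
    gaps = [a - p for a, p in zip(past_actual, past_predicted)]
    return sum(gaps) / len(gaps)

# A school with only three past candidates in a subject: one weak result
# dominates the factor and penalises the entire current cohort.
print(school_factor(past_predicted=[7, 6, 6], past_actual=[7, 6, 3]))  # -1.0
```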

Using an algorithm to determine IB scores certainly has its advantages: it ensures that all students are subject to the same method of determining final scores. But this is only the fairest possible approach insofar as the algorithm deduces the fairest result every time. In most real-world situations where an algorithm is used, anomalous results are identified and manually corrected to reflect the actual result more accurately; the IB has released no information suggesting that anything of the sort was done. Its policy should have been to identify individual cases where a student’s final score fell well below their predicted grades, then convene a panel to review each of these students’ coursework and other relevant data a second time before awarding a final score that most accurately reflects their achievements. Instead, the IB has accepted the algorithm’s every result as gospel, stating that it “awarded grades in the fairest and most robust way possible in the absence of examinations.” The confusion caused by these sharp declines from predicted grades has been compounded by the IB’s lack of transparency, as it refuses to disclose the full details of the methodology and how it was designed.
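
Such a safeguard is straightforward to express; in the sketch below, the threshold and record layout are illustrative assumptions, not anything the IB has published:

```python
# A sketch of the safeguard argued for above: flag anomalous results for
# human review rather than accepting every algorithmic output as final.

ANOMALY_THRESHOLD = 4  # flag drops of 4+ points on the 45-point diploma scale

def flag_for_review(results: list[dict]) -> list[dict]:
    """Return students whose final diploma score fell far enough below
    their predicted total to merit a second look by a panel."""
    return [r for r in results
            if r["predicted_total"] - r["final_total"] >= ANOMALY_THRESHOLD]

students = [
    {"name": "A", "predicted_total": 40, "final_total": 39},
    {"name": "B", "predicted_total": 38, "final_total": 26},  # a 12-point drop
]
print(flag_for_review(students))  # only student B is queued for the panel
```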

I am personally of the view that an algorithm should never have been considered in the first place. Other educational programmes, such as the French Baccalaureate, also cancelled their exams yet awarded diplomas without relying on an algorithm, and were not met with outrage on results day. The College Board’s Advanced Placement (AP) programme, though it suffered technical problems of its own, was able to carry out its exams online. The relative success of other programmes in fairly awarding grades exposes the IB’s failure at every stage to adapt to these extraordinary times. Moving all IB papers online in just a few months would have been difficult, but no adequate solution would have been easy. Even if holding every paper online was not possible, the IB could have designed shorter open-book examinations testing students mainly on the skills they developed over the previous two years. Whereas the current methodology is frustratingly opaque and outside students’ control, an online exam arrangement would have been significantly fairer and far less speculative, and would have given students a sense of control over their outcomes. Instead, the easiest solution that satisfied both the IB and the majority of schools was chosen, leaving the students behind.

Students unhappy with their marks have been left with few options. As in other years, they may request remarks of individual papers for a fee. However, the IB’s remarking process, much like the algorithm, lacks transparency: the petition claims that students requesting remarks (which are usually expected to take at least several days) have received responses from the IB within a day, reporting no grade increase and providing no explanation of how the decision was reached. Students also have the option to sit formal exams in the November session, but this incurs a fee of 119 USD per subject retaken, plus 147 USD in core fees; with six subjects in the diploma, retaking them all would cost 6 × 119 + 147 = 861 USD. For many students, this is simply not a realistic option. The IB must take responsibility for forcing students to seriously consider costly retakes as their last chance at a fair grade, and heed their demands to lower or outright remove the excessive fees for the next examination session.

The fallout from IB results day will certainly leave A-level students worried about the outcome of their own results day on the 13th of August. Ofqual has set out guidelines for the new marking system that sound eerily similar to the IB’s own methodology. This year, schools and colleges submitted centre assessment grades (essentially predicted grades), as well as a ranking of students within each grade and subject. The centre assessment grades are then to be standardised using a model designed by Ofqual, taking into account a range of factors including, worryingly, the results of the school or college in recent years. This criterion has already drawn outrage from parents and students alike, who fear that students attending schools with historically low results will suffer because no regard is given to an individual student’s ability to thrive in a difficult learning environment.
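
Ofqual’s full model had not been published at the time of writing, but a deliberately simplified sketch of rank-based standardisation, using an invented historical distribution, shows how historical results can override individual ability:

```python
# A simplified, hypothetical sketch of rank-based standardisation of the
# kind Ofqual's guidelines describe; the historical distribution is invented.

def standardise(ranked_students: list[str],
                historical_dist: dict[str, int]) -> dict[str, str]:
    """Deal grades out down the centre's rank order until each grade's
    historical quota is exhausted (dict assumed ordered best-to-worst)."""
    quota = [grade for grade, count in historical_dist.items()
             for _ in range(count)]
    return dict(zip(ranked_students, quota))

# Four students ranked best to worst at a centre that historically awarded
# one A*, one A and two Bs: the fourth-ranked student receives a B no
# matter what grade their teachers assessed them at.
print(standardise(["W", "X", "Y", "Z"], {"A*": 1, "A": 1, "B": 2}))
# {'W': 'A*', 'X': 'A', 'Y': 'B', 'Z': 'B'}
```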

If the A-level exam boards show a level of disregard similar to that which the IB has demonstrated towards its students in the last few weeks, we will be able to say definitively that educational programmes have failed this year’s cohort of graduating students, preferring the easiest way out of a complex issue at the expense of their own students’ futures.
