Tech, UX, Life

O-Level Grading UX Issues

May 14, 2020

Within Malta’s education system, as a secondary school student you are constantly working towards the O-Levels: examinations set by the government’s Ministry of Education that award the SEC Certificate you need in order to continue towards higher education.

Grades in the Maltese Education System

These exams famously award grades drawn from the following set: 1, 2, 3, 4, 5, 6, 7 and U.

  • 1 is the highest grade, 7 is the lowest
  • 1 through 5 are considered Passes
  • 6 and 7 are considered Failures
  • U indicates that the exam could not be graded


Due to the Covid-19 pandemic, the initial sitting of the MATSEC exams, usually held in April/May, was cancelled. Going forward, students have been given the following options:

  • Sit for their SEC exams in the September Sitting (typically used for resits of the April/May sitting).
  • Be assessed by the Board based on their mock exam results (typically run by schools, for the purpose of preparing students for the actual SEC exam).

This poses two main issues:

  • It is unclear so far (in May) whether the September sittings will actually be held, given the dynamic nature of the pandemic. If they are not, students may lose their one remaining option to be assessed this year.
  • The student misses out on the opportunity to take a resit. I have personally benefitted from resits, which helped me realise I was over-confident or under-prepared for the subject I was being examined in.

Whilst the latter seems to be a more viable option, its implementation has created a number of UX issues which will continue to plague students, teachers, educators and administrators for a number of years to come, as I will explain throughout this blog post.

According to the following Times of Malta article on the matter, the students who choose the latter option will be awarded “predicted levels” in the following manner:

  • Level 3 indicates that the student is at the “expected level” (equivalent to grades 1 through 5)
  • Level 2 indicates that the student is “nearly at the expected level” (equivalent to grades 6 and 7)

When I read the above article, it quickly became obvious to me that whoever or whichever committee came up with this scheme did not apply any human-centric design thinking to the problem. A number of issues are apparent:

Problem 1: Number re-use

By re-using numbers, the scheme will cause confusion in both conversation and administrative tasks. If the response to “What grade did you get?” is “I got a 2”, this no longer necessarily means that the student passed and scored quite well; it can now ambiguously mean that the student failed.

Also consider a simple administrative task: a headmaster requests a report of the distribution of marks attained by the school’s students in a specific subject. If an administrator, using a tool such as Excel, performs the query =COUNTIF(A1:A10, "=3") without consulting a separate indicator of whether each student opted for a predicted grade, the report will be factually incorrect.
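To make the ambiguity concrete, here is a minimal sketch in Python with entirely hypothetical data: once standard grades and predicted levels share the same column with no separate indicator, the naive count silently conflates the two schemes.

```python
# Hypothetical grade column mixing standard SEC grades and predicted levels.
# Suppose the second "3" is a predicted Level 3 (a pass under the new scheme)
# rather than a standard grade 3 — the values alone cannot tell us which.
grades = ["1", "3", "5", "2", "3", "6", "7"]

# Equivalent to the Excel query =COUNTIF(A1:A7, "=3"):
naive_count = grades.count("3")
print(naive_count)  # 2 — yet only one of these is a standard grade 3
```

The count itself is arithmetically correct; the report built from it is not, because the number 3 no longer identifies a single grading scheme.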

Problem 2: The use of the predicted marks is inconsistent with the standard marks

In the mind of every teacher, parent, student or administrator there is the following rule of thumb: a smaller mark is a better mark. Since 1 is the best mark and 7 is the worst, the smaller your grade, the better your score.

The predicted marks, however, follow the opposite system. Grades 1 through 5 map to Level 3, whilst 6 and 7 map to Level 2. In the predicted mark system, the smaller mark means that you “failed”, whilst the larger mark means that you “passed”.

Problem 3: The predicted marks are coarser than the standard marks

O-Level grades are often used to differentiate between one student application and another within certain competitive sixth-form institutions. By making the marking coarser, one cannot distinguish between a student who would have scored a 5 and a student who would have scored a 1, only that the student “passed” or “failed”.

Proposed solutions

The Maltese are known for complaining without offering to help fix the issue, so here is my contribution: how I would resolve the problems mentioned above.

Solution 1: Use new, unique marks for the predicted levels

Use unique marks for the predicted levels (for example, E for Expected Level and N for Nearly at Expected Level). The pandemic is an extenuating circumstance, so why not consider extenuating marks? This would:

  • Avoid confusion between Level 2/3 and the actual grades 2 and 3
  • Provide historical context: if a student has an E grade, everyone would know the year in which the student sat their exam

Solution 2: Avoid providing coarse marks in the first place

From the article, it appears that the MATSEC board will be using mark schemes provided by the teachers who created the mock exams. If a marking scheme has been provided, a solution would be to mark the exam and then grade the students on a bell curve. If a mock exam was deliberately easy (to encourage students to pass) or deliberately difficult (to prepare students for the actual SEC exam), normalising in this way would reduce that bias.
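As an illustration only, here is one way such curving could work: convert each raw mock score to a z-score and map bands of z-scores back onto the familiar grades 1 through 7. The cut-offs below are invented for the sketch and are not MATSEC policy.

```python
from statistics import mean, stdev

def curve_grades(scores, boundaries=(1.5, 1.0, 0.5, 0.0, -0.5, -1.0)):
    """Map raw mock-exam scores to grades 1-7 via z-scores.

    `boundaries` are illustrative z-score cut-offs (assumption, not policy):
    z >= 1.5 earns grade 1, z >= 1.0 earns grade 2, and so on; anything
    below -1.0 earns grade 7.
    """
    mu, sigma = mean(scores), stdev(scores)
    graded = []
    for s in scores:
        z = (s - mu) / sigma
        grade = 7  # default: below every cut-off
        for g, cut in enumerate(boundaries, start=1):
            if z >= cut:
                grade = g
                break
        graded.append(grade)
    return graded

# Hypothetical class of seven mock scores out of 100:
print(curve_grades([90, 75, 60, 55, 50, 40, 30]))
```

Because the grade depends on a student’s position relative to the class mean rather than on the raw score, a uniformly easy or hard paper shifts everyone’s scores together and the resulting grades stay comparable.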

The coarse “pass” or “fail” marks have to be determined somehow; it may simply be that a student who scores more than 50 out of 100 receives Level 3, and Level 2 otherwise. This lack of transparency does not inspire confidence that the marking will be fair.


In conclusion, I hope that this article helps bring awareness that, without applying user-centric design thinking, even our non-technical or non-software systems will end up causing extra work or increasing the possibility of mistakes that could easily be avoided. If you are in a position to determine any sort of policy or regulation, consider seeing it from the user’s perspective before going ahead.

Kudos to Joseph Trapani for proofreading this article before publication.

Written by Simon, who lives in Malta and works with Software and UX. You can follow him on Twitter, LinkedIn or reach out via email.