High Ambition – why exams work

Here at PTE, we’re passionate about schools having High Ambition for all their pupils, to ensure that children have the widest and most fulfilling range of options open to them when they enter the adult world.

We’re obviously not the only ones to want this. We’re also far from the only people who have clear views on how this is best achieved. It seems you can’t turn a corner in the world of education at the moment without bumping into yet another commission or campaign looking to reinvent schooling – and in particular the exam system.

Whether it’s The Times’ Education Commission, the NEU’s Assessment Campaign, Pearson’s Future of Qualifications and Assessment research project, ResPublica’s Lifelong Learning Commission, Rethinking Assessment, or the Foundation for Education Development, the list goes on.

Lots of people care passionately about what and how we teach young people, and how we check how well they’ve actually learned it all over time. It’s why schools and what they do are always in the public eye. And it’s also why it sometimes feels as though everyone has an opinion on every decision taken by a school or teacher.

This is especially the case when it comes to testing pupils. Many of us have tales from our own school days about an exam going really well, or disastrously, and the impact it had on us as a result. Understandably, it is often these personal experiences that most strongly inform our views on the topic.

Assessment is an essential part of the teaching and learning processes. It’s how teachers and pupils judge what and how firmly the things taught have been grasped. It informs next steps for planning lessons or revision. And as well as being formative in these ways, there are important points in the journey where we have summative assessments – summaries or judgements on where a pupil is in terms of what they know and can do, be it absolute or relative to other pupils in their cohort or over time.

There are lots of different ways of assessing people, and no form of assessment is perfect. Whether we use teacher grades, interviews, open book tests, unseen exams, or portfolios – they all have strengths and some major drawbacks. Which of these is the most effective to use in a given situation will depend on what you’re trying to gauge, what the results will be used for, who you are assessing, and a whole bunch of other factors.

Over time, though, we have come to understand that for the UK’s pupils, standardised national assessments are the best and fairest way to measure summatively, at key points in their schooling, what pupils have learnt, and how different schools or groups of pupils are performing. In England this is why we have the Year 1 phonics check, the Year 4 times tables check, Year 6 SATs, and GCSEs, BTECs and A-levels in Y11 and Y13.

For younger pupils these are “low stakes”, and purely used to identify next steps for their teachers. Exams for older pupils are “high stakes”, as they are used by schools, colleges, employers, and the wider world to make judgements about the courses or jobs pupils can go on to.

Either way, we want our assessments to be as accurate as possible – that is to say, we want them to be the best reflection of what pupils know and can do. And for this, we need them to be as valid and reliable as possible.

By valid we mean the assessment actually measures the thing we want to know about, not something else. And by reliable we mean that it measures it consistently, so that the same pupil would get the same result over time, or two pupils with the same knowledge would get the same result as each other.

Teacher assessments can be really useful for summative purposes but, as we have seen this year and last, they can involve a huge amount of work for everyone – and a lot of stress for all concerned too.

They are also inevitably affected by bias, which tends to work against pupils from lower-income backgrounds, or boys, or other groups of pupils who tend to achieve lower outcomes than others.

Also, they are really hard to keep reliable. Different teachers in the same school might assess the same pupil differently. Pupils in different schools who know the content equally well might be awarded different grades by their respective teachers.

The big advantage that national standardised exams have is that we can make them more valid and more reliable than any other method of assessment. They can be externally set and externally marked, with smaller numbers of expert examiners marking work by large numbers of pupils from lots of schools, ensuring consistency in what is tested and how it is judged.

This enables us to avoid many of the downsides of teacher or internal assessments, and it is why sticking with exams as far as possible for these crucial checks matters so much. It is a question of accuracy and – importantly – fairness too.

Now obviously they are less accurate if a pupil has a bad day, or something goes wrong when the exam is sat. Exams aren’t perfect – but they’re the least bad way to accurately and fairly assess what people know and can do, especially when compared to the alternatives.

And they provide pupils with a powerful currency to use as they make their way into the world. GCSEs, BTECs and A-levels are well-respected qualifications, valued by those who gain them, and by employers, universities and wider society. They create a level playing field for everyone – kids sit the same exam, and can get the same grade, whether they go to Wellington or Waingels College, Eton or Etonbury.

Switching to a different system would do away with this. It’s why we move away from exams at our pupils’ peril – and why it’s so important that post-pandemic we get back to “normal” exams as soon as we can.