
There are some things that once they’ve happened, they can’t unhappen. A development, a change that can’t be undone. Once nuclear fission had been discovered in the late 1930s, the atom was split. We couldn’t ‘un-discover’ this, ‘un-split’ the atom and pretend we didn’t know about it (with all of its consequences and potentialities – both positive and, to be frank, horrific).
That’s a slightly strange train of thought. What’s perhaps even stranger is that what set this train on its journey was a blog post by AdvanceHE.
what’s in a word?
The post was Kay Hack and Stuart Norton’s blog about the latest release in AdvanceHE’s Essential Frameworks for Enhancing Student Success series: Enhancing Assessment in Higher Education. As I was reading the post, all was as you would have expected. Then I came across something that made me pause.
This new publication replaced an earlier framework published in 2016. The post highlights how in 2016 the focus of universities was on 'transforming' assessment, but 'fast forward several years and the sector now focuses on "enhancing" rather than "transforming"'.
I can believe it’s true that this is what the sector talks about. But I’m less convinced that the shift is right. There’s a plausible argument that in recent years assessment in higher education has changed in ways that we can’t undo. This atom has been split, and we can’t pretend it hasn’t been.
So why do I think that?
computer world
The most predictable part of the answer is the speed with which the age of easy-to-use, ubiquitous AI tools has arrived.
The initial moral panic in some parts of the sector quickly gave way to more considered approaches seeking to recognise the new reality while maintaining the rigour and validity of assessment. And across the sector we’re seeing a lot of creative and innovative approaches to how to respond to the implications of AI.
However, we’re still very close to the bottom of a very steep learning curve on how AI will change assessment in universities. How long and steep that curve is going to be we don’t know – there are more alarming takes on this, and (slightly) less alarming scenarios. But either way, over the coming few years AI is going to lead to significant change, and yes transformation, in what we assess and how we do this.
But there’s much more that’s happened to assessment in recent years, which suggests we are in a period of transforming rather than enhancing assessment.
follow the numbers
At times it feels that UK higher education would like to forget about what happened during the pandemic. That’s very understandable, given the loss and trauma that Covid caused for so many.
But that shouldn’t lead us to turn away from the realities that many things have been changed in crucial ways by that experience.
On assessment a lot of focus has been given to the way all universities introduced Safety Nets/No Detriment during the pandemic, significantly changing the way we considered progression, award and classification – effectively, how we used the numbers generated by our assessments. But we need to follow the numbers, go back to their source.
During the pandemic we made very significant changes to how we got those numbers in the first place, i.e. in how we assessed. With in-person examination being almost impossible in 2020, and very challenging (and as a result much less used) in 2021, modes of assessment were radically different in both years.
Together these led to significant changes in patterns of student achievement – a general rise, and in particular a closing of awarding gaps between different groups of students. I don't think we've reflected enough on this, either as a sector or in individual universities.
Many universities undertook analysis that showed that Safety Nets/No Detriment had some impact on student attainment, but that this was much less than was perhaps expected. What really drove the increase in student attainment was the change to modes of assessment.
Have we really had the discussion we need on whether this really represented the decline in standards that some of the sector’s critics would claim; or if the different balance of modes of assessment the pandemic necessitated, actually led to better ways of assessing the abilities, learning and achievements of our students? My gut feeling is towards the latter rather than the former (while recognising that there are crucial discipline-specific dimensions to this). And if that’s right, it adds to the case for assessment transformation.
history repeats itself
Let's go back to Safety Nets/No Detriment.
The pandemic isn’t the only time the sector has gone down this road. Many universities followed similar approaches last summer in response to the Marking and Assessment Boycott, awarding provisional classifications on the basis of partial marks – with a guarantee that once all the marks were available, final classifications could only go up.
In doing so, we were effectively saying, as autonomous degree awarding bodies, that last summer we believed we were making appropriate, defensible awards and classifications without needing students to complete all the assessment we normally require.
Doesn’t that start to raise pretty important questions about how we should be assessing and classifying our students’ achievements when we don’t have the pressures of industrial action, particularly on the vexed question of assessment load/burden? And strengthen the suggestion that we’re in the realms of needing to transform, rather than enhance assessment?
And there are other things that strengthen that argument.
time to face the facts
For years, every August we would see a fair amount of commentary to the effect of ‘welcome the overall strength of the NSS results, but scores for assessment and feedback are noticeably lower and we need to raise them to the levels achieved in other areas of the survey’.
I was never entirely signed up to that viewpoint. I think that there is something different about the nature of assessment and feedback that means it’s almost inevitable that the NSS scores for this will be lower than for other areas. We know that assessment and feedback are activities that have a strong emotional dimension to them. Yes it’s someone’s work that is being assessed, not the person themselves; but the sense that it is you being assessed rather than your work can be difficult to escape. I suspect that this makes any lapses, failings or inadequacies that someone experiences or perceives in relation to assessment and feedback that much more personal; and, crucially, more salient when they come to fill in the NSS. The consequence is that scores for assessment and feedback will have a tendency to trend lower than those for the rest of the survey.
I still think that’s true.
But that argument only holds so far: 20 years of gaps of the size we've seen between NSS scores for assessment and feedback and those for other areas ultimately adds to the evidence of the need for structural change in the way we approach assessment (even if we're never going to see NSS scores at the same level as for other parts of the survey).
Again, transformation not enhancement.
enhancement or transformation?
I could go on: the explosion of mitigating circumstances claims, which predates but was accelerated by the pandemic, also tells us something about the state of assessment in HE; as does the growing attention to the importance of integrative, holistic assessment – something our modular systems make difficult to achieve, so that it happens all too infrequently. But this post is already overly long.
I know that a lot of incredibly valuable work has taken, and is taking, place in module teams, programme teams, departments and in individual universities to improve all dimensions of assessment in higher education. But I can’t help but think that we’re at a point now where that may not be enough.
Genuine transformation in assessment is needed. Of course this is hugely challenging to do, both in its own right and particularly given the financial challenges all universities face. But the changes we have experienced, and will continue to experience, mean that transformation rather than enhancement is needed. And perhaps a more concerted, co-ordinated approach is required.
Maybe this can’t or shouldn’t be taken forward at sector level. Too many memories of the dead end reached by the multiple Burgess groups in the 2000s and 2010s, though perhaps the problem there was too great a focus on grading and classification, rather than assessment itself.
And of course there are inherent challenges of sector level initiatives in this kind of area, given the diversity of universities in the UK. But might something be possible above the level of the individual institution, that would give a diversity of perspectives and expertise to allow something of value to be generated? Could this, for example, be taken forward by/through a university mission group?
Perhaps not. But if so, that doesn’t change my feeling that we are in an age of transformation not enhancement and we need to think more about how we respond to that.