
So it’s a new year, and a time for new starts. Often surprising. Sometimes not particularly well-founded. But new year, new starts is a tradition that I’m going to have a go at maintaining. Or, depending on your point of view, establishing it, since 2024 is this blog’s first January.

the surprising bit

The OfS approach to academic quality and standards has some interesting and valuable elements, compared to what went before.

I suspect that, for those who have read previous blogs, that statement, coming from me, is the surprising bit.

Though it shouldn’t be entirely surprising. To speak is a sin identified some positives among the problems with the ‘boots on the ground’ quality assessments; shopping implied the need for universities to address the requirements of consumer protection law more effectively; and you choose challenged universities to take up the opportunity created by OfS to implement quality processes on the basis of their demonstrable effectiveness in supporting educational quality, rather than treating the B conditions (as some have suggested) as an implicit, and in some cases explicit, injunction by OfS to jettison all process.

the unfair bit

A fair amount of the criticism of OfS’s approach to quality and standards has been about its emphasis on quantitative data: the B3 requirements, the monitoring of performance against these, and their use as a regulatory tool as part of an outcomes-based approach. This isn’t entirely fair though.

Outcomes do matter and there wasn’t sufficient attention to these in the pre-2018 national quality system.  And there are other ways in which the criticism of the use of quantitative data has been unfair.

The caricature has been that the B3 thresholds need to be hit, otherwise providers will feel the wrath of OfS.  This of course would make little sense.  I’ve lost count of the number of workshops and presentations where I’ve put the following quote up on the screen:

the glitter of statistics and the reassurance that some people find in possibilities of ‘objective measurement’ must not be allowed to obscure the fact that philosophical thinking, as well as other disciplines such as anthropology, supply evidence too, and evidence that … we clearly need

Richard Smith, ‘Beneath the skin: statistics, trust and status’, Educational Theory, vol. 61(6), p.645

Education is a social process.  When we’re dealing with social processes quantitative data can indicate something you probably need to look at and think about, but it frequently can’t explain the thing that it’s highlighted.  That requires other, more qualitative approaches together with the application of expertise and practical wisdom.

the intention

In many ways that’s what we have from OfS on quality and standards.

Performance on the B3 metrics, against pre-determined and un-benchmarked thresholds, is monitored. But the real substance of OfS’s quality and standards requirements lies in the other B conditions, which are much more qualitative in nature.

And when performance in a provider (or in an area within a provider) is significantly below OfS’s B3 thresholds, a quality assessment team is sent in to gather evidence (quantitative and qualitative) and then apply expert judgment to reach a conclusion on whether quality and standards at that provider, in the areas assessed, meet OfS requirements.

So in theory the design is sound: use proxy data to identify potential issues, then draw on qualitative evidence gathered and assessed by experts.

the reality

There are some challenges in describing the reality of boots on the ground OfS quality assessments, not least because OfS still hasn’t published a clear operational description. What we do have, though, are five published reports from these inspections.

the not well-founded bit?

The nearest we have to an operational description comes in David Kernohan’s reconstruction from the first three such reports. This appears to confirm that quantitative data was used to identify first the subject areas looked at, then the providers selected, and then to focus further the work of the quality assessment team, which in turn used a range of evidence (documentation, observation, meetings with the provider) and applied its expertise and judgment to reach a broad-based conclusion.

And here’s the interesting thing.  We’ve now had five reports on OfS quality assessments, and in three of these the teams did not identify any concerns regarding compliance with the B conditions.

In many ways of course this is a good thing, though it’s unclear whether the Prime Minister will feel he’s getting his money’s worth from that sledgehammer with which he’s smashing ‘rip-off degrees’. (But even if he isn’t, we can rest assured that he won’t be getting tetchy about it).

But does it also raise a more serious question? How useful and valid are the B3 metrics if, in three out of the five cases we currently know about, poor performance on these metrics did not indicate low-quality provision? Is the claim that these are useful proxies for quality of provision actually well-founded? And if it’s not, how well does this aspect of OfS’s approach add up?

Of course, it could be that my questioning of this isn’t well-founded. Some might say that even the existing ‘failure rate’ is sufficient to validate the quantitative proxies being used by OfS. Additionally, thanks to OfS’s elastic relationship with deadlines (which reminds me of being an undergraduate, when I always regarded a coursework deadline as the opening of negotiations with the module leader), there are a lot of quality assessment reports still to be published. Perhaps the ‘failure rate’ will be much higher in those reports.

But whether this all adds up is something we need to think about and reflect on as the year unfolds.

underpinning the foundations?

If it’s the use of the B3 data as a proxy that’s less well-founded than was intended, what’s the answer?

Well, I don’t think it should be a full-blown retreat from using outcomes data as part of the regulation of academic quality and standards. Even if we need to treat such proxy data carefully, it still has value. Instead, we need to think about how we better underpin the existing foundations of the OfS approach.

Jim Dickinson recently wrote about the warm reception he was finding among students for the OfS quality assessment reports, and that suggests a way forward. As I’ve written before, OfS quality assessments are essentially a new variant on periodic review – as first practised in QAA subject reviews, and then through internal periodic reviews. So a renewed emphasis in the external regulation of quality and standards on robust internal periodic review could both provide stronger assurance and (if we do it right as a sector) give students greater confidence that their interests are being considered, protected and advanced.

When HEFCE re-opened the quality wars in 2016, a key element of their Revised Operating Model for Quality Assessment was a requirement for robust, periodic review through internal processes validated by QAA on behalf of HEFCE.

This made a lot of sense. Periodic review was universal practice across the UK university sector before the establishment of OfS, and is still common practice, though there are increasing instances of universities resiling from it and/or senior management teams questioning the continuation of such reviews. And perhaps of particular relevance here is the way in which periodic review remains a central feature and requirement of the external regulation of quality and standards in Scotland by the Scottish Funding Council.

There are good reasons for the emphasis on periodic review. Such reviews allow actual provision to be considered, with quantitative and qualitative evidence weighed by expert peers to reach rounded judgments. Yes, it would mean broadening OfS’s approach to regulating quality and standards to incorporate an element relating to process rather than focusing only on outcomes, but this could be done in a way that’s consistent with OfS’s commitment to principles-based rather than rules-based regulation (the HEFCE proposals, for example, were clear that there should be diversity in approaches to periodic review).

For reasons that were never really clear, the validation/accreditation of periodic review processes proposed by HEFCE in 2016 never happened. But finally implementing that proposal, through internal periodic review processes validated/accredited by OfS, offers a chance to underpin academic quality and standards more effectively than is happening currently.
