
In a piece I posted a little while ago about what I thought were the real implications of the OfS’s approach to regulating academic quality and standards, I said:
Some of the processes, in some universities, that existed prior to OfS’s new approach had become bureaucratic and ineffective (and at some point I’ll write more about how and why I think that happened).
So I thought it was about time that I made good on that promise/threat (delete according to reader’s preference).
so how ‘bad’ was it?
The first thing is to stress the heavy lifting being done by the word ‘some’ in the quote above. There’s been a tendency in recent years for parts of the sector to portray the pre-2018 approach to quality and standards as a dark ages of quality assurance, where all that counted was process and pieces of paper.
Clearly I don’t subscribe to that view. I was a QAA reviewer in that period, and I continue to be a QAA reviewer. And I’m always wary of anybody or any organisation that feels the need to justify their own approaches/changes by undermining and discrediting what preceded them. Sometimes that’s justified, but often it is a red warning light about the person/organisation choosing to justify themselves in this way.
The pre-2018 approaches to managing academic quality and standards, and those participating in these approaches (academic and professional services), made significant and valuable contributions to ensuring high quality educational experiences for students. At the same time though, if we’re honest there were times and places that the focus on the purpose of quality management became a little lost.
I remember one colleague, for example, who would sometimes open programme approval panels with a comment along the lines of ‘so the main purpose of why we’re here today is to make sure that we’ve got the paperwork in place should we ever be asked for it’. Of course no-one ever said anything in response, but I suspect that I didn’t always hide particularly well the inner frustration such comments caused me. (It was that type of situation which led a colleague to tell me that even (particularly?) when I didn’t say something I tended to have a ‘loud face’).
So why did our approach to quality sometimes lead to a perception, and perhaps sometimes a reality, that what mattered was the paperwork, rather than the educational experience?
the BBO
Part of the answer perhaps explains the anecdote above.
For example, the purpose of a programme approval process is to support effective programme design, so that a new programme gives students a high-quality educational experience. But sometimes there’s a nervousness about suggesting that this requires anything other than leaving the academic programme team to ‘just do it’ – either a pre-emptive nervousness, or a response to programme proposers making clear that they just want to be left to it.
An approach to this would be to explain how the process was designed to deliver value against the purpose of setting up well-designed programmes that delivered high quality educational experiences: developing learning outcomes so there’s clarity about what is being taught and to what purpose, in terms of what the students should know, understand and be able to do; understanding how learning and teaching opportunities will support students in developing the agreed knowledge, skills and mindsets; developing an assessment strategy that supports learning and assesses outcomes reliably; engaging with internal and external academic peers to discuss these issues in order to improve the educational design of the programme.
But explaining that can be challenging in any number of ways, so the temptation was always to invoke the Big Bad Other. Except that the TLA (Three Letter Acronym) invoked wasn’t BBO, but QAA.
The quality processes had to be done to satisfy QAA.
And when this was the explanation given for the quality process, the purpose of the process got lost and it became about ticking boxes to satisfy the demands of the BBO. In other words, the process, the bureaucracy, became the point of what was being done. And while I’ve focused here on programme approval, the same applied to other quality processes.
Of course the extent to which this happened varied – over time, between universities and even within universities. But it was a factor.
sugaring the pill
I think there is, though, another explanation that is less commonly picked up on than the one set out above. And that’s the way in which many universities chose to integrate quality assurance processes with other institutional processes and requirements.
The requirements of external quality assurance became clearer through the mid-1990s and into the early 2000s, and all universities had to respond to this. And in many, a key feature of this response was two different types of integration.
The first was that where a quality process needed to be more formally established or developed, as well as focusing on the educational quality aspect, we also loaded in significant existing or new learning and teaching administrative requirements. So, for example, programme approval processes often asked for very significant amounts of information that was essential to setting up and running a new programme from an administrative perspective, but which had little to do with the educational quality of the proposed programme.
The other type of integration was where new quality requirements were added in to existing processes.
For instance, when a new module was being established or an existing module was being changed, there was always a need to capture key information about that module to inform key learning and teaching administrative processes (e.g. student module registration, assessment, provision of transcripts etc.). And often educational quality elements were added to an existing administrative process. The precise balance between the educational and the administrative has varied between the universities I’ve worked at, but in all of them module approval has always been as much, and often more, about generating data for learning and teaching administration as it has been about educational quality.
Of course, integration happened for good reasons. Partly a feeling that it sugared the quality pill (‘while you’re giving us this information about your assessment, would you just tell us what you’re assessing by setting out the learning outcomes’). Probably even more so, it came down to a sense that this was ‘efficient’.
But over time this has blurred the lines between processes for educational quality and processes for learning and teaching administration. Often it has reduced the effectiveness of the former, and by association with the latter made it feel (and in some cases made it so) that the processes were about administration, not educational quality. So much so that in some places where I’ve worked it blackened the name of educational quality to the point that learning and teaching administration processes and activities which had nothing at all to do with educational quality were criticised as instances of ‘quality bureaucracy’.
a way ahead
Of course, that’s all in the past – but that’s what I set out to focus on in this post. That doesn’t stop me, though, from wanting to finish off with some thoughts about the future – echoing but hopefully amplifying some of the things I said in my you choose post.
Integrating to achieve efficiency isn’t a bad thing. It’s a good thing (perhaps even an essential one, given the financial challenges universities face), but the why, what and how of that integration matter.
I’d characterise three aspects of the way I’ve heard people talk about responding to OfS’s approach to academic quality and standards.
One has been the need to return to a more educational development focus in the way we manage quality and standards (which was the conclusion of you choose), seeking greater integration between our processes for supporting educational design and development and the way we do quality management.
Another has been to focus on compliance, which in many ways has been a knee-jerk response to much of the tone of the messaging coming out from OfS.
And the final one has been to focus on data, in response to the emphasis on B3; overlooking to a large extent that B3 data is a proxy, that quality as such is actually set out in the other B conditions, and that these statements of quality are primarily qualitative, not quantitative, in nature.
Each university will, of course, blend these three elements, and the blend will vary from one institution to the next. Overall though, we need to think seriously about how we more effectively integrate our approaches to managing educational quality with the ways in which we support educational development. That doesn’t mean that we can or should entirely separate our educational quality processes from our learning and teaching administration processes, but we do need to think again, and more carefully, about where and how we do this; and not do it in ways that risk undermining the key purpose of those educational quality processes.