It seems that the look of the summer for HE policy makers is monitoring and assuring teaching quality via a set of data-driven metrics.
First up, HEFCE’s ongoing quality assurance consultation stepped up a notch with one of their old-fashioned “early” consultations on the principles that would underpin a new system of institutional QA. Spectacularly failing to demonstrate that this would be more efficient or provide better results than the current model [read the KPMG report for maximum LOLs], and monstered by everyone from Wonkhe to the Russell Group, HEFCE took to their own blog to defend the proposals a mere 48 hours later.
A day later, it was the turn of BIS – with a speech from new HE bug Johnson Minor at Universities UK. He offered what the tabloid end of the HE press would call a “tantalising glimpse” of the future Teaching Excellence Framework (manifesto bargaining fodder if ever I saw it), drawing a hurried HEFCE response clarifying that their tremendously important two-day-old QA proposals were different from, though linked to, the emergent TEF.
Learning Gain is the wild card in this mix. If it worked properly, it would be the single biggest breakthrough in education research of the last 100 years. It won’t work properly, of course – but it will burden students and staff with meaningless “work” that exists for no other reason than to generate metrics that will punish them.
None of this, of course, has any appreciable impact on actual students, but the idea of them being “the heart of the system” underpins everything. Remember, undergraduates, you may never see your tutor as she’s preparing another data return for one of these baskets, but it is all for your benefit. Somehow.
For all the differentiation, it is difficult to slip an incomplete HESES return between the two sets of proposals as they currently stand. Deciding to look at a basket of output measures as a way of assuring and/or enhancing quality is the epitome of an idea that you have when you’ve no idea – the artistry and ideology come in with selecting and weighting the raw numbers (as Iain Duncan Smith so ably demonstrated today).
Future generations of policy makers are limited to tweaking the balance to return the answers that they expect and require, whilst the sector itself focuses on gameplaying rather than experimentation in an innumerate “improve-at-all-costs” world. And the students? – well, the NSS results are going up. Aren’t they?
William Davies, in “The Happiness Industry”, writes about the financialisation of human interaction – the replacement of human voices with a suite of metrics that can be mapped to known responses. This is basically akin to the Phillips Economic Computer: a flawed model of cause and effect, wielded for particular policy goals and controlling the lives of millions. The advent of social media allows us all to have a greater voice in policy making – at precisely the time that policy making as we know it is disappearing.
Both the TEF and the new QA model advance the “dashboard model” of policy analysis, and a managerial rather than leaderly approach to institutional management – neither exposes the important assumptions that underpin the measurements. Sure, it’s fun to watch the emerging turf war between BIS and HEFCE – and it is fun to read the guarded snark of the Russell Group – but what we’re really seeing is poor-quality policymaking disguised by a whiff of big data.