First the tide rushes in. Plants a kiss on the shore…

I’m genuinely at a loss to describe how good James Wilsdon’s report of the independent review of the role of metrics in research assessment and management (“The Metric Tide”) is. Something that could so easily have been a clunky and breathless paean to the oversold benefits of big data is nuanced, thoughtful and packed with evidence. Read it. Seriously, take it to the beach this summer. It’s that good.

It also rings true against every aspect of the academic experience that I am aware of – a real rarity in a culture where reports are written primarily with an ear to the likely responses of institutional management. Wilsdon and the review team have a genuine appreciation for the work of researchers, and recognise the lack of easy answers in applying ideas like “impact” and “quality” to such a diverse range of activity.

Coverage so far has centred primarily on the implications for research metrics in REF-like assessments (the ever-eloquent David Colquhoun and Mike Taylor are worth a read, and Rachel Bruce at Jisc has done a lovely summary of the infrastructure implications), but towards the end of the report come two chapters with far-reaching implications, situated implicitly within some of the more radical strands of critique in contemporary universities. Let it be remembered that this is the report that prompted none other than the Director of Research at HEFCE to suggest:

What if all UK institutions made a stand against global rankings, and stopped using them for promotional purposes?

(which was unexpected, to say the least).

Chapters 6 (“Management by metrics”) and 7 (“Cultures of counting”) are a very welcome instance of truth being spoken to power concerning the realities of the increasing binary opposition between academic staff and institutional management via the medium of the metric. Foregrounded by Wilsdon’s introductory mention of the tragic and needless death of Stefan Grimm, the report is clear that the use of inappropriate and counter-productive metrics in institutional management should not and cannot continue.

Within this cultural shift [to financialised management techniques], metrics are often positioned as tools that can drive organisational financial performance as part of an institution’s competitiveness. Coupled with greater competition for scarce resources more broadly, this is steering academic institutions and their researchers towards being more market-oriented.

Academics should have greater control over their own narrative (the report laments the outsourcing of performance management to league tables and other commercially available external metrics), and this narrative should not be shaped by the application of inappropriate metrics. The “bad metrics prize” looks like an excellent way to foreground some of the more egregious nonsense.

Fundamentally, the purpose of a higher education institution should not be to maximise its income – it should be to provide a sustainable and safe environment for an academic community of scholars. That’s pretty much straight out of Newman, but in 2015 it feels more like a call to arms against an environment focused on competition for funding.

A decision made by the numbers (or by explicit rules of some other sort) has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand for impartiality and fairness. Quantification is a way of making decisions without seeming to decide. Objectivity lends authority to officials who have very little of their own. [T. M. Porter]

With an uncompromisingly honest epigraph, Chapter 7 lays the blame for this state of affairs firmly at the door of poor-quality institutional management. Collini, Docherty and Sayer are cited with tacit approval for perhaps the first time in an official HEFCE report. Broadly, the report argues:

  • That managers use metrics in ways that are not backed up by what the metric actually measures.
  • That managers use metrics in a way that is heavy-handed, and insensitive to the variety implicit in university research.

Institutional league tables and Journal Impact Factors (JIFs) receive particular criticism as being opaque, inappropriate and statistically invalid. But it is noted that managers use these indicators (the report’s preferred term) to spare themselves from making qualitative decisions, which are open to accusations of bias and secrecy.

Many academics are complicit in this practice, arguing either for transparency or from a perceived advantage to themselves over their peers. This wider cultural issue is seen as outside the scope of the report, and as only sparsely documented, but this boundary prompts the obvious question: which report will focus on these wider issues? (The Wilsdon report does call for more research into research policy – to me, this could and should be extended to a call for urgent research into higher education policy and culture more generally.)

A section on “gaming” metrics, and another on bias against interdisciplinary research, rehearse what is currently widely known about these practices (no mention of Campbell’s Law!) and again call for an expansion of the evidence base. I know that much work under the collective umbrella of SRHE and BERA over the years has touched on these issues, and perhaps both organisations, and others[1], need to plunder their archives and ensure that what evidence has been presented can be made available in an openly readable form.

It’s clear that the RAE/REF has had an impact: on the type of research conducted, where it is published and how it is built upon. This influence has already been noted and used in a welcome way with the recent requirements on open access. But as well as adding new stipulations, the older ideas about status and quality that underpin the REF (and for that matter, peer assessment) need to be examined and reconsidered.

If it is impossible to stop people producing research in the image of the REF requirements, maybe we need to change the requirements so that interesting research is produced. But, as is noted, many of the constraining factors are applied at an institutional or departmental level – and it is these multiple nanoREFs that are likely to have the greatest day-to-day impact on the research-active academic. They require changes to local management practice, rather than national policy, to become less painful, and it is perhaps time to consider intervening directly rather than relying on levers designed to drive up research quality.

The goal of “reducing complexity and cost” within research policy is a commendable one, and I am sure few will be waving the flag for the current labour-intensive system of assurance and assessment: the “gold standard” is a heavy one, and we should investigate lightening the load wherever quality would not be affected. The Wilsdon review argues, cogently, that the trend towards quantitative tools is already having significant adverse effects, and indicates that efficiency may not be the only goal we need to keep in mind. As such, it is a major contribution to the ongoing health of academia and (perhaps) the first mainstream indicator of a wider resistance to poorly applied metrics in all areas of university life. Designers of teaching metrics should take careful note.



[1] For example: Kernohan, D. and Taylor, D. A. (2003) “What Is The Impact Of The RAE?”, New Era in Education, 84(2), pp. 56-62.
