Those of you who’ve worked with me will know that I have the habit of asking “interesting” questions. Nearly always at exactly the wrong time, and in a way that entirely derails what we are meant to be focusing on. This is one of them. [DISCLAIMER: Shoddy, beer-fuelled data analysis follows. BIG CAVEAT: Don’t phone, it’s just for fun.]
Research policy making is often flawed because we don’t take account of the vast number of academics who research without external funding, often without institutional knowledge and occasionally in direct contradiction to their contract of employment. I keep saying this in conversations about research policy, and eventually people ask how big a deal this really is. No-one knows.
There are huge implications, of course. Everything from article processing costs, to data storage, to library use and lab consumables is based on a calculation of how much research is being done in a given department/institution/system of HE and what the cost implications are likely to be. It worries me that at a national level, the assumptions are all too often based on directly (grant) funded research (and the connected idea that you could just bash any additional costs onto the project budget or gouge out a load more overhead costs).
Like all the best questions, this one is a bit of a rabbit hole.
- There are differences between institutions in the way they handle cost assumptions, discipline areas and the amount of money they may have available as grants (pause for Education researchers to chuckle ruefully).
- There’s QR, and the varying ways that it may or may not filter down into actual budgets for actual researchers (and the fact that most subject areas in most institutions barely get enough for a round of drinks).
- There’s different academic contract types, and the assumptions that may be based around them.
- There are academics on teaching-only contracts researching in their own time using their own money. There are people who have left the sector altogether (or were never in it in the first place) who are still producing amazing research.
- And there are any number of organisations, offering any number of grants for any amount of money, with no single way of identifying all of them.
There’s basically a whole PhD in getting a reliable answer to this question (a response that many of my questions elicit… maybe I should do one of those one day…). I could use something as sweet as RIOXX and open access publications, or data coming out of something like academia.edu (hmmm…), or ORCID profiles (yay!). Each of those has strengths and drawbacks, but you could mash them up with some survey data and stuff from HESA and get an interesting answer. But for the moment, how about a quick’n’dirty approximation?
In 2013 the Centre for Business Research at Cambridge University, along with the UK Innovation Research Centre produced a report for BIS entitled “The Dual Funding Structure for Research in the UK: Research Council and Funding Council Allocation Methods and the Pathways to Impact of UK Academics”. What a title!
It was a strange report, attempting to analyse the link between research performance (as measured by the beloved RAE/REF), research funding methods (basically grant or no grant) and research motivations. It drew on an earlier, richer 2010 survey supporting a report into Knowledge Exchange – and I was largely drawn to it because it has a very large, very representative sample (balanced for gender, subject, seniority… not for contract type – too few teaching-only – but you can’t have everything) of 22,170 academics employed in UK HE in the summer of 2009.
It’s a really nice data set but – if you had only read the earlier report – you wouldn’t know that it could shed any light on our question. The 2013 report re-interrogated the same data, but mashed in details from the Research Councils of any grants that the surveyed academics held at the time of the survey.
In the summer of 2009 there were 179,040 academics in the UK (giving us a healthy sample size of a little over 12%). In that year the research councils were the biggest providers of UK research grants, offering around £1.3bn worth of funding.
The 2013 report [section E3] notes that QR is closely correlated both with research funding council grants and other grants.
It is also worth noting the useful summary of research funding volume by source in section E2. It draws on HESA data, which is also used to generate the figures and charts in one of my favourite HEFCE publications, the “Guide to Higher Education”. Here’s the 2009 version, which has figures for each strand of research funding for that year on page 29.
Way down on page 89 of the 2013 CBR report, we see the first instance of our sample mapped to grant activity: of the 22,170 surveyed academics, 18,972 did not have a research council grant (86%). We can break this down to broad subject area level using the data from the same question:
- Arts and Humanities: of 3,674 academics in this area 3,234 (88%) did not have a research council grant.
- Sciences: of 11,270 academics in this area 9,120 (81%) did not have a research council grant.
- Social Sciences: of 7,226 academics in this area 6,583 (91%) did not have a research council grant.
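Those percentages are just the unfunded counts divided by the subject totals. A quick sanity check in Python (the counts are as quoted from the report; the script itself is mine):

```python
# Surveyed academics by broad subject area: (total, without an RC grant).
# Counts as quoted from p.89 of the 2013 CBR report.
sample = {
    "Arts and Humanities": (3674, 3234),
    "Sciences": (11270, 9120),
    "Social Sciences": (7226, 6583),
}

for area, (total, unfunded) in sample.items():
    print(f"{area}: {unfunded / total:.0%} without a research council grant")
```

which gives 88%, 81% and 91% respectively.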
Now we know that research council (RC) grants are very strongly correlated with QR income, and that QR income is strongly correlated with other sources of income at institutional and departmental level. Sadly, we haven’t got strong enough data to do this at an individual level, which is what we’d need to gauge the prevalence of non-research-council grants directly.
But we can look at the proportionality of each form of grant income, based on the HEFCE/HESA data alluded to earlier in the report. If we ignore QR for the moment, RC is the single largest source of UK research funding.
- Charities made grants to the value of 61% of RC grants
- Central & Local Government/Public Sector made grants to the value of 47% of RC grants
- UK industry made grants to the value of 18% of RC grants
- Other sources made grants to the value of 44% of RC grants. (All figures for 2009.)
So, as all of that comes to about 169% of the value of RC grants that year (£2.3bn, if you’d rather), our best-case assumption is that a completely different set of academics got grants in the same kind of numbers as that amount of funding implies (the worst case, of course, is that the same academics hoovered up these grants too – which is actually more likely, given how concentrated research funding tends to be).
In a thrilling parallel to my policy-making career till about 2012, I’m now going to allocate a proportion of this £2.3bn of grants to our sample of academics, assuming that these grants are similar in size to RC grants (unlikely) and similarly distributed across broad subject areas (very, very, unlikely). Since this other funding is worth 169% of RC funding, the best case hands a grant to roughly 1.69 previously unfunded academics for every existing RC grant-holder.
- In arts and humanities, I allocated a grant to 744 more academics. So 68% are still unfunded.
- In sciences, 3634 extra academics got a grant, leaving 49% with no grant at all.
- In social sciences, I awarded 1087 other academics a grant, so now only 76% are still unfunded.
- In total 61% of academics are unfunded in our best case scenario.
Remember, we agreed that things were likely to tend towards the worst case, where the same academics got most of these other grants. If every existing RC grant-holder takes one of those grants first, that leaves only around 0.69 grants per RC holder for previously unfunded academics. In that world:
- In arts and humanities, I allocated a grant to 304 more academics. So 80% are still unfunded.
- In sciences, 1484 extra academics got a grant, leaving 68% with no grant at all.
- In social sciences, I awarded 444 other academics a grant, so now only 85% are still unfunded.
- In total 75% of academics are unfunded in our worst case scenario.
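The whole best/worst-case exercise can be reproduced in a few lines of Python. This is a sketch under the assumptions described above: in the best case the non-RC grants (worth 169% of RC funding) all go to previously unfunded academics, i.e. 1.69 new grant-holders per existing RC holder; in the worst case every RC holder takes one of those grants first, leaving 0.69 per RC holder for everyone else. The function and the multipliers are my framing, not the report’s:

```python
# Surveyed academics by broad subject area: (total, without an RC grant),
# as quoted from p.89 of the 2013 CBR report.
sample = {
    "Arts and Humanities": (3674, 3234),
    "Sciences": (11270, 9120),
    "Social Sciences": (7226, 6583),
}

def unfunded_share(extra_per_rc_holder):
    """Share still unfunded after allocating `extra_per_rc_holder`
    new grants for every existing RC grant-holder in each area."""
    shares = {}
    pop = still_unfunded = 0
    for area, (total, unfunded) in sample.items():
        rc_holders = total - unfunded
        left = unfunded - extra_per_rc_holder * rc_holders
        shares[area] = left / total
        pop += total
        still_unfunded += left
    shares["Total"] = still_unfunded / pop
    return shares

best = unfunded_share(1.69)   # other grants all go to new people
worst = unfunded_share(0.69)  # RC holders take one each first

for area in best:
    print(f"{area}: best case {best[area]:.0%}, worst case {worst[area]:.0%} unfunded")
```

Running this reproduces the headline figures: 61% of academics unfunded overall in the best case, 75% in the worst.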
So, even given the insane approximations and dubious rounding, it appears that significantly more than half of all academics are not in receipt of external funding for their research. The true figure is likely to lie somewhere between 61% and 75% of academics. Sobering stuff.