How to read the Green Paper

This is written before I have had sight of the imminent “green paper” on Higher Education from BIS. It is an attempt to spot what might be interesting within the paper in advance, and maybe something of a “wonk’s guide” to making sense of the thing when it eventually comes out.

First up, a “green paper” is an early stage consultation paper. It’ll be tempting to treat it as a “white paper”, but the choice of colour is a very important signifier. A “white” paper could be seen as analogous to a research paper – we’ve done the research, here are our findings and conclusions, do you agree with them?

A “green” paper is more like a hypothesis – we think that these ideas are the correct ones, we wish to test these with some research. In this case, the research will be a full consultation, with much more scope for responses to shape policy direction than with 2011’s “Students at the Heart of the System”. (Eager wonks will recall that much of the contents of the 2011 paper had already been implemented before the “white paper” consultation began).

We already know, and can infer, a great deal about what we can expect within the green paper. There have been two key speeches from Johnson Minor, one in July and another in September, that offer clues. Though most speculation (and god, has there ever been a lot of speculation…) has been about the TEF, it is not by any stretch the most important aspect.

To go through things in the order I am interested in them:

Regulatory and structural reform

James Wilsdon (big fan!) was party to a fascinating leak from BIS concerning the spending review. The scale of cuts (looking like Osborne’s aspirational 40% for a non-ringfenced department) is now clear, as is the concept of a “bonfire of the partner agencies” – the usual response to calls for departmental cuts, and one that is usually followed by a realisation that the partner agencies were invented primarily to shunt staff numbers out of the main “Whitehall” count whilst keeping key jobs done.

I don’t rate Sajid Javid as a minister, not least because he seems to want to make cuts rather more than he wants to run services. And it is his influence that saw Johnson Minor saying palpable nonsense like “much of the higher education system is ripe for simplification” in September. (Compare his more conventional new-junior-minister structure-building speech in July, which neglected to mention the more recent horror of the “day 1” slide).

Of the 40+ partner bodies named by BIS, 10 are associated with Higher Education, suggesting that at least four will likely perish. Which ones are for the chop depends entirely on choices made around the issues below.

Changes to research funding in England

Merging the seven research councils must look like an easy win for Javid – surely back office functions and branding could be combined with minimal disruption and significant savings. Sir Paul Nurse’s review of research funding is now overdue (expected Summer 2015), and given he is on record as claiming that a government that chose to cut research spending would be “Neanderthal”, one has to suspect that the delay may be due to rocks being banged together concerning the gap between this recommendation and Javid’s small government instincts. (Free social media tip for @bispressoffice – launch the report with the words “guh ruh guh urrrgh rahr Sir Paul Nurse” :-) )

Although it is likely that Nurse has recommended efficiencies around research support (his review was tasked with examining the split between project and strategic funding, ominously) it is unlikely that a recommendation to merge the councils directly would feature – indeed he’s been reported as saying this would “not be on the table”.

The other option would be to look at the non-project end of research funding – which would mean QR (basically research funding given to institutions to support general research-y activity) and the REF, both currently managed by HEFCE. Earlier this year Javid’s favourite think-tank, the IEA, called for both to be abolished. The upshot for the regulatory environment would be to get shot of HEFCE.

In the past, HEFCE’s responsibility for teaching funding would have stymied this approach – but post Browne/Willetts their teaching funding role is vestigial, to say the least. Widening participation is now under the auspices of OFFA, and the quality assurance remit is a whole other can of worms (see below). QR is much loved as a means of supporting blue-skies research and scholarship, as opposed to the more direct economic benefits often returned from research council projects.

Data requirements/”transparency”

A central plank of the Browne review was to offer students more information in order to make the market work better. It was a neo-classical economics admission of failure – the “sticker price” was obviously not conveying all the information (as free-market zealots like to believe it does), so more data was needed to give the market a helping hand.

HEFCE research (yes, them again) back in 2014 suggested that students don’t much use even the existing data in making course application decisions. And even the venerable NSS is currently under review. This consultation was released last week by HEFCE as kind of a data collection review version of leaning out of the window of a Passat in Hull requesting a “bare knuckle”. In happier times for HEFCE, this release would have been a part of the green paper release and flagged as a sub-consultation within it – alas these days releases of HEFCE consultations tend to happen a day or two before a much bigger BIS announcement that renders them largely meaningless.

So, precise details on how universities spend student fees will likely be the order of the day. Bonus prize for anywhere that officially notes that they spend £9,000 per student on “running a university”.

Widening participation

A year ago, Westminster politicians used to swagger past Holyrood ones, kicking (45%) sand in their faces and claiming that their HE funding system was more progressive than the Scottish one, despite having £27,000 worth of fee loans. This was possible because until this parliament students from less monied backgrounds got maintenance grants.

Alas, in northern primary school terms, George Osborne is now the “cock of the school” and his need for insignificant budgetary fiddling to preserve the twin lies that austerity is working and our economy is in good shape has trumped Caledonian bragging rights. Forget for a moment that most of the loans will never be paid back, Osborne has never been a long term thinker – indeed I think “cock of the school” at St Paul’s meant something different.

So “widening participation” once again becomes an issue for England, but not in an egalitarian sense. Basically there are votes in promoting white working class ambitions, but not many, so Johnson’s speech suggested a bit more data and maybe a university might deign to offer a scholarship or two. Boring cynical stuff, but the mention of OFFA in the September speech makes them safer than the HEFCE institutional structure they sit in.


The TEF

Ah, the ****ing TEF. Friend to the second-rate HE commentator. There’ll be no surprises here, basically a grab-bag of the likely indicators (NSS, first destination, maybe some widening participation/POLAR numbers and anything else HESA have up their sleeve for next academic year) and a commitment to explore other data sources for a more refined TEF2 in the years to come.

The 2015 budget added spice to the long rumbling debate by allowing institutions that had been judged to have excellent teaching to raise their fees in line with inflation each year. To put that in perspective, the sector saw £9bn of home and EU fees last year. Inflation (RPI) currently sits at a tumescent 0.1% after many months of deflation (thanks, George!). So English universities could be looking at sharing a maximum of just under an extra £9m a year if all are judged to have excellent teaching. (That’s £45m over 5 years – compare the £315m over 5 years devoted to the CETL programme.)
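The back-of-envelope arithmetic here can be checked directly. A minimal sketch, using only the figures quoted above (sector fee income and RPI as reported, not independently verified):

```python
# Rough scale of the inflation-linked fee uplift, using the figures
# quoted in the text: ~GBP 9bn of home/EU fee income and RPI at 0.1%.
sector_fee_income = 9_000_000_000  # GBP, approximate
rpi = 0.001                        # 0.1% annual inflation

annual_uplift = sector_fee_income * rpi
five_year_uplift = annual_uplift * 5

print(f"Maximum extra per year: GBP {annual_uplift:,.0f}")
print(f"Over five years: GBP {five_year_uplift:,.0f}")
```

Which bears out the comparison in the text: the whole-sector prize is smaller than a single funding programme of the 2000s.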

New providers, DAPs and quality assurance

The gun has already been fired here regarding new providers. The doors are open and new “market entry” guidance has been issued. There will surely be new providers, hopefully of a higher quality than those reported on by Dr McGettigan and others. It has already been indicated that the barriers to degree awarding powers and university title will be lowered, obviously because the initial experiments went so well. The only surprise here will be how few safeguards are included.

Finally, expect nothing at all on QA. HEFCE’s consultation is being hung out to dry in the most exquisite way possible – via a parliamentary committee hearing.

In conclusion

So there we are. The news before the news, as they say. On downloading my own copy of the green paper I will first look to see whether HEFCE are toast, and then figure out what a mess has been made of the dual support system.

Sail on

Despite being Viv’s birthday, the 26th September 2015 also marks the 10th anniversary of Yacht Rock, JD Ryznar and Hunter Stair’s smooth masterpiece. Combining cultural history, highly quotable dialogue, moderate production values and the sweet sounds of late 70s/early 80s marina rock, it remains the grandaddy of all YouTube viral hits.

Take an hour out of your day today, pour yourself something tropical, sit back and binge watch episodes 1 through 12. Smooth Jesus commands you to.

(Yacht Rock changed my life. I blame Brian)

Learning gain, again

Perceptions of the future of teaching quality monitoring have come a long way since I last wrote about HEFCE’s strange fascination with quantifying how much students learn at university. A full consultation concerning the ongoing review of QA processes detonated in late June, swiftly followed by the summer’s all-consuming speculative think-piece generator, the TEF.

Today – alongside the announcement of 12 year-long institutional projects to “pilot” a bewildering range of metrics, e-portfolios, skills assessments and pig entrail readings – HEFCE released the research conducted for them by RAND Europe. Interestingly, RAND themselves are still waiting for a “co-ordinated concurrent release with other publication outlets”.

(screengrab: 13:45BST, 21/09/2015)

The report itself does have a rushed feel to it – shifting typography, a few spelling, grammatical and labelling howlers – which itself is unusual given the high general quality of HEFCE research. And why would RAND label it as “withdrawn”? But I’ve heard from various sources that the launch was pre-announced for today at some point late last week, so – who knows.

We started our journey with an unexpected public tendering exercise back in June 2014, though this is also shown as being launched in May of the same year. The final report, according to the contract viewable via the second link in this paragraph, was due at the end of October 2014, making today’s publication nearly a year behind schedule.

So over a year of RAND Europe research (valued at “£30,000 to £50,000”) is presented over 51 variously typeset pages, 10 pages of references (an odd, bracketless, variant of APA if you are interested) and 5 appendices. What do we now know?

RAND set out to “explore[…] the concept of learning gain, as well as current national and international practice, to investigate whether a measure of learning gain could be used in England.”

They conclude [SPOILERS!] that the purpose to which learning gain is put is more important than any definition, that there is a lot of international and some UK practice of varying approaches and quality, and that they haven’t got the faintest idea whether you could do learning gain in the UK – but why not fund some pilot studies and do some more events.

Many of the literature review aspects could have been pulled off the OECD shelf – Kim and Lalancette (2013) covers much of the same ground for “value added” measures (which in practice includes much of what RAND define as learning gain, such as the CLA standardised tests and the Wabash national study), and adds an international compulsory-level analysis of practice.

Interestingly, the OECD paper notes that “[…] the longitudinal approach, with a repeated measures design often used in K-12 education, may not be logistically feasible or could be extraordinarily expensive in higher education, even when it is technically possible” (p9) whereas RAND are confident that “Perhaps the most robust method to achieve [comparability of data] is through longitudinal data, i.e. data on the same group of students over at least two points in time” (p13).

The recommendation for a set of small pilot studies, in this case, may appear to be a sensible one. Clearly the literature lacks sufficient real world evidence to make a judgement on the feasibility of “learning gain” in English higher education.

By happy coincidence, HEFCE had already planned a series of pilots as stage two of their “learning gain” work! The “contract” outlines the entire plan:

“The learning gain project as a whole will consist of three stages. The first stage will consist of a critical evaluation of a range of assessment methods and tools (including both discipline-based and generic skills testing), with a view to informing the identification of a subset that could then be used to underpin a set of small pilots in a second stage, to be followed by a final stage, a concluding comparative evaluation. This invitation to tender is solely concerned with the first stage of the project – the critical review” (p5)

So the RAND report has – we therefore conclude – been used to design the “learning gain” pilot circular rather than as a means of generating recommendations for ongoing work? After all, the circular itself promised the publication of the research report “shortly” in May 2015 (indeed, the pdf document metadata from the RAND report suggests it was last modified on 27 March 2015, the text states it was “mid-January” when drafting concluded) – and we know that the research was meant to inform the choice of a range of methods for piloting.
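For what it’s worth, the “last modified” claim above is checkable from the file itself: PDF modification dates are stored as strings of the form `D:YYYYMMDDHHmmSS…`. A minimal sketch of decoding one (the example string is illustrative, not taken from the actual RAND file):

```python
from datetime import datetime

def parse_pdf_date(raw: str) -> datetime:
    """Decode the core of a PDF 'D:YYYYMMDDHHmmSS...' date string.

    Timezone suffixes (Z, +01'00' and so on) are ignored in this sketch.
    """
    return datetime.strptime(raw[2:16], "%Y%m%d%H%M%S")

# Illustrative value only - shaped like a 27 March 2015 timestamp.
print(parse_pdf_date("D:20150327120000Z"))
```

Most PDF tooling (pypdf, pdfinfo and friends) will surface the same field directly; the point is simply that anyone can verify this kind of dating evidence.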

The subset comprising “standardised tests, grades, self-reporting surveys, mixed methods and other qualitative methods” that was offered to pilot institutions does echo the categorisation in the RAND report (for example, section 6.3.2, the “Critical Overview”, uses the same headings).

However, a similar list could be drawn from the initial specifications back in May 2014.

  • Tools currently used in UK institutions for entrance purposes (e.g. the Biomedical Admissions Test) or by careers services and graduate recruiters to assess generic skills
  • Curriculum-based progress testing of acquisition of skills and knowledge within a particular discipline
  • Standardised tests, such as the US-based Collegiate Learning Assessment (CLA), the Measure of Academic Performance (MAPP) and the Collegiate Assessment of Academic Proficiency (CAPP).
  • Student-self- and/or peer-assessed learning gain
  • Discipline-based and discipline independent mechanisms
  • Other methods used by higher education providers in England to measure learning gain at institutional level
  • International (particularly US-based) literature on the design and use of standardised learning assessment tools in HE […]
  • Literature on previous work on learning gain in UK HE
  • UK schools-based literature on the measurement of value-added (p7)

In essence, RAND Europe have taken (again, let us be charitable) 10 months to condense the above list into the list of five categories presented in the HEFCE call for pilots. (The pilots themselves were actually supposed to be notified in June 2015, though they seem to have kept things a carefully guarded secret until Sept 16th, at least. Well done, Plymouth!).

It is unclear – though unlikely – whether teams developing institutional bids had sight of the RAND report during the bid development process. And it is doubly unclear why the report wasn’t released to a grateful public until the projects were announced.

But the big question for me is what was the point of the RAND Report into Learning gain?

  • It didn’t (appear) to inform HEFCE’s plan to run pilot projects. There were already plans to run pilots back in 2014, and while the categories of instrument types use “RAND language”, they could equally have been derived from the original brief.
  • It was released at the same time as successful bids were announced, and thus could not (reasonably) have contributed to the design or evidence base for institutional projects. (aside: wonder how many of these pilots have passed through an ethical review process)
  • It didn’t significantly add to a 2013 OECD understanding of the literature in this area. It referred to 6 “research” papers (by my count) from 2014, and one from 2015.
  • There was a huge parallel conversation about an international and comparable standard, again by the OECD, during the period of study. We (in England) said “goodbye” as they said “AHELO”, but would it not have made sense to combine background literature searches (at least) with an ongoing global effort?

Though I wouldn’t say I started from the position of unabashed enthusiasm, I have been waiting for this report with some interest. “Learning gain” (if measured with any degree of accuracy and statistical confidence) would be the greatest breakthrough in education research in living memory. Drawing a measurable and credible causal link between an intervention or activity and the acquisition of knowledge or skills: it’s the holy grail of understanding the education process.

There’s nothing in this report that will convince anyone that this age-old problem is any closer to being solved. Indeed, it adds little to previous work. And reading between the lines of the convoluted path from commission to release, it is not clear that it meaningfully informed any part of HEFCE’s ongoing learning gain activity.

All told, a bit of a pig’s ear.


Hey listeners – you’re tuned to W-ALT-FM, home of the brave and your source for the best in AOR and freeform radio.


Freeform AOR radio is more than just a radio format, it’s a way of life – it is how people live here and now in 1980.

It’s not a new development. Ever since the boring corporate rock stations gave us the FM band to play with in the mid 70s we’ve done what no-one expected. We’ve cut through all the schedulers, the pluggers, the hype and the payola and given music radio back to the people… with jocks like my good self as your guide.


This is my story.

I’m proud to be an AOR jock, have been for three or four years now. I feel like I *know* music, I mean deep music – real music – not this top 40 rubbish.

There’s no novelty hits on W-ALT-FM – just favourites and deep cuts from our favourite artists, and notable new music that you need to hear about. Think of us as your record collection. As your cooler friend’s cooler record collection – the one you learn from.

I don’t mean to sound like your old teacher, but a good AOR jock is an educator.

Sure, I want you to be entertained, but I also want you to discover something – learn something. I spend a lot of time choosing not just each track but every playlist. I’ll play you a hot Supertramp song, then the Beatles tune that inspired it. Then I’ll play some Doobies – in the same key, at the same speed, and then from that to Steely Dan with the same singer. You don’t need to worry about this, you might miss most of it, but everything is where it is for a reason.

AOR has been through some tough times – when punk broke, most of us didn’t know what to make of it. But the quality of some of the “new wave” was surprising. These days we’re glad to play Blondie, The Police and Talking Heads – maybe not straight after the Eagles but they’re in the mix. And classic soul has always been our thing – no color bar here! Bring on the Commodores!
Some people have been talking about bringing in a more “disciplined” sound, making it easier to sell ads and syndicate. But that’s the old model – that’s what ruined AM radio. On FM we proved that music mattered. And that someone with ears and a finger on the pulse could draw an audience – even break an artist.

AOR is here to stay – because this time, things are different. We’ve got our own space now, and no big business ideas are going to take it from us.

This issue of R&R I’m reading – wow. There’s so much stuff about the future of stations like W-ALT-FM. If you look past the ads for all the great albums, there’s guys we know standing up and keeping the AOR fire burning. Maybe a few are showing their greedy colors, but in the main we stand strong. I like that.

Radio is a part of life, it’s a constant. It’s where you hear new music, discover oldies, get a sense of what the future of music holds. Everywhere I walk and everywhere I look I see kids with portable radios – they’re so cheap now! – making the music we play part of their lives. And it’s almost as if the big stations don’t realise what is happening.

There’s some stuff we don’t do – we don’t do sport for a start. We certainly don’t do disco – man! disco sucks, yeah! And we don’t do much news. But there’s other places you can get them, sometimes you just want music. And real music, good music, artists at the peak of their craft. Like Hall and Oates, Boston, Led Zep. Kenny Loggins. The timeless stuff.


So I’ve been rapping about me for too long. But it’s not about me – it’s about the music.

[in 1985 W-ALT-FM moved to a “Hot Adult Contemporary” format, having been bought out by CBS-FM. It was sold in 1989, and the station moved to a heavily playlisted “classic rock” mix. By 1995, it was almost pure Country. Around the turn of the century it merged with two other local stations and now runs as a “Talk” station with a conservative slant.]

“I watch the ripples change their size but never leave the stream” #altc 2015

This post is the “amplified” version of my presentation at ALTC 2015. The presentation summarises the main arguments that are made in more detail in this post.

So let’s look at the 2015 Google Moonshot Summit for Education. No particular reason that I start here rather than anywhere else, other than the existence of Martin Hamilton’s succinct summing up on the Jisc blog. Here’s the list of “moonshot” ideas that the summit generated – taken together these constitute the “reboot” that education apparently needs:


  • Gamifying the curriculum – real problems are generated by institutions or companies, then transformed into playful learning milestones that once attained grant relevant rewards.
  • Dissolving the wall between schools and community by including young people and outsiders such as artists and companies in curriculum design.
  • Creating a platform where students could develop their own learning content and share it, perhaps like a junior edX.
  • Crowdsourcing potential problems and solutions in conjunction with teachers and schools.
  • A new holistic approach to education and assessment, based on knowledge co-construction by peers working together.
  • Creating a global learning community for teachers, blending aspects of the likes of LinkedIn, and the Khan Academy.
  • Extending Google’s 20% time concept into the classroom, in particular with curriculum co-creation including students, teachers and the community.

“Gamification”, a couple of “it’s X… for Y!” platforms and crowd-sourcing. And Google’s “20% time” – which is no longer a thing at Google. Take away the references to particular comparators and a similar list could have been published at any time during my career.

The “future of education technology” as predicted has, I would argue, remained largely static for 11 years. This post represents an exploration concerning why this is the case, and makes some limited suggestions as to what we might do about it.

Methodological issues


Earlier in 2015 I circulated a table of NMC Horizon Scan “important developments in educational technology” for HE between 2004 and 2015, which showed common themes constantly re-emerging. The NMC are both admirably public about their methods and clear that each report represents a “snapshot” with no claims to build towards a longitudinal analysis.

Methodologically, the NMC invite a bunch of expert commentators into a (virtual, and then actual) room, and ask them to sift through notable developments and articles to find overarching themes, which are then refined across two rounds of voting. There’s an emphasis on the judgement of the people in the room over and above any claims of data modelling, and (to me at least) the reports generally reflect the main topics of conversation in education technology over the recent past.

Though it is both fun and enjoyable to criticise the NMC Horizon Scan (and the list is long and distinguished) I’m not about to write another chapter of that book. The NMC Horizon Scan represents one way of performing a horizon scan – we need to look at it against others.

The only other organisation I know of that produces annual edtech predictions at a comparable scale is Gartner, with the Education Hype Cycle (pre-2008 the Higher Education Hype Cycle, itself an interesting shift in focus). Gartner have a proprietary methodology, but drawing on Fenn and Raskino’s (2008) “Mastering The Hype Cycle” it is possible to get a sense both of the underlying idea and the process involved.

The Hype Cycle concept, as I’ve described before, is the bluntest of blunt instruments. A thing is launched, people get over-excited, there’s a backlash and then the real benefit of said thing is identified, and a plateau is reached. Fenn and Raskino describe two later components of this cycle, the Swamp of Diminishing Returns and the Cliff of Obsolescence, though these are seldom used in the predictive cycles we are used to seeing online (used in breach of their commercial license).


Of course not everything moves smoothly through these stages, and as fans of MOOCs will recall it is entirely possible to be obsolete before the plateau. As fans of MOOCs will also recall, it is equally possible for an innovation to appear fully-formed at the top of the Peak of Inflated Expectations without any prior warning.

In preparing Hype Cycles Gartner lean heavily on a range of market signals including reports, sales and investments – and this is supplemented by the experience of Gartner analysts who, in working closely with Gartner clients, are well placed to identify emerging technologies. But the process skews data-driven, whereas the NMC skews towards expertise. (There’s a University of Minnesota crowd-sourced version based on “expertise” that looks quite different. You can go and add your voice if you like.)

How else can we make predictions about the future of education technology? You’d think there would be a “big data” or “AI” player in this predictions marketplace, but other than someone like Edsurge or Ambient Insight extrapolating from investment data, or the obvious Google Trends looking at search query volumes, it appears that meaningful “big data” edtech predictions are themselves a few years in the future. (Or maybe no big data shops are confident enough to make public predictions…)

A final way of making predictions about the future would be to attend (or follow remotely) a conference like #altc. It could be argued that much of the work presented at this conference is “bleeding edge” experimentation, and that themes within papers could serve as a prediction of the mainstream innovations of the following year(s).

Why is prediction important?

Neoclassical economists would argue that the market is best suited to process and rate innovations, but reality is seldom as elegant as neoclassical economics.

Within our market-focused society, predictions could allow us to steal an advantage on the workings of the market and thus allow us to make a profit as the information we bring from our predictions is absorbed. As all of the predictions I discuss above are either actually or effectively open to all, this appears to be a moot point, as an efficient market would quickly “price in” such predictions. So the paradox here is that predictions are more likely to influence the overall direction of the market than offer any genuine first-order financial benefit.

As Philip Mirowski describes in his (excellent) “Never let a serious crisis go to waste” (2014): “It has been known for more than 40 years and is one of the main implications of Eugene Fama’s “efficient-market hypothesis” (EMH), which states that the price of a financial asset reflects all relevant, generally available information. If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier.”

Waveform predictions – like the hype cycle, and Paul Mason‘s suddenly fashionable Kondratieff Waves (or Schumpeter’s specific extrapolation of the idea of an exogenous economic cycle to innovation as a driver of long-form waves: Kondratieff describes the waves as an endogenous economic phenomenon…) – can be seen as tools to allow individuals to make their own extrapolations from postulated (I hesitate to say “identified”) waves in data.


(Kondratieff Waves, Rursus@wikimedia, cc-by-sa)

In contrast, Gartner’s published cycles say almost nothing useable about timescales, with each tracked innovation proceeding through the cycle (or not!) at an individual pace. In both cases, the idea of the wave itself could be seen as a comforting assurance that a rise follows a fall – by omitting the final two stagnatory phases of the in-house Gartner cycle it appears that every innovation eventually becomes a success.

But there is no empirical reason to assume waves in innovation such as these are anything other than an artefact (and thus a “priced-in” component) of market expectations.

Two worked examples

I chose to examine five potential “predictive” signals across up to 11 years of data (where available) for two “future of edtech” perennials: Mobile Learning and Games for Learning. I don’t claim any great knowledge of either of these fields, either historically or currently, which makes it easier for me to look at predictive patterns rather than predictive accuracy.

My hypothesis here is that there should be some recognisable relationship between these five sources of historic prediction.

Games in Education

| Year | NMC (placement on “technologies to watch” HE horizon scan) | Gartner (placement on education hype cycle) | ALT-C (conference presentations, “gam*”) | Ambient Insight (tracked venture capital investment level) | Google (trends, “Games in education” level at mid-year & direction) |
|---|---|---|---|---|---|
| 2004 | Unplaced | Unplaced | 1 presentation | | 68 (falling slightly) |
| 2005 | 2-3yrs #2 | Unplaced | 2 presentations | | 37 (falling) |
| 2006 | 2-3yrs #2 | Unplaced | 5 presentations | | 36 (stable) |
| 2007 | 4-5yrs #1 (massively multiplayer educational gaming) | Unplaced | 4 presentations | | 30 (stable) |
| 2008 | Unplaced | Unplaced | 5 presentations | | 42 (rising) |
| 2009 | Unplaced | Unplaced | 3 presentations | | 41 (rising) |
| 2010 | Unplaced (also not on shortlist) | Unplaced | 3 presentations | $20-25m | 55 (rising) |
| 2011 | 2-3yrs #1 | Peak (2/10) | 1 presentation | $20-25m | 70 (rising) |
| 2012 | 2-3yrs #1 | Peak (2/8) | 2/231 presentations | $20-25m (appears to have been revised down from $50m) | 58 (sharp decline – 80 at start of year) |
| 2013 | 2-3yrs #1 | Peak (4/9) | 1/149 presentations, 1 SIG meeting | $55-60m | 46 (steady, slight fall) |
| 2014 | 2-3yrs #2 | Trough (8/18) | 0/138 presentations, 1 SIG meeting | $35-40m | 40 (steady, slight fall) |
| 2015 | N/A (was 2-3yrs trend 3) | Trough (11/14) | 4/170 presentations | n/a | 33 (falling) |

We see a distributed three-year peak between 2011 and 2013, with NMC, Gartner and investors in broad agreement. Google Trends also hits the start of this peak before dropping sharply.

Interestingly, games for learning never became a huge theme at ALT-C (until maybe this year – #altcgame), but there is some evidence of earlier interest between 2006 and 2008, which is mirrored by a shallower NMC peak.

Mobile Learning

| Year | NMC (“phone” or “mobile”) | Gartner (“phone” or “mobile”) | ALT-C (session titles, “phone” or “mobile”) | Ambient Insight (data from 2010 only) | Google Trends (“Mobile Learning”, mid-year value and description) |
|------|------|------|------|------|------|
| 2004 | Unplaced | Unplaced | 1 presentation | – | 50 (steady) |
| 2005 | Unplaced | Unplaced | 4 presentations | – | 36 (choppy, large peak in May) |
| 2006 | “The Phones in Their Pockets” #1 2-3yr | Unplaced | 7 presentations | – | 42 (steady, climb to end of year) |
| 2007 | “Mobile Phones” #1 2-3yr | Unplaced | 10 presentations | – | 57 (fall in early year, rise later) |
| 2008 | “Mobile broadband” #1 2-3yr | Unplaced | 6 presentations | – | 56 (steady, peak in May) |
| 2009 | “Mobiles” #2 1-2yr | “Mobile Learning”: rise 10/11 (low end), 11/11 (smart) | 11 presentations | – | 56 (steady) |
| 2010 | “Mobile Computing” #2 1-2yr | “Mobile Learning”: peak 1/10 (low end) & 8/10 (smart) | 8 presentations | $175m | 81 (rising) |
| 2011 | “Mobiles” #2 1-2yr | “Mobile Learning”: peak 5/10 (LE), trough 4/12 (smart) | 8 presentations | $150m | 94 (peak in June) |
| 2012 | “Mobile Apps” #1 1-2yr | “Mobile Learning”: trough 1/15 (LE), 8/15 (smart) | 12 presentations | $250m[1] | 81 (steady) |
| 2013 | Unplaced | “Mobile Learning”: trough 3/14 (LE), 10/14 (smart) | 4 presentations | $200m | 70 (falling) |
| 2014 | Unplaced | “Mobile Learning”: slope 5/13 (LE), trough 17/18 (smart) | 5 presentations | $250m | 64 (falling) |
| 2015 | “BYOD” #1 1-2yr | “Mobile Learning”: slope 1/12 (smart) | 2 presentations | n/a | 59 (steady, fall in summer) |

Similarly, a three-year peak (2010-12) is visible, this time with the NMC, Gartner and Google Trends in general agreement. The investment peak immediately follows, and ALT-C interest also peaks in 2012.

Again there is early interest (2007-2009) at ALT-C before the main peak, matched by a lower NMC peak. Mobile learning has persisted as a low-level trend at ALT throughout the 11-year sample period.

Discussion and further examples

A great deal more analysis remains to be performed, but it does appear that we can trace  some synergies between key predictive approaches. Of particular interest is the pre-prediction wave noticeable in NMC predictions and correlating with conference papers at ALTC – this could be of genuine use in making future predictions.

Augmented reality  fits this trend, showing two distinct NMC peaks (a lower one in 2006-7, a higher one in 2010-11). But ALTC presentations were late to arrive, accompanying only the latter peak alongside Google Trends, though CETIS conference attendees would have been (as usual) ahead of this curve.  Investment still lags a fair way behind.

“Open Content” first appears on the NMC technology scan in 2010 and has yet to appear on the Gartner hype cycle, despite featuring in ALT presentations since 2008, and appearing as a significant wider trend – possibly the last great global education technology mega-trend – substantially before then. The earlier peak in “reusable learning objects” may form part of the same trend, but what little data is available from the 90s is not directly comparable.

(MOOCs could perhaps be seen as the “investor friendly” iteration of open education; for Gartner, at least, they peaked in 2012, fell towards 2014 and disappeared entirely in 2015. They appear only once for the NMC, in 2013. ALT-C got there in 2011 (though North American conferences were there, after a fashion, right from 2008), and there are 10 papers at this year’s conference. Venture capital investment in MOOCs continues to flow as providers return for multiple “rounds”. I’m seeing MOOCs as a different species of “thing”, and will return to them later.)

One possible map of causality could show the initial innovation influencing conference presentations (including ALTC), which would raise profile with NMC “experts”. A lull after this initial interest would allow Gartner and then investors to catch up, which would produce the bubble of interest shown in the Google Trends Chart.

Waveform analysis

A pattern as identified above is useful, but does not address our central issue regarding why the future is so dull. Why are ideas and concepts repeating themselves across multiple iterations?

Waves with peaks of differing amplitudes are likely to be multi-component – involving the superposition of numerous sinusoidal patterns. Whilst I would hesitate to bring hard physics into a model of a social phenomenon such as market prediction, it makes sense to examine the possibility of underlying trends influencing the amplitude of predictive waves of a given frequency.

Korotayev and Tsirel (2010) use a spectral analysis technique to identify underlying waveforms in complex variables around national GDP, noting amongst other interesting findings the persistence of the Kuznets swing as a third harmonic (a wave with approximately three times the frequency) of the Kondratieff cycle. This relationship can be used to argue for a direct relationship between the two – mildly useful in economic prediction. To be honest it looks more like smoothing artefacts, and the raw data is inconclusive.
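The third-harmonic idea itself is easy to demonstrate. The sketch below is mine, not Korotayev and Tsirel’s method: it superposes a slow 60-“year” wave with a 20-“year” third harmonic (both periods are illustrative assumptions, not fitted to GDP data) and recovers both components from the Fourier spectrum.

```python
# Illustrative sketch of spectral decomposition: build a signal from a slow
# wave plus its third harmonic, then read both periods back off the two
# strongest peaks in the discrete Fourier spectrum.
import numpy as np

years = np.arange(240)                           # 240 synthetic "years"
slow = np.sin(2 * np.pi * years / 60)            # Kondratieff-like ~60-year wave
harmonic = 0.5 * np.sin(2 * np.pi * years / 20)  # third harmonic: ~20-year wave
signal = slow + harmonic

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0)      # frequencies in cycles per "year"

# The two strongest spectral peaks give back the component periods.
top_two = np.argsort(spectrum)[-2:]
periods = sorted(1 / freqs[k] for k in top_two)  # recovered periods, in "years"
```

Because both component periods divide the sample length exactly, the peaks land cleanly on single frequency bins; real GDP-style data would show the leakage and smoothing artefacts complained about above.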

Korotayev and Tsirel

(fig 1 from “Spectral Analysis of World GDP Dynamics”)

If we examine waves within predictions themselves, rather than dealing with an open, multi-variable system it may be more helpful to start from the basis that we are examining a closed system. By assuming this I am postulating that predictions of innovations are self-reinforcing and can, if correctly spaced, produce a positive feedback effect. I’m also avoiding getting into “full cost” for the moment.

“Re-hyping” existing innovations – as is seen in numerous places even just examining NMC patterns – could be seen as a way of deliberately introducing positive feedback to amplify a slow recovery from the trough of the hype-cycle. For those who had invested heavily in an initial innovation that does not appear likely to gain mass adoption, a relaunch may be the only way of recouping this initial outlay.

Within a closed system, this effect will be seen as recurrent waves with rising amplitudes.
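A toy simulation makes the point. Everything here is assumed for illustration – the gain, the spacing and the Gaussian wave shape are my inventions, not measured values – but it shows the mechanism: if each “re-hype” arrives with a fixed multiple of the previous wave’s amplitude, the closed system produces exactly the recurrent, rising waves described above.

```python
# Toy model of re-hyping as positive feedback (all parameters assumed):
# successive waves of interest are modelled as evenly spaced Gaussian bumps,
# each `gain` times the amplitude of the one before it.
import math

def hype_waves(n_waves=4, spacing=24, gain=1.4, width=6.0):
    """Sum of Gaussian interest bumps, each amplified by `gain` over the last."""
    length = spacing * (n_waves + 1)
    series = [0.0] * length
    for k in range(n_waves):
        centre = spacing * (k + 1)
        amplitude = gain ** k          # positive feedback: each echo is louder
        for t in range(length):
            series[t] += amplitude * math.exp(-((t - centre) ** 2) / (2 * width ** 2))
    return series

series = hype_waves()
# Peak height near each wave centre rises wave on wave.
peaks = [max(series[c - 6:c + 6]) for c in (24, 48, 72, 96)]
```

With a gain below 1 the same model decays instead – the difference between an innovation that gets relaunched into a bubble and one that quietly fades.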



I’ve attempted to answer two questions, and have – in the grand tradition of social sciences research – been able to present limited answers to both.

Do predictions themselves show periodicity?

Yes, though a great deal of further analysis is required to identify this periodicity for a given field of interest, and to reach an appropriate level of fidelity for metaprediction.

Has something gone wrong with the future of education technology?


If we see prediction within a neoliberal society as an attempt to shape rather than beat the market,  there is limited but interesting evidence to suggest that the amplification of “repeat predictions” is a more reliable way of achieving this effect than feeding in new data. But this necessitates a perpetual future to match a perpetual present. Artificial intelligence, for example, is always just around the corner. Quantum computing has sat forlornly on the slope of enlightenment for the past nine years!

Paul Mason – in “PostCapitalism” (2015) – is startlingly upbeat about the affordances of data-driven prediction. As he notes: “This [prediction], rather than the meticulous planning of the cyber-Stalinists, is what a postcapitalist state would use petaflop-level computing for. And once we had reliable predictions, we could act.”

Mirowski is bleaker (remember that these writers are talking about the entire basis of our economy, not just the ed-tech bubble!): “The forecasting skill of economists is on average about as good as uninformed guessing.

  • Predictions by the Council of Economic Advisors, Federal Reserve Board, and Congressional Budget Office were often worse than random.
  • Economists have proven repeatedly that they cannot predict the turning points in the economy.
  • No specific economic forecasters consistently lead the pack in accuracy.
  • No economic forecaster has consistently higher forecasting skills predicting any particular economic statistic.
  • Consensus forecasts do not improve accuracy (although the press loves them. So much for the wisdom of crowds).
  • Finally, there’s no evidence that economic forecasting has improved in recent decades.”

This is for simple “higher or lower” style directional predictions, based on complex mathematical models. For the “finger-in-the-air” future-gazing and scientific measurement of little more than column inches that we get in edtech, such predictions are worse than useless – and may be actively harmful.

Simon Reynolds suggests in “Retromania” (2012) that “Fashion – a machinery for creating cultural capital and then, with incredible speed, stripping it of value and dumping the stock – permeates everything”. But in edtech (and arguably, in economic policy) we don’t dump the stock, we keep it until it can be reused to amplify an echo of whatever cultural capital the initial idea has. There are clearly fashions in edtech. Hell – learning object repositories were cool again two years ago.

Now what?

There is a huge vested interest in perpetuating the “long now” of edtech. Marketing copy is easy to write without a history, and pre-sold ideas are easier to sell than new ones. University leaders may even remember someone telling them two years ago of the first flourishing of an idea that they are being sold for the first time. Every restatement amplifies the effect, and the true meaning of the term “hype-cycle” becomes clear.

MOOCs are a counter-example – the second coming of the MOOC (in 2012) bore almost no resemblance to the initial iteration in 2008-9 other than the name. I’d be tempted to label MOOCs-as-we-know-them as a HE/Adult-ed specific outgrowth of the wider education reform agenda.

You can spot a first iteration of an innovative idea (and these are as rare and as valuable as you might expect) primarily by the absence of an obvious way by which it will make someone money. Often it will use established, low cost tools in a new way. Venture capital will not show any interest. There will not be a clear way of measuring effectiveness – attempts to use established measurement tools will fail. It will be “weird”, difficult to convince managers to allocate time or resource to, even at a proof-of-concept level.

So my tweetable takeaway from this session is: “listen to practitioners, not predictions, about education technology”. Which sounds a little like Von Hippel’s Lead User theory.

Data Sources

NMC data: summary file, Audrey Watters’ GitHub, further details are from the wiki.

Gartner: Data sourced from descriptive pages (eg this one for 2015). Summary Excel file.

ALTC programmes titles:

2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015 (incidentally, why *do* ALT make it so hard to find a simple list of presentation titles and presenters? You’d think that this might be useful for conference attendees…)

“A Vintage Year” – Johnson on the TEF

So Johnson Minor spoke this morning at the UUK conference, a hotly anticipated talk that was trailed to offer more detail on government plans for HE – in particular, legislative plans and the TEF.

It’s the latter that will get the most interest: details have been maddeningly sparse, and the middle months of 2015 will henceforth be known as the Summer of Unfounded TEF Expectation.

Sadly, this situation will continue. Very little hard fact was added. The TEF section of the speech was notable only for a few minor, but potentially revealing, linguistic choices.

  • “teaching is highly variable across higher education”. “Variable” underpins the impression that a TEF would have a standardising function, rather than driving innovation. A system that produces innovation needs variability in order to do so; otherwise all innovation would have to be centrally mandated. Those hoping that TEF will drive innovative teaching and professional autonomy will be disappointed.
  • “There are inspiring academics who go the extra mile, supporting struggling students, emailing feedback at weekends and giving much more of their time than duty demands.” Those hoping that the TEF will recognise and value teaching may also be disappointed. Much – and I’m not exaggerating here – of what is good about HE happens in the personal time of academics. As Jonathan Worth said at ALTC2015, this is family time, failed relationships, missed school sports days, working holidays, late nights. Johnson’s language codifies this voluntary academic work as, frankly, an expectation. I’m expecting the UCU response to major on this aspect.
  • “marginal funding as being principally determined by scholarly output” – most universities make more from their physical campus estate than from research income. This is an old canard, and needs to be shot down.
  • “the TEF will not just be about accessing additional funds” – but it primarily will be. It rewards institutions for the poorly paid and poorly recognised work of academic staff who teach. Don’t think for a second any of this money will go back to the academics in question. And don’t think that most teaching will not still be done by the young and easily exploitable precariat.
  • “the National Student Survey has started to shift the focus back towards teaching, feedback and academic support within universities” – no it hasn’t. It has shifted focus towards standardisation of the student experience. It has brought targets, metrics, data collection and fear. And it is eminently gameable, subject to statistically unlikely periodicity, and of almost no use as a serious research instrument.

So this is not a cheery message. This is not filling me with confidence that the sound advice of the ANTF and Higher Education Academy is taken on board. This is not even convincing me that BIS have read “The Metric Tide“.

What with this, and parallel whispers about changes to regulatory organisations, McKinsey, and even changes to the “dual support” system for funding research, autumn for UK HE is looking bleak and painful.

[other coverage: Martin Eve, Mark Leach, Hugh Jones]

OECD attacks English HE funding model

Andreas Schleicher. You may remember his name from Andy Westwood’s superb wonkhe post about the ongoing use of an “OECD endorsement” as to the sustainability of the current English HE funding system.

Despite Andy’s solid debunking of this canard – drawing on the actual words of Dr Schleicher himself, who initially referred to the previous funding regime and later wrote a personal blog post claiming that we have one of the best systems that includes student fee loans – Johnson Minor’s first speech also refers to:

a transformed financial situation; as the OECD says, we are one of the only countries in the world to have found a way of sustainably funding higher education.

Of course Schleicher is not the OECD, and is not making a pronouncement on behalf of the OECD – he is stating his personal opinion, just as I am doing here. I can understand why BIS researchers and speechwriters make the elision (it’s common among people who don’t really understand social media) but it is not correct.

That’s the story so far.

During the summer I’ve been enjoying Pearson’s Tumblr entitled “If I were secretary of state for education…“. No, really, I have. The schtick is that they ask a bunch of edu-policy luminaries (including David Blunkett, a former SoS) what they would do if they were minister for education. And publish it on Tumblr, because publishing things is hard.

I’ll be honest, Michael “Dr Target” Barber opining that he would “not tinker with structures or get in the way of successful schools doing what they want to do” was my favourite initially, with AC Grayling‘s call to end the “closed shop of higher education” a close second.

But then I read Andreas Schleicher’s contribution. Sure, there’s the expected madness about increasing class sizes and claiming that “Google knows everything”. But just look at his final point:

Leaders in high performing school systems seem to have convinced their citizens to make choices that value education more than other things. Chinese parents invest their last money into the education of their children, their future. Britain has started to borrow the money of its children to finance its current consumption. I would work hard with my fellow Secretaries to change that.

“Britain has started to borrow the money of its children to finance its current (educational) consumption” – this sounds suspiciously like a reference to HE funding – where the future income of our children is used to pay for their current educational consumption. And Schleicher would work hard to change that.

Sure – it’s tenuous. But no more so than BIS’s claims to OECD blessing on our expensive and ill-conceived funding method.

Principles – if you don’t like ’em, we have others

Just in case people find it useful, this is a worked example of responding to the “principles” section that tends to crop up in policy consultations. Any policy idea, even the most nakedly arbitrary ideological nonsense, will have “principles” – because where would we be if we didn’t have principles?

(who said “Top Shop”?)

While it may seem to be a fairly innocuous set of “motherhood-and-apple-pie” stuff that pretty much everyone would nod through, these principles serve both to frame and to constrain the debate in and around the paper that follows. The majority of consultation responses will concentrate on the later questions, as these directly impinge on institutional or organisational activity or commitment – and most people who wade through stuff like this are paid to do so by said organisation or institution.

But as I’m responding on my own account I’m more concerned with the assumptions that underpin the consultation, and less concerned with any interim projections of likely effects. This means that I’m hyper-vigilant (almost comically so) to the nuances in phrasing and meaning within these short and apparently uncontroversial statements.

There is huge value in making an independent and personal response to a consultation – and I would encourage all wonks and wonks-in-training to have a crack at a couple (HEFCE QA would be a good one; also have a look at the BIS student loan repayment threshold freeze if you fancy getting stuck into a bit of finance). It’s a great personal learning exercise, and it can sometimes have a positive effect on national policy-making.

[for the avoidance of any doubt, what follows is an excerpt from a personal response to the QA consultation, that explicitly does not reflect the views of any organisation, grouping, political party or secret society. It is presented in the public domain (cc-0), so you may reuse it without citation if you wish]

Question 1: Do you agree with our proposed principles to underpin the future approach to quality assessment in established providers?

I have responded to each principle in turn.

  1. Be based on the autonomy of higher education providers with degree awarding powers to set and maintain academic standards, and on the responsibility of all providers to determine and deliver the most appropriate academic experience for their students wherever and however they study.

This principle attempts to address the new complexity of the institutional landscape in this area. Broadly “providers of HE” may or may not be approved “HE providers” – with or without institutional undergraduate and/or research degree awarding powers – and may or may not hold the title “university” (and may or may not have tier 4 sponsor status).

For the purposes of academic quality assurance it is not clear why a distinction is drawn here between “HE providers with degree awarding powers”, and “all providers”. For the latter, the designation process already requires that a particular course meets “quality” criteria via the QAA Higher Education Review and consequent annual monitoring. This process explicitly examines the ability of any provider to manage quality and academic standards.[1] The principle should surely be (as was the case until very recently) that all HE should be delivered to the same academic standards and assured to the same academic standards wherever it is delivered.

The use of “autonomy” in one case and “responsibility” in the other also exacerbates this artificial divide. The current system of QA requires that all HE delivery is supported by an institutional system that manages and ensures academic quality and academic standards and this principle should be defended and maintained.

2. Use peer review and appropriate external scrutiny as a core component of quality assessment and assurance approaches.

A purely internal system of scrutiny would not be fit for purpose in ensuring the continued high standard of English HE provision. Though internal institutional monitoring (both data-led and qualitative) will support the maintenance of standards, the “gold standard” is comparability with peers and adherence to relevant national and global requirements. The existing QAA Higher Education Review process (which is common to existing providers and new entrants) directly ensures that peers from across the sector are involved in making a judgement on institutional quality assurance and quality assessment processes.

3. Expect students to be meaningfully integrated as partners in the design, monitoring and reviewing of processes to improve the academic quality of their education.

The key here is a “meaningful” integration, beyond mere committee membership. Academic staff at all levels should also have a role in designing, monitoring and reviewing processes – this would be a key factor in developing processes that are genuinely useful in ensuring a quality academic experience for students without an unreasonable institutional burden.

As James Wilsdon noted in “The Metric Tide”[2], “The demands of formal evaluation according to broadly standardised criteria are likely to focus the attention system of organisations on satisfying them, and give rise to local lock-in mechanisms. But the extent to which mechanisms like evaluation actually control and steer loosely coupled systems of academic knowledge is still poorly understood.” (p87)

It is therefore essential that both internal and external systems of quality assurance take into account the well-documented negative effects of a metrics-driven compliance-based culture, and it would appear that a meaningful integration of students, academic staff and support staff into the design as well as the delivery of these processes would be an appropriate means to do this.

4. Provide accountability, value for money, and assurance to students, and to employers, government and the public, in the areas that matter to those stakeholders, both in relation to individual providers and across the sector as a whole.

This principle should be balanced very carefully against principle (1), above. Assessment of “value for money”, in particular, should be approached with care and with greater emphasis on longer-term and less direct benefits than are currently fashionable. The risk of short-term accountability limiting the ability of academia to provide genuinely transformational and meaningful interventions in the lives of students and society as a whole is implicit within the current model of institutional funding, and a well-designed system of QA should balance rather than amplify this market pressure.

5. Be transparent and easily understood by students and other stakeholders.

It is difficult to argue against this principle, though simplicity must be balanced with a commitment to both academic and statistical rigour. HEFCE will doubtless remember the issues with over-simplified NSS and KIS data leading to a misleading and confusing information offer to prospective students, as documented in some of the HEDIIP work around classification systems[3] – and should also note the findings of their own 2014 report into the use of information about HE provision by prospective students.[4]

6. Work well for increasingly diverse and different missions, and ensure that providers are not prevented from experimentation and innovation in strategic direction or in approaches to learning and teaching.

It is important here to draw a distinction between experimentation and innovation in learning and teaching practice, which is a central strength of UK HE as evidenced by a substantial body of literature and practice, and experimentation and innovation in institutional business models.

The former should be encouraged and supported, with specific funding offered to individual academics and small teams with the ability to innovate in order to meet existing or emerging learner or societal needs. Funding and opportunity for research into Higher Education pedagogy and policy are severely limited, and in order that experimentation can be based on sound research further investment is needed. Organisations such as the ESRC, SRHE, Higher Education Academy, BERA, SEDA, Jisc and ALT should be supported in addressing this clear need.

The latter should also be encouraged and supported, but the risk to students and the exchequer is far greater here and this should be mitigated and managed carefully. Recent activity in this area has demonstrated risks around the needs of learners being insufficiently met, risks around accountability for public funds, risks around investment being diverted from core business, and risks around reputational damage for the sector as a whole.  In this area experimentation should be evidence-based, and the exposure of learners and the exchequer to the negative consequences of experimentations should be limited.

7. Not repeatedly retest an established provider against the baseline requirements for an acceptable level of provision necessary for entry to the publicly funded higher education system, unless there is evidence that suggests that this is necessary.

Recent research conducted for HEFCE by KPMG concluded that the majority of the “costs” associated with quality assurance in HE come from poorly-designed and burdensome processes at an institutional level, and multiple PSRB engagements. As such, it is difficult to make an argument to limit national engagements as the data and materials will most likely be collected and prepared regardless.

Interim engagement could focus on targeted support to reduce the internal cost of QA activity via expert advice on designing and implementing systems of assurance, and optimising institutional management information systems (MISs). The QAA and Jisc would be best placed to support this – and engagements of this nature would provide much greater savings than simply limiting the number of external inputs into institutional processes.

Of course, QAA support for PSRBs in designing and implementing robust yet light-touch reviews would be a further opportunity for significant savings.

8. Adopt a risk- and evidence-based approach to co-regulation to ensure that regulatory scrutiny focuses on the areas where risk to standards and/or to the academic experience of students or the system is greatest.

Again, it is difficult to argue against this – though a definition of co-regulation (I assume this refers to the totality of sector QA to include national, institutional and subject area specific processes) would be beneficial. Risk monitoring should primarily focus on responsiveness in order to encompass unpredictable need, especially as relates to business model innovation.

9. Ensure that the overall cost and burden of the quality assessment and wider assurance system is proportionate.

This principle should explicitly refer to the overall cost and burden of QA and assurance as a whole, rather than just national processes. The KPMG report was clear that the majority of costs are linked to institutional data collection and PSRB-related activity, and it is here that the attention of HEFCE should be primarily directed.

10. Protect the reputation of the UK higher education system in a global context.

HEFCE and the QAA should continue to work with ENQA, EQAR and INQAAHE, to ensure that the global QA context is paramount in English and UK assurance activity.

11. Intervene early and rapidly but proportionately when things go wrong.

This should continue as is currently the case, with HEFCE (as core and financial regulator), QAA (as academic quality assurance specialists), UCU (as staff advocate) and both OIA and NUS (as student advocates) working together to identify and resolve issues.

13. Work towards creating a consistent approach to quality assessment for all providers of higher education.

Consistency of approach is less important than consistency of academic standards, and as such this principle appears to work in opposition to principle (5). QA approaches at an institutional level should be adaptable to identified needs amongst a diversity of providers and activity.





(if anyone is interested in my responses to the remaining questions, I’d be happy to share. Do leave a comment or send a twitter DM)

First the tide rushes in. Plants a kiss on the shore…

I’m genuinely at a loss to describe how good James Wilsdon’s report of the independent review of the role of metrics in research assessment and management (“The Metric Tide“) is. Something that could so easily have been a clunky and breathless paean to the oversold benefits of big data is nuanced, thoughtful and packed with evidence. Read it. Seriously, take it to the beach this summer. It’s that good.

It also rings true against every aspect of the academic experience that I am aware of – a real rarity in a culture of reporting primarily with an ear on the likely responses of institutional management. Wilsdon and the review team have a genuine appreciation for the work of researchers, and recognise the lack of easy answers in applying ideas like “impact” and “quality” to such a diverse range of activity.

Coverage so far has primarily centred on the implications for research metrics in REF-like assessments (the ever eloquent David Colquhoun and Mike Taylor are worth a read, and for the infrastructure implications Rachel Bruce at Jisc has done a lovely summary) but towards the end of the report come two chapters with far-reaching implications that are situated implicitly within some of the more radical strands of critique in contemporary universities. Let it be remembered that this is the report that caused no less than the Director of Research at HEFCE to suggest:

What if all UK institutions made a stand against global rankings, and stopped using them for promotional purposes?

(which was unexpected, to say the least).

Chapters 6 (“Management by metrics”) and 7 (“Cultures of counting”) are a very welcome instance of truth being spoken to power concerning the realities of the increasing binary opposition between academic staff and institutional management via the medium of the metric. Foregrounded by Wilsdon’s introductory mention of the tragic and needless death of Stefan Grimm, the report is clear that the use of inappropriate and counter-productive metrics in institutional management should not and cannot continue.

Within this cultural shift [to financialised management techniques], metrics are often positioned as tools that can drive organisational financial performance as part of an institution’s competitiveness. Coupled with greater competition for scarce resources more broadly, this is steering academic institutions and their researchers towards being more market-oriented.

Academics should have a greater control over their own narrative (the report laments the outsourcing of performance management to league tables and other commercially available external metrics), and this narrative should not be shaped by the application of inappropriate metrics. The “bad metrics prize” looks an excellent way to foreground some of the more egregious nonsense.

Fundamentally, the purpose of a higher education institution should not be to maximise its income – it should be to provide a sustainable and safe environment for an academic community of scholars. That’s pretty much straight out of Newman, but in 2015 it feels more like a call to arms against an environment focused on competition for funding.

A decision made by the numbers (or by explicit rules of some other sort) has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand for impartiality and fairness. Quantification is a way of making decisions without seeming to decide. Objectivity lends authority to officials who have very little of their own. [T.M. Porter]

With an uncompromisingly honest epigraph, Chapter 7 lays the blame for this state of affairs firmly at the door of poor-quality institutional management. Collini, Docherty and Sayer are cited with tacit approval for perhaps the first time in an official HEFCE report. Broadly, the report argues:

  • That managers use metrics in ways that are not backed up by what the metric actually measures.
  • That managers use metrics in a way that is heavy-handed, and insensitive to the variety implicit in university research.

Institutional league tables and Journal Impact Factors (JIFs) receive particular criticism as being opaque, inappropriate and statistically invalid. But it is noted that managers use these indicators (the preferred term) in order to absolve themselves from making qualitative decisions that are open to accusations of bias and secrecy.

Many academics are complicit in this practice, arguing either from a commitment to transparency or from a perceived advantage over their peers. This wider cultural issue is seen as outside the scope of the report, and as only sparsely documented, but this boundary prompts the obvious question: which report will focus on these wider issues? (The Wilsdon report does call for more research into research policy – to me this could and should be extended to a call for urgent research into higher education policy and culture more generally.)

A section on “gaming” metrics, and one on bias against interdisciplinary research, rehearse what is currently widely known about these practices (no mention of Campbell’s Law!) and again call for an expansion of the evidence base. I know that much work under the collective umbrella of SRHE and BERA over the years has touched on these issues, and perhaps both organisations, and others[1], need to plunder their archives and ensure that what evidence has been presented can be re-presented in an openly readable form.

It’s clear that the RAE/REF has had an impact: on the type of research conducted, where it is published and how it is built upon. This influence has already been noted and used in a welcome way with the recent requirements on open access. But as well as adding new stipulations, the older ideas about status and quality that underpin the REF (and for that matter, peer assessment) need to be examined and reconsidered.

If it is impossible to stop people producing research in the image of the REF requirements, maybe we need to change the requirements so that interesting research is produced. But, as the report notes, many of the constraining factors are applied at an institutional or departmental level – and it is these multiple nanoREFs that are likely to have the greatest day-to-day impact on the research-active academic. These require changes in local management practice, rather than national policy, to become less painful, and it is perhaps time to consider intervening directly rather than relying on levers designed to drive up research quality.

The goal of “reducing complexity and cost” within research policy is a commendable one, and I am sure few will be waving the flag for the current labour-intensive system of assurance and assessment: the “gold standard” is a heavy one, and we should investigate lightening the load wherever quality would not be affected. The Wilsdon review argues, cogently, that the trend towards quantitative tools is already having significant adverse effects, and indicates that efficiency may not be the only goal we need to keep in mind. As such, it is a major contribution to the ongoing health of academia and (perhaps) the first mainstream indicator of a wider resistance to poorly applied metrics in all areas of university life. Designers of teaching metrics should take careful note.

[1] For example Kernohan, D. and Taylor, D.A. (2003) “What Is The Impact Of The RAE?”, New Era in Education 84(2), pp. 56–62 […]

Territorial Pissings

It seems that the look of the summer for HE policymakers is monitoring and assuring teaching quality via a set of data-driven metrics.

First up, HEFCE’s ongoing quality assurance consultation stepped up a notch with one of their old-fashioned “early” consultations on the principles that would underpin a new system of institutional QA. Spectacularly failing to demonstrate that this would be more efficient or provide better results than the current model [read the KPMG report for maximum LOLs], and monstered by everyone from Wonkhe to the Russell Group, HEFCE took to their own blog to defend the proposals a mere 48 hours later.

A day later, it was the turn of BIS – with a speech from new HE bug Johnson Minor at Universities UK. He offered what the tabloid end of the HE press would call a “tantalising glimpse” of the future Teaching Excellence Framework (manifesto bargaining fodder if ever I saw it), drawing a hurried HEFCE response clarifying that their tremendously important two-day-old QA proposals were different from, though linked to, the emergent TEF.

Learning Gain is the wild card in this mix. If it worked properly it would be the single biggest breakthrough in education research of the last 100 years. It won’t work properly, of course – but it will burden students and staff with meaningless “work” that exists for no other reason than to generate metrics that will punish them.

None of this, of course, has any appreciable impact on actual students, but the idea of them being “the heart of the system” underpins everything. Remember, undergraduates, you may never see your tutor as she’s preparing another data return for one of these baskets, but it is all for your benefit. Somehow.

For all the differentiation, it is difficult to slip an incomplete HESES return between the two sets of proposals as they currently stand. Deciding to look at a basket of output measures as a way of assuring and/or enhancing quality is the epitome of an idea that you have when you’ve no idea – the artistry and ideology come in with selecting and weighting the raw numbers (as Iain Duncan Smith so ably demonstrated today).
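To see why the weighting, not the data, is where the ideology lives, here is a minimal sketch. Everything in it is invented for illustration – the institution names, the indicator scores and both weighting schemes are hypothetical, not drawn from any real TEF or QA proposal. The point is only that two defensible-looking weightings over the same raw numbers can produce opposite rankings.

```python
# Illustrative sketch: the same raw indicator scores, ranked under two
# different (both plausible-sounding) weighting schemes.
# All names, scores and weights are invented for this example.

indicators = {
    "Institution A": {"nss": 0.90, "retention": 0.70, "employment": 0.60},
    "Institution B": {"nss": 0.60, "retention": 0.85, "employment": 0.95},
}

def composite(scores, weights):
    """Weighted sum of normalised indicator scores."""
    return sum(scores[name] * weight for name, weight in weights.items())

# Scheme 1 privileges student satisfaction; scheme 2 privileges employment.
weights_1 = {"nss": 0.6, "retention": 0.2, "employment": 0.2}
weights_2 = {"nss": 0.2, "retention": 0.2, "employment": 0.6}

for weights in (weights_1, weights_2):
    ranking = sorted(indicators,
                     key=lambda inst: composite(indicators[inst], weights),
                     reverse=True)
    print(weights, "->", ranking)
```

Under the first weighting Institution A tops the table; under the second, Institution B does – same numbers, different “winner”, and the choice between the two schemes is a policy judgement dressed up as arithmetic.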

Future generations of policy makers are limited to tweaking the balance to return the answers that they expect and require, whilst the sector itself focuses on gameplaying rather than experimentation in an innumerate “improve-at-all-costs” world. And the students? – well, the NSS results are going up. Aren’t they?

William Davies, in “The Happiness Industry”, writes about the financialisation of human interaction – the replacement of human voices with a suite of metrics that can be mapped to known responses. This is basically akin to the Phillips Economic Computer: a flawed model of cause and effect, wielded for particular policy goals and controlling the lives of millions. The advent of social media allows us all a greater voice in policy making – at precisely the time that policy making as we know it is disappearing.

Both the TEF and the new QA model advance the “dashboard model” of policy analysis, and a managerial rather than leaderly approach to institutional management – and neither exposes the important assumptions that underpin the measurements. Sure, it’s fun to watch the emerging turf war between BIS and HEFCE – and it is fun to read the guarded snark of the Russell Group – but what we’re really seeing is poor-quality policymaking disguised by a whiff of big data.