“Keep the Fire” – notes on my #OpenEd15 presentation

[Slides] [Data] [Song]

Open Education, and indeed Education Technology more generally, exists in a perpetual “now” – or to be more precise, a perpetual near future. No matter how many times we attempt to put it into a narrative, it retains a deliberate ahistoricity that – after a while – begins to jar. Even to say that we have heard all this before has become a cliché, as there always seems to be one of those sessions at every conference.

It’s usually my session.

“Retain” is the first of Wiley’s “5Rs”. To give the complete language:

“The right to make, own and control copies of the content, eg download, duplicate, store and manage.”

In this formulation it implicitly refers to the virality of open content. Open content doesn’t have to exist in just one place; it can exist simultaneously in multiple (accessible and non-accessible) places. The comparison is with content licensed under a closed licence – it can’t be everywhere you want it to be. Even though Apple Music (say) will temporarily store music on your computer, you can never be said to “own” a copy. And most widely used software (such as Windows) is only licensed, never owned. Even the software in your car or tractor is only licensed to you – you can never truly be said to own your car.
Ownership implies a relationship between you, an object, and time. Something belongs to you until you decide not to own it. In the perpetual near-future of “edtech”, ownership is a concept that is almost obsolete.
What about a community? Can you “own” membership in a community? I would argue that you could – membership of the community of open educators is ours as long as we choose to claim it. No one – not even Stephen Downes – can refuse to let you be a part of this community.
But is the community active in time?
Maybe I need to unpack this a little. My first OpenEd was here in Vancouver in 2012 (I was meant to go to Barcelona in 2010 but I couldn’t for various personal reasons). But what did I miss by not being there in earlier years? What did other attendees bring to the conference in 2012 that I did not? How could I even find out what happened at, say, OpenEd2010? Or OpenEd2008? Or any of the predecessor conferences?
This is important, because a community of practice is a shared history of that practice. When we all complained about Sebastian Thrun “inventing” open education in 2011 – this was an expression of the history of our practice. Some of us were able to talk about, say, David Wiley’s experiences with WikiClasses in 2005, or George Siemens’ experiments in 2008 and say – look, this is the same thing. We’ve done this, we learnt from it, we want to share what we learned.
But – and I mean here no criticism of a decade of hard-working conference organisers – we are actually quite bad at preserving what we have learned, and in particular at capturing what happened at these conferences.
Open Education is a field often criticised – for being without a research base, for being unconcerned with context and pedagogy, for being blind to the problems inherent to the idea of reuse, for being focused on content rather than community. These are major challenges to the integrity and nature of this field: and we have no real way to answer them.
I collected and tabulated 11 years of OpenEd conference activity. Not much, just session titles and presenter names. This took me more than a week, and required me to promise not one but two pints of beer to Alan Levine. In some cases abstracts and/or slides were available, in other cases not. I drew primarily on the (amazing) Internet Archive, which captures the old conference sites in various states, allowing for the vagaries of the technology underpinning them. (ColdFusion? seriously…)
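For anyone who fancies repeating the archaeology, the Wayback Machine has a simple “availability” endpoint that returns the capture closest to a given date. A minimal sketch – the conference URLs here are illustrative guesses, not a canonical list of OpenEd sites:

```python
# Ask the Wayback Machine's "availability" API for the capture of each
# year's conference site closest to conference season.
import requests

SITES = {
    2008: "http://openedconference.org/2008/",  # hypothetical URL
    2009: "http://openedconference.org/2009/",  # hypothetical URL
}

for year, url in SITES.items():
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": f"{year}1001"},  # aim for ~October
    )
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    print(year, closest["url"] if closest else "no capture found")
```

From there it was a matter of scraping (or, more honestly, hand-copying) session titles and presenter names out of whatever each capture preserved.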
I did this to scratch an itch, to see something that no-one else had seen (a similar motivation, I think, drove Adam Croom to serendipitously do a similar job in a slicker way). Others may need to find old abstracts for other reasons. Validation, for instance: to prove that they were at OpenEdxx and they presented on whatever it was. Or research: they’d seen a reference to a great presentation in previous years and wanted to read about it for themselves so they could build more research on top of it.
Some presentations here become papers (or blog posts). Some begin as papers or blog posts. But many more exist as a moment in time: maybe a set of those fashionable slides with big images and not much text, and a presentation that held the room. Maybe every presentation is captured as a YouTube video, or an audio file – some years it is, some years it isn’t.

[Graph: OpenEd session tags by year]

And as you can see, there are patterns in there. I talked about the peaks in critique and sustainability talk in 2010 (linked to the end of many Hewlett grants at that time), the slow but inexorable growth of “policy” as a theme, and the dearth of interest in “reuse” since ’06. The tags are broad and subjective – in releasing the data I’m hoping people will feel bold in using their own tags to drive their own understanding. (The only unclear tag I used was “update” – just a way of indicating sessions primarily focused on providing an update on an ongoing project. I was heartened and surprised to see more updates this year than ever before – clearly there are more projects running than people may realise.)
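If you want to drive your own tags against the released data, here’s one way in – a sketch assuming a CSV with one row per session and columns “year” and “tags” (semicolon-separated); the column names and filename are assumptions, not the actual schema of the release:

```python
# Count sessions per tag per year and plot the trends discussed above.
import pandas as pd
import matplotlib.pyplot as plt

sessions = pd.read_csv("opened_sessions.csv")  # hypothetical filename
tag_counts = (
    sessions.assign(tags=sessions["tags"].str.split(";"))
    .explode("tags")
    .groupby(["year", "tags"])
    .size()
    .unstack(fill_value=0)
)
tag_counts[["critique", "sustainability", "policy", "reuse"]].plot()
plt.ylabel("sessions per year")
plt.show()
```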

But the real focus of the session was just to help the community get better at archiving findings and building on them. If open education ever grows beyond simple implementation, this is how it will happen.

A brief note on FutureLearn finance

“Fast-track growth”, eh?

I awoke this morning to the news that the Open University has committed a further £13m to their “FutureLearn” MOOC platform. For those reading overseas, “FutureLearn” was a late UK-centric “me-too” entry to what back then looked like an interesting trend for free online courses. For people used to the likes of Coursera and EdX, this may all seem like very small (warm) beer – but it is a useful case study in mooconomics.

Unlike the MOOC platforms you may have heard of, FutureLearn is not funded by venture capital. Rather, all of their funds have come from share capital allocations and other finance from the Open University. Not including £4.9m of start-up costs, the initial OU financial commitment was for £15m of share capital linked to milestones in an (as yet unpublished) business plan. The OU Council minutes of July 2013 give these figures (para 9.8, page 7), but the comments of council members are also worth considering.

An associate lecturer noted at the meeting (para 11.7):

“[T]he paper about governance relating to FutureLearn Limited (C-2013-03-01 Appendix 1) had been very helpful, but a number of Council members had also asked to see the FutureLearn Limited business plan. It was appropriate to delegate consideration of the financial risk to Finance Committee, although a written report on the advantages of the investment would have been useful. However, sight of the business plan might also help the Council to understand how the FutureLearn Limited business would impact on the OU operation, not just in terms of financial return and student numbers, but also in terms of staff resource, expertise and intellectual property, as this was not within the remit of either Finance or Audit Committees.”

Just to break that down – because the language of minute writing tends to flatten the drama of interventions like these – the OU Council were being asked to approve £15m of capital spending against a business plan that they hadn’t seen. Both the finance and the audit committee had seen it, but had not provided anything more detailed than the vague recommendations earlier in the document. No-one appeared to have considered the implications for OU staff resource, OU expertise or (incredibly) intellectual property.

Para 11.8 is worth taking line-by-line:

“The University Secretary observed that the business plan concerned FutureLearn Limited, not the OU’s involvement in the venture as one of several partners.”

This is correct, but the OU was (and remains) the sole source of funding for FutureLearn, which is to all intents and purposes a wholly-owned OU subsidiary.

“[…] It was Finance Committee’s role to advise the Council, as a shareholder, as to whether to invest in FutureLearn Limited.”

Again, these are the niceties of corporate structure. There was no other investor in FutureLearn, a limited company that had no income (and still shows very few signs of covering running costs, as we shall see). So it would perhaps be more correct to say that the Finance Committee should have been making the case to the OU Council to set up a wholly-owned subsidiary – using, lest we forget, public money.

“As the business plan belonged to the FutureLearn Limited board, the Committee had found it difficult to interrogate the business plan. Consequently, KPMG had been employed to scrutinise it and advise the University, through the Finance Committee, as to the robustness of its assumptions.”

So the Finance Committee (responsible for scrutinizing the business plan as a case for investment) had employed KPMG to comment on it, as they found doing so “difficult”. As neither the KPMG report (focused on audit and finance issues only) nor the Business Plan has ever been made public – or, as far as I am aware, shared with the OU council – we need to take this at face value.

“As previously mentioned, the costs were fairly certain, but the income assumptions were risky and speculative. It was unlikely that this would have been clear from the business plan, but as an expert committee it was Finance Committee’s role to advise the Council on such issues.”

Of course the income assumptions were “risky and speculative” – this was a MOOC start-up! It is difficult to believe that there were any realistic income assumptions in the plan – especially as Simon Nelson has yet to talk about any income other than the sales of certificates of completion (now £34, an ideal Christmas gift…).

Paragraph 11.11 of the minutes, wherein the Finance Director responds to the concerns of the members, is also a fascinating read. We learn that:

  • The OU would be unable to claw back allocated funds in the event of FutureLearn closing
  • Despite the advice of the Finance Committee, it was “assumed” that FutureLearn would be profitable.
  • There was never any question of institutional partners contributing to the costs of running or developing FutureLearn as a business or platform.
  • The FL business plan was phased regarding a path to breakeven (income meeting costs). The milestones by which FL could draw down share capital up to the £15m limit were linked at least partially to the level of income they could generate.
  • Though the FutureLearn board was technically accountable to the council, in practice approval of further allocations of funding would be managed by the finance committee only.

Do read the whole document, but I just want to quote one more telling paragraph (para 9.9 on the OU revenue budget):

“The Council approved the proposed consolidated revenue budget for 2013/14, which showed a deficit of £9.6m after allowing for £4.9m for the costs of setting up FutureLearn Limited.”

The filing history of FutureLearn at Companies House is where we need to go to examine the progress of the company, and extrapolate back what the initial business plan milestones may have been.

  • The company was formed on 10th December 2012, and the press launch was on the 14th
  • …  but it first started acting like a company on 20th Jan 2013. New (OU) directors were appointed as the company “moved” from the offices of a PR company to the OU’s HQ in Walton Hall.
  • The “launch CEO”, Simon Nelson, was appointed on 9th May 2013, just after the OU purchased £500,000 of FutureLearn shares on 8th April. Yes, that is before the capital spend was approved by Council.
  • On 30th July 2013 the OU increased their holding to £2.5m, based on the approval of the OU Council. (Note that the £4.9m start-up costs were not part of the share capital allocation.)
  • On the 29th November (just as the first courses had commenced, following another press launch in October), the OU increased their holding to £4m.
  • The first annual accounts show an operating loss of just under £2m. FutureLearn’s only non-OU income was £4,000 for video production. To clarify, by July 2013 FutureLearn had:
    • spent £4.9m of “start-up” costs;
    • drawn down £1.9m of share capital;
    • earned just £4,000 of other income.
  • The OU upped their shareholding to £5m on 21st March 2014. So I guess that at least one of the milestones would have been to generate some non-OU income in their first year.
  • The 14th May 2014 saw the OU up their investment to £6.5m
  • And the 14th July saw this increase to £7m
  • On 17th December, this was increased again to £8.5m
  • On 24th March 2015, FutureLearn filed their accounts for year 2. This was under the “small company” rules, so is in a different format and doesn’t tell us about their income. We learn that:
    • FL had spent £6.52m of the (then £7m of) share capital they had been allocated since launch.
  • The total capital investment from the OU was upped to £11.4m on 25th June 2015, after one Peter Horrocks (OU VC) was installed as a director on the 11th June.
  • The most recent allocation of share capital was on 17th July 2015, bringing the OU’s capital investment to £11.65m. There has been no further investment since that date, suggesting that no further business model milestones have been reached.

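For anyone checking my arithmetic, the filings tally up as follows – a quick sketch, with dates as I have listed them above:

```python
# The OU's cumulative shareholding (£m) at each filing date, and the
# increment each filing represents. Figures as listed above; note that the
# £4.9m of start-up costs sat outside the share capital allocation.
holdings = [
    ("2013-04-08", 0.5),
    ("2013-07-30", 2.5),
    ("2013-11-29", 4.0),
    ("2014-03-21", 5.0),
    ("2014-05-14", 6.5),
    ("2014-07-14", 7.0),
    ("2014-12-17", 8.5),
    ("2015-06-25", 11.4),
    ("2015-07-17", 11.65),
]

previous = 0.0
for date, total in holdings:
    print(f"{date}: +£{total - previous:.2f}m (cumulative £{total:.2f}m)")
    previous = total
```
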
Thanks for bearing with me on that – if you just skipped to the end, know that FutureLearn have spent a boatload of money and have never shown any sign even of covering their costs, much less passing profits back to shareholders.

Of the milestones in their business model (that are linked to capital allocation), we can intuit that they have failed to meet most of the final third. The announcement of this new £13m of capital before the remainder of the initial allocation has been drawn down suggests that FutureLearn have not and will not meet those final milestones, so a new business plan was required.

Sadly the most recent release of OU Council minutes kept discussions about this investment confidential. So we’ll never know for sure.

So what’s the point of establishing an Office for Students? #HEgreenpaper

(I’ve been all over Fulfilling our Potential, the BIS Green Paper, over the last 24 hours. As a part of Wonkhe’s ongoing effort to give everyone access to the best, brightest and quickest HE policy analysis I wrote what I think was the first detailed look at the TEF, and have been contributing further thoughts to the live blog. This is a little extra for FOTA fans and avid HEFCE-watchers.)

The existence of the entirety of part D of the green paper (reducing complexity and bureaucracy in research funding) is hugely significant. Not because of what it says – it doesn’t really say very much, as others have pointed out – but because of why it is there.

As James Wilsdon points out: “Once the decision had been made to abolish HEFCE and create the Office for Students (OfS), the green paper had no choice but to address the architecture of the research funding system, given HEFCE’s role in the REF and the allocation of quality-related funding.” Given the huge number of deferrals to the Nurse Review, and the lack of content, there was clearly no other compelling reason to publish on research. Which very much prompts the question as to why the decision to abolish HEFCE was made.

HEFCE is the regulatory centre of English HE, and calls to abolish it are a little like calls to abolish rain – so common as to be largely meaningless and notable only in their powerlessness. In managing the complexities of HE policy, people are bound to be annoyed by one or other requirement. But there has been no recent major uprising against a HEFCE decision, nor any shocking report that has trashed their sector reputation (not even Learning Gain).

The new Office for Students (OfS) has 10 stated overarching responsibilities (part C, chapter 1, para 9):

  1. operating the entry gateway;
  2. assuring baseline quality;
  3. running the TEF;
  4. collecting and providing information;
  5. widening access and success for disadvantaged students;
  6. allocating grant funding (depending on which of the two options described in paras 16 and 17 is adopted);
  7. ensuring student protection;
  8. promoting the student interest;
  9. ensuring value for money for students and the taxpayer; and,
  10. assuring financial sustainability, management and good governance.

HEFCE currently owns responsibilities 1 and 2 (which it contracts out to the QAA), 4 (which it contracts out to HESA), 6, 7 (in partnership with the OIA – which remains, rather confusingly), 8 (via support for the HEA, QAA and in partnership with the OIA), 9 and 10. Some of these could, it is true, be pulled in-house – but at massive expense and for no discernible impact.

Responsibility 5 is owned by OFFA, a quasi-independent organisation that, for most purposes, sits very much within HEFCE. Responsibility 3 is a new one, obviously, but one can only assume that the experience of HEFCE in running things like the REF and CETLs would be central here.

Speaking of the REF, research functions would pass from HEFCE to… well, someone. Incredibly, whilst it is clear that OfS would not have research funding functions because “reasons”, there is no plan for where they would go – and the vague plans that do exist raise their own problems: merging the functions into the postulated mega-research council raises questions of a double ringfence, while pushing them into BIS could engender suspicions of overt political interference and sound Haldane klaxons.

So why not leave research funding with HEFCE? Indeed, why not leave HEFCE as is?

The very well informed Conservative Home article earlier this week suggested: “Although it has oversight duties that its predecessor did not, the clue to its deficiencies are in its title: there is too much stress on giving money to Universities – hence “funding council” – and too little on how it is spent.”

But even a moment’s reflection on this suggests that HEFCE has a clear and sustained interest in ensuring money is well spent, coupled of course with a healthy regard for the principle of institutional autonomy. The pre-2012 funding method included the legendary and daunting financial memorandum. As post-Browne/Willetts funding moved towards loans, this oversight has necessarily diminished, but a “funding agreement” still remains for all funds that HEFCE allocates to institutions – and I would strongly recommend that people read through the 12 pages of grant conditions, financial requirements and audit requirements.

Which leaves us with the name. Surely it is not beyond the wit of BIS to stump up £50k for a quick rebrand and some new stationery? This would surely fulfil the obvious ministerial priority to have the word “student” in more parts of the university regulatory landscape without an inefficient, lengthy, unsettling and largely pointless chair-shuffling exercise.

We may never know precisely why BIS appears to have it in for HEFCE. But we need better reasons before we make the expensive changes that we are contemplating.

Children of the Revolution – (almost) line-by-line on the #greenpaper in the “Tory Diary”

[source article from Conservative Home. My comments represent my own personal views and not those of any other group.]

Universities have expanded, polytechnics have joined them…

By ancient tradition, all Conservative articles about HE start in 1992, with hand-wringing about a 23-year-old John Major policy.

But the Higher Education Funding Council is still in place – itself a successor of the University Grants Committee, set up in 1918 to channel funds from government to universities.

Both HEFCE and the UGC were, of course, set up by Conservative administrations (in coalition with the Liberals in the latter instance).

This mix of past and present helps to highlight the ambiguities and contradictions of the tertiary education sector. 

… and specifically Conservative policymaking therein.

Some British Universities are among the best in the world, but the international league tables that measure their work search for research excellence, not teaching quality. 

Not true. First column in the THE/Elsevier Global University Rankings, for instance. All the rankings I know of have a teaching component. But clearly facts are a distraction, we’ve got a narrative to build.

Students pay more, but there is little sign of the competitive and innovative education market that we were promised, with more higher education institutions charging the maximum tuition fee.  Government spends more, subsidising the loans system to the point that, for some students, it turns out not to be such a system at all.  And all the while, employers say that graduates are not up to the job.

Well, there is a competitive and innovative education market, but price has not proved to be a key differentiator. Which is a shame, as huge amounts of public money were bet on a neoliberal wet dream that was never, ever, going to lead to price factoring in all other market information like in a year 1 economics lecture. Meanwhile, employers have been carping about HE since long before 1918, but we can always bring in employer voices to sharpen any argument we want to make about universities. (Top marks for a Daily Mail link also, quality journalism will out.)

During the summer, Jo Johnson, the Universities Minister, complained that the sector’s market was “frankly anti-competitive”; mocked the requirement for new institutions to have degrees validated by an existing university, which he said is “akin to Byron Burger having to ask permission of McDonald’s to open up a new restaurant”, and declared that “we need to bust this system wide open”. 

Johnson bolstered his free-market credentials over the summer. And I’m sure he enjoys a tasty complimentary burger from Byron Burger every now and then.

He has a plan to do so.  A Green Paper on Higher Education is to be published soon.  A bill is pencilled in for the next Parliamentary session.  And near the heart of his proposals is the replacement of that funding council which, in one form or another, has been in place for the best part of a century.

First confirmation of planned legislation. We’d assumed this, but remember what happened after the Willetts version?

Johnson’s proposals begin with the student experience.  Some University teaching is excellent; too much is “execrable”, to borrow a word sometimes used in the department.  To help raise the standard, he wants Universities to be rewarded for better teaching.

This is a nice sleight of hand: who could fail to rail against “execrable” teaching (whether or not evidence exists for it), or fail to favour raising the standard? However, if you consider teaching as something that happens between academics and students in seminars, lectures, tutorials and labs, there is no proposal to reward excellence in these areas.

The metrics used will include lower drop-out rates, good graduate outcomes for disadvantaged students…

Measuring institutional ability to recruit bright, motivated students from all backgrounds, and taking adequate pastoral care of them to ensure that they don’t leave.

, and an improved national student survey.

Measuring institutional ability to get students to complete surveys in a positive manner.

His friends claim that evidence shows students value better teaching above lower fees: there is a reflection here of Nick Hillman’s finding, over at the Higher Education Policy Institute, that they are “less motivated by student issues, like tuition fees, than has often been supposed”.

If we are doing Daily Mail links as evidence: “friends“. And, frankly, the HEPI research did not directly show evidence of student motivation – it showed evidence of how students voted. Maybe some of them thought austerity was a good idea? Who knows.

The Higher Education Funding Council will go.


 Although it has oversight duties that its predecessor did not, the clue to its deficiencies are in its title: there is too much stress on giving money to Universities – hence “funding council” – and too little on how it is spent.

Three words. HEFCE financial memorandum. HEFCE had (under the previous funding system) huge powers over how money is spent. Of course the Student Loans Company, as the primary mechanism for distributing public money to institutions, does not. Such are the requirements of the magical price-focused HE market in the sky.

Johnson wants this to change.


The new body will be charged with overseeing the metrics that measure teaching, and ensuring that Universities offer value for money to students, taxpayers and employers.  It will also be empowered to allow new entrants to enter the higher education market: here is the means of busting the system open that the Minister wants.

Here, we seem to be mangling HEFCE and QAA (and possibly OFFA) powers, demonstrating a policy confusion similar to Kernohan’s First Law of Merging the QAA With Things.

However, it will probe and inspect these new institutions to a greater degree than it will older and established ones: this is Johnson’s means of balancing quality assurance with a light touch. 

Risk-based QA – a bit of wonk nostalgia from the 2013 white paper.

Finally, that emphasis on disadvantaged students will reach wider.  David Cameron has pledged to double the university entry rate among students from disadvantaged backgrounds by 2020, and wants to see a 20 per cent increase in the number of black and minority ethnic students going to university by 2020.  He is championing colour-blind applications.  Johnson has pressed UCAS, the body charged with processing University applications, to publish place offers by ethnic group, which it will do.

I’m going to break with the trenchant sarcasm of my earlier comments and come out as saying that this is a damned good idea, and should have happened years ago. I’m also hoping that we can see more open data coming out of UCAS. That’s as linked data or in a .csv please UCAS, not in a bloody PDF.

If the new inspection body is to be the stick, there will also be a carrot. As George Osborne announced in the Budget, Universities that teach better will be allowed to raise fees in line with inflation from next year.  Permitting further rises later has not been ruled out.

Inflation currently stands at -0.1%. And the idea that inflationary fee increases are a suitable carrot is, even where inflation exists, laughable. Inflation represents a rise in the cost of living – so refusing to let a TEF-negative institution raise fees effectively takes money (in real terms) away from it.

Maybe a brownish carrot with a tough exterior, that grows on a tree. That adds to the terrifying level of borrowing that BIS already has on the books.

There will be no shortage of objections to all this.

For “objections”, read “straw men”.

Some Universities don’t want to be challenged by new entrants.

Name one.

There will be questions of detail, such as whether the metrics will work.

*picks up megaphone, stands on soap-box*


There will be those of principle, such as whether it is really government’s business to tell the Universities how to conduct theirs.

A nod to the Conservative Libertarians. Hi CWF! (and hello UKIP).

And there will be concerns about whether a tuition fee hike will deter poorer applicants.

Not as much as replacing maintenance grants with loans will.


But it is noticeable that his plans appear to by-pass the Office for Fair Access – set up under the Coalition after pressure, in particular, from the Liberal Democrats. Where they wanted to set up a special new body, he wants to replace an old one. 

(OFFA was set up by Labour in 2004)

Indeed, Johnson’s plans owe less to the party that previously presided over the department he now serves in than to Steve Hilton.  The latter’s ideal of accountability and transparency, honed in opposition, has flowered in government – in the form of Michael Gove’s overhauled Ofsted, Theresa May’s crime maps, Hunt’s MyNHS data information service, Eric Pickles’s publication of spending of over £250 online… and so on.

The idea that no-one thought of accountability or transparency before Hilton is nearly as bizarre as the idea that the listed policies are good expressions of either.

Johnson’s new body has echoes of Ofsted, though it will not be inspecting courses directly.  His friends say that he clocked some of the problems in the Higher Education sector while heading Downing Street’s Policy Unit during the last Parliament.

He wrote the manifesto, so this whole mess was most likely his idea. And as the main thing Ofsted does is directly inspect teaching, and the new daemonic chimaera of whatever abomination he has in mind will not, one wonders which echoes are being heard?

There is a view that Jo Johnson is the real Johnson to watch.  I am not quite sure about that, but the signs are very good.

I think I’d like to keep all of them where I can see them, to be honest.

[update: the always impressive @Martin_Eve has done a proper grown-up response and analysis that you could print out and show to your Vice-Chancellor]


A triangle of institutional innovation

I’ve been thinking a lot about disruptive innovation, what with it being a thing that people for some reason still take seriously.

What if disruptive innovation needed to be part of a wider conception of institutional innovation? By this I mean that disruptive innovation has obvious flaws when viewed in isolation – not least that it isn’t a very good description of any innovation that we know about – and needs to be combined with other ideas of innovation in order to make sense.

“Sustaining innovation” has a similar issue, in that it treats an organisation as a single organism with a common purpose, and “user innovation” – although I love it – also doesn’t really describe the way that big organisations actually change.

(as an aside, we are also talking about three different perspectives on history – disruption is recognisably ahistorical to the field of endeavour it is acting on, sustaining innovation draws on a canonical “history of the victorious” which codifies a single organisational story, whereas user innovation draws more on the “folk memory” of lived experience)

So I plotted the three as sides of a triangle, and then thought about the vertices.

We can see the effects of combining sustaining and disruptive innovation in the activities of many universities – the rise of corporatism and taking ideas from other businesses. “Lean” is a good example here: it makes sense from a managerial and financial perspective but actually makes things harder for “users” (in this case, staff).

Combining sustaining and user innovations is a great way of optimising processes and practices. It makes it easier for users to keep doing the same thing by adding short-cuts based on their observed behaviour. This leads to incremental changes rather than wholesale “innovation” as externally observed, basically what happens at a sensible institution that listens to people at the “chalk face”.

Finally, combining user innovation and disruptive innovation – this made me think of the “edupunk” movement, users grabbing and using external tools with little regard for the needs of the institution (and often without the institution ever knowing): doing exciting things but storing up a whole world of interoperability problems.

So it seems to me that a truly useful innovation will draw on each of the three strands: disruptive (external), sustaining (management), and user (worker). I couldn’t find a diagram that explained this so I made one.

[Diagram: triangle of institutional innovation]
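If anyone wants to redraw it rather than squint at mine, here’s a minimal matplotlib sketch – the layout and side placement are my own, taken from the combinations described above:

```python
# The three innovation types as the sides of a triangle, with their
# pairwise combinations at the vertices.
import matplotlib.pyplot as plt

vertices = {
    "corporatism\n(sustaining + disruptive)": (0.0, 0.0),
    "optimisation\n(sustaining + user)": (1.0, 0.0),
    "edupunk\n(user + disruptive)": (0.5, 0.87),
}
# each side is labelled with the strand its two endpoints share
side_labels = {
    "sustaining (management)": (0.5, -0.08),
    "user (worker)": (0.78, 0.45),
    "disruptive (external)": (0.22, 0.45),
}

xs, ys = zip(*vertices.values())
plt.plot(list(xs) + [xs[0]], list(ys) + [ys[0]], "k-")
for label, (x, y) in vertices.items():
    plt.annotate(label, (x, y), ha="center")
for label, (x, y) in side_labels.items():
    plt.annotate(label, (x, y), ha="center", color="grey")
plt.axis("off")
plt.show()
```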

(and if you are a director of innovation and are still dubious, I have performed what may soon be the defining test for theories of innovation and solved for Yacht Rock.)


How to read the Green Paper

This is written before I have had sight of the imminent “green paper” on Higher Education from BIS. It is an attempt to spot what might be interesting within the paper in advance, and maybe something of a “wonk’s guide” to making sense of the thing when it eventually comes out.

First up, a “green paper” is an early stage consultation paper. It’ll be tempting to treat it as a “white paper“, but the choice of colour is a very important signifier. A “white” paper could be seen as analogous to a research paper – we’ve done the research, here are our findings and conclusions, do you agree with them?

A “green” paper is more like a hypothesis – we think that these ideas are the correct ones, we wish to test these with some research. In this case, the research will be a full consultation, with much more scope for responses to shape policy direction than with 2011’s “Students at the Heart of the System“. (Eager wonks will recall that much of the contents of the 2011 paper had already been implemented before the “white paper” consultation began).

We already know, and can infer, a great deal about what we can expect within the green paper. There have been two key speeches from Johnson Minor, one in July and another in September, that offer clues. Though most speculation (and god, has there ever been a lot of speculation…) has been about the TEF, it is not by any stretch the most important aspect.

To go through things in the order I am interested in them:

Regulatory and structural reform

James Wilsdon (big fan!) was party to a fascinating leak from BIS concerning the spending review. The scale of cuts (looking like Osborne’s aspirational 40% for a non-ringfenced department) is now clear, as is the concept of a “bonfire of the partner agencies” – the usual response to calls for departmental cuts, and one that is usually followed by a realisation that the partner agencies were invented primarily to shunt staff numbers out of the main “Whitehall” count whilst keeping key jobs done.

I don’t rate Sajid Javid as a minister, not least because he seems to want to make cuts rather more than he wants to run services. And it is his influence that saw Johnson Minor saying palpable nonsense like “much of the higher education system is ripe for simplification” in September. (Compare his more conventional new-junior-minister structure-building speech in July, which neglected to mention the more recent horror of the “day 1” slide.)

Of the 40+ partner bodies named by BIS, 10 are associated with Higher Education, suggesting that at least four will likely perish. Which ones are for the chop depend entirely on choices made around the issues below.

Changes to research funding in England

Merging the seven research councils must look like an easy win for Javid – surely back office functions and branding could be combined with minimal disruption and significant savings. Sir Paul Nurse’s review of research funding is now overdue (expected Summer 2015), and given he is on record as claiming that a government that cut research spending would be “Neanderthal“, one has to suspect that the delay may be due to rocks being banged together concerning the gap between this recommendation and Javid’s small-government instincts. (free social media tip for @bispressoffice – launch the report with the words “guh ruh guh urrrgh rahr Sir Paul Nurse” :-) )

Although it is likely that Nurse has recommended efficiencies around research support (his review was tasked with examining the split between project and strategic funding, ominously) it is unlikely that a recommendation to merge the councils directly would feature – indeed he’s been reported as saying this would “not be on the table”.

The other option would be to look at the non-project end of research funding – which would mean QR (basically research funding given to institutions to support general research-y activity) and the REF, both currently managed by HEFCE. Earlier this year Javid’s favourite think-tank, the IEA, called for both to be abolished. The upshot for the regulatory environment would be to get shot of HEFCE.

In the past, HEFCE’s responsibility for teaching funding would have stymied this approach – but post Browne/Willetts their teaching funding role is vestigial, to say the least. Widening participation is now under the auspices of OFFA, and the quality assurance remit is a whole other can of worms (see below). QR is much loved as a means of supporting blue-skies research and scholarship, as opposed to the more direct economic benefits often returned from research council projects.

Data requirements/”transparency”

A central plank of the Browne review was to offer students more information in order to make the market work better. It was a neo-classical economics admission of failure – the “sticker price” was obviously not conveying all the information (as free-market zealots like to believe it does), so more data was needed to give the market a helping hand.

HEFCE research (yes, them again) back in 2014 suggested that students don’t much use even the existing data in making course application decisions. And even the venerable NSS is currently under review. This consultation was released last week by HEFCE as kind of a data collection review version of leaning out of the window of a Passat in Hull requesting a “bare knuckle”. In happier times for HEFCE, this release would have been part of the green paper release and flagged as a sub-consultation within it – alas, these days releases of HEFCE consultations tend to happen a day or two before a much bigger BIS announcement that renders them largely meaningless.

So, precise details on how universities spend student fees will likely be the order of the day. Bonus prize for anywhere that officially notes that they spend £9,000 per student on “running a university”.

Widening participation

A year ago, Westminster politicians used to swagger past Holyrood ones, kicking (45%) sand in their faces and claiming that their HE funding system was more progressive than the Scottish one, despite having £27,000 worth of fee loans. This was possible because until this parliament students from less monied backgrounds got maintenance grants.

Alas, in northern primary school terms, George Osborne is now the “cock of the school” and his need for insignificant budgetary fiddling to preserve the twin lies that austerity is working and our economy is in good shape has trumped Caledonian bragging rights. Forget for a moment that most of the loans will never be paid back, Osborne has never been a long term thinker – indeed I think “cock of the school” at St Paul’s meant something different.

So “widening participation” once again becomes an issue for England, but not in an egalitarian sense. Basically there are votes in promoting white working class ambitions, but not many, so Johnson’s speech suggested a bit more data and maybe a university might deign to offer a scholarship or two. Boring cynical stuff, but the mention of OFFA in the September speech makes them safer than the HEFCE institutional structure they sit in.


The TEF

Ah, the ****ing TEF. Friend to the second-rate HE commentator. There’ll be no surprises here: basically a grab-bag of the likely indicators (NSS, first destination, maybe some widening participation/POLAR numbers and anything else HESA have up their sleeve for next academic year) and a commitment to explore other data sources for a more refined TEF2 in the years to come.

The 2015 budget added spice to the long-rumbling debate by allowing institutions that had been judged to have excellent teaching to raise their fees in line with inflation each year. To put that in perspective, the sector saw £9bn of home and EU fees last year. Inflation (RPI) currently sits at a tumescent 0.1% after many months of deflation (thanks, George!). So English universities could be looking at sharing a maximum of just under an extra £9m a year if all are judged to have excellent teaching. (That’s £45m over 5 years – compare the £315m over 5 years devoted to the CETL programme.)
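The back-of-envelope version, for anyone who wants to check:

```python
# The arithmetic from the paragraph above.
sector_fees = 9_000_000_000  # home and EU fee income last year (£9bn)
rpi = 0.001                  # RPI at 0.1%

annual_uplift = sector_fees * rpi
print(f"maximum extra per year: £{annual_uplift:,.0f}")      # £9,000,000
print(f"over five years:        £{5 * annual_uplift:,.0f}")  # £45,000,000
# compare: £315,000,000 over 5 years for the CETL programme
```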

New providers, DAPs and quality assurance

The gun has already been fired here regarding new providers. The doors are open and new “market entry” guidance has been issued. Surely the new providers will be of a higher quality than those reported on by Dr McGettigan and others. It has already been indicated that the barriers to degree awarding powers and university title will be lowered, obviously because the initial experiments went so well. The only surprise here will be how few safeguards are included.

Finally, expect nothing at all on QA. HEFCE’s consultation is being hung out to dry in the most exquisite way possible – via a parliamentary committee hearing.

In conclusion

So there we are. The news before the news, as they say. On downloading my own copy of the green paper I will first look to see whether HEFCE are toast, and then figure out what a mess has been made of the dual support system.

Sail on

As well as being Viv’s birthday, the 26th September 2015 also marks the 10th anniversary of Yacht Rock, JD Ryznar and Hunter Stair’s smooth masterpiece. Combining cultural history, highly quotable dialogue, moderate production values and the sweet sounds of late 70s/early 80s marina rock, it remains the granddaddy of all YouTube viral hits.

Take an hour out of your day today, pour yourself something tropical, sit back and binge watch episodes 1 through 12. Smooth Jesus commands you to.

(Yacht Rock changed my life. I blame Brian)

Learning gain, again

Perceptions of the future of teaching quality monitoring have come a long way since I last wrote about HEFCE’s strange fascination with quantifying how much students learn at university. A full consultation concerning the ongoing review of QA processes detonated in late June, swiftly followed by the summer’s all-consuming speculative think-piece generator, the TEF.

Today – alongside the announcement of 12 year-long institutional projects to “pilot” a bewildering range of metrics, e-portfolios, skills assessments and pig entrail readings – HEFCE released the research conducted for them by RAND Europe. Interestingly, RAND themselves are still waiting for a “co-ordinated concurrent release with other publication outlets”.

(screengrab: 13:45 BST, 21/09/2015)

The report itself does have a rushed feel to it – shifting typography, a few spelling, grammatical and labelling howlers – which itself is unusual given the high general quality of HEFCE research. And why would RAND label it as “withdrawn”? But I’ve heard from various sources that the launch was pre-announced for today at some point late last week, so – who knows.

We started our journey with an unexpected public tendering exercise back in June 2014, though this is also shown as being launched in May of the same year. The final report, according to the contract viewable via the second link in this paragraph, was due at the end of October 2014, making today’s publication nearly a year behind schedule.

So over a year of RAND Europe research (valued at “£30,000 to £50,000”) is presented over 51 variously typeset pages, 10 pages of references (an odd, bracketless variant of APA, if you are interested) and 5 appendices. What do we now know?

RAND set out to “explore[…] the concept of learning gain, as well as current national and international practice, to investigate whether a measure of learning gain could be used in England.”

They conclude [SPOILERS!] that the purpose to which learning gain is put is more important than any definition; that there is a lot of international and some UK practice, of varying approaches and quality; and that they haven’t the faintest idea whether you could do learning gain in the UK – but why not fund some pilot studies and run some more events.

Many of the literature review aspects could have been pulled off the OECD shelf – Kim and Lalancette (2013) covers much of the same ground for “value added” measures (which in practice includes much of what RAND define as learning gain, such as the CLA standardised tests and the Wabash national study), and adds an international compulsory-level analysis of practice.

Interestingly, the OECD paper notes that “[…] the longitudinal approach, with a repeated measures design often used in K-12 education, may not be logistically feasible or could be extraordinarily expensive in higher education, even when it is technically possible” (p9) whereas RAND are confident that “Perhaps the most robust method to achieve [comparability of data] is through longitudinal data, i.e. data on the same group of students over at least two points in time” (p13).
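To be clear about what is being argued over, the arithmetic is simple to state – the following is my illustration of the standard formulations, not a definition from either report:

```latex
% Raw learning gain for student i, measured at two points in time:
\[
g_i = y_{i,t_2} - y_{i,t_1}
\]
% "Value added": the second measurement compared with the score predicted
% from the first measurement and student characteristics X_i:
\[
\mathrm{VA}_i = y_{i,t_2} - \mathbb{E}\left[\, y_{i,t_2} \mid y_{i,t_1}, X_i \,\right]
\]
```

Everything hard lives outside the formula: getting comparable measurements at two points in time, at scale, from students with no particular incentive to try.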

The recommendation for a set of small pilot studies, in this case, may appear to be a sensible one. Clearly the literature lacks sufficient real world evidence to make a judgement on the feasibility of “learning gain” in English higher education.

By happy coincidence, HEFCE had already planned a series of pilots as stage two of their “learning gain” work! The “contract” outlines the entire plan:

“The learning gain project as a whole will consist of three stages. The first stage will consist of a critical evaluation of a range of assessment methods and tools (including both discipline-based and generic skills testing), with a view to informing the identification of a subset that could then be used to underpin a set of small pilots in a second stage, to be followed by a final stage, a concluding comparative evaluation. This invitation to tender is solely concerned with the first stage of the project – the critical review”(p5)

So the RAND report has – we therefore conclude – been used to design the “learning gain” pilot circular rather than to generate recommendations for ongoing work? After all, the circular itself promised the publication of the research report “shortly” in May 2015 (indeed, the pdf document metadata from the RAND report suggests it was last modified on 27 March 2015; the text states it was “mid-January” when drafting concluded) – and we know that the research was meant to inform the choice of a range of methods for piloting.

The subset comprising “standardised tests, grades, self-reporting surveys, mixed methods and other qualitative methods” that was offered to pilot institutions does echo the categorisation in the RAND report (for example, section 6.3.2, the “Critical Overview”, uses the same headings).

However, a similar list could be drawn from the initial specifications back in May 2014.

  • Tools currently used in UK institutions for entrance purposes (e.g. the Biomedical Admissions Test) or by careers services and graduate recruiters to assess generic skills
  • Curriculum-based progress testing of acquisition of skills and knowledge within a particular discipline
  • Standardised tests, such as the US-based Collegiate Learning Assessment (CLA), the Measure of Academic Proficiency and Progress (MAPP) and the Collegiate Assessment of Academic Proficiency (CAAP).
  • Student self- and/or peer-assessed learning gain
  • Discipline-based and discipline independent mechanisms
  • Other methods used by higher education providers in England to measure learning gain at institutional level
  • International (particularly US-based) literature on the design and use of standardised learning assessment tools in HE […]
  • Literature on previous work on learning gain in UK HE
  • UK schools-based literature on the measurement of value-added (p7)

In essence, RAND Europe have taken (again, let us be charitable) 10 months to condense the above list into the list of five categories presented in the HEFCE call for pilots. (The pilots themselves were actually supposed to be notified in June 2015, though they seem to have kept things a carefully guarded secret until Sept 16th, at least. Well done, Plymouth!).

It is unclear, though unlikely, whether teams developing institutional bids had sight of the RAND report during the bid development process. And it is doubly unclear why the report wasn’t released to a grateful public until the projects were announced.

But the big question for me is what was the point of the RAND Report into Learning gain?

  • It didn’t (appear) to inform HEFCE’s plan to run pilot projects. There were already plans to run pilots back in 2014, and while the categories of instrument types use “RAND language”, they could equally have been derived from the original brief.
  • It was released at the same time as successful bids were announced, and thus could not (reasonably) have contributed to the design or evidence base for institutional projects. (aside: wonder how many of these pilots have passed through an ethical review process)
  • It didn’t significantly add to a 2013 OECD understanding of the literature in this area. It referred to 6 “research” papers (by my count) from 2014, and one from 2015.
  • There was a huge parallel conversation about an international and comparable standard, again by the OECD, during the period of study. We (in England) said “goodbye” as they said “AHELO”, but would it not have made sense to combine background literature searches (at least) with an ongoing global effort?

Though I wouldn’t say I started from the position of unabashed enthusiasm, I have been waiting for this report with some interest. “Learning gain” (if measured with any degree of accuracy and statistical confidence) would be the greatest breakthrough in education research in living memory. Drawing a measurable and credible causal link between an intervention or activity and the acquisition of knowledge or skills: it’s the holy grail of understanding the education process.

There’s nothing in this report that will convince anyone that this age-old problem is any closer to being solved. Indeed, it adds little to previous work. And reading between the lines of the convoluted path from commission to release, it is not clear that it meaningfully informed any part of HEFCE’s ongoing learning gain activity.

All told, a bit of a pig’s ear.


Hey listeners – you’re tuned to W-ALT-FM, home of the brave and your source for the best in AOR and freeform radio.


Freeform AOR radio is more than just a radio format, it’s a way of life – it is how people live here and now in 1980.

It’s not a new development. Ever since the boring corporate rock stations gave us the FM band to play with in the mid 70s we’ve done what no-one expected. We’ve cut through all the schedulers, the pluggers, the hype and the payola and given music radio back to the people… with jocks like my good self as your guide.


This is my story.

I’m proud to be an AOR jock, have been for three or four years now. I feel like I *know* music, I mean deep music – real music – not this top 40 rubbish.
There are no novelty hits on W-ALT-FM – just favourites and deep cuts from our favourite artists, and notable new music that you need to hear about. Think of us as your record collection. As your cooler friend’s record collection – the one you learn from.

I don’t mean to sound like your old teacher, but a good AOR jock is an educator.

Sure, I want you to be entertained, but I also want you to discover something – learn something. I spend a lot of time choosing not just each track but every playlist. I’ll play you a hot Supertramp song, then the Beatles tune that inspired it. Then I’ll play some Doobies – in the same key, at the same speed – and then from that to Steely Dan with the same singer. You don’t need to worry about this, you might miss most of it, but everything is where it is for a reason.

AOR has been through some tough times – when punk broke, most of us didn’t know what to make of it. But the quality of some of the “new wave” was surprising. These days we’re glad to play Blondie, The Police and Talking Heads – maybe not straight after the Eagles but they’re in the mix. And classic soul has always been our thing – no color bar here! Bring on the Commodores!
Some people have been talking about bringing in a more “disciplined” sound, making it easier to sell ads and syndicate. But that’s the old model – that’s what ruined AM radio. On FM we proved that music mattered. And that someone with ears and a finger on the pulse could draw an audience – even break an artist.

AOR is here to stay – because this time, things are different. We’ve got our own space now, and no big business ideas are going to take it from us.

This issue of R&R I’m reading – wow. There’s so much stuff about the future of stations like W-ALT-FM. If you look past the ads for all the great albums, there’s guys we know standing up and keeping the AOR fire burning. Maybe a few are showing their greedy colors, but in the main we stand strong. I like that.

Radio is a part of life, it’s a constant. It’s where you hear new music, discover oldies, get a sense of what the future of music holds. Everywhere I walk and everywhere I look I see kids with portable radios – they’re so cheap now! – making the music we play part of their lives. And it’s almost as if the big stations don’t realise what is happening.

There’s some stuff we don’t do – we don’t do sport for a start. We certainly don’t do disco – man! disco sucks, yeah! And we don’t do much news. But there’s other places you can get them, sometimes you just want music. And real music, good music, artists at the peak of their craft. Like Hall and Oates, Boston, Led Zep. Kenny Loggins. The timeless stuff.


So I’ve been rapping about me for too long. But it’s not about me – it’s about the music.

[in 1985 W-ALT-FM moved to a “Hot Adult Contemporary” format, having been bought out by CBS-FM. It was sold in 1989, and the station moved to a heavily playlisted “classic rock” mix. By 1995, it was almost pure Country. Around the turn of the century it merged with two other local stations and now runs as a “Talk” station with a conservative slant.]

“I watch the ripples change their size but never leave the stream” #altc 2015

This post is the “amplified” version of my presentation at ALTC 2015. The presentation summarises the main arguments that are made in more detail in this post.

So let’s look at the 2015 Google Moonshot Summit for Education. No particular reason that I start here rather than anywhere else, other than the existence of Martin Hamilton’s succinct summing up on the Jisc blog. Here’s the list of “moonshot” ideas that the summit generated – taken together these constitute the “reboot” that education apparently needs:


  • Gamifying the curriculum – real problems are generated by institutions or companies, then transformed into playful learning milestones that once attained grant relevant rewards.
  • Dissolving the wall between schools and community by including young people and outsiders such as artists and companies in curriculum design.
  • Creating a platform where students could develop their own learning content and share it, perhaps like a junior edX.
  • Crowdsourcing potential problems and solutions in conjunction with teachers and schools.
  • A new holistic approach to education and assessment, based on knowledge co-construction by peers working together.
  • Creating a global learning community for teachers, blending aspects of the likes of LinkedIn, and the Khan Academy.
  • Extending Google’s 20% time concept into the classroom, in particular with curriculum co-creation including students, teachers and the community.

“Gamification”, a couple of “it’s X… for Y!” platforms and crowd-sourcing. And Google’s “20% time” – which is no longer a thing at Google. Take away the references to particular comparators and a similar list could have been published at any time during my career.

The “future of education technology” as predicted has, I would argue, remained largely static for 11 years. This post is an exploration of why this is the case, and makes some limited suggestions as to what we might do about it.

Methodological issues


Earlier in 2015 I circulated a table of NMC Horizon Scan “important developments in educational technology” for HE between 2004 and 2015, which showed common themes constantly re-emerging. The NMC are both admirably public about their methods and clear that each report represents a “snapshot” with no claims to build towards a longitudinal analysis.

Methodologically, the NMC invite a bunch of expert commentators into a (virtual, and then actual) room, and ask them to sift through notable developments and articles to find overarching themes, which are then refined across two rounds of voting. There’s an emphasis on the judgement of the people in the room over and above any claims of data modelling, and (to me at least) the reports generally reflect the main topics of conversation in education technology over the recent past.

Though it is both easy and enjoyable to criticise the NMC Horizon Scan (and the list of critics is long and distinguished) I’m not about to write another chapter of that book. The NMC Horizon Scan represents one way of performing a horizon scan – we need to set it against others.

The only other organisation I know of that produces annual edtech predictions at a comparable scale is Gartner, with the Education Hype Cycle (pre-2008 the Higher Education Hype Cycle, itself an interesting shift in focus). Gartner have a proprietary methodology, but drawing on Fenn and Raskino’s (2008) “Mastering The Hype Cycle” it is possible to get a sense both of the underlying idea and of the process involved.

The Hype Cycle concept, as I’ve described before, is the bluntest of blunt instruments. A thing is launched, people get over-excited, there’s a backlash, and then the real benefit of said thing is identified and a plateau is reached. Fenn and Raskino describe two later components of this cycle, the Swamp of Diminishing Returns and the Cliff of Obsolescence, though these are seldom used in the predictive cycles we are used to seeing online (used in breach of their commercial licence).
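
As a visual aid, the canonical shape is easy to sketch. Here’s my own rough parameterisation in Python – to be clear, an illustrative toy, not Gartner’s proprietary model: a sharp early spike of attention superimposed on a slow S-curve of real adoption.

```python
import numpy as np

# A toy hype-cycle "visibility" curve: a sharp attention spike followed
# by a slow S-curve of real adoption. Parameters are purely illustrative.
t = np.linspace(0, 10, 500)                     # notional years
spike = 2.0 * np.exp(-((t - 1.5) ** 2) / 0.5)   # Peak of Inflated Expectations
adoption = 1.0 / (1 + np.exp(-(t - 5.5)))       # Slope of Enlightenment onwards
hype = spike + adoption

# Sample the curve at the three landmark stages.
for label, year in [("peak", 1.5), ("trough", 3.5), ("plateau", 9.0)]:
    print(f"{label:8s} visibility ≈ {hype[np.searchsorted(t, year)]:.2f}")
```

The only point of the toy is that “visibility” at the peak can dwarf the eventual plateau – which is exactly what makes mid-cycle predictions so misleading.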


Of course not everything moves smoothly through these stages, and as fans of MOOCs will recall it is entirely possible to be obsolete before the plateau. As fans of MOOCs will also recall, it is equally possible for an innovation to appear fully-formed at the top of the Peak of Inflated Expectations without any prior warning.

In preparing Hype Cycles Gartner lean heavily on a range of market signals including reports, sales and investments – and this is supplemented by the experience of Gartner analysts, who in working closely with Gartner clients are well placed to identify emerging technologies. So the process skews data-driven, whereas the NMC skews towards expertise. (There’s a University of Minnesota crowd-sourced version based on “expertise” that looks quite different. You can go and add your voice if you like.)

How else can we make predictions about the future of education technology? You’d think there would be a “big data” or “AI” player in this predictions marketplace, but other than someone like Edsurge or Ambient Insight extrapolating from investment data, and the obvious Google Trends looking at search query volumes, it appears that meaningful “big data” edtech predictions are themselves a few years in the future. (Or maybe no big data shops are confident enough to make public predictions…)
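
Pulling the raw Trends signal, at least, is straightforward. A minimal sketch using the unofficial pytrends wrapper – and it is an assumption on my part that this still works, since it scrapes an unsupported endpoint rather than a real API:

```python
# A minimal sketch using the unofficial pytrends wrapper for Google Trends.
# pytrends scrapes an unsupported endpoint, so treat it as best-effort.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(["games in education", "mobile learning"],
                       timeframe="2004-01-01 2015-12-31")

# interest_over_time() returns 0-100 index values (monthly at this range),
# normalised against the peak across the requested queries.
trends = pytrends.interest_over_time().drop(columns="isPartial", errors="ignore")
print(trends.resample("A").mean().round(0))     # rough annual averages
```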

A final way of making predictions about the future would be to attend (or follow remotely) a conference like #altc. It could be argued that much of the work presented at this conference is “bleeding edge” experimentation, and that themes within papers could serve as a prediction of the mainstream innovations of the following year(s).

Why is prediction important?

Neoclassical economists would argue that the market is best suited to process and rate innovations, but reality is seldom as elegant as neoclassical economics.

Within our market-focused society, predictions could allow us to steal a march on the workings of the market, and thus to make a profit as the information from our predictions is absorbed. But as all of the predictions I discuss above are either actually or effectively open to all, this appears to be a moot point: an efficient market would quickly “price in” such predictions. The paradox here is that predictions are more likely to influence the overall direction of the market than to offer any genuine first-order financial benefit.

As Philip Mirowski describes in his (excellent) “Never let a serious crisis go to waste” (2014): “It has been known for more than 40 years and is one of the main implications of Eugene Fama’s “efficient-market hypothesis” (EMH), which states that the price of a financial asset reflects all relevant, generally available information. If an economist had a formula that could reliably forecast crises a week in advance, say, then that formula would become part of generally available information and prices would fall a week earlier.”

Waveform predictions – like the hype cycle, and Paul Mason‘s suddenly fashionable Kondratieff Waves (or rather Schumpeter’s specific extrapolation of the idea of an exogenous economic cycle to innovation as a driver of long waves: Kondratieff himself described the waves as an endogenous economic phenomenon…) – can be seen as tools to allow individuals to make their own extrapolations from postulated (I hesitate to say “identified”) waves in data.

[Image: Kondratieff Waves, Rursus@wikimedia, cc-by-sa]

In contrast, Gartner’s published cycles say almost nothing usable about timescales, with each tracked innovation proceeding through the cycle (or not!) at its own pace. In both cases, the idea of the wave itself could be seen as a comforting assurance that a rise follows a fall – and by omitting the final two stagnation phases of the in-house Gartner cycle, the published version suggests that every innovation eventually becomes a success.

But there is no empirical reason to assume waves in innovation such as these are anything other than an artefact (and thus a “priced-in” component) of market expectations.

Two worked examples

I chose to examine five potential “predictive” signals across up to 11 years of data (where available) for two “future of edtech” perennials: Mobile Learning and Games for Learning. I don’t claim any great knowledge of either of these fields, either historically or currently, which makes it easier for me to look at predictive patterns rather than predictive accuracy.

My hypothesis here is that there should be some recognisable relationship between these five sources of historic prediction.

Games in Education

| Year | NMC (placement on “technologies to watch” HE horizon scan) | Gartner (placement on education hype cycle) | ALT-C (conference presentations, “gam*”) | Ambient Insight (tracked venture capital investment) | Google Trends (“games in education”, mid-year level and direction) |
|------|------|------|------|------|------|
| 2004 | Unplaced | Unplaced | 1 presentation | – | 68 (falling slightly) |
| 2005 | 2–3yrs #2 | Unplaced | 2 presentations | – | 37 (falling) |
| 2006 | 2–3yrs #2 | Unplaced | 5 presentations | – | 36 (stable) |
| 2007 | 4–5yrs #1 (massively multiplayer educational gaming) | Unplaced | 4 presentations | – | 30 (stable) |
| 2008 | Unplaced | Unplaced | 5 presentations | – | 42 (rising) |
| 2009 | Unplaced | Unplaced | 3 presentations | – | 41 (rising) |
| 2010 | Unplaced (also not on shortlist) | Unplaced | 3 presentations | $20–25m | 55 (rising) |
| 2011 | 2–3yrs #1 | Peak (2/10) | 1 presentation | $20–25m | 70 (rising) |
| 2012 | 2–3yrs #1 | Peak (2/8) | 2/231 presentations | $20–25m (appears to have been revised down from $50m) | 58 (sharp decline – 80 at start of year) |
| 2013 | 2–3yrs #1 | Peak (4/9) | 1/149 presentations, 1 SIG meeting | $55–60m | 46 (steady, slight fall) |
| 2014 | 2–3yrs #2 | Trough (8/18) | 0/138 presentations, 1 SIG meeting | $35–40m | 40 (steady, slight fall) |
| 2015 | N/A (was 2–3yrs trend 3) | Trough (11/14) | 4/170 presentations | n/a | 33 (falling) |

We see a distributed three-year peak between 2011 and 2013, with the NMC, Gartner and investors in broad agreement. Google Trends also hits the start of this peak before dropping sharply.

Interestingly, games for learning never became a huge theme at ALT-C (until maybe this year: #altcgame), but there is some evidence of an earlier interest between 2006 and 2008, which is mirrored by a shallower NMC peak.
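
As a crude check on readings like this, the table data is small enough to rank-correlate directly. A minimal pandas sketch, hand-keying the ALT-C counts and mid-year Google Trends values from the table above – Spearman rank correlation seems the safest choice given the incommensurable scales, and shifting one series tests whether conference interest leads the wider signal:

```python
import pandas as pd

# ALT-C presentation counts and mid-year Google Trends values for
# "games in education", 2004-2015, hand-keyed from the table above.
games = pd.DataFrame({
    "altc": [1, 2, 5, 4, 5, 3, 3, 1, 2, 1, 0, 4],
    "trends": [68, 37, 36, 30, 42, 41, 55, 70, 58, 46, 40, 33],
}, index=range(2004, 2016))

# Does conference interest lead the wider search signal? Shift Trends
# back by 0-4 years and rank-correlate against the ALT-C counts.
for lag in range(5):
    r = games["altc"].corr(games["trends"].shift(-lag), method="spearman")
    print(f"ALT-C leading Google Trends by {lag} year(s): r = {r:+.2f}")
```

On twelve data points a positive correlation at a two- or three-year lag would support the “conference first” story, but with a sample this small it can only ever be suggestive.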

Mobile Learning

| Year | NMC (items containing “phone” or “mobile”) | Gartner (items containing “phone” or “mobile”) | ALT-C (session titles containing “phone” or “mobile”) | Ambient Insight (data available from 2010 only) | Google Trends (“Mobile Learning”, mid-year value and description) |
|------|------|------|------|------|------|
| 2004 | Unplaced | Unplaced | 1 presentation | – | 50 (steady) |
| 2005 | Unplaced | Unplaced | 4 presentations | – | 36 (choppy, large peak in May) |
| 2006 | “The Phones in Their Pockets”, #1 2–3yr | Unplaced | 7 presentations | – | 42 (steady, climb to end of year) |
| 2007 | “Mobile Phones”, #1 2–3yr | Unplaced | 10 presentations | – | 57 (fall in early year, rise later) |
| 2008 | “Mobile broadband”, #1 2–3yr | Unplaced | 6 presentations | – | 56 (steady, peak in May) |
| 2009 | “Mobiles”, #2 1–2yr | “Mobile Learning”: Rise 10/11 (low end), 11/11 (smart) | 11 presentations | – | 56 (steady) |
| 2010 | “Mobile Computing”, #2 1–2yr | “Mobile Learning”: Peak 1/10 (low end) & 8/10 (smart) | 8 presentations | $175m | 81 (rising) |
| 2011 | “Mobiles”, #2 1–2yr | “Mobile Learning”: Peak 5/10 (LE), Trough 4/12 (smart) | 8 presentations | $150m | 94 (peak in June) |
| 2012 | “Mobile Apps”, #1 1–2yr | “Mobile Learning”: Trough 1/15 (LE), 8/15 (smart) | 12 presentations | $250m[1] | 81 (steady) |
| 2013 | Unplaced | “Mobile Learning”: Trough 3/14 (LE), 10/14 (smart) | 4 presentations | $200m | 70 (falling) |
| 2014 | Unplaced | “Mobile Learning”: Slope 5/13 (LE), Trough 17/18 (smart) | 5 presentations | $250m | 64 (falling) |
| 2015 | “BYOD”, #1 1–2yr | “Mobile Learning”: Slope 1/12 (smart) | 2 presentations | n/a | 59 (steady, fall in summer) |

Similarly, a three-year peak (2010–12) is visible, this time with the NMC, Gartner and Google Trends in general agreement. The investment peak immediately follows, and ALT-C interest also peaks in 2012.

Again there is early interest at ALT-C (2007–2009) before the main peak, matched by a lower NMC peak. Mobile learning has persisted as a low-level trend at ALT-C throughout the 11-year sample period.

Discussion and further examples

A great deal more analysis remains to be performed, but it does appear that we can trace some correspondences between the key predictive approaches. Of particular interest is the pre-prediction wave noticeable in NMC predictions and correlating with conference papers at ALT-C – this could be of genuine use in making future predictions.

Augmented reality fits this trend, showing two distinct NMC peaks (a lower one in 2006–7, a higher one in 2010–11). But ALT-C presentations were late to arrive, accompanying only the latter peak alongside Google Trends, though CETIS conference attendees would have been (as usual) ahead of this curve. Investment still lags a fair way behind.

“Open Content” first appears on the NMC technology scan in 2010 and has yet to appear on the Gartner Hype Cycle, despite appearing in ALT-C presentations since 2008, and appearing as a significant wider trend – possibly the last great global education technology mega-trend – substantially before then. The earlier peak in “reusable learning objects” may form a part of the same trend, but what little data is available from the 90s is not directly comparable.

(MOOCs could perhaps be seen as the “investor friendly” iteration of open education; for Gartner, at least, they peaked in 2012, fell towards 2014 and disappeared entirely in 2015. They appear only once for the NMC, in 2013. ALT-C gets there in 2011 (though North American conferences were there, after a fashion, right from 2008), and there are 10 papers at this year’s conference. Venture capital investment in MOOCs continues to flow as providers return for multiple “rounds”. I see MOOCs as a different species of “thing”, and will return to them later.)

One possible map of causality: the initial innovation influences conference presentations (including at ALT-C), which raises its profile with NMC “experts”. A lull after this initial interest allows Gartner and then investors to catch up, producing the bubble of interest shown in the Google Trends chart.

Waveform analysis

The pattern identified above is useful, but it does not address our central question: why is the future so dull? Why do ideas and concepts repeat themselves across multiple iterations?

Waves with peaks of differing amplitudes are likely to be multi-component – involving the superposition of numerous sinusoidal patterns. Whilst I would hesitate to bring hard physics into a model of a social phenomenon such as market prediction, it makes sense to examine the possibility of underlying trends influencing the amplitude of predictive waves of a given frequency.
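
A toy illustration of the superposition idea – a pure numpy sketch, making no claims about the real data: add a slow wave to a faster one and the composite’s peaks vary in height, even though each component is perfectly regular.

```python
import numpy as np

t = np.linspace(0, 50, 1000)                    # notional years
slow = 1.0 * np.sin(2 * np.pi * t / 25)         # a slow ~25-year component
fast = 0.4 * np.sin(2 * np.pi * t / 5)          # a faster ~5-year component
composite = slow + fast

# Local maxima of the composite differ in amplitude, even though both
# components are perfectly regular on their own.
interior = composite[1:-1]
is_peak = (interior > composite[:-2]) & (interior > composite[2:])
print(np.round(interior[is_peak], 2))
```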

Korotayev and Tsirel (2010) use a spectral analysis technique to identify underlying waveforms in complex variables around national GDP, noting amongst other interesting findings the persistence of the Kuznets swing as a third harmonic (a peak at approximately three times the frequency) of the Kondratieff Cycle. This can be used to argue for a direct link between the two – mildly useful in economic prediction. To be honest, it looks to me more like a smoothing artefact, and the raw data is inconclusive.

[Image: fig. 1 from Korotayev and Tsirel, “Spectral Analysis of World GDP Dynamics”]
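
The technique itself is easy to reproduce in miniature. A sketch of the same spectral logic applied to the toy composite series above (not to GDP data!):

```python
import numpy as np

# Rebuild the toy composite series from the previous snippet.
t = np.linspace(0, 50, 1000)
composite = np.sin(2 * np.pi * t / 25) + 0.4 * np.sin(2 * np.pi * t / 5)

# Periodogram: power at each frequency, in cycles per notional year.
power = np.abs(np.fft.rfft(composite)) ** 2
freqs = np.fft.rfftfreq(len(composite), d=t[1] - t[0])

# The two strongest non-zero peaks recover the 25- and 5-year components --
# the same logic used to argue that one wave is a harmonic of another.
top = np.argsort(power[1:])[-2:] + 1
print("dominant periods (years):", np.round(1 / freqs[top], 1))
```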

If we examine waves within predictions themselves then, rather than dealing with an open, multi-variable system, it may be more helpful to start from the basis that we are examining a closed system. By assuming this I am postulating that predictions of innovations are self-reinforcing and can, if correctly spaced, produce a positive feedback effect. (I’m also avoiding getting into “full cost” for the moment.)

“Re-hyping” existing innovations – as can be seen in numerous places even just within the NMC data – could be seen as a way of deliberately introducing positive feedback to amplify a slow recovery from the trough of the hype cycle. For those who have invested heavily in an initial innovation that does not appear likely to gain mass adoption, a relaunch may be the only way of recouping the initial outlay.

Within a closed system, this effect will be seen as recurrent waves with rising amplitudes.
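
As a toy model – an assumption for illustration, not anything fitted to the data – multiply a fixed-length cycle by an exponential feedback term and each successive wave peaks higher than the last:

```python
import numpy as np

years = np.linspace(0, 40, 800)
cycle = 8                       # assumed hype-cycle length, in years
gain = 0.05                     # continuous amplification from re-hyping

wave = np.exp(gain * years) * np.sin(2 * np.pi * years / cycle)

# Peak amplitude rises with each successive cycle of the closed system.
for n in range(5):
    window = (years >= n * cycle) & (years < (n + 1) * cycle)
    print(f"cycle {n + 1}: peak amplitude {np.abs(wave[window]).max():.2f}")
```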



I’ve attempted to answer two questions, and have – in the grand tradition of social sciences research – been able to present limited answers to both.

Do predictions themselves show periodicity?

Yes, though a great deal of further analysis is required to identify this periodicity for a given field of interest, and to reach an appropriate level of fidelity for metaprediction.

Has something gone wrong with the future of education technology?


If we see prediction within a neoliberal society as an attempt to shape rather than beat the market, there is limited but interesting evidence to suggest that the amplification of “repeat predictions” is a more reliable way of achieving this effect than feeding in new data. But this necessitates a perpetual future to match a perpetual present. Artificial intelligence, for example, is always just around the corner. Quantum computing has sat forlornly on the Slope of Enlightenment for the past nine years!

Paul Mason – in “Postcapitalism” (2015) – is startlingly upbeat about the affordances of data-driven prediction. As he notes: “This [prediction], rather than the meticulous planning of the cyber-Stalinists, is what a postcapitalist state would use petaflop-level computing for. And once we had reliable predictions, we could act.”

Mirowski is bleaker (remember that these writers are talking about the entire basis of our economy, not just the edtech bubble!): “The forecasting skill of economists is on average about as good as uninformed guessing.

  • Predictions by the Council of Economic Advisors, Federal Reserve Board, and Congressional Budget Office were often worse than random.
  • Economists have proven repeatedly that they cannot predict the turning points in the economy.
  • No specific economic forecasters consistently lead the pack in accuracy.
  • No economic forecaster has consistently higher forecasting skills predicting any particular economic statistic.
  • Consensus forecasts do not improve accuracy (although the press loves them. So much for the wisdom of crowds).
  • Finally, there’s no evidence that economic forecasting has improved in recent decades.”

This is for simple “higher or lower” style directional predictions, based on complex mathematical models. For the “finger-in-the-air” future-gazing and the scientific measurement of little more than column inches that we get in edtech, such predictions are worse than useless – and may be actively harmful.

Simon Reynolds suggests in “Retromania” (2012) that “Fashion – a machinery for creating cultural capital and then, with incredible speed, stripping it of value and dumping the stock – permeates everything”. But in edtech (and arguably, in economic policy) we don’t dump the stock, we keep it until it can be reused to amplify an echo of whatever cultural capital the initial idea has. There are clearly fashions in edtech. Hell – learning object repositories were cool again two years ago.

Now what?

There is a huge vested interest in perpetuating the “long now” of edtech. Marketing copy is easy to write without a history; pre-sold ideas are easier to sell than new ones. University leaders may even remember someone telling them, two years ago, about the first flourishing of an idea that they are now being sold as brand new. Every restatement amplifies the effect, and the true meaning of the term “hype cycle” becomes clear.

MOOCs are a counter-example – the second coming of the MOOC (in 2012) bore almost no resemblance to the initial iteration in 2008-9 other than the name. I’d be tempted to label MOOCs-as-we-know-them as a HE/Adult-ed specific outgrowth of the wider education reform agenda.

You can spot a first iteration of an innovative idea (and these are as rare and as valuable as you might expect) primarily by the absence of an obvious way for it to make someone money. Often it will use established, low-cost tools in a new way. Venture capital will show no interest. There will be no clear way of measuring effectiveness – attempts to use established measurement tools will fail. It will be “weird”, and difficult to convince managers to allocate time or resource to, even at proof-of-concept level.

So my tweetable takeaway from this session is: “listen to practitioners, not predictions, about education technology”. Which sounds a little like Von Hippel’s Lead User theory.

Data Sources

NMC data: summary file, Audrey Watters’ GitHub, further details are from the wiki.

Gartner: Data sourced from descriptive pages (eg this one for 2015). Summary Excel file.

ALTC programme titles:

2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015 (incidentally, why *do* ALT make it so hard to find a simple list of presentation titles and presenters? You’d think that this might be useful for conference attendees…)