FOTA Brexit Nonsense Update 3 – oh god no

(part one) (part two)

This is the post I hoped I wouldn’t be writing – but welcome to Boris Johnson’s “sunlit meadows”.


We’ve been and gone and done it, and now it is all about the consequences. Already today we’ve seen the UK lose its triple-A rating from one credit agency, stock markets plummet, and the pound hit a 31-year low against the dollar (good thing George Osborne calmed the markets this morning, eh?).

Meanwhile, every pathetic racist in the land feels emboldened to make life miserable and uncomfortable for everyone that looks even mildly foreign. Scotland wants out of the UK if it means leaving the EU, as does Gibraltar, as does Northern Ireland.

The wheels are already coming off the Vote Leave promises, there are widespread reports of “buyer’s remorse” amongst leave voters, and the heroes of the hour (Gove, Johnson Major, Farage, Carswell) are conspicuous by their absence.

Both major political parties are currently imploding, with Cameron’s replacement due to start in September and Corbyn’s any time from tomorrow.

And this is T+4 on our “independence day”.  In Book of Revelation terms (which is really the only viable comparison we have) we still haven’t got to the end of the letters to the seven churches – and this with the unlikely figure of Nick Clegg as St John of Patmos.

There are so many questions, and so few answers, I feel justified in stepping into the bounds of probabilistic interpretation whilst (hopefully) dropping some constitutional science on those assembled.

What almost certainly won’t happen…

A rerun of the referendum. Yes – there were lies told. But movements to legally challenge the referendum result, and/or to require a new one (perhaps on modified rules), are doomed to failure. The reason is simple: the referendum wasn’t legally binding. Parliament is sovereign, so this and all other UK referendums (with the exception of the 2011 AV referendum, which had specific provision in this regard) are advisory only. That said, it is pretty clear advice that would be politically difficult to ignore…

A speedy negotiation and withdrawal. To leave the EU, parliament has to empower the Prime Minister of the day to invoke Article 50 of the Treaty on European Union (inserted by the Lisbon Treaty). We know now that we won’t have a prime minister (to all sensible intents and purposes) in post until September at the earliest. And even then, both houses of parliament would have to agree (by majority) to grant this power.

As both houses have significant majorities that are opposed to leaving the EU, and as a person who may well become prime minister (I know, I’m sorry…) has already expressed a desire to take things slowly, this would be no mean accomplishment. And though the EU may demand a speedy resolution, not least to settle the uncertainty that will likely affect economies around the world, it has no power to force Britain to invoke Article 50.

Those who voted to “take control” may reflect here that this is possibly the last point at which we have control over our relationship with Europe, so given our complete lack of preparedness to negotiate, time is very much not of the essence.

Anarchy in the streets. Angry citizens on both sides will march, chant and campaign. Millions of people will sign entirely useless petitions. Some idiots will be violent and deeply unpleasant; they will be (rightly) arrested and imprisoned. But a population that looks to the government to protect it from the consequences (devaluation, economic slowdown, job losses…) of the referendum is unlikely to simultaneously enact full communism/a neo-fascist junta.

What probably won’t happen…

Brexit. Boom. I went there. For the reasons given above, I would be hugely surprised if Britain ever invokes Article 50. And as there are no other ways to leave the EU, that would very much be that. Practically nobody in government wants to actually leave the EU – even Boris has been clear all along that he wants to use a vote to leave as a bargaining tool.

As we progress through a long, painful summer it is likely that the public mood will tend towards “bregret” as the bad news continues. A new PM in September may opt to seal the deal via a snap election won on an explicit anti-brexit stance – possibly with some concessions from the EU to sweeten the deal.

Pariah lawmaking. As the international community attempts to navigate already delicate geo-political and economic waters, the need to punish and isolate the UK is unlikely to be at the forefront of its mind. The remorseless logic of capital, and the “function” of the market in identifying and pricing in risk, will make things far worse for us Brits – punitive policymaking and sanctions from other nation-states or international groupings are very unlikely.

However, there is a small chance that if the brexit vote sparks a wave of similar referendums in Europe the EU may wish to make an example of the United Kingdom – though I’d guess it is more likely that we would already be a cautionary tale thanks to the invisible hand.

What probably will happen…

A UK general election in 2016. Yup – I heard you liked politics so I put some politics on your politics. A general election (already hinted at by Cameron and others) is our grand brexit get-out-of-jail-free card, and an incoming prime minister is likely to want a more recent “remain” mandate to counter the “leave” vote.

A new political landscape. Both major parties are fractured and in disarray – the Tory “leave” right has far more in common with UKIP than with the modernising “remain” wing. It’s been the longest relationship break-up in history but I’d be surprised to see an intact Conservative party as the clamour to remain grows.

Ditto, alas, Labour. Corbyn’s determination to hang on till the bitter end highlights the difference between the Labour left and the centrists, who themselves probably have more in common with the Liberal Democrats or – indeed (whisper it) – the Tory centre.

What almost certainly will happen…

More chaos: of the economic, cultural and political varieties. Also sporting. Despite these reading almost like predictions no-one has any idea what will happen or when. It will be an unsettled summer reflected in financial markets and political paralysis.

Scottish independence. Sorry, England, but we’re not going to get away with this again. Even without a brexit, it’s clear to everyone that Scotland and England are on very different trajectories and can no longer trust a Westminster Government to act in their best interests.

Bonus round

Stuff that I am interested in but I have literally no answers to yet includes:

  • The very dodgy legal ground that depriving individuals of their EU citizenship rights would be on. We generally don’t restrict people’s rights (these days, he says, drawing a veil unconvincingly over our nasty imperial past) unless they are criminals.
  • Effects on European Central Bank investment: despite us not being in the Eurozone the ECB invests heavily in British industry. Will it continue to during this interregnum?
  • The now legendary British Bill of Rights and the UK leaving the ECHR. I’m assuming Brexit panic will give the government chance to finally abandon this insane idea, but I also have a horrible feeling it may be a sop to ardent “give me my country back” folks in the event of a non-brexit. Please no.

FOTA Brexit Nonsense Update 2

(the second in a short series)

It’s been a quarter of a year since I last turned white pixels black with the aim of getting some sense out of the ongoing EU referendum debate. In the intervening time, both sides of the debate have opted to abandon theatre and scaremongering, simply stating facts soberly, and with the barest minimum of interpretation, in order that the Great British Public can make an informed choice come June 23rd.

Or, as it turns out, not.

At this stage, most sensible humans have largely tuned the whole thing out – paying attention briefly to amusing asides like the chap failing to burn the EU flag. Or Bpoplive.

As each side describes the other as desperate… (it’s the insult du jour, folks – the worst thing is being seen as wanting to win so badly you tell your supporters what they want to hear, be it that Grandma needs to be put in a home or that it’s all a massive gubmint conspiracy) a cavalcade of desperate – in many senses – voices are wheeled out for our edification.

There are people in high places in Britain who believe in all seriousness that a word or two from popular crowdpleasers like George Osborne and Michael Gove would affect the thinking of the populace. There are those who consider Kate Hoey and Jeremy Corbyn compelling public speakers.

Any debate that pits Jeremy Clarkson against our own Donald-Trump-with-a-Latin-a-level man of the people Alexander Boris de Pfeffel Johnson should by rights be one to savour, but when something as compellingly awful as Boris’ various attempts at alternate future journalism are met with nary a shrug, we have to consider what else is going on.

The purpose of the Referendum, lest we forget, is to heal a decades-old rift in the Conservative party. Blue blood, blue Audi and blue rinse must come together as one.

And we already know it won’t work. The referendum has already failed before a single vote is cast.

Cameron has been promising a referendum for more than 10 years, both to fend off the forces of senile xenophobia in the form of UKIP and to placate his own fractious backbenches – now it appears he’ll lose either way. Meanwhile the rest of the country looks on in appalled fascination, contemplating arrant nonsense about either TTIP or mass immigration, depending on personal preference.

If you’d indulge me in a little critical theory, the various competing meta-narratives are only congruent in brief moments of intertextuality – with the sheer immanence of the spectacle itself a reflexive attempt to unify a fractured discourse. Or, as we say on Teesside: I canna be doin’ wi’ this bag o’ doyles, I swear down.

Two tribes

A wonderful survey by YouGov back in March demonstrated that it was possible to stereotype each voting tendency with a high degree of confidence.

If you are over 50, of social groups C2, D and E, live in Yorkshire, East Anglia or the West Midlands, and have no formal education beyond the age of 16, you are more likely to vote leave. Whereas if you are under 39, of social groups A, B and C1, live in London, Scotland or Wales, and have a university degree, you are more likely to vote remain.

One can, and nearly everyone does, read their own prejudices into these gaps. There are equally profound party splits, with Conservative and UKIP voters more likely to vote leave, and Green, Liberal Democrat, SNP and Labour supporters likely to vote remain.  (Incidentally, men and women are both split around equally on the subject)


But though it is clearly an interesting survey and one worthy of further study, it has exacerbated the already deep split in the electorate. It is not one ideology about the world and Britain’s place in it against another, it has become the old versus the young, the rich versus the poor, the university graduate against the labourer. Dangerous stuff.

How much?

By now, even the most avid news-avoider must be aware that the £350million a week figure being bandied about by leave campaigners is nonsense – it’s the equivalent of saying that a pint costs £20 because that’s the size of the note you gave the barmaid, whilst ignoring the change that you get back and, indeed, the value of the beer alongside the other delights of the pub.

In gross terms – once the rebate is deducted – we are talking about a membership fee of £250m a week. In net terms, after further subtracting other EU spending in the UK and the amount we count as part of our international aid spending, our national contribution is £120 million a week.
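The arithmetic behind those figures can be sketched in a few lines (the numbers below are the approximate weekly figures quoted above, in £ millions, rounded for illustration – not official statistics):

```python
# Approximate 2016-era weekly figures in £ millions - illustrative
# round numbers matching the magnitudes above, not official statistics.
headline_claim = 350      # the "£350m a week" of the leave campaign
rebate = 100              # deducted before anything is sent at all
membership_fee = headline_claim - rebate               # ~250 gross fee
eu_spending_in_uk = 130   # EU spending in the UK + aid counted as ours
net_contribution = membership_fee - eu_spending_in_uk  # ~120 net
```

Which is the "change you get back" in the pub analogy – before even counting the beer.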

Even this £120 million does not include any calculation of the other benefits we get from EU membership, such as an increase in foreign direct investment (the estimation made by the linked paper suggests that this outweighs the net membership costs by a factor of 10!) Or the savings that transnational companies make due to unified trading standards. Or… well, you can take it from here.

Coming over here, running our essential services…

I don’t even want to talk about immigration at this point – it’s depressing.

I could write paragraphs on how much the NHS benefits from migration, how reliant many of our industries are on low-paid migrant labour, how economic migrants work hard doing the jobs we don’t want to do or are unable to, from Professor of Theoretical Chemistry to central London barista, whilst paying far more in taxes than they ever receive in social security.

It wouldn’t make the blindest bit of difference.

Neither would a few hundred words on the plight of Syrian refugees, asking little more than the right to live in a house that hasn’t been bombed – leaving the world they had built lives and families in with literally nothing, forced to start afresh, drawing on only the skills (often including fluent English) and determination they themselves offer. As boats began to attempt to land on the east coast, one of our most popular politicians was most scared of human corpses despoiling our beaches.

If you are voting to leave to stem the flow of immigration, you can be assured that it won’t work. Actually – if you are lucky, it won’t work… we need the skills of foreign nationals to manage our country.

This is only going to get worse. There will be more Brexit nonsense, and less decent reporting to help you make your mind up. Here are a few of my favourite academic (because most readers of this blog tend to be academics) sources of analysis:

The always excellent LSE blogs have a European Policy portal, which covers more of the soft social science end of the debate. CEPR run VoxEU, which is great for economic analysis. Oxford have an excellent international politics blog. Based at King’s College, the “UK in a Changing EU blog” is a fantastic initiative.  “The Conversation” have an EU referendum portal, and with the previous example this is generally an easier read though not always of the same high quality.

Good luck.

The Pocket and the Politician

In a more general context, a key question that still needs to be addressed is the origin of the [long-range correlations], which are common in a variety of systems. The underlying system must be sufficiently complex, described by a nonlinear differential equation (or many of them), and there must be a proper amount of feedback. However, the origins might largely variate from system to system, and it is difficult to generate universal models that could qualitatively describe, e.g., heart-beat intervals, magnetoconductance oscillations, and drumming intervals in the same footing.

(Räsänen et al, 2015)

So as the message discipline and the mask of respectability falls away, we might view this as a Democratic opportunity.  People who think that words matter– that they should be used responsibly and not to manipulate people through subtle emotional cues embedded in euphemisms and dysphemisms–can celebrate the loss of [Frank] Luntz’s influence.

(Daily Kos, 2015)

Frank Luntz, and men like him (in the UK you could look at Philip Gould or Peter Mandelson, there’s also Lynton Crosby, Karl Rove,  Jim Messina … ) can be seen as the last of a dying breed of political messaging specialists or “spin doctors”. The great, devastating political campaigns of the 90s and 00s were successful only in their own terms – to the outside observer they led to a parade of “machine politicians” who sought power by surrendering ideals.

Luntz (and the others) worked by means of a focus group. Thousands of hours of recorded conversations gave them an insight into terms and language that “played well”, often the language that later appeared on billboards and in interviews started in the mouth of an ordinary member of the public – the return via the ears and eyes was orchestrated precisely to bring about a “resonance” based on the repetition of language already perceived as “common sense”.


In other words, phraseology such as that used in the image above (“It’s not racist to impose limits on immigration”: in that case explicitly rendered in a “personal” hand – though amended, unofficially in another) reinforces what was identified as an underlying pulse of popular discourse.

Repetition amplifies the sentiment – but it will never feel entirely natural. And the power of repetition relies on exact repetition, requiring a huge amount of message discipline. The latter has become a politico-industrial pseudo-science – devoted to the idea of communication without any of the communicative (empathetic) aspects.

So these nuggets of distilled phraseology are seen as a way to make a minimum viable impression on a carefully selected target market. Though the use of “found” phrases (from research or focus groups) is common, these are generally decided on centrally within an organisation before being fed out to often-nonplussed adherents and staff.


Effective? Possibly, in the short term. But, as the rise of Trump (and, indeed, Sanders) and the continuing bewildering relevance of Boris Johnson suggest, perhaps an idea that suffers from the attentions of competing narratives of popular influence.

The theory of message discipline discussed both the quantity and quality of messages – not only must messages be carefully aligned to the language of the target group, but they must be presented uncluttered with other messages. (Lyotard fans should be pricking up their ears round about now). A focus on a few simple messages striates the communicative space, but very broadly and with significant liminal possibilities for a demagogue to exploit. The larger, and “broader” the grouping, the less likely a message discipline approach can capture the full spectrum of opinion and emotion, and the more likely that an off-message individual can find underutilised resonances to exploit.

This year’s GOP primaries demonstrated not just one (Trump), but a number (Cruz, Carson…) of counter-message candidates who were able to exploit a distrust of such a poorly-expressed and tightly constrained narrative from an “establishment” (itself a loaded, and counter-message, term). In Britain, Johnson’s opportunistic and self-centred embrace of Brexit can be seen as a similar attempt to capitalise on years of counternarrative positioning as bumbling, off-message Boris. Ditto the unexpected and unpredictable rise of Jeremy Corbyn.

Those of you who read the first quote, above, and maybe the underlying paper (and you should!) may wonder where precisely I am going with this. The Trump phenomenon as anti-establishment posture has been done to death (alas not literally) all over the popular press. But I daresay none of them have considered fractal patterns within the hi-hat part of a Michael McDonald track in this context.

Jeff Porcaro is a machine. Seriously – it was his brother Steve that suggested the use of samples to power the legendary Linn LM-1 drum machine, and Jeff himself learned to programme one (notably and unmistakably on George Benson’s 1981 “Turn your love around“) . Basically any early 80s LA pop record that has ever made you think “wow… those drums…” – that was Jeff.

Four years before his untimely death in a bizarre gardening accident, Jeff recorded a hugely influential drum instructional video. Here he talks about his hi-hat part in the track Räsänen et al discussed – “I keep forgetting“.

If you have any kind of a musical background you may now pick your jaw back up off the floor.

What I want to note here is both the fluid and utterly mesmeric way he can place any technique on any subdivision of the bar, effortlessly, every time – and the way he makes it sound so natural that you can’t help but move. Drummers are generally either technical players or groove monsters; Porcaro managed to be both, and his feel defined the early 80s.

Last year (yes, I just said that so I could say “it’s been a year since they went away, Räsänen et al…“) a team of researchers analysed the timing and volume of that single-handed 16th-note hi-hat part and deduced that it very clearly isn’t as exact as it initially sounds. Here are the numbers…


The A parts (the intro and verse) tend to slow down, the B parts (the chorus) speed up – both very slightly, but still measurably. This is done for musical reasons, to accentuate changes of mood in the song. Despite this you can still see a periodicity in the shorter spikes representing a “pushed” accent on the same 16th note of every two-bar phrase. This is the “long range correlation” which connects the precision of a virtuoso with the undeniable groove of a human being.
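The shape of that finding is easy to reproduce on toy data. The sketch below is my own illustration (not the study’s data or method): a series of 16th-note timing deviations with a small “push” once per two-bar phrase, whose autocorrelation shows a clear peak at the phrase period – the kind of periodicity that plain random jitter cannot produce.

```python
import numpy as np

# Toy model (my own illustration, not Räsänen et al's data): 16th-note
# timing deviations with a small "push" on the same subdivision of
# every two-bar phrase (32 sixteenths), plus random jitter.
rng = np.random.default_rng(0)
n = 32 * 16                           # 16 two-bar phrases of 32 sixteenths
deviations = rng.normal(0.0, 1.0, n)  # random timing jitter (ms)
deviations[::32] -= 6.0               # the pushed accent, once per phrase

# Autocorrelation of the deviation series: a repeating push shows up
# as a peak at the phrase period (lag 32).
d = deviations - deviations.mean()
autocorr = np.correlate(d, d, mode="full")[n - 1:] / (d @ d)
```

Plot `autocorr` and the spike at lag 32 (and its multiples) stands out well above the noise floor – the periodic structure hiding inside an apparently steady stream of sixteenths.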

Jeff could very easily have programmed the same part in, indeed the George Benson track above uses a broadly similar feel. But if he had, these micro-fluctuations in timing and power would be lost and the track would feel very different.

And no-one told him precisely what to play – he had a feel which he interpreted in his own way to the benefit of the song.

Message discipline could be compared to the hypothetical use of the drum machine, the human effect is lost even though it can be closely simulated by expert programmers. Any movement, organisation or political party that designs in message discipline designs out the fluidity and freedom that allows for a virtuosic interpretation of values and ideals to the detriment of wider goals. You get the precision, but what people really react to is the pocket – not a place where you hold a message but where a message gently holds you.

Baby’s first semantic blog citation metrics #oer16

Note: If you want to see semantometrics done for real, have a read about the Jisc-supported work by Petr Knoth and Drahomira Herrmannova. My approach is loosely inspired by their research – which I’ve been fortunate enough to see presented on a couple of occasions – but in terms of reliability, technical finesse and generally having a clue I’m very much a Robbie Dupree to their Doobie Brothers. I should also be clear that what follows is my own efforts entirely.

In all honesty, and if we look at Vivien Rolfe’s superb systematic reviews of open education, we’d have to conclude that literature in the area leaves a fair bit to be desired. Faced with this I’ve heard it said on a number of occasions that “the good stuff is in the blogs”, and I decided that the time had come to test this.

If open education blogs do have academic merit, I would expect them to be cited in the more traditional literature, both around the subject each covers and further afield. This might seem circular – but as there are clearly gaps in the literature one might reasonably expect blog posts to be filling these.

For the purposes of this experiment (as presented at OER16) I looked at five blogs that I felt were consistently high quality, and that I had seen frequently referenced in conference presentations and similar:

Now the “scholarly graph” (the ways in which publications are interconnected by a network of citations) is notoriously hard to mine, unless you happen to be Thomson Reuters or are in a position to give them money. I am neither, so I needed to use the tools I had available to me, which are necessarily incomplete and variable in coverage.

Google Scholar is far from perfect, but it does let me search for references in a round-about way. What I did was search for the root domain of the blog as a text string, to generate a corpus of literature that (most likely) included a link or citation to a specific page of the blog. I then went to export the details of the query results and found that a simple export to .csv is not possible. You can export individual records to a small selection of bibliographic software, but not the result set as a whole.

I eventually found the marvellously named “Harzing’s Publish or Perish”, which appears to use some Tony Hirst-esque page-scraping magic to automate what would otherwise be a laborious task. I cleaned out duplicate records and self-blog citations (pages from the blog itself being returned, which happened quite a lot for popular individual posts), then rendered to the spreadsheet available here. (I also had a special issue with David Wiley – hey, don’t we all! – as many early papers cited his Open Publication License as a means of licensing their work. I got round this by searching only for citations to his “blog” directory.)

[I also searched for “edupunk” just out of interest – as this was a 2008 term coined to deal with a range of activities that included blogging instead of formal publication. To be honest, it didn’t tell me very much other than that the majority of publications on edupunk are in Spanish.]

Publish or Perish also returns a bunch of those research power statistics that occasionally come up in conversation – and I was tickled to play with the h-index for each corpus of papers I returned. The h- (or Hirsch) index is generally seen as an author-level metric, but it can be calculated to characterise any group of papers. In this instance it gives a reasonable measure of the kind of influence that the papers that cite each blog have.

  • David Wiley – 23
  • George Siemens – 38(! – this is very high for an education related subject)
  • – 29
  • Martin Weller – 20
  • Audrey Watters – 12
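For reference, the h-index itself is simple to compute from a list of per-paper citation counts – a minimal sketch:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations
    each. Usually an author-level metric, but computable for any group
    of papers - here, the papers citing one blog."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank     # the rank-th best paper still has >= rank citations
        else:
            break
    return h
```

So a corpus scores 38 when its 38 most-cited papers each have at least 38 citations – very high for an education-related field, as noted above.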

All of these are – if you are the kind of person who cares about your h-index – pretty respectable showings. In particular it should be noted that Audrey is a journalist (and a damned fine independent one whom you should support!) who makes no pretense to be writing academic research or comment.

[I should note here that I also looked at commonality between pairs of blogs – are any particular two likely to be cited together. The short answer is yes – people who cite George Siemens’ blog also tend to cite David Wiley’s blog. The data is on the spreadsheet.]

My next step was to find the five most common words used in the titles of the papers citing each blog, and compare these to the top five words used in the blog itself (using the standard list of English stop-words built into voyant tools.) A proper look at this topic might take more words from each source, and employ a weighting based on the ratios between word counts, but at this point it was Sunday evening before the conference.
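A minimal version of that word-frequency step looks something like this (with a toy stop-word list standing in for Voyant’s much longer built-in English one):

```python
import re
from collections import Counter

# A toy stop-word list - Voyant's built-in English list is far longer.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on",
              "for", "is", "are", "with", "that", "this", "it", "as"}

def top_words(text, n=5):
    """Return the n most frequent non-stop-words in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]
```

Run once over each blog’s text and once over the concatenated titles of its citing papers, and you have the two five-word lists to compare.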


It’s possible to do a manual sort to look for any interesting patterns – in this case what struck me is how often citing papers talked about technology-related issues or *shudder* MOOCs, whereas the blogs were more likely to consider students or use terms like OER.


I decided to automatically compare titles to the list of common terms from the blog in question using a simple Excel formula. I used the average of the number of instances where the title contained one of the common terms to create an index of semantic prediction – basically, a higher number (with 1 being the highest possible) means that the terms commonly used in the blog are likely to be found in the titles of papers citing the blog. Here’s how that stacks up:

  • Wiley – 0.70
  • Siemens – 0.77
  • Connectivism – 0.58
  • Weller – 0.58
  • Watters – 0.12
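A rough sketch of how such an index can be computed – my reconstruction of the idea in code, not the actual Excel formula – treating it as the fraction of citing-paper titles that contain at least one of the blog’s common terms:

```python
def semantic_prediction(titles, common_terms):
    """Fraction of citing-paper titles containing at least one of the
    blog's common terms: 1.0 means every citing title shares the
    blog's vocabulary, 0.0 means none do."""
    terms = {t.lower() for t in common_terms}
    hits = sum(
        any(term in title.lower().split() for term in terms)
        for title in titles
    )
    return hits / len(titles) if titles else 0.0
```

On this measure a high score means the blog is cited mostly from inside its own vocabulary; a low score, like Audrey’s 0.12, means its citations come from further afield.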

Now Petr Knoth and Drahomira Herrmannova postulate that a greater semantic distance between the citing paper and the cited resource implies a greater contribution to knowledge – works that bridge disparate areas of research could be seen as more valuable as new knowledge has been synthesised via a new connection formed (kinda rhizomatic, wouldn’t you say?)

By this measure Audrey’s work can be said to have the greatest contribution to knowledge, with David and George much more likely to be cited in the field in which they write. Whether this tells us anything meaningful in the grand scheme of things is open to question, not least because of the general shonkiness of my methods – this being just a first look which has much scope for improvement.

It also made me think about Cameron Neylon‘s concerns about our poor understanding of the nature of citation. Citation is one of those things that seems simple at first thought, but has a huge layer of social and cultural practices built on top of it. Unless we have a better understanding of the reasons for each citation (as quote source, acknowledgement, attribution of ideas or methods, cultural norm in a domain of research….) we can’t really assume that all citations carry equal weight – though most serious citation metrics do so. Some of this may be categoriseable using Knoth and Herrmannova’s deep text analysis alongside a carefully designed system of categories and indicators (another reason I am watching that project and other citation experiments with huge interest.)

So – to answer my initial questions:

  • is all the good stuff in the blogs? There is clearly a lot of good stuff in blogs, which is frequently cited by literature that itself is highly cited. I’d love to look at other areas of research with similarly important blogs to compare – any suggestions would be welcome!
  • are blogs cited only within the domains in which they write? broadly no, though some blogs are more often cited in the domains with which they most closely identify than others. Though Audrey’s blog citations showed a low level of semantic prediction, this may be because there were fewer citations overall, and I would like to refine the metric to account for this (possibly via some form of sampling?)
  • is this interesting enough to look at further? Absolutely!

Here are the slides from Viv’s and my presentation at OER16

The Chain

In the 90s, actuaries (and actuarial science) were at a bit of a low ebb, as all kinds of public and private institutions had issues with pension liabilities. The actuaries – whose job it was to make detailed predictions of the likely long-term demands on these funds and compare these to what the funds actually had in them – had taken a decades-long jag of Panglossian assurance that all was for the best in the best of all possible worlds.

One of the odder things they did was to allow fund managers to claim that equities (stocks and shares) were more valuable than any other kind of investment (say government bonds, commodities, derivatives…). This was an accounting convention which allowed “profits” from these equities (dividends or profits from sales, and gains in realisable value during the inflationary years) to be booked several years in advance – thus reducing the net liabilities of the pension funds.

By the mid 90s the flaws in this approach were beginning to show – and by the time of the equities crash in the early 00s (that’d be the dot-com bubble bursting) it was clear that something had to be done, and pension funds began de-risking – selling equities (at a loss) and using the proceeds to buy less risky products with smaller, but near-guaranteed returns.

Or so you’d think.

What actually happened in many cases was that – having been burned by equities, and egged on by the interests of the defined-benefit pension-holders themselves – pension fund managers sought newer forms of investment that could offer a comparable return but at a lower risk. And a product was there to meet their needs.

David X Li, another actuary, began his financial career investigating ways of quantifying risk. Drawing on the copula function (specifically Gaussian Copulas) he came up with a way of predicting correlation between financial securities. Basically, he could turn the correlation between the likelihood of mortgage holder A defaulting and the likelihood of mortgage holder B defaulting into a single number – simple enough for investment bankers to understand.
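The core of the idea can be illustrated in a few lines of Monte Carlo – to be clear, this is a toy illustration of the Gaussian-copula mechanism, not Li’s actual pricing formula:

```python
import math
import random
from statistics import NormalDist

def joint_default_prob(p_a, p_b, rho, n=200_000, seed=1):
    """Monte Carlo estimate of P(A and B both default) under a Gaussian
    copula: each borrower defaults when a latent standard normal falls
    below the quantile of their marginal default probability, and the
    two latent normals are correlated by the single number rho."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(p_a), nd.inv_cdf(p_b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(0, 1)
        # y is correlated with x by exactly rho
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        hits += (x < za) and (y < zb)
    return hits / n
```

With rho = 0 the joint default probability is just the product of the two marginals; push rho up and simultaneous defaults become dramatically more likely – which is exactly the correlation risk that a single, easily digested number was hiding.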

Even though Li specifically warned against it, this (comparatively) simple output from a (largely poorly understood) formula became the driving force of the market for collateralised debt obligations – CDOs. (Basically, a CDO piles a whole bunch of debt into a big box, and then slices it into segments rated at different levels of risk. You can buy a low rate of return with little risk of default (the “senior tranche”), or a high rate of return with a much larger risk of default.) Li’s formula supported the generation of synthetic CDOs (sometimes CDO-squared), wherein the more-difficult-to-sell lower tranches were put into new boxes and resliced into tranches, thus generating new “senior tranches” that could be sold.
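The “big box sliced into segments” can be sketched as a loss waterfall – a toy model, with hypothetical tranche names and sizes:

```python
def tranche_losses(pool_loss, tranches):
    """Allocate a loss on the debt pool across tranches, junior first:
    each tranche absorbs losses until its size is exhausted, and only
    then does the next (more senior) tranche take a hit.
    tranches: list of (name, size) ordered junior -> senior."""
    losses = {}
    remaining = pool_loss
    for name, size in tranches:
        hit = min(remaining, size)
        losses[name] = hit
        remaining -= hit
    return losses

# Hypothetical 100m pool: a 12m loss wipes out the 5m equity slice,
# eats 7m of the mezzanine, and leaves the senior tranche untouched.
tranche_losses(12, [("equity", 5), ("mezzanine", 20), ("senior", 75)])
```

The senior tranche only suffers once everything junior to it is gone – which is why it looks so safe, right up until correlated defaults eat through the whole stack at once.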

And that – it has to be said – ended badly for the pensions of the world.

So where next for your savings and pensions? What about something that combined the volatility and risk of equity with the anonymity, fungibility and algorithmic obfuscation of the CDO? What if we could combine these features with an entire lack of regulation or state backing, and – just for fun – a huge user base in illegal activity? Ready to fill your boots?

Such a product already exists.

There is significant global debate as to what species of financial instrument a “blockchain-derived currency” really is – a debate that has implications for taxation and licensing by governments.

  • Certainly it is tradeable against other currencies, with a widely fluctuating exchange rate – though it is backed neither by commodities nor a government/central bank.
  • With no extrinsic value, and no value backed by a known trustworthy party, it is clearly not itself a commodity, although it behaves as such.
  • It is issued by what could be considered a decentralised autonomous organisation in return for (computational) work conducted – work that is essential to the continuing viability of the organisation. But unlike equity it offers no voting rights or investor protections.
  • There is even some discussion as to whether or not it is a derivative.

What blockchain-derived currencies do offer is a lower administrative cost for transactions, where these transactions are simple. For on-network transactions there are no direct costs – though obviously the conversion to another currency at either end has a cost, and the “hidden” costs of power, connectivity and processing time are not factored in.

In terms of security, blockchains prioritise anonymity and encryption over direct trust. Indeed, “trust” is something of a dirty word – the industries that blockchain is slated to disrupt are generally those based on the need for trust.

Consider, say, a bank: it offers some anonymity (it shouldn’t disclose the contents of my account to anyone without my permission), and some transparency (I am able to read about the strategy of the bank at my leisure, and monitor activity via publication and meeting). But the main reason to use a bank is one of trust – I can (generally) assume that a bank will not do anything to jeopardise the money I have invested in it, and if this trust is misplaced I can (generally) assume that a central bank or nation state will ensure that my losses are minimised.

By amping up the anonymity (it is impossible for anyone to know the content of my account, or link it to me) and transparency (all transactions are publicly listed), blockchain technology removes the need for trust – for all I (or indeed anyone) knows, Satoshi Nakamoto could be anyone or anything, and the other blockchain participants could be thieves and scoundrels or possibly even libertarians. The Ayn Rand forum accepts bitcoin payments – *shrug*
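That “all transactions are publicly listed” property comes from the structure of the chain itself: each block includes the hash of the block before it, so altering any historical entry breaks every hash that follows. A minimal sketch (a hypothetical structure – no mining, no signatures, nothing like production Bitcoin):

```python
import hashlib
import json

def block_hash(prev_hash, transactions):
    """Hash a block's contents together with its predecessor's hash."""
    payload = json.dumps({"prev": prev_hash, "tx": transactions}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the whole history before it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev_hash, "tx": transactions,
                  "hash": block_hash(prev_hash, transactions)})

def verify(chain):
    """Anyone can replay the hashes: tampering anywhere is detectable."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev"] != prev_hash or \
           block["hash"] != block_hash(block["prev"], block["tx"]):
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_block(ledger, ["alice pays bob 5"])
add_block(ledger, ["bob pays carol 2"])
print(verify(ledger))               # True
ledger[0]["tx"] = ["alice pays mallory 500"]
print(verify(ledger))               # False - history can't be quietly rewritten
```

You don’t have to trust the other participants, because you can check their arithmetic yourself – which is the entire pitch.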

Bitcoin – to use the most common example – is a deflationary currency. There will only ever be 21m bitcoins – as transactions become more widespread the intention is to stop rewarding “mining” (the generation of the cryptographic hash that is used to publicly record a transaction) directly with coins and instead allow miners to receive transaction fees. This change will happen automatically and was designed into the bitcoin system.
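That 21m figure isn’t declared anywhere – it simply emerges from the halving schedule: the block subsidy starts at 50 BTC, halves every 210,000 blocks, and is tracked in integer satoshis (10⁸ per bitcoin) so it eventually rounds down to zero. A quick sketch of the arithmetic:

```python
# Total bitcoin ever to be issued, derived from the halving schedule:
# the subsidy starts at 50 BTC (in satoshis) and halves every 210,000
# blocks, using integer division as the consensus rules do.
subsidy = 50 * 100_000_000   # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += 210_000 * subsidy
    subsidy //= 2            # the "halving"

print(total / 100_000_000)   # just shy of 21 million BTC
```

The sum works out at a touch under 21 million coins – the hard ceiling that makes the currency deflationary by design.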

So – like CDOs – a smart algorithm has removed the need for me to trust anyone, and – like the old accounting convention around equities – an implication of guaranteed growth and return is undermined by deflationary reality. All of this is governed by the design of the algorithm – requiring (ironically) that we trust only in the wisdom of an anonymous coder that the various variables and correspondences have been set correctly.

Cue myriad screams of “this sounds AWESOME, how can we apply it to edtech?”

When silicon valley sneezes, edtech catches a cold about three-to-five years later. So if you’ve recovered from the “year of the MOOC” (yep, every year since 2012 it seems) then brace yourself for the first of many years of the blockchain! Hurrah!

Blockchain has provided a handily fashionable hook to attach increasingly moribund ideas like micro-credentialing and badges to. Of course we have to actually trust the issuers of the badges in question, ensure that badges can be rescinded if this trust breaks down on either side, and despite this somehow have a means to incentivise miners to keep growing the chain.

It’s fair to say that the blockchain is as poor a fit for micro-credentials as learning itself is – though this has not stopped various initiatives to combine the two. Despite this, there are genuinely interesting uses of blockchain technology which may have educational resonance: Ethereum is one that I see as being worth keeping an eye on more generally, and Mark Johnson is (as usual) thinking rich and stimulating thoughts.

Audrey Watters is researching this area in more detail, and I would highlight her ongoing struggles as a means of staying abreast of the tide of distributed anonymised ledger based nonsense that this year will bring.



FOTA Brexit nonsense update 1

I’ve a horrible feeling that there will be a few of these over the coming months – primarily because so much nonsense is going to be talked and so few facts will be checked. 

The practice of calling a referendum (or plebiscite) to resolve major political questions via a direct expression of democratic will has a surprisingly brief history within UK government. Only two UK-wide referenda have ever been held: the first was the 1975 vote on membership of the European Community (“Do you think the United Kingdom should stay in the European Community (Common Market)?”), and the second was the 2011 vote on changes to the voting system for general elections (“At present, the UK uses the “first past the post” system to elect MPs to the House of Commons. Should the “alternative vote” system be used instead?”).

A third referendum will be held on 23rd June this year, again on European Community membership (“Should the United Kingdom remain a member of the European Union or leave the European Union?”).

So, in UK politics we tend to call nationwide referenda when:

  1. One or more parties are hopelessly split at an existential level.
  2. The issue at stake is so obscure and complex that only a very small number of wonks will understand it.

In the past everyone has taken the opportunity to air whatever prejudices they happen to have before the status quo option prevails and a significant number of UK politicians enter a decades-long sulk. The political mainstream will then take the result as a mandate to carry on doing whatever they were doing anyway, and the whole thing will be a massive waste of time, money and air.

There’s no real reason to suppose anything different will happen this time. But it’s fun to take the opportunity to nail some common myths being bandied about, and hopefully add a small leavening of fact to the huge number of words that will be deployed for very little purpose.

Myth 1 – “Brussels Bans X”

Easiest one first. Starting in the 80s, initially in one Boris Johnson’s columns for the Telegraph, it became fashionable to make up outright lies about “barmy Brussels” and supposed EU “diktats” or “laws” that stopped honest, brave, British folks from doing – well, anything. Because only about 100 people in the UK actually pay any attention to what the EU does on a day-to-day basis, these are repeated and embellished rather than fact-checked. The whole genre exists as an insight primarily into the often vivid imaginations of UK journalists.

The London office of the EU has painstakingly refuted pretty much all of these stories (latterly the preserve of the Express, Mail and Sun) on their superb “Euromyths” blog. Here’s an A-Z list that covers everything from 1992-2015. It’s at once impressive in scope, and depressing as to how much nonsense has been passed off as news and how much the heirs to Boris (and Boris himself) continue to dissemble. You can follow their day to day struggle on the blog.

Myth 2 – “… unelected bureaucrats…”

I mentioned “barmy Brussels” above – to any lover of British tabloid journalese the full phrase is “barmy Brussels bureaucrats”. The idea of the EU as the last redoubt of the old meddling, lazy civil-servant stereotype is difficult to shake off.

Allow me to drop some legislative process on those assembled. Like our own dear Westminster Government the EU has two legislative chambers, a presidential role and a civil service. However, in pretty much every way the EU is more democratic and more accountable than the Westminster equivalent.

  1. The Lower Chamber. In the UK we have the “House of Commons”, full of our representatives (MPs) that we vote for every five years. In the EU we have the “European Parliament” full of our representatives (MEPs) that we vote for every five years. (we do ruin this somewhat by voting for UKIP people who take all the expenses they can but don’t do any actual work). In Westminster there are political parties who generally vote as a block and ensure the government gets their way. In the European Parliament there are loose groupings which change in every parliament, but MEPs generally vote independently.
  2. The Upper Chamber. In the UK we have the “House of Lords”, made up of people who are appointed there primarily by dint of their birth, penchant for arse-licking or job (if they happen to be a Bishop in the Church of England). It’s probably the least democratic legislative body – outside of actual dictatorships – in the entire world. By contrast, the EU has the Council of Ministers (formally the Council of the European Union – not the European Council or the Council of Europe), which is made up of the Ministers of State from each member state who have responsibility for the topic in question. (So if the topic was, say, immigration the UK would send Theresa May. Yikes.) This reflects the actual government of each country, voted for by popular vote.
  3. The Presidential Role. In the UK we have the Queen (gawd bless her) as the titular head of state. She basically waves at things, and rubber-stamps laws passed by parliament. Once a year she makes a speech (written by “her” government) that sets out what “her” government will do. The only way you get to be the UK head of state is to be born to the previous one. The EU has the “European Council“, which is made up of the heads of the constituent state governments – so David Cameron in our case – and has a full-time president elected by its members for a two-and-a-half-year term. The European Council acts as the leadership of the EU; you become a member by being the head of government of an EU member country. (It’s the Council of Ministers whose presidency rotates between member states every six months.)
  4. The Civil Service. Dead easy – we have the Civil Service, the EU has the European Commission. Both help the legislative bodies and leadership in drafting and implementing decisions. The UK Civil service is led by the Head of the Civil Service supported by Permanent Secretaries (24) and employs some 447,000 people. The European Commission is led by a president, supported by a College (28 senior staff) and employs a little over 23,000 people.
  5. Other bits. The UK has a Supreme Court, the EU has a Court of Justice. The UK has an Audit Office, the EU has a Court of Auditors. The UK has a central bank that controls the pound, the EU has a central bank that controls the euro.

So to summarise, the EU has the same legislative and executive structures as the UK, except the EU ones are – in general – more democratic. Indeed, a recent ERS investigation into the state of EU democracy concluded with a set of recommendations… for Westminster!

Myth 3 – TTIP

If you’re on any form of social media, or if you read George Monbiot in the Guardian you’ll have heard something of these trade agreement negotiations. These are some things you think you know about TTIP:

  • It’s secret
  • It’s an EU plot
  • It’s a way to destroy democracy, sell off the NHS, and make kittens look sad.
  • If we leave the EU then TTIP will go away, somehow.

None of these things, in the grand tradition of EU journalism in the UK, is actually true.

Here’s the EU TTIP twitter account – have a read down it. (That’s just one of the many ways they are communicating – here’s the main website, here’s the complete set of negotiating texts, here’s a statement from the chief negotiating officer on the last (12th) round of talks.  There’s even a snapchat channel (!)) So it’s only a secret in the sense that very few people are reading it.

As for being an EU plot, you may have spotted that the EU are negotiating on our behalf with the US. We can see, read, and comment on the EU position – a recent speech by EU Trade Commissioner Cecilia Malmström suggests that these comments are taken into account in developing the EU position. For instance, concerns about secret (ISDS) courts are addressed in the EU position – they won’t be secret, they’ll be live-streamed and led by an actual real judge. Concerns about public sector bodies like the NHS are addressed through negotiated exemptions.

The alternate (“brexit”, or “flexit”) position is that something very like TTIP would be negotiated between the UK and the US, and the UK and EU (and the UK and Canada). Rather than the fairly sensible Malmström (a centrist, vaguely New Labour-ish, Swedish politician) these negotiations would be led by Sajid Javid – a man who freely admits to the central place Ayn Rand’s “The Fountainhead” has played in his life. For those who want to preserve the NHS as we know it, it would seem that the EU negotiating team would get a far better deal.

There’s so much more to go into, but generally you are more likely – it seems to me – to be misinformed by our press than the EU.

#altc and me

The last working day of January saw the release of the ALTC2016 call for proposals, an eerie advance echo of mid-autumn at the very start of the year. In times past I’ve taken this as a spur to take the various ideas and half-baked inferences that usually turn up on here or on my twitter, and turn them into something I could probably present at a serious conference like ALTC.

In practice this meant I would come back to the idea at the start of the summer and spend those long summer evenings playing with video clips, or data collection, or textual analysis, or reference collecting…

… actually let’s see some uncharacteristic FOTAlove for a moment. I’ve presented each year at ALT since 2010, and every year has been my absolute A game. Six years of FOTA awesomeness:

ALTC2010 – Flat-out nailed the xMOOC movement and attendant “employer needs” hype, more than a year before it happened. No big deal.

ALTC2011 – Future of open education, but with an old west theme, featuring Amber Thomas, Helen Beetham and Dave White in cowboy hats and me as a travelling preacher man? Oh and the first ever ALTC session trailer? Done.

ALTC2012 – Just a meticulously referenced and definitive history of sharing learning resources in the UK, co-authored with Amber Thomas. That’s all.

ALTC2013 – The hype-strewn rise of the global MOOC movement, told via deep text analysis of the very worst of bandwagon journalism, and with a little tasty soupçon of French critical theory.

ALTC2014 – You know that whole backlash about social media as a teaching tool? Privacy, ownership, twitterstorms, peak social. And memes. Lots of memes.

ALTC2015 – The actual future, the predicted future, why both of them suck. With added economic theory. (incidentally, the NMC Horizon Scan 2016 seems to fit the pattern…)

Now to paraphrase Jim Groom:


Six years of stuff that, looking back, I’m actually pretty damn proud of doing. Blog posts that have had actual serious citations as scholarly work. Multimedia – even live music. And a whole range of critical perspectives, methodological adaptations and detailed analysis. I’m not going to claim any of it changed lives or caused a revolution, but I am going to say that it made the conference, and indeed the edtech field, better and more interesting for some people.

So you’d think that I’d be planning something equally cool for ALTC2016 – but the sad fact of the matter is that I am almost certainly not even going to be attending. Simply put, I don’t work in #EdTech any more, not by choice but by the fact that my employer needs me (and more importantly, pays me) to work in other areas.

Even the most generous of observers would struggle to match what I presented with my actual job in any given year. But actually now working in an entirely non-cognate field means that the very generous leeway that I’ve been offered in the past (I want to single out @sarahjenndavies for her support and tolerance) is just not even going to be arguable this year.

So here’s a few things that I might have submitted this year (some of these may leak out as blog posts during the year if I get time)

  • I’m sure I’d have had more to say on the idea of debt, scholarly debt and the way we count it in academia.
  • Whither education (technology) research? Who is funding, and who is legitimising, people trying to figure out how to actually deliver “excellent teaching” in this data-driven “post-theory” world? (According to one source, Pearson!? – you know I can’t resist that)
  • Connecting those two, whatever the TEF turns out to be, or not to be, would be something I’d love to analyse.
  • Billionaire-driven edtech as a service – or why does a person, in receipt of a good living from ads on social media, feel obliged to try and fix education with money and bad tech? And why does it never actually work?
  • Online learning at a distance – updating Dave White’s 2010 work (that was actually done by Marion Manton and the team at Oxford Continuing Education, mind you!) to encompass the Second Great Online Learning Bubble.
  • The effects of [insert bleak 2016 event here, probably Brexit or President Trump or both] on global higher education, and the affordances or otherwise of technology in addressing these.

I certainly hope to write about all, or most, of these this year. I’m just sad that I’m unlikely to be at ALTC to share and discuss these and many other issues with the wonderful UK edtech community in 2016. Maybe my circumstances will change, but for now I can only encourage all of you to get on and submit something properly awesome to the conference.

Scholarly debt and deficit

“Violence and quantification are intimately linked” – David Graeber

For those of you who have yet to read Graeber’s “Debt: the first 5,000 years“, the central theses are:

  • Debt, seen in the sense of owing something to the general well-being of your community, is far older than both money and barter; and
  • At the various moments during history that this community-minded sharing has been disrupted by attempts to quantify debt, violence and poverty have been close behind.

For those of you who *have* read the book, you may be wondering why I have brought it up again four years after publication. Certainly, I do so with a wariness of the kind of historico-economic “cycles” that would suggest a means of predicting the future.

What Graeber calls the “axial age” period, placed between 800BCE and 600CE involved an explosion of scientific, philosophical and theological innovation – including the founding of all of today’s major world religions – and also saw the invention, primarily as a means of paying soldiers and collecting taxes from expanding empires, of a way of readily quantifying social and personal debt by means of coinage.

Easily portable, and with a known value backed up by military might, coins encouraged trade between soldiers and those whose territory they had occupied. Coins exchanged for food, supplies (including precious metals for the minting of coins) and slaves were the only means a conquered town or country were permitted to pay the required tribute to a distant imperial capital.

Coupled with the birth of mass literacy and the rise of rational calculation (after, for example, Pythagoras), this enabled an affectless data-driven means of managing human interaction – as Graeber describes “impersonal markets, born of war, in which it was possible to treat even neighbors as if they were strangers”.

Here a “stranger” is someone from outside of a given community. Their standing and trustworthiness are unknown – so rather than an inculcation into a communistic tradition of sharing and loyalty, all transactions are based around direct equivalence.

This was a great model for financing an expanding empire, and a strikingly modern economic plan. But when expansion ceased and empire faded, both coins and soldiers disappeared, and taxes could no longer be collected. Though the values of the old coins could still be used where value calculations were required, credit – based on interpersonal interaction, and often nominally denominated (if not actually settled) in equivalence to a currency or other reliably valuable commonplace (food, fancy goods, maidservants) – reasserted itself.

And as empires crumbled around the world, civilisation entered what are called medieval times, or the “dark ages”. Except something very interesting happened.

Nalanda – possibly the world’s earliest independent teaching institution focused on higher learning – was founded right at the beginning of this medieval period (in northern India in 427CE), and was followed by the growth of Madrasah in the Muslim world, the rise of Confucian institutes in China and eventually (the west was a little off the pace here) the Universities of Bologna (1088), Oxford, Salamanca, Paris, Padua and so on.

As I’ve written elsewhere, universities came into being when scholars and teachers banded together to support each other both in the practicalities of administration and in the expansion of knowledge and perspective. Relying largely on unspoken and unexamined ideas about the nature of their academic society (the first codification of this at Oxford, for example, came in the Laudian Statutes of 1636), members shared and supported a common scholarly endeavour. What we can argue here is that this is the birth, in the wider sense, of scholarly debt.

This was a parallel to changes in the wider economic life of societies and the re-assertion of the idea of an unquantifiable debt to the home community – manifesting not in a quantifiable culture of repayment but in a general state of openness to share what was needed with other community members.

Surprisingly little has been written from a historical or anthropological perspective about the concept of scholarly debt.  Finnel (2014) and Allen et al (2014) both take an instrumentalist perspective, examining the rise of the practice in order either to re-assert the role of the librarian (yay! librarians!) or as groundwork for quantitative measurement.

We use the idea as shorthand for the whole “standing on the shoulders of giants” idea, but I would argue for a conception that involves standing and membership within this strange medieval idea of “academia”. “Scholarly debt” is the reason we review papers, present at conferences, heckle at conferences(!), contribute to policy debates and do “public intellectual” things and – hell, why not – share stuff with colleagues, collaborators, friends and fellow-travellers outside of an expectation of a quantifiable return on this activity.

Which is a really, really, medieval way of behaving from an economic perspective. We have no way of knowing if the person whose paper we are giving a (hopefully) helpful in-depth peer review to will directly reciprocate – but we do know that someone in the community will, and that if we didn’t participate in peer review our standing in the community would be lowered.

(I’ve sneaked in peer review here – but a few of you will have spotted that it is a bit problematic in this context. The first recognisable academic journals – Philosophical Transactions of the Royal Society and the Journal des sçavans – came into being in 1665. These replaced an earlier method of review by private correspondence within distributed scholarly societies, which itself was a proxy for discussion and debate between peers. In the modern era, and until very recently, the published journal article has been the de facto standard for scholarly communication. However, in using peer review as an example I am highlighting one of many roles academics take on for the health of the academy or discipline rather than for their own direct benefit.)

For the following millennium this peculiar set of behaviours resisted challenge from a variety of new perspectives – from the disruptive entrepreneurs of the Enlightenment, the bureaucracy and managerialism of the industrial revolution, the white heat of technology and research exploitation, and the relativism of the post-modern turn. Though the global economy entered a period that seemed closer to the Axial age and Graeber’s military-currency-slavery nexus, academia seemed curiously separate – contributing on occasion but not altering practice (if you are thinking of Anathem here, I’m with you).

Historically, the shift between a community-focused reputational credit and a quantified system of debt within any given society has been marked by a great deal of violence. The quantification was imposed by violence (the need to pay the taxes of a conqueror in their own currency), and drew further violence (peasants’ revolts, insurgency, millenarianist religion) in response, especially as newly denominated debt led to the development of a non-landowning precariat, and debt peons were created.

So that sounds like this evening’s news for society as a whole (if you lump Trump and ISIS in with millenarianism, which is both largely accurate and wonderfully ironic), but what of academia, which has been resisting both modernity and quantified scholarly debt since the renaissance?

Alas, though violence (through quantification of debt) is dealt to academia, it is not resisted in kind. The latest technological push to quantify all of the things (that’d be “big data”, as previous attempts to quantify the unquantifiable failed because the data was just too small, I guess) has resulted simply in the normal response of attempting to gamify whatever system of equivalence is enforced with the violence of finance.

Even this gamification is a capitulation to the quantification of scholarly debt (what is the JIF of the journal you review for, how prestigious is that conference, how widely are you cited and by whom, how many twitter followers do you have…)  simply because what we owe to our institution -as a proxy of the academy as a wider ideal – now has an exchange rate set, and we are only quibbling around the exact equivalences.


So I was going to finish this on an upbeat note – a fight to be fought, a standard to be carried and the permanence of scholarly regard as the last community/reputational credit. But Prof Smithcote (from the superb Cow Country) is probably correct that in the future we will all be regionally accredited – both in the short term by the seemingly unstoppable march of quantification and efficacy into previously unmeasurable areas, and in the long term as we re-establish our alternative during the next great dark age. And if you think that’s a step too far, just look at Trump’s numbers.

SD cards in SPAAACE

So Jisc Futurist Martin Hamilton took to the pages of the venerable Ariadne to call for the inception of a Jisc Space Program Project. As specified, it’s a fairly simple mission.

But the really silly thing here is how (comparatively) easy and cheap part two could be.

Starting with some numbers: the base price for getting a functioning satellite into space is just $8,000. If I sold my car and Brian Lamb sold his car then we’re nearly there. That money gets you a base kit for a TubeSat (a beer-can-sized hunk o’ metal and PCB that can broadcast a signal back to earth) and puts whatever you do with it into low earth orbit. Thinking bigger, you can have a CubeSat (a 10cm cube) up there for $12,500, including the base kit. Both of these come under the emerging spacefaring category of pico-satellites.

By simply affixing the SD card to the satellite frame of our choice (probably with a tried-and-tested space adhesive like duct tape), we’re already backing up gigabytes of data in space for a laughably small amount of money.

But low earth orbit ain’t enough for Hamilton – nope, to avoid the all-engulfing apocalypse he predicts he wants data on the actual moon. Funnily enough, NASA are already working on a project to send a CubeSat to the moon with a solar sail. A solar sail is just what it sounds like, a shiny sheet that allows the constant energy from the sun to push an object forward in space. It’s slow, but (comparatively) cheap and it will get you there. And all we need to add to our craft is a means of deploying such a sail when low orbit is reached.
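“Slow, but it will get you there” is easy to put rough numbers on. A back-of-envelope sketch (hypothetical figures, loosely borrowed from NASA’s NEA Scout design: an ~86 m² sail pushing a ~14 kg CubeSat, treated as a perfect reflector at Earth’s distance from the Sun):

```python
SOLAR_FLUX = 1361.0          # solar irradiance at 1 AU, W/m^2
C = 299_792_458.0            # speed of light, m/s

sail_area = 86.0             # m^2 (assumed, NEA Scout-ish)
mass = 14.0                  # kg  (assumed)

# A perfect reflector gets twice the photon momentum flux: F = 2*P*A/c
force = 2 * SOLAR_FLUX * sail_area / C       # newtons (well under a millinewton)
accel = force / mass                         # m/s^2
delta_v_per_month = accel * 30 * 24 * 3600   # m/s gained per month of sailing

print(f"thrust {force * 1000:.2f} mN, roughly {delta_v_per_month:.0f} m/s of delta-v a month")
```

A thrust you could balance on a postage stamp – but it never runs out, and a few hundred m/s accumulated over several months is the right ballpark for nudging a small craft out towards the moon.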

Now we’re traveling in space we have to start thinking about steering and trajectories and suchlike. I’d be looking at something like these nano-thrusters (powered by Teflon, no less) to get us pointing in the right direction. And we’d now need some kind of on-board computer to control everything, maybe a smartphone. And thus a couple of solar cell arrays to power the thing.

(a note: the physics and the software development involved here are a huge deal… I’m largely ignoring this to make a point, but – even if we could build on well-documented previous projects – the cost of this work would be enormous)

(of course, Jisc are off the pace here: the University of Surrey already launched a smartphone into space as a functional satellite in 2013.)

So, we’re basically at the moon for a little over $12,000. Now all we need to do is land, or at least crash in a controlled way that protects our precious cargo.

In the bad old days a lunar lander had a whole assembly, with thrusters, just for this. But we don’t have enough room – so I’d propose that once we got near enough to the moon an inflatable cushion would cover the satellite. The small amount of explosive needed to deploy this could be used to get rid of the solar sail and get us moving towards the surface of the moon at speed. As soon as we hit the rocks it would gently deflate, meaning that we wouldn’t bounce off and get lost in space.

And there Martin’s SD card will lie, undisturbed, for many years – until it is discovered and read by intelligent aliens as a testament to the former existence of a race of beings that didn’t quite make it.

9 things to watch out for in 2015 – redux

Round about this time last year I did “9 things to watch out for in 2015” – a seemingly popular post that dared to predict some things about this year that weren’t entirely obvious. I did this pretty well in 2014, so now’s the time to take a look at how I managed this time around…

… and I would have to say that my performance was credible rather than earth-shattering. On some points I’m spot on – on others, clearly dreaming.

Those who enjoy irony will note that I wrote quite a lot about not-so-good prediction this year, so maybe you could use the points I made against me in the comments to this piece.

1. Education policy in politics

“I’m not going to win any points by predicting a UK general election next year, or an unusual result that is likely to mark a decisive shift away from the two party politics that have dominated the country since the second world war… What I am predicting is HE policy being a point of clear distinction between parties.”

Yup, we had an election and that was a thing that happened. And the collapse of two-party politics seems to be taking the form of Labour tearing itself inside out.

But the parties *did* differentiate themselves on HE, sort of – remember Labour’s £6K fees idea? No, me neither.

But maybe half-a-mark here, for being accidentally right on some points.

2. Academia against the institution

“Expect more strikes, more protests, more hard questions asked of institutions – and hopefully some answers.”

Yup. The ongoing round of industrial action against the OU is telling in the incorporation of informed strategic and financial critique. As yet, few answers, but a feeling of managers being held to account.

3. W(h)ither the MOOC

“I’ll come straight out and predict that at least one major platform will either close entirely or move away from offering free and accessible online courses in 2015.”

And I read this today on Coursera’s blog. Students paying for assessment from January 2016. That’s pretty much a pivot away from free and accessible online courses. I would never, if I’m frank, have expected that from Coursera (I’d kind of reckoned on FutureLearn as the platform to go first – but they still seem to be able to draw down money from the OU as required).

4. Teaching quality enhancement metrics

“learning gain will inform the key policy arguments about learning technology in 2015, and that we will see a welcome collaboration between SRHE and ALT in response.”

Oh God, and how. The Teaching Excellence Framework has defined the year in teaching quality enhancement – going from a dumb idea in the Conservative manifesto, to Jo Johnson (who wrote said manifesto) being made HE minister and actually trying to implement it, to George Osborne omnishambles-ly linking it to inflationary fee increases (at a time of deflation, LOL right?).

And then HEFCE (RIP, 4eva in r hearts) still came on like Learning Gain was a thing.

And then the green paper came out (eventually) in November, and it was as big a dog’s breakfast as any of us imagined. Now every other article in the UK HE press is about teaching quality metrics and it remains as stupid and bad an idea as ever.

(ALT and SRHE haven’t stepped up to critique it, though Wonkhe – increasingly the home of practical HE policy research – did a sterling job)

5. Independent researchers

“Like it or not, much of the significant work on education technology and education policy will be done by people in their own time, and with little or no external funding. My prediction here is that independent researchers in a number of non-science fields will begin to organise themselves for mutual support and benefit.”

No less a figure than the mighty James Wilsdon highlighted the need for investment in research into education policy in his seminal “metric tide” report. And although I know anecdotally of unfunded researchers in edtech, education policy and elsewhere grouping together to support each other (via conversations at various conferences), there’s been little visible movement.

It’s worth mentioning the steady and beautiful growth of OpenLibHums here – one of the best HE stories of 2015, and one supporting humanities scholarship in a tangible and sustainable way.

6. Authenticity 

“Many of the most powerful arguments made about the condition of academia in 2015 will not be framed in financial or statistical language. They will be pure, beautiful and true.”

I had to sit back and think about this a bit, but what was that whole range of post-OpenEd15 blog posts but an authentic response to a movement appearing to focus on financial and statistical language? As well as a wonderful re-awakening of a range of critical perspectives on openness, this was a visible example of the power of authenticity, culminating in the best thing David Wiley has written for a long time.

7. Students as ______ ?

“Sometime in 2015 we will see the development of a proper position that sees the consumer aspect of these interactions as one part of a very complex whole.”

Not really, no. Though the NUS flirting with a critical position on TEF was a pleasant surprise.

8. Uncapturing the lecture

“I predict a resurgence of the lecture – as outreach, as destination and as the cornerstone of the higher education experience.”

Again, not really. Lectures retained the place that they always have had in HE. I could still see them becoming an “event”, and public lectures still form a more sustainable and valuable form of outreach than MOOCs will ever be, but as a 2015 key trend… no.

9.  Collaborative tools

“So in 2015 the technologies that will impress us most will be collaboration tools of various purposes.”

That’s pretty vague, for which I apologise. But a year that has seen the launch of Mike Caulfield’s federated wordpress and the Thompson Rivers University “SPLOTs” can be said to embody this trend of simple collaboration. And I’m increasingly enamoured of as a means of quickly deploying collaborative tools on demand.

So – I make that six out of nine – a 66% accuracy rate – slightly better than a monkey with a wordpress would have done. Join me in a few days for some predictions about 2016; perhaps FOTA has come of age as a means of predicting the future of UK HE policy and edtech, but I can be certain that love has come of age.

Take it away, Kenny Loggins!