2018 didn’t happen

It’s still, basically, 2016. We’re still in shock over the two huge geopolitical convulsions that have logjammed the anglosphere – and, though we’ve done our best to kid ourselves that steps are being taken to recover, in reality nothing of the sort has actually happened.

The trails of Trump and Brexit both appear to lead back to Russia – findings confirmed multiple times in the mountains of opinion, research, conjecture, and official statements that we have amassed over two-and-a-bit years of hand-wringing. None of this has made a blind bit of difference to anyone other than the small number of low-level functionaries in the UK and US who have been found guilty of breaking an actual law.

Meanwhile – from poison in Salisbury to drones in Crawley – the UK feels less safe and less orderly, a sensation that can only be heightened by careful viewing of goings on at Westminster. If I did a book of the year prize it would have to be Erskine May – never has parliamentary procedure been more newsworthy. Procedure is also a great substitute for activity – most government time this year has been spent in interminable debates on broad-brush topics. Speeches in the Commons are now made with at least half an eye on how they can be edited for sharing on Facebook.

The leader of the opposition is stage-managed to a degree that would make Peter Mandelson blush. But Blair never had a cadre of fans determined to paint his every act as strategically designed to further the cause of whatever socialism now is. It’s as if, having seen how Theresa May used a tone-deaf, core vote strategy to narrowly win an election in 2017, Labour are intent on copying her. So, in flitting between nearly taking a strong line on actual issues and (again Blairish) schools-and-hospitals style crowd pleasing, nothing has changed.

Meanwhile on the other side, the grand strategy appears to be that we’ll eventually feel sorry for Theresa May because of how useless she is. This of course means that we forget the cold nastiness of her Home Office days, and mistake her arrogant refusal to ever admit she was wrong on anything as some kind of inner strength. Government resignations (including – a personal highlight – the two most recent Higher Education ministers) from every possible ideological persuasion have done nothing to staunch the inevitability of a shambolic disorderly exit from the EU – and with the 1922 committee card played and lost, the rest of the party seems out of ideas.

This legislative inertia points the way to a disorderly Brexit – the deal on the table being an uncomfortable reminder that red lines don’t allow for a blue sky. It is indeed the best and only deal in that it is the only deal Theresa May could accept – with her increasingly childish parlaying of “laws, money, and borders” into an end to the kind of international cooperation that we spent so long trying to convince the former Eastern Bloc to adopt in the 1990s.

So much of the thin gruel on offer can be traced back to a bizarre hatred for the European Court of Justice. A tale of one woman against the very idea of international law. The checks and balances that have prevented a world war have never looked so vulnerable.

There are any number of awful stories about people disadvantaged by this retreat from the global stage – and the numbers hurt by stupidly implemented UK policy have grown too. Universal Credit – in normal times – would be seen as a totemic failure of project management. Ministers would have resigned over it. But in 2018 it’s been mood music.

The man who literally wrote the book about the science of project delivery in government now spends his days trying to prove how tough a universities regulator he can be. But if 2018 has had a theme, I’d go for the weakness of project delivery (with maybe our collective rediscovery of the unicorn as a counterpoint).

Agile project management – invented by software developers so they could ignore specifications and avoid writing project reports – is an essentially reactive structure. One leaps from bad idea to bad idea, hamstrung by the need to “ship” something, anything, and get through to the next scrum. If you wanted a case study in the dreck this process can produce, you really couldn’t do better than the Home Office Settled Status mobile app. Notoriously unreliable, and compatible with so small a handful of Android phones that law firms and universities have resorted to buying ones they know will work – it’s a metaphor for how badly we’ve prepared for everything.

Brexit – I sometimes believe – has been an Agile project. It kicked off far too early, and the first iterations were riddled with basic logical errors. The instinct has been to polish presentation rather than build core functionality – where work has been done it has been on aspects that are meant to impress users.

Meanwhile Trump starts from a blank page each morning, and often manages to upset or offend just about everyone by the end of the day. Yet he still has around 40% of the US population agreeing with him. Expecting him to deliver anything would be so basic a category error that the mere idea seems laughable. As I write he’s shut down his government to build a wall – a wryly apt season finale for the scripted reality that is US politics.

Or maybe that’s too generous. Scripted reality was what the Trump campaign felt like, or the early red-white-and-blue Brexit days. Remember the Brexit dividend? If there’s a script now you need to be a conspiracy theorist to discern it – but there again isn’t everyone a conspiracy theorist now?

And shall we talk about how Putin seems to sit at the centre of everything? A modern day Rasputin using magic to control the world? Democracy as a plaything in the world’s first genuinely post-democratic state? Are we perhaps projecting here a little? If we find a villain does that mean we are absolved of our own villainy?

I’ve been thinking a lot about three other periods of history this year. Have these as three ghosts of Christmas past if you will.

  • The first is the early 90s – the “End of History” days where we all felt that liberal democracy was the cut-scene at the end of the final level… that the Generation X idealist cynicism was the future of progressive protest. We felt like we could leave the strife of the past behind us even as we invaded oil rich countries. Shorn of our own significance, we replayed the prime mover moments of our history, until two buildings fell and the age of fear began.
  • The second is the end of the First World War. Years of pointless deaths ended in the least military way that could be imagined – a far left uprising. We entered into international relations but retained the caution and lust for vengeance. Meanwhile, capital began to collapse everywhere – reality didn’t agree with capitalism so we broke the links to reality. But in the midst of this the golden age of civic responsibility continued, and we still believed that progress was possible.
  • The third is the middle of the fifteenth century. New media gave a voice to the unheard – the elite didn’t like what they said so we overthrew them. Religion and direct experience won out over scholarship, history, and reason – but only because the latter was in the service of corruption and funding. The upheaval lasted only a decade, but the scars shape the world we live in.

We are in the same place that we were twelve months ago – the only change is that matters that were once pressing have become urgent. The dwindling pro-Brexit (or pro-Trump) rump are the “snowflakes” we hear so much about, painfully sensitive to the idea that anyone can hold opinions that disagree with their own received thinking. We’re carving out safe spaces in the conversation about our future for those who cling to old, discredited ideas based on fear, hatred, and wishful thinking. Sometime soon we need to face this down – agree that freedom of speech does not guarantee a respectful hearing.

But I don’t think 2019 will be that year. 2019 is another placeholder.

The leap not taken

In which the author uses outdated critical theory to draw cultural lessons from a not-very-good young adult book and film. Just imagine that I’ve taken over You Yell Barracuda for the day or something.

So it turns out, culturally, we’re actually OK with experts – especially experts in the humanities and computer sciences domain.

The nerd wish-fulfillment that is Ready Player One – both the Spielberg film and the (slightly #problematic) Ernest Cline novel – can be read, with a following wind, as a validation of properly old-fashioned academic shibboleths like the idea of a Canon, citation practice, contested scholarship, librarianship and – for the post-modernists – bricolage as creative project. I’ll admit to blanching a little when people get dates with manic pixie dream girls via a viva, but for the most part RP1 as academic hero’s quest seems to hold up.

Wade Watts doesn’t really work as a hero in any other way. All the other members of the “high five” have practical skills – Daito’s martial arts, Aech’s self-sufficiency, Shoto’s magnificent eleven-ness. Art3mis is a more traditional hero in that she actually does stuff, organises things, takes risks and has a proper story arc with an explicit motivation.

But (filmic) Wade is useless – he hasn’t really done anything apart from sit in his room, gather facts, and make connections. He’s utterly unused to, and largely oblivious to, the real world with its jarringly real problems of social collapse and fuel poverty. Until people actually come along and connect him to the real world, you don’t really get the sense that the Egg quest is anything but metatextual play for him.

Our “real world” is an abandoned, liminal space. It is heavily implied that people have turned to Baudrillardian simulacra in the most crushingly obvious way – a retreat into a fantasy constructed from the detritus of an Eighties childhood. Damn, RP1 needs theoretical sociologists – but Wade is concerned with the text(s) rather than the context.

Until he gets sucked into something approaching a grand narrative by a scholar (and creator) of a previous generation. Again this narrative is textual rather than para-textual – we get hints that Wade’s pure concern with the text itself is a strength in that he is beyond the more worldly interest in the implications of the prize.

Sure, he’s against the idea of IOI owning the Oasis dreamworld – but only, really, because it would obstruct the purity of the text. There are huge issues of inequality (the film goes for a convertibility between real and virtual currency absent in the book) within the fantasy itself, but these are of no concern to Wade – neither is the poverty of in-world creativity (with the usual future-culture gap – why were no popular culture stories released between now and 2045?) – as for many a good postmodernist it is all about the intertextual play.

But the quest for historico-cultural connections, and indeed the very idea of an “Easter egg” – something that has never been found, discovered via novel and deep research – that, to me, is an academic project.

However Wade’s prize is five hundred trillion dollars and ownership of a large MMORPG, rather than the chance to compete for an adjunct teaching-only role. I suppose this is research selectivity taken to a logical conclusion.

So this is mainly for my own entertainment at this point, but is there anything we can actually learn from all this?

Well, the purity of the academic project is maybe one part. With two classic unworldly scholars running around – one awarding the prize, the other winning it – we could maybe draw a lesson that academia holds itself to separate standards beyond the s(a)ecular world.

We could maybe say something about the value of humanities research – fundamentally Wade is into the field of late c20th popular culture, and the life of an old programmer, because it is damn interesting. The narrative arc is useful, yes, but to other people rather than him.

His eventual re-connection with the “real world” (and his subsequent decision to limit access to simulacra!) is quite a peculiar end point. You get the sense he’d have been less happy than he was at the start of the film (the love story between him and Samantha/Art3mis does not in any way convince, let’s be honest) and his decision to pull the ladder up after himself – you know, has he become some kind of a Vice-Chancellor here? – is out of character.

Yes people need to focus on the real world. But just occasionally, people don’t. Again, this isn’t a feature of the novel – there Wade will turn the virtual world off, but some day in the future – after he’s finished this next level, watched this next film, written this next paper…

I sometimes feel like academia and scholarship are beginning to shear away from the “real world” – the fact that the latter can occasionally tip a hat to the former (even when disguised as nerd culture) is consoling. But the other way round, that’s a leap not taken but perhaps with good reason.

Also – big love for the cross-media cataloguing effort that is the Halliday Archive. Maybe the real hero is an unnamed metadata architect…

“Give us back our old gods”

The Mail, and a government whip, are taking issue with the idea that academics may be doing down the glories of Brexit. Why?

We love to tell ourselves stories – we love to situate our actions and our emotions within an overall narrative with a start, middle and end. Progress towards a goal, a conclusion, a brighter future. We all do it – the post-doc juggling two temporary teaching contracts in two subjects related to her research interest, the new father promising himself the new responsibilities will change who he is, the voter helping to plot a course towards opportunity, security or honesty.

Brexit – what is it but another one of these great cinematic stories? The plucky island nation, rich in history and passion, seizing the chance to determine for itself a bright future. Seeking freedom from international regulations and rules, the chance to trade on advantageous terms, to make the laws and decisions it needs. Brexit is the hope of a country once again seeking to drive the narrative forward. To make stuff happen, not to have stuff happen to it.

The job of the contemporary academic is to destroy hope. Not just destroy – atomise. Disintegrate. De-construct. The early enlightenment sought to build the ultimate human and holy narrative – connecting for ourselves the cogs in the blind watchmaker’s finely wrought machine. The flowering complexity of late modernism, itself a further reaction to what seemed to be the last gasp of backward-looking romanticism, saw this narrative teeter at the limit of human comprehension. And then it fell – Einstein, Schrödinger, Wollstonecraft, Wittgenstein. Two brutal, pointless, and bloody wars. Foucault, Derrida, Butler, Lyotard, Kristeva, Hall.

The life of the mind took on a new complexion – turned on the connected, ever-growing progress plotted far into the future. It turned on itself, critiquing and unmasking the leaps of logic and the unexpected constants. The expediencies that allowed us to continue to point to the future with hope.

When Donald Fagen sang “What a beautiful world this could be – what a glorious time to be free”, he sang with irony, with disillusionment, and with a certain wistful longing. For us this is amplified: academic life means a surrender of the absolute, a destruction of a human faith in outcomes to be replaced by a practitioner’s faith in process.

The new critical study of the history of thought brought new tools to play on old solid assumptions. Morality returned to science, a shock that still reverberates and perplexes.

But outside our collegiate walls the world didn’t change. Socialism promised equality, fascism promised purity, capitalism promised wealth – but life, for most, remained brutal, difficult and painful. Dreams and hopes of a better world kept people alive, and kept them looking for the magic that would repair everything. Sometimes these even seemed to work, for a while, until the next crisis and the next time a swathe of honest hard-working lives were destroyed by the whims of global finance. A button was sought – today the button is Brexit.

The fact that it won’t work and can’t work is immaterial. People want to believe that something will, and Brexit – whatever else it is – is something. Scaffolded by lazy political finger-pointing, populist opportunism and expert equivocation – it’s the event of the season. A fashion – an idea that will reek of the late teens like SuperDry, Elephant’s Breath, Superorganism and the SUV.

Who are academics to take away hope? To ruin the story? The friend you once shared a film with, banished after pointing out plot holes and discontinuities. The woman in the office who read Game of Thrones rather than watched the series. The guy in the pub with the score on his phone when you wanted to watch the highlights.

Brexit – spoilers. Of course they hate it. Of course they hate us. A glimmer of hope occluded by fact. A dream spoiled by a morning alarm. How could they not?

The newspapers and politicians that argued for a dumb deal don’t want us to see how dumb it was. Not just yet. There’s more power, more influence, to wield. Whoever ran all that faked social media has a plan. So any chink in the dream armour must be repaired – anyone who peeps behind the walls of Oz must be silenced.

Because true self-determination, true understanding, in a cold, random uncaring world – is truly terrifying. Universities take young people and help them to deal with the darkest truth of all – that nothing matters, nothing works, and no-one has a clue what will happen next.

Rhythm guitar styling in the why-I’m-not-edublogging tribute band

Hi folks, I – er – haven’t been doing much blogging here because I’ve been busy. Specifically, I’ve taken on the role of Associate Editor at wonkhe.com, and that’s quite a full-on job involving plenty of writing, reading and editing. For those missing the semi-regular UK HE policy posts that used to turn up on here, I can only direct you to the good ship Wonkhe and the associated (and very worthwhile) Monday Morning Briefing – wherein appears some of my writing on that topic.

So – as I’ve failed to engage with the Twitter “pinned post” thing, I’ve just left a pun at the top of my stream to entertain myself.

LOL – right?

That’s sat there for a few months, and then suddenly I think to look at the replies it has been getting. A nice comment from (ds106) Roland, and … this image.

This response, from the totally-a-real-account Maxim32583813, shows my OER11 name badge resplendent on a zebra-skin sheet, in a room I’m not sure I recognise. The account has only three tweets to its name – this image and two in Russian, copied from a bot that appears to post bad generic status updates.

(I know what you are thinking – you old rogue, Kernohan. Hot Manchester Conference Centre loving. But seriously, nope. Not my style, and apart from anything else I was married at the time. My memory of OER11 is that I presented a really nice thing about guitar tablature, went to the CUBE gallery, and had a beer with Phil Barker in the Lass O’Gowrie.)

Fair enough, I think to myself. I’m sure it’s just some sophisticated algorithm that has picked up an image related to me to pique my interest. But I want to know what the context of the image was so I do a reverse image search.

Nothing. Nada.

The image does not exist on the internet, other than in that one tweet. I have no context (twitter strips the exif data too). Neither TinEye nor Google Images returns anything.

So what to make of it? I don’t think I know anyone that would go to this level of obscurity to troll me. And I know Pat Lockley, so that’s saying something.

It was the first tweet from that account. There have been two since, none since June and nothing of what I would call “content” other than that image.

The account doesn’t follow anyone I know, and no-one I know follows it (in fact, no-one at all follows it).

So clearly, the internet is a far stranger place than we give it credit for.

404 – UK government not found

This is an interesting page – detailing news of the ongoing reshuffle after the June 2017 general election. As well as any intrinsic interest there may be in knowing that Boris Johnson is still somehow Foreign Secretary, it has a particular piquancy when compared to a similar page from after the last election in 2015.

Four little words at the top of the latter … “has formed a government”.

Because, in June 2017, we do not yet have a government in the UK. Neither will we until Monday 19th June at the very earliest.

This is also an interesting page – as well as announcing the delay in the long-anticipated announcement of the TEF, it also announces that we are still in the pre-election period (or “purdah” as it is known). These rules prohibit things being announced on behalf of the government when it is not yet clear who the government is. Or if there is one.

When Theresa May went to speak to the Queen on the 9th June she asked – after the Queen had stopped laughing – for permission to attempt to form a minority government based on a “confidence and supply” agreement with another party.

The fatuous announcements currently making the news – Theresa May’s reshuffle (of course the Queen can’t confirm any appointments…), the alleged agreement with the DUP (who don’t negotiate on the Sabbath) – are designed to convince us that we have a government. That we have “certainty”. Reshuffling looks Prime Ministerial – so does making agreements with other parties, and taking calls from leaders of countries with actual governments.

Here’s the Cabinet Office manual on the issue:

(para 2.30) “Immediately following an election, if there is no overall majority, for as long as there is significant doubt over the Government’s ability to command the confidence of the House of Commons, many of the restrictions set out at paragraphs 2.27–2.29 (on “purdah”) would continue to apply. The point at which the restrictions on financial and other commitments should come to an end depends on circumstances, but may often be either when a new Prime Minister is appointed by the Sovereign or where a government’s ability to command the confidence of the Commons has been tested in the House of Commons.”

So there will be a government when a Queen’s speech is passed by the House of Commons. We don’t yet know what will be in the speech, though the best guess is that it will be very short indeed – we do know that there are likely to be amendments tabled by Labour and others, and we can even suspect that there may be an attempt to vote it down.

Such a vote would constitute a vote of no confidence in a putative Conservative Minority Government – this would be quickly followed by a motion under the terms of section two of the Fixed-term Parliaments Act leading (unless confidence can be regained) to another general election 14 days later.

If a Queen’s speech is passed, a government could still very easily fall whenever the DUP decide to renege on a political promise and withdraw from a power-sharing agreement – which is something they have rather a habit of doing. Or it could fall if a small group of backbenchers decide they don’t like what is happening. Or it could fall for any number of other reasons.

So, to summarise: there is no UK government, Theresa May holds very little power, and we won’t know anything until the 19th June.

ge17 on reading opinion polls

As I work for a UK charity, I need to be very careful on social media during the election campaign. Charities are constrained by the requirements of both charity regulations and electoral law. Simply put, charities are forbidden to publicly support or oppose any candidate or party. Although I’m sure no-one sees this personal blog as the opinions of my employer, I will be being cautious and conforming to the rules above during the election period.

There was a little bit of concern, to put it mildly, about the accuracy of election polling in 2015. In response, polling companies have modified the way they collect, analyse and draw conclusions from polling data – but although each company has reacted, they have all done so in differing ways.

Understanding Polling

So, to make sense of any random 2017 poll, we really need to know three things – the polling company responsible, the date of the poll, and the type of poll.

Some people think that the political affiliation of the newspaper or website that publishes the poll also has an impact – in practice no reputable polling company would fudge their data to meet the political predilections of an editor.

But how to spot a reputable polling company? The easiest way is to check that they are members of the British Polling Council. Members are expected to comply with rules which require each company to share full details of their sampling and analysis methodologies. Though the BPC doesn’t endorse particular methodologies, it ensures that each is clearly documented with, where possible, the underlying data also disclosed.

There’s a similarity with the process of peer review – and as with peer review the lay reader such as you or I will assume that the methodologies and maths have been seen to make sense by other experts.

I’ve been waving the word “methodology” around a bit – this just means the way a sample is taken and the way this sample is analysed and extrapolated to give those all-important headline figures.

There are two main types of polls – phone polls involve ringing people up at random to gather a representative sample, whereas online polls take a large number of willing participants and select a representative sample from within these. Both have common criticisms that can be quickly dismissed: although there are demographic (age, social class…) indicators correlated with the likelihood of home phone use, and although online participants are likely to be more politically engaged than other groups, the analysis and extrapolation stage takes account of these differences.

One other red herring is the idea of “clustering” – some people who should know better claim that polls will aim to have results in line with other polls rather than risk being an outlier. While it is sensible to suggest that poll responses are influenced by other poll results (as indeed may be the election itself), the idea of polling companies massaging their figures to fit a trend line is ridiculous.

Let’s have some more definitions – a sample is a small segment of a larger population that is as representative as possible of the wider population. For elections, the wider population is everyone who will vote in the election, and the sample aims to reflect the make-up of this population as closely as possible.

Polling companies may take account of – for example – age, social class, location, previous or current political activity, voting history and likelihood of voting in developing a sample. For most companies a sample will be around 1,000 people.
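As a toy sketch of that weighting step – every age band, count and target share below is invented for illustration, not taken from any real pollster – each respondent is scaled so that the sample’s demographic mix matches the target population:

```python
from collections import Counter

# Invented raw sample: (age_band, stated_vote) for 1,000 respondents.
# Younger, Labour-leaning respondents are deliberately over-represented.
sample = (
    [("18-34", "Lab")] * 300 + [("18-34", "Con")] * 100
    + [("35-64", "Lab")] * 150 + [("35-64", "Con")] * 150
    + [("65+", "Lab")] * 100 + [("65+", "Con")] * 200
)

# Invented target shares of each age band in the voting population.
population_share = {"18-34": 0.25, "35-64": 0.45, "65+": 0.30}

n = len(sample)
band_counts = Counter(band for band, _ in sample)
sample_share = {band: band_counts[band] / n for band in band_counts}

# A band's weight scales it up or down towards its population share.
weights = {band: population_share[band] / sample_share[band]
           for band in population_share}

def weighted_vote_share(party):
    total = sum(weights[band] for band, _ in sample)
    return sum(weights[band] for band, vote in sample if vote == party) / total

# Raw Con share is 45%; after weighting, the over-sampled young
# (Lab-leaning) band is scaled down and the middle band up,
# nudging Con to 48.75%.
```

Real pollsters weight on many variables at once (the multi-variable version is usually called rim weighting), but the principle is the same: the headline figure is a weighted, not raw, vote share.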

Responding to 2015

At the last election, polls showed a likely hung parliament right up until the exit poll. This error was claimed by some to have affected the election campaigns, and there was serious disquiet about the state of polling from commentators and politicians. In response, the BPC commissioned a report into polling practice, which was published in March last year.

A big point of controversy around the 2015 polls concerned how polling samples are made up. The BPC report concluded:

Our conclusion is that the primary cause of the polling miss was unrepresentative samples. The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters. The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree.

adding that

[We can] rule out the possibility that at least some of the errors might have been caused by flawed analysis, or by use of inaccurate weighting targets on the part of the pollsters. We were also able to exclude the possibility that postal voters, overseas voters, and unregistered voters made any detectable contribution to the polling errors. The ways that pollsters asked respondents about their voting intentions was also eliminated as a possible cause of what went wrong

The BPC simply felt that the samples used by polling companies contained too many people that are unlikely to vote, and too many people that supported Labour, to be a fair representation of the country as a whole.

Older people are more likely to vote. And they are more likely to vote Conservative. So some polling companies have focused on this correlation as a means of correcting for the 2015 errors.

Kantar Polling (formerly TNS), for example, has adjusted their sample weighting methodology to include more over 70s in the analysed data. YouGov have also increased the numbers of over 65s in their weighted samples.

Rather than adding older voters to the sample (which carries a risk of skewing the poll in other ways), some companies have focused on likelihood to vote as a key determinant of sample weighting.

Ipsos MORI, ICM and YouGov are using reported past voting behaviour (did a participant vote in the 2015 election and/or the 2016 EU referendum?) as a sample weighting tool. ComRes use a statistical methodology based on weighting for age and social class instead of self-reported behaviour.

Panelbase, as of last week, use 2015 voters not the general population as the basis of their sample weighting.

The “Don’t Know” problem

When you ask people who they will vote for, there will always be some who have not made a decision. The way “don’t knows” are handled in polling is a matter of no small controversy. The always entertaining UK Polling Report (run by Anthony J Wells of YouGov) has a good explanation of the background of this issue.

The TL;DR is that people who say that they don’t know for whom they will vote are likely to end up voting for the same party they voted for at the last election. Some (ICM, Populus) have historically used this as a weighted indicator of future voting, others (Ipsos MORI, ComRes) use “squeeze questions” to flush out a party preference which is then counted in a similar way as definite voting intentions. And there is YouGov, which simply did not include “don’t knows” in their samples, considering them less likely to vote.

The BPC report was pretty scathing on this whole mess, recommending that polling companies:

review current allocation methods for respondents who say they don’t know, or refuse to disclose which party they intend to vote for. Existing procedures are ad hoc and lack a coherent theoretical rationale. Model-based imputation procedures merit consideration as an alternative to current approaches.

So, by 2017, these controversial allocations have changed in some cases.

ICM are now going to add more “don’t knows” to parties previously supported (they used to add half of them; they now add three quarters to Conservative or Labour totals as applicable). They are also going to assume that those who don’t indicate a preference this time round and don’t know who they voted for last time are 20% more likely to vote for Conservatives and 20% less likely to vote for Labour.

Kantar have added a squeeze question for “don’t knows”, and are developing a model to add even those who answer “I don’t know” to the squeeze question to some later polls – based on which leader they find most trustworthy and respondent demographics.
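To see what an ICM-style reallocation does to a headline figure, here is a toy calculation – the raw counts are invented, and only the “add three quarters back to the party previously supported” rule comes from the description above:

```python
# Invented raw responses from a 1,000-person poll.
stated = {"Con": 380, "Lab": 360, "Other": 160}

# Don't-knows, broken down by self-reported 2015 vote.
dont_know_by_2015_vote = {"Con": 60, "Lab": 40}

REALLOCATED_FRACTION = 0.75  # three quarters, up from the old half

adjusted = dict(stated)
for party, dks in dont_know_by_2015_vote.items():
    adjusted[party] += REALLOCATED_FRACTION * dks

total = sum(adjusted.values())
shares = {party: round(100 * votes / total, 1)
          for party, votes in adjusted.items()}
# Con gains more than Lab here simply because more of the
# invented don't-knows voted Conservative last time.
```

The point of the sketch is that the reallocation rule itself – not just the raw responses – moves the published numbers, which is why two companies can publish different headlines from similar fieldwork.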

A note on dates

The key thing to look for is the dates during which field work (the actual collection of responses) was carried out, not the date of publication. Wikipedia lists polls according to the field work dates and, as such, has a useful trendline that reflects possible changes in votes over time.

So what?

The above has been a (hopefully readable) summary of how election polling works – but how can we use this information to make sense of polls and preserve our blood pressure? Here are a few tips from me:

  • Only pay attention to polls from BPC members. Though others may be fun, we don’t know anything about how they were conducted or how they might skew.
  • For analysing trends, only compare polls from the same company. The same or similar methodology producing different results on different dates is suggestive of a change in public opinion.
  • For analysing the differences between polling companies, compare polls conducted on the same date. If you think company X’s methodology overrepresents party A, compare to polls conducted at similar times by companies Y and Z.
  • Remember the margin of error. It is fair to assume that a poll of around 1,000 people will be accurate to around 3%, 19 times out of 20. So a poll showing a party share of 40% may indicate support anywhere between 37% and 43%. This error shrinks slightly for larger samples.
  • Beware unusual polls conducted in novel ways – a good recent example is the YouGov aggregated statistical model that startled everyone over the Bank Holiday. This is a highly experimental model based on extrapolating constituency-level results from very small samples using machine learning approaches. It might be interesting, but we don’t yet know what margin of error it may have, or how it compares to other more conventional polls.
  • Beware outliers – polls at odds with the consensus are often shared and reported more widely than other, more “boring”, poll results. But take account of the margin of error, and the possibility that it just could be an unusual sample.
  • Beware confirmation bias – reputable polls you don’t like are equally likely to be as accurate as reputable polls that you do.
  • Look for the data tables – as in all fields of research, publication of data tables allows us to take a more detailed view of the results. Is the sample “normal”? Are the extrapolations fair? Looking at the raw data can tell us.
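The margin-of-error rule of thumb in the list above follows from the standard error of a proportion under simple random sampling (roughly 1.96 standard errors for 95% confidence). A minimal sketch of the arithmetic:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party on 40% in a poll of 1,000 people:
moe = margin_of_error(0.40, 1000)
print(f"{moe:.1%}")  # prints "3.0%" - support plausibly between 37% and 43%
```

Note that real polls are weighted quota samples rather than simple random samples, so this figure is a floor: the true uncertainty is somewhat larger, which is why “19 times out of 20” is the honest framing.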


Roaming Autodidacts and the Neo-Reactionaries #OER17

“[The] literature [on open education] was preoccupied with what I call “roaming autodidacts”. A roaming autodidact is a self-motivated, able learner that is simultaneously embedded in technocratic futures and disembedded from place, culture, history, and markets. The roaming autodidact is almost always conceived as western, white, educated and male. As a result of designing for the roaming autodidact, we end up with a platform that understands learners as white and male, measuring learners’ task efficiencies against an unarticulated norm of western male whiteness. It is not an affirmative exclusion of poor students or bilingual learners or black students or older students, but it need not be affirmative to be effective. Looking across this literature, our imagined educational futures are a lot like science fiction movies: there’s a conspicuous absence of brown people and women”

(McMillan Cottom, Tressie. 2015. “Intersectionality and Critical Engagement With The Internet” in The Intersectional Internet: Race, Sex, Class, and Culture Online, eds. Safiya U. Noble and Brendesha Tynes. Peter Lang Publishing. Accessed online.)

The earliest reference to Tressie McMillan Cottom’s game-changing coinage of “roaming autodidact” is from her presentation at MIT in July 2014. It is such a perfect description of the idealised online learner – effortlessly grazing learning from MOOCs, Wikipedia and sundry open courseware whilst remaining resolutely white, male, western and comfortably off – that it feels somehow timeless.

Tressie McMillan Cottom, Catherine Cronin, Audrey Watters and others have taken the concept as a jumping-off point to understand the worlds of those left behind by online learning, and have begun to tackle the huge ingrained assumptions that colour learning design, resource sharing and platform development – slowly unpicking the lazy thinking that prevents learning online being available to all, and rooting perspectives on learning inside the lived reality of learners, wherever we may meet them.

This is hugely important work. But what happened to all the roaming autodidacts?

Well… They became Nazis.

Or some of them did.

Mencius Moldbug is a roaming autodidact. He’d be the first to admit how much he has drawn on the resources shared by Project Gutenberg, Wikipedia and various OpenCourseWare efforts to create the reactionary neo-feudal monarchical restorationist system of thought described in “Unqualified Reservations” (a body of work supposedly feted by the likes of Steve Bannon and Peter Thiel).

He’s undeniably well read. Late last year I paddled through some of the surface waters of his world (and of parallel realms such as Nick Land’s “Dark Enlightenment”) as a painful and possibly misguided attempt to understand precisely what was going on in 2016. I can’t claim to be familiar with the majority of sources he cites (I don’t think anyone could), but the same could be said for any serious book in the social sciences.

When people write PhDs (and I take a moment to honour the sheer work each of you who have done this put in) they draw together bodies of knowledge that have never before been drawn together. They synthesise it, make links and draw conclusions. Examiners (and again, I take a moment…) cannot be expected to be on top of this entire corpus. But what they are incredibly on top of is the safe ways that all of this information (data) can be drawn together and built on. Methods. Statistical, historical, socio-cultural, scientific – these are our tools of discernment. And these are the things that a PhD Viva is designed to allow you to defend.

A roaming autodidact has little use for methods. He (and yes, it is always a “he”) does not need methods to draw conclusions. He needs sources. A researcher knows she can find a source to validate just about any crazy idea; a roaming autodidact knows he can find a source to validate his crazy idea – but he does not know that this is universally true.

Moldbug practises “slow history”, explaining it thus:

“The student of slow history, who has no faith at all in consensus wisdom, official truth, and “everybody knows” chestnuts, is willing to rest enormous judgments on a single, indisputable, authentic primary source”


And again:

“The nice thing about reading a primary source from 1942 is that you are assured of its “period” credentials, unless of course someone has hacked Time’s archive. The author cannot possibly know anything about 1943. If you find a text from 1942 that describes the H-bomb, you know that the H-bomb was known in 1942. One such text is entirely sufficient.”


This may be “slow”, but it is not “history” in any academic sense. It is cherry-picking. It is appeal to anecdote. Just because this one guy said something in 1942 doesn’t make it any more reliable than this one guy at the bar last Saturday. It’s a single point of information. A datum, if you like.

To start drawing anything reliable at all from it we need a few more sources. And not just the next ones we find, we need a strategy to find a balanced, representative sample of these. And then we can start doing some work around context, purpose, reliability.

But of course, that’s just academic consensus. What do academics know? Moldbug has a problem with consensus (and, indeed, academia) – drawing on a lay perspective of consensus being innately suspicious, and an absence of substantial counter-evidence being doubly so.

As he puts it in relation to Anthropogenic Global Warming:

“The unusual trustworthiness of science, despite the fact that scientists are humans and humans are not generally trustworthy, exists when (a) hypotheses are falsifiable, and (b) the professional institutions within which scientists operate promote, broadcast, and reward any falsification. We can trust a consensus of scientists on a problem for which (a) and (b) are true, because we are basing our trust on the fact that, if the hypothesis is false, a large number of very smart people has tried and failed to discover its error. This is not, of course, impossible. But it is at least unlikely.”


But there speaks a guy who doesn’t hang out with academics much. The ones I know love controversy. They love being the voice speaking out against the tide. It’s a great way to get keynote gigs and well-cited papers. Disagreement is the lifeblood of the only academia I recognise. Hell – I’ve never seen a bunch of academics agree on which pub to go to, on the correct citation method for journal articles… If you want to make absolutely sure of academic argument, try attempting to enforce consensus from above (I used to work for a funding council…).

Open education and open culture have put a great deal of information into the public arena. Much of it is primary in nature – undigested, undifferentiated. The terms of common open licences do not allow us to care how it is used after release – and it would perhaps be unfair to castigate Project Gutenberg or MITOCW for the genesis of the alt-right and the birth of Trumpism.

Mike Caulfield is doing as much as anyone to work out what we need to do afterwards.

“My solution to the post-truth crisis is to develop a culture of collaborative explanation and exploration via development and use of new and different tools.

My belief is that humans have a couple modes of working with truth. Some are adversarial and propagative, and some are exploratory and collaborative. The adversarial mode is killing us.”

(Mike Caulfield)

Collaborative explanation is academia at its best. It is the guts of the scientific method – where a perfectly executed rebuttal is a cause for joy as truth is further revealed. It is how humans really get stuff done, whereas the adversarial mode (election campaigns are the best example that comes to mind) is how we stop things being done. It is a mode of enquiry that we must inculcate in the next generation of roaming autodidacts before they become Nazis. Or some of them do.

Adversarial explanation is academia at its worst. It’s the “I know something you don’t” mode of debate – praising esoterica, and using sources as weapons. It’s the mode of debate that leads to conspiracies and polarisation – “hidden secrets” and arcana. It can make for wonderful storytelling, mesmeric speaking and writing. But fundamentally it is a model of smartness that celebrates breadth and denigrates synthesis. And it is classic Moldbug. He throws sources and connections at you so fast that unpicking and critiquing it all becomes an exercise in translation – with text like that you can only read and react. His obscurantism is a false signifier, adding the illusion of credibility to his painful and regressive positions on race, crime and governance.

Caulfield’s recent work has been focused on the development of online tools to foster what he prefers to term “choral explanation” – multiple voices synthesising into consensus.

But the will is as important as the tool. And teaching roaming autodidacts the will to collaborate, corroborate and develop as a natural everyday response to a primary source is the next great task of the open movement.

The HE Bill in the Lords: Once More, With Feeling

Previously on the HE and R Bill… – I’ve been following progress with considerable interest, but the wonkhe.com coverage has been so good that I’ve not felt inclined to write anything here. The text of the bill passed through the House of Commons without any changes, despite significant issues with both the drafting and the underlying policy. After some feisty exchanges in the Lords the bill has been substantially altered, both by the government and – on six occasions so far – by others. A small government majority in the Commons means that these could be overturned again during “ping-pong” (the adorably named process by which the Commons and Lords come to agreement via conflicting votes and all-night sittings). But this is neither the most urgent nor the most visible legislative matter the government are trying to manage…

As the House of Lords spend another afternoon going through the motions, it is interesting to consider our own Higher Education and Research Bill alongside the wider work of the upper legislative chamber as we speed towards the Easter recess.

As I’ve hinted in previous weeks on Twitter, I’ve got a theory that pressures on parliamentary time will prove to be a defining factor in the shape of the future Act. If peers remain united in blocking key aspects, we need to ask how willing the government are to spend the time needed to push through their own vision – and to throw out the various opposition and cross-bench amendments if they don’t cut the mustard.

Since the Commons stages – where, lest we forget, not one single amendment was made to the text of the bill – we have seen significant government climb-downs and alterations. Pressure from peers is beginning to realise the kind of government rethink that was asked for in the Commons committee and third reading. With the bill no longer under the spell of Jo Johnson’s draftsmen, we can never tell quite where the line will be drawn as regards what initially appeared to be essential parts of the policy framework.

At this point one has to step back and consider what it is that the Bill is actually trying to do, and why. Which – in all honesty – is surprisingly little. Some regulatory changes, based more on a wish to remove HEFCE and tidy up a PowerPoint slide than on any new possibilities offered? Changes to sector entry and exit tickets – and of course the TEF! – as yet another attempt to make HE work like a market? In many arguments, most notably those made in independent HE, this is sold as a revolutionary policy package – but after this is complete will the sector honestly get to rest in peace for a few years?

As the dawn of prorogation approaches, there may be cause for the government to lament this lack of vision. The Higher Education and Research Bill is just one of many bills the Government are shepherding through the Lords at the moment, and may not be what they feel is the most pressing legislative issue currently standing.

The Criminal Finances Bill, for example, awaits further committee sessions and a report. The Third Reading of the Digital Economy Bill on 29th March may not run sweetly. Bills exist around Lords reform that could become more important to the government after recent “rebellions” – votes on the HE Bill have been just one part of a series of votes that have left the Lords walking closer to the fire. Following issues around business rates in the budget, Sajid Javid’s Local Government Finance Bill could be resurrected and forced through before May – his Neighbourhood Planning Bill (with a third reading this week in the Lords) could be equally controversial. A Prisons and Courts Bill is currently in the Commons but could be progressed with haste if the increasingly clear problems in that system continue to make headlines. Closer to home, the Technical and Further Education Bill has a committee report in the Lords at the end of March. All this alongside numerous debates, committee reports and other parliamentary business.

And, of course, the European Union (Notification of Withdrawal) Bill. This, more than anything, is the clear government priority currently. Amendments in the Lords gave a lot of people something to sing about last week, but when “ping-pong” begins – and it may be seen on both Monday and perhaps Wednesday, as interventions throughout the HE Bill report – no-one can be sure at what point consensus will be realised. Lords will be keen to demonstrate their value as scrutineers in a de-politicised second chamber on this once-in-a-generation constitutional issue, but will be anxious not to be seen as defying the will of the people. It’s a difficult line to walk.

If that bill is delayed – or if other means are found to delay the Prime Minister’s Brexit timetable – this could mean further work for peers. Couple this with an already packed last 25 or so days of sitting (we don’t know for sure exactly how long, but this is a best guess) and something will have to give.

If that something is the HE Bill (either losing it entirely, or mollifying the Lords with even more substantial amendments than were introduced before the report stage) then where do we go from here? It would be an ignominious coda to Johnson’s first substantial legislation, and a sorry end to a project that was perhaps more concerned with messaging and effect than genuine regulatory improvement.

Honestly, if Daniel Hannan can – with a straight face – compare Brexit to the Lord of the Rings I see no reason why I can’t compare the HE Bill to an episode of Buffy…

Thoughts on open education at UC Berkeley and the ADA

Like many people I’m disappointed by UC Berkeley’s decision to remove a range of “legacy” openly licensed online resources from public-access YouTube and iTunes U, linked to from their webcast.berkeley.edu portal. This represents 20,000 audio or video recordings of lectures from between 2004 and 2015, which will be moved behind an institutional sign-in. And in particular I feel that comments like “Finally, moving our content behind authentication allows us to better protect instructor intellectual property from ‘pirates’ who have reused content for personal profit without consent” are a very bad look, no matter what the context.

Lecture recordings from on-campus provision are generally not great in quality or educational utility unless they have been specifically packaged for online/remote consumption. This process would likely involve exactly the kind of accommodations that are rightly required under the Americans with Disabilities Act (ADA) – at the very least transcription, and the deliberate use of teaching methods and resources suitable for remote learning. And consumer channels like YouTube and iTunes are hardly the best means of distributing a full package of learning materials. I should emphasise that this is good practice for supporting all learners.

A further issue with lecture capture is the likely use of copyrighted material within slides. For a “mass” operation like the one at Berkeley these are notoriously hard to police and check from recordings – and as the slides were never provided alongside the recordings (a practice that would have gone at least some way towards addressing the ADA issue), even basic tools like reverse image search were unavailable.

The issue was brought to the attention of Berkeley and the Department of Justice by the National Association of the Deaf. A review of the case found that the complaint was a legitimate one, and that Berkeley (as a public body) were not meeting the requirements of the Americans With Disabilities Act, Title II.

Berkeley had, in any case, already stopped posting new lecture recordings to the publicly available channels in 2015 – this, coupled with the bizarre statement on piracy (how was the university losing money? why were they not enforcing the BY-NC-ND license they had chosen?), leads me to have a reasonable suspicion that the ADA judgement is just useful justification for a decision that had already been made.

However, Berkeley will continue to offer lecture capture as a service to enrolled students, and will continue to share material via their EdX imprint, BerkeleyX – noting in the statement regarding the withdrawal of the legacy content that: “Berkeley will maintain its commitment to sharing content to the public through our partnership with EdX (edx.org). This free and accessible content includes a wide range of educational opportunities and topics from across higher ed.”

EdX, of course, famously had their own run-in with the ADA back in 2015. Despite claiming that they were not subject to the ADA as they were not offering a “public accommodation” (and hell, deaf people hardly buy any certificates of completion…), the DoJ required that they sign an agreement to provide accessible accommodations. Note that EdX claim this does not extend to course content, but the DoJ disagrees.

Current UC Berkeley offerings on EdX do not meet ADA requirements. Though a decent transcript is offered, and this is downloadable, neither audio nor text-to-speech versions of figures presented during videos are available. The example below is from the first video I encountered on “GG101x: The Science of Happiness“.

The table in the screen grab above (which I am claiming as “fair use”) is taken, unattributed (other than in the well-hidden Course Bibliography, rendered in that legendarily accessible file format, the PDF!), from Uchida, Y., & Ogihara, Y. (2012). Personal or interpersonal construal of happiness: A cultural psychological perspective. International Journal of Wellbeing, 2(4), 354-369. doi:10.5502/ijw.v2.i4.5. The IJW make all articles available under a CC BY-NC-ND license… a license that the legal team at Berkeley presumably know well 🙂

So at least one of Berkeley’s offerings on EdX does not meet the ADA requirements that EdX were required to meet, and also uses openly licensed content in breach of licensing terms (the attribution did not meet expected best practice, EdX is arguably a commercial concern and a derivative work was used). Apparently “UC Berkeley […] content has been discovered on for-profit websites, which use either a subscription fee or on-page advertising.” so I’m super glad I didn’t pay for that certificate…

For those interested in the legal background to the Berkeley decision, you could do worse than to read up on the way the 2015 EdX ruling offered notice that a website hosting learning content could be seen as a “place of education” for ADA/s502 purposes. I enjoyed this article from Cooley LLP and you might too.

And for those interested in the amazing history of this open education initiative at Berkeley, Audrey has you covered.

Rethinking “Edtech”

I was asked to offer some perspective on the wider idea of edtech – what follows covers investment management, theories of learning, education reform politics, innovation theory and around 80 years of history. Some may be surprised at the scope, but I would argue that it is not enough to understand how; to truly make an intelligent decision we need to at least consider why.

I should note that I was asked to give a personal and idiosyncratic view, so just to be absolutely clear these are my own opinions only. 

As an investment category, defined perhaps by the breathless coverage of EdSurge and TechCrunch, EdTech is old news. The last boom years, such as they were, largely sit between 2012 and 2015, with the latter year seeing $18bn of investment attracted into the sector. Those with longer memories may recall a similar boom at the turn of the century, aligned to the wider “dot com” bubble. (And fans of TechCrunch may be interested to learn of the FinTech boom that immediately followed it.)

The boundaries of the category are variously drawn, but generally encompass teaching and administrative adoption of technology and infrastructure. There is a smaller, but separate, market segment encompassing research technology with links to commercial R&D, cloud storage and big data analytics and metrics (which you could trace back, if you wanted, to ISI). Academic research infrastructure and support in itself is too small a market to consider separately for most mainstream investors – and is primarily supported by government funding.

Investors of the sort that cover EdTech are operating with a high appetite for risk, and will expect a low number of their investments to offer significant returns. This plays into the fail-fast ethos in wider Silicon Valley, but tends to favour vivid ideas rather than well-considered interventions, and incremental innovation rather than revolutionary ideas (which would have a longer-term return). Very few “EdTechs” are actually making a return on their investments, a scant few (online course provider Udacity, for instance) are even turning a working profit. The model for funders is to grow mindshare and a user base, before being acquired by a larger tech company (Google, Microsoft, Blackboard…) – again, as in wider Silicon Valley.

As a historic project, your modern edtech (in the sense of mechanical or digital aids to the process of education) sits very much on a line drawn from a behaviourist (Skinnerian) model of learning. Drawing on ideas of repetition and reward, it underpins drill-and-kill learning tools such as Duolingo, and many test preparation or content delivery packages.

A later strand drawing on constructivist and social constructivist theories of learning (Durkheim, Illich, Papert through perhaps to someone like George Siemens) emphasised the agency of the learner to make sense of the world around them, drawing on networks of peers. The rise of social media around 2008 spurred the development of “connectivism”, a postulated theory concerning the way networks comprising human and non-human members interact, grow and learn (rhizomatically).

Cognitive learning theories (Piaget, also Baddeley, Chomsky) are the basis of the “personalisation” agenda, wherein technology can “adapt” within bounded states to suit individual learner needs – much of what is described as “AI” in learning, and indeed many of the models of learning that define AI research, is cognitivist.

And outside of learning theories altogether, you have the same drives around efficient management of information that define the wider tech boom. Administrative technology also has the advantage that the burden of proof is seldom asked for – access to information is treated as an axiomatic good.

You could connect these trends together to explain something like the MOOC, which started with an explicitly connectivist underpinning but pivoted quickly (with the pressure of growth and massification) to a behaviourist model, though with a cognitive science gloss via the collection and use of administrative user data.

But why would you? Simply put, these ideas underpin the majority of edtech development. Despite the neo-mania of EdTech as narrative (as Audrey Watters notes, “the best way to predict the future is to write a press release”, and I would agree), it is a surprisingly conservative field in terms of approach – although an army of Silicon Valley patent lawyers would love to convince you otherwise.

Part of the leverage that the field has on education policy makers comes from the wider narrative of Education Reform. Joining parents and educators with genuine concerns about the quality of education to investors and politicians looking to improve the profitability of education, this narrative – which I love to characterise as “Education is broken” – underpins many of the machinery-of-education changes (charter schools, free schools, challenger institutions…) that open up education to “disruptive innovation”.

Harvard Business School professor Clayton Christensen first postulated the idea of disruption, and he applied it to education in his 2008 book ‘Disrupting Class‘. Simply put, the concept of low-end disruptive innovation suggests that any established market can be destabilised by the entry of a new actor offering a similar but inferior product at a vastly lower price. This new actor initially serves a niche interest and does not provide the features of premium products in the marketplace, but through repeated innovation it expands and improves to serve wider needs and increase profitability.

However, this theory has been debunked specifically within education (by none less than Christensen himself in 2013), and more generally as a fundamental narrative of innovation (Jill Lepore in 2014 is flat-out superb). As attractive as the idea of low-cost innovation may be to investors, it has not and does not explain innovation as it actually happens.

Entrepreneurial state theory – as described by Mariana Mazzucato in her book of the same name – sees a role for the long-term, stable nature of state funding in supporting and developing innovation. An example would be the support in defence spending for early cybernetics projects that became VR, networked communication and responsive software (and also pigeon-guided bombs, courtesy of one BF Skinner… but not every experiment is a success…), and that underpin much of what became EdTech.

There are people better qualified than me to talk about theories of innovation, but I will content myself with mentioning von Hippel’s lead user theory – broadly, watching the working practices of expert practitioners, identifying where existing processes or technologies are short-cutted, then working with practitioners to design tools to simplify these short-cuts.

So what is “an EdTech”? Despite overweening claims around innovation, the easiest way is to characterise their intended mechanism. An EdTech uses one or more of the three educational theories above (either knowingly or, more commonly, implicitly) to either sell into existing education providers, or to attempt to disrupt these providers by establishing alternate providers and selling to learners. As hype around the central category has grown, more generally applicable administrative interventions have been branded as edtech.

Actual sales (in terms of money being exchanged for goods or services) are rare, as the focus is on growing a user-base and associated hype in order to be acquired by a larger enterprise. (This is just mainstream Silicon Valley business practice).

But do “EdTechs” improve education? It is difficult to say. Certainly to read the press releases that have flooded the inboxes of education or technology journalists – very few cover both, so it has been possible to exploit gaps in knowledge (see Audrey Watters “What every techie should know about education“) – would indicate that we now live in a golden age of cheap, ubiquitous, personalised and effective learning.  And yet.

Certainly the things that do improve education as a whole are often far removed from the mythologised moment of learning – administrative system interoperability, open licensing for academic content – solving, in other words, known problems as reported by expert practitioners.

(Careful readers will note that I owe a huge debt to Audrey Watters, Phil Hill, Michael Feldstein, Rolin Moe and many others.)