GE17: on reading opinion polls

As I work for a UK charity, I need to be very careful on social media during the election campaign. Charities are constrained by the requirements of both charity regulations and electoral law. Simply put, charities are forbidden to publicly support or oppose any candidate or party. Although I’m sure no-one sees this personal blog as the opinions of my employer, I will be being cautious and conforming to the rules above during the election period.

There was a little bit of concern, to put it mildly, about the accuracy of election polling in 2015. In response, polling companies have modified the way they collect, analyse and draw conclusions from polling data – but although each company has reacted, they have all done so in differing ways.

Understanding Polling

So, to make sense of any random 2017 poll, we really need to know three things – the polling company responsible, the date of the poll, and the type of poll.

Some people think that the political affiliation of the newspaper or website that publishes the poll also has an impact – in practice no reputable polling company would fudge their data to meet the political predilections of an editor.

But how to spot a reputable polling company? The easiest way is to check that they are members of the British Polling Council. Members are expected to comply with rules that require each company to share full details of their sampling and analysis methodologies. Though the BPC doesn’t endorse particular methodologies, it ensures that each is clearly documented with, where possible, the underlying data also disclosed.

There’s a similarity with the process of peer review – and, as with peer review, lay readers such as you or I can assume that the methodologies and maths have been checked over by other experts.

I’ve been waving the word “methodology” around a bit – this just means the way a sample is taken and the way this sample is analysed and extrapolated to give those all-important headline figures.

There are two main types of polls – phone polls involve ringing people up at random to gather a representative sample, whereas online polls take a large number of willing participants and select a representative sample from within these. Both attract common criticisms that can be quickly dismissed – although there are demographic indicators (age, social class…) correlated with the likelihood of home phone use, and although online participants are likely to be more politically engaged than other groups, the analysis and extrapolation stage takes account of these differences.

One other red herring is the idea of “clustering” – some people who should know better claim that polls aim to have results in line with other polls rather than risk being an outlier. While it is sensible to suggest that poll responses are influenced by other poll results (as indeed the election itself may be), the idea of polling companies massaging their figures to fit a trend line is ridiculous.

Let’s have some more definitions – a sample is a small segment of a larger population that is as representative as possible of the wider population. For elections, the wider population is everyone who will vote in the election, and the sample aims to reflect the make-up of this population as closely as possible.

Polling companies may take account of – for example – age, social class, location, previous or current political activity, voting history and likelihood of voting in developing a sample. For most companies a sample will be around 1,000 people.
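
As an illustration of how that weighting works in principle, here is a minimal sketch of post-stratification on a single variable (age band). All the numbers – population shares, sample counts, party shares within each group – are invented for the example, and real pollsters weight on many more variables at once:

```python
# Hypothetical post-stratification by age band (all figures invented).
population_share = {"18-34": 0.28, "35-64": 0.50, "65+": 0.22}  # e.g. from census data
sample_counts = {"18-34": 350, "35-64": 480, "65+": 170}        # who actually responded

n = sum(sample_counts.values())

# A group's weight says how under- or over-represented it is in the raw
# sample: weight > 1 means the group is scarcer in the sample than in the
# population, so each of its responses counts for more.
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

# Hypothetical Conservative vote share observed within each group.
con_share = {"18-34": 0.25, "35-64": 0.42, "65+": 0.55}

# Weighted headline figure: each group contributes in proportion to the
# population, not in proportion to who happened to answer the survey.
weighted_con = sum(sample_counts[g] * weights[g] * con_share[g]
                   for g in sample_counts) / n
print(f"Weighted Conservative share: {weighted_con:.1%}")
```

With these made-up numbers the over-65s are scarce in the raw sample (17% of responses against a 22% population share), so their answers are weighted up – which, given the correlations discussed below, nudges the headline figure for the party they favour upwards.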

Responding to 2015

At the last election, polls showed a likely hung parliament right up until the exit poll. This error was claimed by some to have affected the election campaigns, and there was serious disquiet about the state of polling from commentators and politicians. In response, the BPC commissioned a report into polling practice, which was published in March last year.

A big point of controversy around the 2015 polls concerned how polling samples are made up. The BPC report concluded:

Our conclusion is that the primary cause of the polling miss was unrepresentative samples. The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters. The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree.

adding that

[We can] rule out the possibility that at least some of the errors might have been caused by flawed analysis, or by use of inaccurate weighting targets on the part of the pollsters. We were also able to exclude the possibility that postal voters, overseas voters, and unregistered voters made any detectable contribution to the polling errors. The ways that pollsters asked respondents about their voting intentions was also eliminated as a possible cause of what went wrong.

The BPC simply felt that the samples used by polling companies contained too many people who were unlikely to vote, and too many who supported Labour, to be a fair representation of the country as a whole.

Older people are more likely to vote. And they are more likely to vote Conservative. So some polling companies have focused on this correlation as a means of correcting for the 2015 errors.

Kantar Polling (formerly TNS), for example, has adjusted their sample weighting methodology to include more over 70s in the analysed data. YouGov have also increased the numbers of over 65s in their weighted samples.

Rather than adding older voters to the sample (which carries a risk of skewing the poll in other ways), some companies have focused on likelihood to vote as a key determinant of sample weighting.

Ipsos MORI, ICM and YouGov are using reported past voting behaviour (did a participant vote in the 2015 election and/or the 2016 EU referendum?) as a sample weighting tool. ComRes use a statistical methodology based on weighting for age and social class instead of self-reported behaviour.

Panelbase, as of last week, use 2015 voters, rather than the general population, as the basis of their sample weighting.

The “Don’t Know” problem

When you ask people who they will vote for, there will always be some who have not made a decision. The way “don’t knows” are handled in polling is a matter of no small controversy. The always entertaining UK Polling Report (run by Anthony J Wells of YouGov) has a good explanation of the background of this issue.

The TL;DR is that people who say they don’t know for whom they will vote are likely to end up voting for the same party they voted for at the last election. Some companies (ICM, Populus) have historically used this as a weighted indicator of future voting; others (Ipsos MORI, ComRes) use “squeeze questions” to flush out a party preference, which is then counted in a similar way to definite voting intentions. And then there is YouGov, which simply did not include “don’t knows” in its samples, considering them less likely to vote.

The BPC report was pretty scathing on this whole mess, recommending that polling companies:

review current allocation methods for respondents who say they don’t know, or refuse to disclose which party they intend to vote for. Existing procedures are ad hoc and lack a coherent theoretical rationale. Model-based imputation procedures merit consideration as an alternative to current approaches.

So, by 2017, these controversial allocations have changed in some cases.

ICM are now going to add more “don’t knows” to parties previously supported (they used to add half of them; they now add three quarters to Conservative or Labour totals as applicable). They are also going to assume that those who don’t indicate a preference this time round, and don’t know who they voted for last time, are 20% more likely to vote Conservative and 20% less likely to vote Labour.
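
To make the arithmetic of that ICM-style reallocation concrete, here is a toy sketch – the raw counts are invented purely for illustration, and real tables break past vote down much further:

```python
# Hypothetical raw counts from a poll of 1,000 people (all numbers invented).
decided = {"Con": 380, "Lab": 350, "Other": 170}

# "Don't knows" grouped by the party they report voting for in 2015.
dont_know_by_past_vote = {"Con": 60, "Lab": 40}

REALLOCATION_RATE = 0.75  # three quarters added back, up from the old one half

adjusted = dict(decided)
for party, count in dont_know_by_past_vote.items():
    adjusted[party] += count * REALLOCATION_RATE

# Headline shares after reallocation, as percentages.
total = sum(adjusted.values())
shares = {party: round(100 * count / total, 1) for party, count in adjusted.items()}
print(shares)  # prints {'Con': 43.6, 'Lab': 39.0, 'Other': 17.4}
```

With these made-up numbers, reallocating three quarters of past-Conservative and past-Labour “don’t knows” lifts both main parties’ headline shares relative to simply discarding the undecided.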

Kantar have added a squeeze question for “don’t knows”, and are developing a model to add even those who answer “I don’t know” to the squeeze question to some later polls – based on which leader they find most trustworthy and respondent demographics.

A note on dates

The key thing to look for is the dates during which field work (the actual collection of responses) was carried out, not the date of publication. Wikipedia lists polls according to the field work dates and, as such, has a useful trendline that reflects possible changes in votes over time.

So what?

The above has been a (hopefully readable) summary of how election polling works – but how can we use this information to make sense of polls and preserve our blood pressure? Here are a few tips from me:

  • Only pay attention to polls from BPC members. Though others may be fun, we don’t know anything about how they were conducted or how they might skew.
  • For analysing trends, only compare polls from the same company. The same or similar methodology producing different results on different dates is suggestive of a change in public opinion.
  • For analysing the differences between polling companies, compare polls conducted on the same date. If you think company X’s methodology overrepresents party A, compare to polls conducted at similar times by companies Y and Z.
  • Remember the margin of error. It is fair to assume that a poll of around 1,000 people will be accurate to within around three percentage points, 19 times out of 20. So a poll showing a party share of 40% may indicate support anywhere between 37% and 43%. This error shrinks slightly for larger samples.
  • Beware unusual polls conducted in novel ways – a good recent example is the YouGov aggregated statistical model that startled everyone over the Bank Holiday. This is a highly experimental model based on extrapolating constituency-level results from very small samples using machine learning approaches. It might be interesting, but we don’t yet know what margin of error it may have, or how it compares to other more conventional polls.
  • Beware outliers – polls at odds with the consensus are often shared and reported more widely than other, more “boring”, poll results. But take account of the margin of error, and the possibility that it just could be an unusual sample.
  • Beware confirmation bias – reputable polls you don’t like are equally likely to be as accurate as reputable polls that you do.
  • Look for the data tables – as in all fields of research, publication of data tables allows us to take a more detailed view of the results. Is the sample “normal”? Are the extrapolations fair? Looking at the raw data can tell us.
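
The margin-of-error rule of thumb in the list above comes from the standard formula for a 95% confidence interval on a proportion. A quick sketch (which assumes a simple random sample – something real polls, with their weighting and panels, only approximate):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated share p from a random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party on 40% in a poll of 1,000 people:
print(f"{margin_of_error(0.40, 1000):.3f}")  # prints 0.030 - about 3 points

# Quadrupling the sample only halves the error - hence "shrinks slightly":
print(f"{margin_of_error(0.40, 4000):.3f}")  # prints 0.015
```

This is also why poll-on-poll movements of a point or two are rarely meaningful on their own.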


Roaming Autodidacts and the Neo-Reactionaries #OER17

“[The] literature [on open education] was preoccupied with what I call “roaming autodidacts”. A roaming autodidact is a self-motivated, able learner that is simultaneously embedded in technocratic futures and disembedded from place, culture, history, and markets. The roaming autodidact is almost always conceived as western, white, educated and male. As a result of designing for the roaming autodidact, we end up with a platform that understands learners as white and male, measuring learners’ task efficiencies against an unarticulated norm of western male whiteness. It is not an affirmative exclusion of poor students or bilingual learners or black students or older students, but it need not be affirmative to be effective. Looking across this literature, our imagined educational futures are a lot like science fiction movies: there’s a conspicuous absence of brown people and women”

(McMillan Cottom, Tressie. 2015. “Intersectionality and Critical Engagement With The Internet” in The Intersectional Internet: Race, Sex, Class, and Culture Online, eds. Safiya U. Noble and Brendesha Tynes. Peter Lang Publishing. Accessed online)

The earliest reference to Tressie McMillan Cottom’s game changing coinage of “roaming autodidact” is from her presentation at MIT in July 2014. It is such a perfect description of the idealised online learner – effortlessly grazing learning from MOOCs, Wikipedia and sundry open courseware whilst remaining resolutely white, male, western and comfortably off – that it feels somehow timeless.

Tressie McMillan Cottom, Catherine Cronin, Audrey Watters and others have taken the concept as a jumping-off point to understand the worlds of those left behind by online learning, and have begun to tackle the huge ingrained assumptions that colour learning design, resource sharing and platform development – slowly unpicking the lazy thinking that prevents learning online being available to all, and rooting perspectives on learning inside the lived reality of learners, wherever we may meet them.

This is hugely important work. But what happened to all the roaming autodidacts?

Well… They became Nazis.

Or some of them did.

Mencius Moldbug is a roaming autodidact. He’d be the first to admit how much he has drawn on the resources shared by Project Gutenberg, Wikipedia and various OpenCourseWare efforts to create the reactionary neo-feudal monarchical restorationist system of thought described in “Unqualified Reservations” (a body of work supposedly feted by the likes of Steve Bannon and Peter Thiel).

He’s undeniably well read. Late last year I paddled through some of the surface waters of his world (and of parallel realms such as Nick Land’s “Dark Enlightenment”) as a painful and possibly misguided attempt to understand precisely what was going on in 2016. I can’t claim to be familiar with the majority of sources he cites (I don’t think anyone could), but the same could be said for any serious book in the social sciences.

When people write PhDs (and I take a moment to honour the sheer work each of you who have done this put in) they draw together bodies of knowledge that have never before been drawn together. They synthesise it, make links and draw conclusions. Examiners (and again, I take a moment…) cannot be expected to be on top of this entire corpus. But what they are incredibly on top of is the safe ways that all of this information (data) can be drawn together and built on. Methods. Statistical, historical, socio-cultural, scientific – these are our tools of discernment. And these are the things that a PhD Viva is designed to allow you to defend.

A roaming autodidact has little use for methods. He (and yes, it is always a “he”) does not need methods to draw conclusions. He needs sources. A researcher knows she can find a source to validate just about any crazy idea; a roaming autodidact knows he can find a source to validate his crazy idea – but he does not know that this is universally true.

Moldbug practises “slow history”, explaining it thus:

“The student of slow history, who has no faith at all in consensus wisdom, official truth, and “everybody knows” chestnuts, is willing to rest enormous judgments on a single, indisputable, authentic primary source”


And again:

“The nice thing about reading a primary source from 1942 is that you are assured of its “period” credentials, unless of course someone has hacked Time’s archive. The author cannot possibly know anything about 1943. If you find a text from 1942 that describes the H-bomb, you know that the H-bomb was known in 1942. One such text is entirely sufficient.”


This may be “slow”, but it is not “history” in any academic sense. It is cherry-picking. It is appeal to anecdote. Just because this one guy said something in 1942 doesn’t make it any more reliable than this one guy at the bar last Saturday. It’s a single point of information. A datum, if you like.

To start drawing anything reliable at all from it we need a few more sources. And not just the next ones we find, we need a strategy to find a balanced, representative sample of these. And then we can start doing some work around context, purpose, reliability.

But of course, that’s just academic consensus. What do academics know? Moldbug has a problem with consensus (and, indeed, academia) – drawing on a lay perspective of consensus being innately suspicious, and an absence of substantial counter-evidence being doubly so.

As he puts it in relation to Anthropogenic Global Warming:

“The unusual trustworthiness of science, despite the fact that scientists are humans and humans are not generally trustworthy, exists when (a) hypotheses are falsifiable, and (b) the professional institutions within which scientists operate promote, broadcast, and reward any falsification. We can trust a consensus of scientists on a problem for which (a) and (b) are true, because we are basing our trust on the fact that, if the hypothesis is false, a large number of very smart people has tried and failed to discover its error. This is not, of course, impossible. But it is at least unlikely.”


But there speaks a guy who doesn’t hang out with academics much. The ones I know love controversy. They love being the voice speaking out against the tide. It’s a great way to get keynote gigs and well-cited papers. Disagreement is the lifeblood of the only academia I recognise. Hell – I’ve never seen a bunch of academics agree on which pub to go to, on the correct citation method for journal articles… If you want to make absolutely sure of academic argument, try attempting to enforce consensus from above (I used to work for a funding council…).

Open education and open culture have put a great deal of information into the public arena. Much of it is primary in nature – undigested, undifferentiated. The terms of common open licences do not allow us to care how it is used after release – and it would perhaps be unfair to castigate Project Gutenberg or MITOCW for the genesis of the alt-right and the birth of Trumpism.

Mike Caulfield is doing as much as anyone to work out what we need to do afterwards.

“My solution to the post-truth crisis is to develop a culture of collaborative explanation and exploration via development and use of new and different tools.

My belief is that humans have a couple modes of working with truth. Some are adversarial and propagative, and some are exploratory and collaborative. The adversarial mode is killing us.”

(Mike Caulfield)

Collaborative explanation is academia at its best. It is the guts of the scientific method – where a perfectly executed rebuttal is a cause for joy as truth is further revealed. It is how humans really get stuff done, whereas the adversarial mode (election campaigns are the best example that comes to mind) is how we stop things being done. It is a mode of enquiry that it is important we inculcate in the next generation of roaming autodidacts before they become Nazis. Or some of them do.

Adversarial explanation is academia at its worst. It’s the “I know something you don’t” mode of debate – praising esoterica, and using sources as weapons. It’s the mode of debate that leads to conspiracies and polarisation – “hidden secrets” and arcana. It can make for wonderful storytelling, mesmeric speaking and writing. But fundamentally it is a model of smartness that celebrates breadth and denigrates synthesis. And it is classic Moldbug. He throws sources and connections at you so fast that unpicking and critiquing it all becomes an exercise in translation – with text like that you can only read and react. His obscurantism is a false signifier, adding the illusion of credibility to his painful and regressive positions on race, crime and governance.

Caulfield’s recent work has been focused on the development of online tools to foster what he prefers to term “choral explanation” – multiple voices synthesising into consensus.

But the will is as important as the tool. And teaching roaming autodidacts the will to collaborate, corroborate and develop as a natural everyday response to a primary source is the next great task of the open movement.

The HE Bill in the Lords: Once More, With Feeling

Previously on the HE and R Bill… – I’ve been following progress with considerable interest, but the coverage has been so good that I’ve not felt inclined to write anything here. The text of the bill passed through the House of Commons without any changes, despite significant issues with both the drafting and the underlying policy. After some feisty exchanges in the Lords the bill has been substantially altered, both by the government and – on six occasions so far – by others. A small government majority in the commons means that these could be overturned again during “ping-pong” (the adorably named process by which the commons and lords come to agreement via conflicting votes and all-night sittings). But this is neither the most urgent nor the most visible legislative matter the government are trying to manage…

As the House of Lords spend another afternoon going through the motions, it is interesting to consider our own Higher Education and Research Bill alongside the wider work of the upper legislative chamber as we speed towards the Easter recess.

As I’ve hinted in previous weeks on Twitter, I’ve got a theory that pressures on parliamentary time will prove to be a defining factor in the shape of the future Act. If peers hold together in continuing to block key aspects, we need to ask how willing the government are to spend the time needed to push through their own vision, and to throw the various opposition and cross-bench amendments – if they don’t cut the mustard – out.

Since the Commons stages – where, lest we forget, not one single amendment was made to the text of the bill – we have seen significant government climb-downs and alterations. Pressure from peers is beginning to realise the kind of government rethink that was asked for in the commons committee and third reading. No longer under the spell of Jo Johnson’s draftsmen, we can never tell quite where the line will be drawn as regards what initially appeared to be essential parts of the policy framework.

At this point one has to step back and consider what it is that the Bill is actually trying to do, and why. Which – in all honesty – is surprisingly little. Some regulatory changes, based more on a wish to remove HEFCE and tidy up a PowerPoint slide than any new possibilities offered? Changes to sector entry and exit tickets – and of course the TEF! – as yet another attempt to make HE work like a market? In many arguments, most notably those made in independent HE, this is sold as a revolutionary policy package – but after this is complete will the sector honestly get to rest in peace for a few years?

As the dawn of prorogation approaches, there may be cause for the government to lament this lack of vision. The Higher Education and Research Bill is just one of many bills the Government are shepherding through the Lords at the moment, and may not be what they feel is the most pressing legislative issue currently standing.

The Criminal Finances Bill, for example, awaits further committee sessions and a report. The Third Reading of the Digital Economy Bill on 29th March may not run sweetly. Bills exist around Lords Reform that could become more important to the government after recent “rebellions” – votes on the HE Bill have been just one part of a series of votes that have left Lords walking closer to the fire. Following issues around business rates in the budget, Sajid Javid’s Local Government Finance Bill could be resurrected and forced through before May – his Neighbourhood Planning Bill (with a third reading this week in the Lords) could be equally controversial. A Prisons and Courts Bill is currently in the Commons but could be progressed with haste if the increasingly clear problems in that system continue to make headlines. Closer to home, the Technical and Further Education Bill has a committee report in the Lords at the end of March. All this alongside numerous debates, committee reports and other parliamentary business.

And, of course, the European Union (Notification of Withdrawal) Bill. This, more than anything, is the clear government priority currently. Amendments in the Lords gave a lot of people something to sing about last week, but when “ping-pong” begins – which may be seen on both Monday and perhaps Wednesday, as interventions throughout the HE Bill report – no-one can be sure at what point consensus will be realised. Lords will be keen to demonstrate their value as scrutineers in a de-politicised second chamber on this once-in-a-generation constitutional issue, but will be anxious not to be seen as defying the will of the people. It’s a difficult line to walk.

If that bill is delayed – or if other means are found to delay the Prime Minister’s Brexit timetable – this could mean further work for peers. Couple this with an already packed last 25 or so days of sitting (we don’t know for sure exactly how long, but this is a best guess) and something will have to give.

If that something is the HE Bill (either losing it entirely, or mollifying the Lords with even more substantial amendments than were introduced before the report stage) then where do we go from here? It would be an ignominious coda to Johnson’s first substantial legislation, and a sorry end to a project that was perhaps more concerned with messaging and effect than genuine regulatory improvement.

Honestly, if Daniel Hannan can – with a straight face – compare Brexit to the Lord of the Rings I see no reason why I can’t compare the HE Bill to an episode of Buffy…

Thoughts on open education at UC Berkeley and the ADA

Like many people I’m disappointed by UC Berkeley’s decision to remove a range of “legacy” openly licensed online resources – linked from their portal – from public access on YouTube and iTunes U. This represents 20,000 audio or video recordings of lectures from 2004 to 2015, which will be moved behind an institutional sign-in. And in particular I feel that comments like “Finally, moving our content behind authentication allows us to better protect instructor intellectual property from ‘pirates’ who have reused content for personal profit without consent” are a very bad look, no matter what the context.

Lecture recordings from on-campus provision are generally not great in quality or educational utility unless they have been specifically packaged for online/remote consumption. This process would likely involve exactly the kind of accommodations that are rightly required under the Americans with Disabilities Act (ADA) – at the very least transcription, and the deliberate use of teaching methods and resources suitable for remote learning. And consumer channels like YouTube and iTunes are hardly the best means of distributing a full package of learning materials. I should emphasise that this is good practice for supporting all learners.

A further issue with lecture capture is the likely use of copyrighted material within slides. For a “mass” operation like the one at Berkeley these are notoriously hard to police and check from recordings alone – and as the slides were never provided alongside the recordings (a practice that would have gone at least some way towards addressing the ADA issue), even basic tools like reverse image search were unavailable.

The issue was brought to the attention of Berkeley and the Department of Justice by the National Association of the Deaf. A review of the case found that the complaint was a legitimate one, and that Berkeley (as a public body) were not meeting the requirements of the Americans With Disabilities Act, Title II.

Since 2015 Berkeley had already stopped posting new lecture recordings on the publicly available channels – this, coupled with the bizarre statement on piracy (how was the university losing money? why were they not enforcing the BY-NC-ND license they had chosen?), leads me to suspect that the ADA judgement is just a useful justification for a decision that had already been made.

However, Berkeley will continue to offer lecture capture as a service to enrolled students, and will continue to share material via their EdX imprint, BerkeleyX – noting in the statement regarding the withdrawal of the legacy content that: “Berkeley will maintain its commitment to sharing content to the public through our partnership with EdX. This free and accessible content includes a wide range of educational opportunities and topics from across higher ed.”

EdX, of course, famously had their own run-in with the ADA back in 2015. Despite claiming that they were not subject to the ADA as they were not offering a “public accommodation” (and hell, deaf people hardly buy any certificates of completion…), the DoJ required that they sign an agreement to provide accessible accommodations. Note that they claim that this does not extend to course content, but the DoJ disagrees.

Current UC Berkeley offerings on EdX do not meet ADA requirements. Though a decent transcript is offered, and this is downloadable, neither audio nor text-to-speech versions of figures presented during videos are available. The example below is from the first video I encountered on “GG101x: The Science of Happiness”.

The table in the screen grab above (which I am claiming as “fair use”) is taken, unattributed (other than in the well-hidden Course Bibliography rendered in that legendarily accessible file format the PDF!) from Uchida, Y., & Ogihara, Y. (2012). Personal or interpersonal construal of happiness: A cultural psychological perspective. International Journal of Wellbeing, 2(4), 354-369. doi:10.5502/ijw.v2.i4.5. The IJW make all articles available under a CC BY-NC-ND license… a license that the legal team at Berkeley presumably know well 🙂

So at least one of Berkeley’s offerings on EdX does not meet the ADA requirements that EdX were required to meet, and also uses openly licensed content in breach of licensing terms (the attribution did not meet expected best practice, EdX is arguably a commercial concern and a derivative work was used). Apparently “UC Berkeley […] content has been discovered on for-profit websites, which use either a subscription fee or on-page advertising.” so I’m super glad I didn’t pay for that certificate…

For those interested in the legal background to the Berkeley decision, you could do worse than to read up on the way the 2015 EdX ruling offered notice that a website hosting learning content could be seen as a “place of education” for ADA/s502 purposes. I enjoyed this article from Cooley LLP and you might too.

And for those interested in the amazing history of this open education initiative at Berkeley, Audrey has you covered.

Rethinking “Edtech”

I was asked to offer some perspective on the wider idea of edtech – what follows covers investment management, theories of learning, education reform politics, innovation theory and around 80 years of history. Some may be surprised at the scope – I would argue that it is not enough to understand how; to truly make an intelligent decision we need to at least consider why.

I should note that I was asked to give a personal and idiosyncratic view, so just to be absolutely clear these are my own opinions only. 

As an investment category, defined perhaps by the breathless coverage of EdSurge and TechCrunch, EdTech is old news. The last boom years, such as they were, largely sit between 2012 and 2015, with the latter year seeing $18bn of investment attracted into the sector. Those with longer memories may recall a similar boom at the turn of the century, aligned to the wider “dot com” bubble. (And fans of TechCrunch may be interested to learn of the FinTech boom that immediately followed it.)

The boundaries of the category are variously drawn, but generally encompass teaching and administrative adoption of technology and infrastructure. There is a smaller, but separate, market segment encompassing research technology with links to commercial R&D, cloud storage and big data analytics and metrics (which you could trace back, if you wanted, to ISI). Academic research infrastructure and support in itself is too small a market to consider separately for most mainstream investors – and is primarily supported by government funding.

Investors of the sort that cover EdTech are operating with a high appetite for risk, and will expect a low number of their investments to offer significant returns. This plays into the fail-fast ethos in wider Silicon Valley, but tends to favour vivid ideas rather than well-considered interventions, and incremental innovation rather than revolutionary ideas (which would have a longer-term return). Very few “EdTechs” are actually making a return on their investments; a scant few (online course provider Udacity, for instance) are even turning a working profit. The model for funders is to grow mindshare and a user base before being acquired by a larger tech company (Google, Microsoft, Blackboard…) – again, as in wider Silicon Valley.

As a historic project, your modern edtech (in the sense of mechanical or digital aids to the process of education) sits very much on a line drawn from a behaviourist (Skinnerian) model of learning. Drawing on ideas of repetition and reward, it underpins drill-and-kill learning tools such as Duolingo, and many test preparation or content delivery packages.

A later strand drawing on constructivist and social constructivist theories of learning (Durkheim, Illich, Papert through perhaps to someone like George Siemens) emphasised the agency of the learner to make sense of the world around them, drawing on networks of peers. The rise of social media around 2008 spurred the development of “connectivism”, a postulated theory concerning the way networks comprising human and non-human members interact, grow and learn (rhizomatically).

Cognitive learning theories (Piaget, also Baddeley, Chomsky) are the basis of the “personalisation” agenda, wherein technology can “adapt” within bounded states to suit individual learner needs. Much of what is described as “AI” in learning – and indeed many of the models of learning that define AI research – is cognitivist.

And outside of learning theories altogether, you have the same drives around efficient management of information that define the wider tech boom. Administrative technology also has the advantage that the burden of proof is seldom asked for – access to information is an axiomatic good.

You could connect these trends together to explain something like the MOOC, which started with an explicitly connectivist underpinning but pivoted quickly (with the pressure of growth and massification) to a behaviourist model, though with a cognitive science gloss via the collection and use of administrative user data.

But why would you? Simply put, these ideas underpin the majority of edtech development. Despite the neo-mania of EdTech as narrative (as Audrey Watters notes, “the best way to predict the future is to write a press release”, and I would agree), it is a surprisingly conservative field in terms of approach, although an army of Silicon Valley patent lawyers would love to convince you otherwise.

Part of the leverage that the field has on education policy makers comes from the wider narrative of Education Reform. Joining parents and educators with genuine concerns about the quality of education to investors and politicians looking to improve the profitability of education, this narrative – which I love to characterise as “Education is broken” – underpins many of the changes to the machinery of education (charter schools, free schools, challenger institutions…) that open it up to “disruptive innovation”.

Harvard Business School professor Clayton Christensen first postulated the idea of disruption, and he applied it to education in his 2008 book ‘Disrupting Class‘. Simply put, the concept of low-end disruptive innovation suggests that any established market can be destabilised by the entry of a new actor offering a similar but inferior product at a vastly lower price. This new actor initially serves a niche interest and does not provide the features of premium products in the marketplace, but through repeated innovation it expands and improves to serve wider needs and increase profitability.

However, this theory has been debunked specifically within education (by none less than Christensen himself in 2013), and more generally as a fundamental narrative of innovation (Jill Lepore in 2014 is flat-out superb). As attractive as the idea of low-cost innovation may be to investors, it has not and does not explain innovation as it actually happens.

Entrepreneurial state theory – as described by Mariana Mazzucato in her book of the same name – sees a role for the long-term, stable nature of state funding in supporting and developing innovation. An example would be the support in defence spending for early cybernetics projects that became VR, networked communication and responsive software (and also pigeon-guided bombs – courtesy of one BF Skinner… but not every experiment is a success) and that underpin much of what became EdTech.

There are people better qualified than me to talk about theories of innovation, but I will content myself with mentioning von Hippel’s lead user theory – broadly, watching the working practices of expert practitioners, identifying where existing processes or technologies are short-cut, then working with practitioners to design tools to simplify these short-cuts.

So what is “an EdTech”? Despite overweening claims around innovation, the easiest way to characterise one is by its intended mechanism. An EdTech uses one or more of the three learning theories above (either knowingly or, more commonly, implicitly) either to sell into existing education providers, or to attempt to disrupt those providers by establishing alternate providers and selling to learners. As hype around the central category has grown, more generally applicable administrative interventions have been branded as edtech.

Actual sales (in terms of money being exchanged for goods or services) are rare, as the focus is on growing a user-base and associated hype in order to be acquired by a larger enterprise. (This is just mainstream Silicon Valley business practice).

But do “EdTechs” improve education? It is difficult to say. Certainly to read the press releases that have flooded the inboxes of education or technology journalists – very few cover both, so it has been possible to exploit gaps in knowledge (see Audrey Watters “What every techie should know about education“) – would indicate that we now live in a golden age of cheap, ubiquitous, personalised and effective learning.  And yet.

Certainly the things that do improve education as a whole are often far removed from the mythologised moment of learning – administrative system interoperability, open licensing for academic content – solving, in other words, known problems as reported by expert practitioners.

(Careful readers will note that I owe a huge debt to Audrey Watters, Phil Hill, Michael Feldstein, Rolin Moe and many others.)

An uncomfortable realisation

Politics and love are the only forms of constraint possible between free people

Sir Bernard Crick, In Defence of Politics


We are the conservatives.

We agitate for the maintenance of the great lasting structures of government – for the rule of law and the authority of judges, for parliamentary procedure and the letter of statute, for the smooth reassuring functionality of legislative and constitutional bodies long since atrophied through disuse. We call for rational military collaboration, streamlined and open international trade, the unimpeded free movement of people. For the slow iterative progress of science and philosophy.

We support free speech, but within sensible limits. We support free expression, but balance this against a right not to be offended. We uphold the democratic will of the public, but measure it against the sage counsel of the technocrats and the learned.

We do this, because the world is in the throes – at last! – of popular revolution, led ostensibly by the workers but fomented by a new breed of public intellectual. And we – the revolutionary left – find ourselves on the side of the establishment.

Sooner or later someone’s going to catch the imagination of these people with some new magic. At the bottom of it will be a promise of regaining the feeling of participation, the feeling of being needed on earth—hell, dignity.

Kurt Vonnegut, Player Piano

(with thanks to Helen Beetham)

Incomplete reading list – 2016

Because everyone is listing and capsule reviewing books that they’ve read, this is a few that stuck in the mind this year. Be warned, there are a lot, and the list is incomplete.

The Polygamist King: A True Story of Murder, Lust, and Exotic Faith in America – John J. Miller

Very short (kindle single) thing – just wanted to read more about the Strangites after researching a post about Mormons and leadership.

In Defence of Politics
Sir Bernard Crick

Recommendation via Helen Beetham. One of those plain, smart, books that makes you rethink your radical stances. Now I have the tools to defend the political mainstream, should I need to.

Player Piano
Kurt Vonnegut

Vonnegut’s first (I think) novel. Human dignity after automation, a surprisingly modern theme that bled well into the Trump/Brexit WTF themes of 2016.

What a Carve Up!
Jonathan Coe

Number 11
Jonathan Coe

Number 11 was the new Coe, and a purported sequel to “What a Carve Up!” – which is the better book if you like your grotesque political satires.

So You’ve Been Publicly Shamed
Jon Ronson

Interesting, but I kept hoping for him to draw parallels from before the internet age (tabloid hits to courtly gossip) that he never did. Public shaming is one of civilisation’s great control mechanisms – dealing with it as a modern phenomenon is hardly scratching the surface. Also reminded me that I wanted to re-read “The Scarlet Letter”, which I have yet to do.

Brexit: What the Hell Happens Now?: Everything You Need to Know about Britain’s Divorce from Europe
Ian Dunt

Quite. A succinct, yet terrifying, summary. It emboldened my hope that Brexit may not be a thing that actually happens.

A Gentle Introduction to Unqualified Reservations
Mencius Moldbug

A bunch of Moldbug (from *that* research). Still washing the stink out of my brain.

Everything Belongs to the Future
Laurie Penny

Slight. Ultimately disappointing – which was a shame as I loved her writing on the alt-right this year.

The Elephant in the Room: A Journey into the Trump Campaign and the “Alt-Right”
Jon Ronson

Another Kindle single – which I picked up because I felt sure that Ronson (or Theroux) must have interviewed Trump during the late 80s-90s “wilderness years”. They didn’t, but this was as close as I could find.

Literature Against Criticism: University English and Contemporary Fiction in Conflict
Martin Paul Eve

I’d long suspected that literary studies (as mainstream subject of undergraduate study) would have an influence on literature. And Martin Eve got to the bottom of it far better than I could. Following @CowEyePress on twitter alongside reading this was illustrative – and I’d love to introduce the pair somehow (if one of them wasn’t a partially fictional construct).

Clifford D. Simak

Cogdog recommendation. A post-human civilisation led by dogs. I loved it for the sense of hope. Looking back, an interesting parallel read to “Player Piano”.

Joseph Heller

Because 2016.

Voodoo Histories: How Conspiracy Theory Has Shaped Modern History
David Aaronovitch

Also because 2016. As conspiracies hit the mainstream I felt I owed it to myself to understand how they worked. It didn’t help,  but at least I know now.

Homo Deus: A Brief History of Tomorrow
Yuval Noah Harari

Loved this – dataism as a post-human religion and (at last!) a proper definition of liberal humanism. There aren’t many futurists I enjoy (Bryan Alexander and Martin Hamilton are obvious exceptions, if they read this!) but I will look out for other Harari work. I need to pick back up his ideas on animals.

Judas Unchained (Commonwealth Saga Book 2)
Peter F. Hamilton

Every camping holiday needs a stomping great space-opera saga – this was nicely done.

Station Eleven
Emily St. John Mandel

Post-apocalyptic touring orchestra and chorus visits an airport community in the ruins of Canada. Some lovely, affecting, touches.

Warren Ellis

How can I resist a rest-home for burnt out futurists? Story was a bit meh, but the world-building was excellent.

Neal Stephenson

Story was a bit meh, but the world building was excellent (x1000). One of the things that made me want to read about Mormons in 2016, as another example of a fully documented creation mythos.

The Best of All Possible Worlds
Karen Lord

This one got mixed up with “Seveneves” in my head, and lost out to a better described multi-species human future.

The Water Knife
Paolo Bacigalupi

Recommendation from (I think) Pat Lockley. Nicely realised near-future, with a great story. Another reminder that our future may be one led by gangsters.

Stoner: A Novel
John Williams

Token campus novel – I think everyone read it this year. If you haven’t, you should.

Martin Paul Eve

I should admit that I’ve read pretty much everything that Martin writes.

Nothing is True and Everything is Possible: Adventures in Modern Russia
Peter Pomerantsev

Properly loved this. Russia is another key to understanding the modern world, and I keep going back to this collection of insights from Russian scripted reality television. I’ve a feeling Adam Curtis read this too.

Snow Crash
Neal Stephenson

Everyone kept referring to this. Another Neal Stephenson book – great world-building, meh storytelling.

Neptune’s Brood
Charles Stross

If you only read one post-human thriller about forensic accountancy and robot religion: make it this one.

The Nightmare Stacks: A Laundry Files novel
Charles Stross

The new one.

Just Say No: The Spectator On The 1975 Referendum
The Spectator

Because I wanted to understand Vote Leave from a historical perspective.

Adrian Barnes

Most people forget how to sleep. The collapse of civilisation ensues. In Canada. I guess I just love books set in Canada.

Richard Powers

The digressions about biological warfare were less interesting than the digressions about C20th classical composition. But the digressions about C20th classical composition were amazing.

Red Plenty
Francis Spufford

Series of short stories about Soviet Russia and cybernetics. Again, stuff about Russia.

The Man Who Wouldn’t Stand Up
Jacob M. Appel

Kind of (on reflection) a companion piece to “So You’ve Been Publicly Shamed”.

Look Who’s Back
Timur Vermes

2016 was the year that Godwin’s law was repealed. This is a translation of the hugely successful German satire on the return of you-know-who.

Capitalism, Socialism, and Democracy
Joseph A. Schumpeter

Austrian school LOLs.

Flat Earth News: An Award-winning Reporter Exposes Falsehood, Distortion and Propaganda in the Global Media
Nick Davies

Was expecting great things from this, but it didn’t really tell me anything I didn’t know about the death of journalism.

The Trial
Franz Kafka

An insight into my own personal 2016.

Superforecasting: The Art and Science of Prediction
Philip Tetlock

A fair few books about prediction this year. None of them really matched up to Nate Silver – see below.

Shoeless Joe
W. P. Kinsella

The book that inspired “Field of Dreams”, which got stuck in my head early this year. It was an excellent read and I would recommend it.

Respect: The Life of Aretha Franklin
David Ritz

That rare beast – a biography by someone who loved their subject but was not blind to their flaws.

The Silo Effect: Why putting everything in its place isn’t such a bright idea
Gillian Tett

Read it. Didn’t help me understand why people worry about silos so much.

Frank Zappa: The Complete Guide to his Music
Ben Watson

Zappa The Hard Way
Andrew Greenaway

Somewhere in my head there is a post about Zappa’s 88 tour band and parallels to both Trump and contemporary academia. It’ll happen one day. This was source material (the Ben Watson should be “The Negative Dialectics of Poodle Play”, but no-one should read this more than once…)

Broken Vows: Tony Blair The Tragedy of Power
Tom Bower

Can’t remember why I read this – probably looking for more stuff about Michael Barber. Wasn’t any.

Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets
Nassim Nicholas Taleb

The Signal and the Noise: The Art and Science of Prediction
Nate Silver

Two more books about prediction – both of which I would want to go back to. Taleb’s authorial voice annoys me – I couldn’t get through “Black Swan”, for instance – but this was readable.

Cryptocurrency: How Bitcoin and Digital Money are Challenging the Global Economic Order
Paul Vigna

Bitcoin: The Future of Money?
Dominic Frisby

Because this year we all learned about Blockchain. Either of these would have done admirably.

The Origins of Higher Learning: knowledge networks and the early development of universities
Roy Lowe, Yoshihito Yasuhara

A history of higher education that is clear that higher learning is globalised by nature. Great to move me from my early European HE history fixation.

Boomerang: The Meltdown Tour
Michael Lewis

One of my favourite financial journalists on the global, personal impact of the 2008 financial crash. The stories about Iceland particularly stuck in my head.

The Working Life: The Promise and Betrayal of Modern Work
Joanne B. Ciulla

“Timely” is the word that comes to mind. Would have liked more on “the history of the future of work” (sounds like an Audrey Watters post!)

Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama
Daniel Kreiss

Really liked this on modern political campaigning. I think this was a rewritten PhD.

The Myth of the Rational Voter: Why Democracies Choose Bad Policies
Bryan Caplan

He didn’t know either.

Ready Player One
Ernest Cline

The novelisation of an imaginary John Hughes film from a story by Mike Judge. Recommended by both my son and Bryan Alexander.

Dangerous Medicine: Problems with assuring quality and standards in UK higher education
Paul Greatrix


Debt: The First 5,000 Years
David Graeber

Want to go back to this one too. The parallels between societal organisational structures, religion, learning… and, of course, the idea of debt.

Surrender the Pink, revisited.

A purported romance novel, Carrie Fisher’s “Surrender the Pink” is both smarter and bleaker than you might expect. As you would expect, the writing is taut and bitterly funny; as you wouldn’t expect, the central message is one of love without hope and of giving up on all of your dreams.

As Fisher was a woman, her fiction is always seen and reviewed through an autobiographical lens – “Surrender the Pink” being canonically The One About Being Married To Paul Simon. Quite why anyone would assume that anything a smart, inventive woman writes is somehow only about her (and her in relation to a man, at that…) can be left perhaps as an exercise for the reader.

And the reader does get a lot of exercise. The writing is manic and “bitty” – in the sense of feeling a little like a compendium of pre-imagined bits, fragments of a (blistering) stand-up routine playing out as the inner life of Dinah Kaufman, soap-opera treatment writer and inveterate self-analyst. You need to pay attention to the many coded layers – of parody, irony, reference and wordplay – in order to follow what is a fairly slight but powerful tale of love (whatever that may be) lost (whatever that means).

A central conceit neatly upends that “women’s writing as autobiography” cliché – the central relationship of the book is paralleled as the central relationship of the (perfectly awful) daytime TV drama called “Heart’s Desire”. The book ends with Dinah enjoying a no-strings affair with the actor who plays the proxy-of-her-ex-husband character, whilst the assumed knight in shining armour (from a narrative perspective, at least) arrives late, irrelevant and useless. The only possible happy ending is Dinah’s constant worries about relationships becoming a background theme to her life, rather than a dominant melody – dialling her feelings down to a dull roar, as she puts it. Or it could be read as a retreat from messy reality to neat fiction. Or the realisation that everything that she wanted was unimportant. Or unattainable. Or both.

Fisher likes to land highbrow references, and parallels, as a further layer within the circularity of ideas. It makes the point that we play out, in relationships of whatever sort, the tropes and archetypes of fiction – emphasising the supreme power of lived fiction to shape reality. And in the same breath, shows the weakness of fictional devices – highbrow and lowbrow alike – in analysing interpersonal interaction.

Arguing about relationships between men and women from what amounts to a post-feminist position, Fisher’s carefully constructed similes and allusions can be read as arcane knowledge – arcane in the purest sense of being useless to her protagonist. Dinah is able to contain multitudes (indeed, literally – her moods, Pam and Roy, separated and understood as both being necessary) and contradictions in ways that fictional characters generally are not.

If the book has a weakness, it is the power of Fisher’s writing – her quick, wordy humour occasionally overwhelms the voices of individual characters, making it difficult to care too much for anyone but Dinah – and to swallow enough exasperation to care much about Dinah. There’s an anti-Emma-esque quality to her dogged insistence on failing to make sense, in fascinating ways, of the lives and relationships of herself and others.

Post-truth in the truest way, the book speaks of places where truth fears to tread – and of the horror of being someone’s “black swan from hell”. It’s something I have re-read often, for reasons I’m grateful for even if I’m not sure I entirely understand them myself.

Three Kings

… came from the East Coast. I never thought I’d end 2016 writing about the assassination of a presidential candidate and church governance in mid-C19th America, but – I guess – 2016.

So, that Helen Beetham (her newish blog linked there – add it to your RSS reader straight away) asked me what the deal was with the Evangelical Right and Trump. So I went and read some stuff about how some of them feel about it, and tried to make sense of how someone from such a background could get to the position of voting for him.

Turns out there are significant differences of opinion within the community: on the ethics of voting for someone who espouses a decidedly non-evangelical lifestyle, and also on the very usage of the term Evangelical – the latter perhaps drawing to a close the “Moral Majority” era that started in the 80s with Jerry Falwell.

But then it occurred to me that most of the Abrahamic religions have an element of waiting for a King about them, or of wanting to go back to having Kings again (1 Samuel 8, for example) – so perhaps there is a religious cross-over with neo-reactionism? And I found a couple of posts talking about “taking the bread pill” (even the terrifying extreme edges of the church still love dad puns). That link there is to a pretty serious racial nationalist movement, from what I can tell. So be warned.

But this seems more like a fringe curio – even though Moldbug’s use of secular Presbyterianism as a proxy for a motivating force for the American establishment does kind of build the scaffold from the other side a little (and many Holy books do lay down a pretty solid base for racism and misogyny if you read them wrong).

“What is to be done? Who of all these parties are right; or, are they all wrong together? If any one of them be right, which is it, and how shall I know it?”

I’ve been reading a lot about the early Latter Day Saints as a path into trying to understand governance, publication and power in America. Almost uniquely amongst major strands of religion, the followers of Joseph Smith left a great deal of documentation concerning the corporate structure and legalities stemming from the organisation of the nascent Church and the separation (or otherwise – let’s not forget Joseph Smith’s 1844 presidential candidacy) of Church and state.

Smith was – of course – assassinated in Nauvoo, Illinois not long after he declared his candidacy on a decidedly non-theocratic platform that included radical prison reform, an end to slavery, small government, Native American rights, the establishment of a central bank and the possible annexation of Texas and Canada. In a presidential race that reflected the growing controversies around what became known as Manifest Destiny and the Slave Question, these were populist liberal (small-l) views presented in order to make a concerted reach for power, and they emphasised Smith’s conventionality as a candidate.

The back story, of course, is a little more complex: Smith, as Mayor (or “General”) of Nauvoo, sought redress for the treatment of his people in Zion, Missouri. As no other presidential candidate appeared willing to promise support for the Mormon people, the decision was made that he would run for office.

Post-assassination (as the culmination of a hugely complex story involving the restriction of press freedom) the Mormon people sought new leadership (and possibly prophecy) from the established structures of the Church.

Over a period of months, three major claims to leadership were made – underpinned by three ideas of the nature of governance, which are what I am mainly aiming (600 words in!) to talk about.

  • Sidney Rigdon was the most senior remaining Church official after Smith and his Deputy were murdered at Carthage Jail.
  • Brigham Young could be seen as first-amongst-equals within a Council of Twelve – overlapping in personnel with various other bodies as senior advisors, counsel, and administrators working on behalf of Smith.
  • And James Strang‘s claim was made on the basis of spiritual revelation, a continuance of the direct prophetic tradition.

So we have hierarchy, consensus and ideology as three motivating principles for organisational decision making (which really would be a better plot for a musical…). And, as each candidate took a part of the Church with them, we can almost see how the logic of each plays out over time.

  • The Rigdonites headed east to Pennsylvania, but their alternate church did not last, sustaining through later years via claims of continued prophecy rather than hierarchy. The Church of Jesus Christ (Bickertonite) is the surviving remnant of this strand, taking an informal name from a former Rigdonite – William Bickerton – who reorganised the Church and formalised the doctrinal split with the mainstream Latter Day Saints. (For rock trivia fans, 70s shock-rocker Alice Cooper was brought up as a Bickertonite.)
  • The Strangites headed north-east, to Michigan. Adherents were energised by a string of revelations and newly discovered scripture, and the group settled on Beaver Island in Lake Michigan. Strang saw his position as king rather than president, though his hyper-localised theocratic monarchy did not prevent him from sitting in the Michigan House of Representatives and founding Manitou County. The increasing commercial importance of trade in the area, and Strang’s increasingly alarming diktats (including forced conversion for all island residents and the perennially popular stipulations about the nature of ladies’ bloomers) led to his assassination by two lapsed (escaped?) church members.  Strangites do still exist in two factions, though numbers are small and no presence remains on Beaver Island.
  • The majority of Smith’s followers went with Brigham Young to what became Salt Lake City, Utah, and this branch constitutes the majority of Mormons we know today. The church grew and flourished under Young’s organisational skill and management.

So – “Kings bad (with a tendency to despotism), Hierarchy ineffective, Consensus good” is one secular lesson that could be gleaned here. Getting things done in any walk of life involves organising, motivating and managing people and it turns out that Mormons are pretty good at that even by worldly standards.

In numerous pieces about his faith written throughout his career, Clayton Christensen comes back to the idea of “state-of-the-art” Christianity. He expresses the restoration of the gospel by Joseph Smith as an example of “The Lord’s disruptive technology“. But it could also be argued that it was continuation and consensus, not disruption, that led to the success of the church.

Which is perhaps a lesson our national leaders may wish to take to heart.

“The Bannon of Heaven”

This – incredibly – is only one of the blog posts I currently have on the boil that touches on the Church of Jesus Christ of Latter Day Saints, though this one is – I think – merely a coincidence.

Like nearly the entire western world I’ve been thinking about fake news and the negotiation of constructed realities as performed online, and like maybe 40-50 smelly edtech hippies I’ve been wondering how to apply what I learned from #ds106 to this now rather pressing problem.

But then – via Cogdog-style happenstance, and prompted partially by the man-dog himself’s recent and intriguing post on a “Networked Narratives” course he is running with Mia Zamora for no other reason than it needs to be done (I hope to be there) – I got into a bad-old-days-of-blogging nostalgia-fest and looked up whether anyone had re-invented Google Reader yet. I’m on inoreader at the moment, since you ask.

Whilst meandering, I stumbled across the term “bloggernacle” – which, well, I’d read anything called “bloggernacle” and I think I speak for us all in saying that. Turns out that there is a huge Mormon blogging scene. Open education folks will know that the LDS (Latter Day Saints, which I understand is the more accurate way to describe people of that faith) and OER are intertwined in various wonderful ways, so I was mildly interested to see whether the online activities of the two shared a common source. Instead I found a link to something altogether more #ds106-ian.

In mid-2005 several prominent LDS bloggers put together a group blog (called “Banner of Heaven”) based around a bunch of invented characters. The idea was primarily “to explore the potential of blogging as a story-telling form”, with subsidiary goals of reflecting back what they perceived as the primary concerns of LDS blogging at the time. This link is to what you might call the learning objectives of the exercise – read it. (Most of this post comes from a 2010 “behind the music” style retrospective by one of the original authors on By Common Consent. It was only at this point that the original text was made public.)

They came up with six characters:

  • SeptimusH – a shy inactive former missionary, who all too often ends up dealing with dead cows.
  • MirandaPJ – a feminist from Lewiston, Idaho, who confiscated her husband’s xbox.
  • JennMailer – a perky but insecure young woman with very traditional views.
  • Mari Collier – Miranda’s sister, kind and faithful, but with a troubled past.
  • Aaron B Cox – representing the more combative end of blogging and the more… unique… expression of scriptural fundamentalism.
  • Greg Fox – a non-church member who loved to hang out with the others, but was often disappointed with what he found.

These were both (semi-)realistic positions current in the LDS online milieu at the time, and astutely drawn comic characters in their own right. I’m sure, coming to this from a position 10 years or more on, I’m missing a lot of the subtlety – and would never be able to spot the point where the stereotypes were amped up to the point of lunacy and people began to spot that something wasn’t right.

Yes – the group blog was not explicitly presented as “fake” – people believed that the characters were real, and began both to worry about the situations described, alternately empathizing and judging, and to share their own stories in response.

From an educational point of view, this is great stuff. But for an online community in the first flushes of blog enthusiasm, perhaps not so much. Another LDS blog, Nine Moons, did the inevitable exposé, and the initial comments, from the hip young things the joke was aimed at, were fairly good natured. But by the time they began the “guess the famous blogger” competition things started turning a little more sour, and those outside of the community began to take a view. (Church and Federal) Legal issues were brought up. Senses of community were lost. Hands – indeed – were wrung. Pearls were clutched.

But reading the comments to some of these posts, ten years on, is uncomfortable. There is a genuine sense of betrayal. People that were accepted as friends are no longer “real”. Ideas of what constituted a part of the lived experience of peers needed to be re-examined.

It’s not hyperbole to say that this “experiment” had a long and lasting effect on the community that was forming around it and other writers.

One of the co-authors, writing a year later, notes:

Here is the interesting part: no one really remembers much about Banner itself; instead, what everyone recalls is the outrage. Either you remember the deceit, or you remember the pound of flesh publicly exacted from the Bannerites. Few of us recall reading Banner or the ideas laid out by the characters. Bannergate has sucked the work dry.

The power of the reaction, the brutality of the analysis, destroying the meaning of the original text.

All of us have the urge to create a new reality online – either explicitly in coming up with something like the saga of Dr Oblivion, or in the work of Helen Keegan; or implicitly, in presenting a version of ourselves that is just plain nicer/funnier/smarter/more interesting than the decaying sack of bones, flesh and Doobie Brothers lyrics that exists in three-dimensional reality.

And when you do create a reality, do you pull away the mask and risk confusion and alienation? Or do you run the risk of keeping the story going, ending with some of the horror we have seen emerging from known falsehoods in 2016 (this week’s shitshow: pizzagate)?

Pizzagate brought home to me that the fake news scare is just digital storytelling gone awry – and gives me the hope that I know some people that may have a handle on what to do next.

Teaching digital storytelling is probably one of the most important things anyone can be doing right now… and for my part (without a role of my own) I want to volunteer to help out those of you who are taking this to the streets.

(bonus conspiracy theory)