All posts by dkernohan

Towards a Paleoconnectivism Reader #opened14

This is a complement to Jim Groom’s notes from our joint presentation (sadly missing one Brian Lamb) at OpenEd14. There’s a lot more stuff I want to write about from that conference, and from the awesome UMW Hackathon I was lucky enough to participate in afterwards. But this is a start.

On birth myths

Last year in Park City we were honoured to be able to hear Audrey Watters speaking about the apocalyptic preoccupations of the culture that has grown up around education technology.

Our work here today is a look at the other end of the mythological journey – the birth myths of open education. We know them well – Sebastian Thrun inventing massive open online learning in 2011, George Siemens inventing massive open online learning in 2008… MIT (and/or the Hewlett Foundation) inventing sharing learning materials in 2002…

Birth myths, even more so than apocalyptic narratives, are ahistorical. They tie in with a phallogocentrism of the concept of creation as a single act by a single person (generally a man…) rather than a whole set of pre-existing conditions and preoccupations.

Paleoconnectivism is an attempt to recontextualise our current work in looking at the pre-creation history of the concepts and interests we share. It’s an attempt to begin to clear the way for a literature, a research base that connects with other work in cognate fields.

As George Siemens wrote recently:

 “I can’t think of a trend in education that is as substantive as openness that has less of a peer reviewed research base. Top conferences are practitioner and policy/advocacy based. Where are the research conferences? Where are the proceedings?”

We could add – where are the roots in the fields that openness sprang from? Where are the connections to long-standing work in copyright reform, education studies, communication studies, philosophy?

Larry Lessig and the First World War (a worked example)

What were the causes of World War 1?

That’s right, the cause of World War 1 was ethics in games journalism.

Or at least, ethics in journalism. The power of the fourth estate.

On the first day of this conference, Larry Lessig talked about “tweedism”, the idea that the interests of those who funded politics would always prevail over the choice of politics offered to the electorate. His analysis omitted the power of journalism of all forms to shape politics, and even to start wars.

Alfred Harmsworth began his career writing for Tit-Bits, a UK periodical that collected the best of other journalism from around the world, based on reader recommendations and occasionally reader contributions, and presented it in weekly issues. [Students of the history of copyright will note a parallel with late 18th-century journals like Mathew Carey’s American Museum, which excerpted UK-copyrighted scientific materials and republished them in the largely (at the time) lawless US. That was part of how the US became a superpower, and is another story I also didn’t get to tell last year.]

Basically, it was Reddit.

He moved on from there to found what was essentially Quora, a periodical called “Answers To Correspondents” where readers could write in to ask or answer questions of and for other readers. This quickly became a hugely popular publication, and the profits from it enabled him to buy and found a range of UK newspapers including the Times, the Daily Mirror and – most terrifyingly – the astoundingly popular Daily Mail in 1896. He was ennobled as Lord Northcliffe.

Throughout the early 1900s, all of these papers pursued a belligerent and, frankly, xenophobic line against the rival European power of Germany, using their near-blanket control of public opinion to force more and more hawkish policymaking from the government of the time.

One of the few papers he didn’t control, the Star, noted:

 “Next to the Kaiser, Lord Northcliffe has done more than any living man to bring about the war”

During the war his papers brought down the British Government of Asquith over an alleged shortage of munitions, and had David Lloyd George installed as minister for munitions in the following coalition government. When Lloyd George became Prime Minister in 1916, Northcliffe turned down a proffered ministerial post and was made Director of Propaganda.

Not Lessig’s “green power”, not the power of popular opinion – something else. The curated and managed mass opinion used to shape policy. (Even now, it is widely considered that the Daily Mail receives and prints more readers’ letters than any other UK paper.) Somehow this all feels very modern, and very relevant as we consider popular resistance to a more progressive agenda. And, though I loved Lessig’s presentation, this was an aspect of policy making that his analysis missed.

The Sheer Pace of Change (back to edtech)

One means of shaping popular opinion is to emphasise the sheer pace of change. Again, Audrey touched on this last year – but consider this from Martin Bean of FutureLearn and the UK Open University:

 “Perhaps the most difficult thing for those of us in higher education to get to grips with is the sheer pace of change”

He’s right, in a way. Things change so slowly. Old battles are refought, old divisions redrawn. Old ideas are lost and, perhaps, rediscovered.

“Educational institutions, too, are expected to change themselves so they can somehow be one step ahead of (or just catch up with) where people already are. Resistance to change is presented as resistance to what is natural and inevitable, like fighting a rising tide or an avalanche (yes, these are the same metaphors used in MOOC-hype articles – no coincidence). Universities are depicted as recalcitrant in the face of changing external circumstances, the latest of which is the ascent of the digital” – Melonie Fullick

There is a vested interest in a fast rate of change, and the interest comes from – as always – people with things to sell. Education is more like a glacier than an avalanche. Change is slow, but relentless and final – carving fissures in the landscape that remain long after the reasons are forgotten.

The Time of the Cyclops (in the country of the blind…)

Martin Bean worked for the Open University in the UK, an institution that began as the “University of the Air” – shaped by and inspired by technology.

 “Between church and lunch I wrote the whole outline for a University of the Air.” – Harold Wilson

As the University charter sets out:

“The objects of the University shall be the advancement and dissemination of learning and knowledge by teaching and research by a diversity of means such as broadcasting and technological devices appropriate to higher education, by correspondence tuition, residential courses and seminars and in other relevant ways, and shall be to provide education of University and professional standards for its students and to promote the educational well-being of the community generally”

The OU has both a remit to, and a history of, experimenting with new technologies. FutureLearn is one example; another is Cyclops, which was designed in the late 70s and used in trials until the mid 80s. It extended the then-contemporary use of phone conferencing, and was seen as a less technical alternative to the full-on CoSy computer-conferencing (multiple-email list) action in stuff like DT200, which we’ll come to later.

No-one appears to have recorded what – if anything – Cyclops stands for. My best guess is Control Your Class Like Orthodox ProfessorS.

Mike Sharples is now Pedagogic Lead at FutureLearn, but he was also one of the key team at the OU working on Cyclops. Here are some notes from a presentation he gave about it in 2009.

Students preferred it to the alternatives… so why isn’t it used now? Framework for evaluation at three different levels: Micro, Meso and Macro – usability, usefulness, efficiency
  • Micro layer – worked at this level! Familiar system – like an overhead projector, true wysiwis, students operated it with no training.
  • Meso layer – tutors adapted it to their teaching style, tutor station with graphics pad
  • Macro layer – matched students needs, wrong business model, saved student travel costs, but increased OU costs for facilitator and line charges”

A familiar attempt to capture the student attitudes of the time is detailed in Bates’ 1984 book, The Role of Technology in Distance Education:


And from a longer paper [McConnell, David and Sharples, Mike, “Distance Teaching by Cyclops: An educational evaluation of the Open University’s telewriting system”, British Journal of Educational Technology, vol 14 issue 2 (May 1983)]

McConnell and Sharples

Precisely why adding graphics to telephone teaching would make it more effective is not discussed in any of the literature I have been able to find. What telephone teaching added to distance learning was the connection with others – and although early work focused on content, the key was the connection.

Elsewhere in the 80s education technology literature (specifically in the Robin Mason edited “Mindweave“) researchers were clear that further work should draw on fields that study human communication. For example:

 “Finally, in the user arena, we need to continue to do, and to make use of, fundamental work on the characteristics and processes of human communication, at the individual (cognitive and psycho-affective) level as well as on the social (group interaction and cooperative working) level” (Peter Zorkoczy in “Mindweave”, p262)

The student experience research I cited earlier suggested that a visual focus of attention was one of the primary benefits that the Cyclops system could offer. But is all digital content just a “visual focus of attention”? Some pretty lights to look at whilst the learning happens elsewhere?

#DT200 is your new #4LIFE

 “It could be argued that the inherent pedagogical characteristics of CMC are independent of whether it is used in a distance or campus-based environment. They revolve around two very important features of the medium:

* it is essentially a medium of written discourse, which nevertheless shares some of the spontaneity and flexibility of spoken communication

* it can be used as a powerful tool for group communication and for co-operative learning” (Anthony Kaye in “Mindweave”, p10)

Computer Mediated Communication (via tools like Guelph’s CoSY system) was the big noise in the early-mid 80s, with the OU’s own DT200 of legend being one of the first courses to use such a system with (comparatively) inexperienced distance learners.

This was the first time the OU had used CMC as a primary means of supporting learning. Opinions of students were, at best, mixed:

 “A series of questions about the convenience of electronic communications was included in the questionnaire for the course database. These show that about 60-70% of students returning questionnaires found [CMC] less effective for contacting their tutor, getting help, socializing and saving time and money in travelling” (though there were methodological issues around survey timing) (Robin Mason in “Mindweave”, p123)

 “There seem to be a lot of people with axes to grind, particular things which interest them which they put into the conference which aren’t really relevant to the course at all. Sometimes they are interesting to read, but it is pretty much pot luck – you don’t know what you will get out of them” (student quoted by Robin Mason, as above)

“Before we started I had naïve visions of vast amounts of stimulating conversation going on […] By and large this has not happened and I have learnt that electronic communication is both hard work and time consuming. There is also concern about social isolation produced by the new technology, the electronic communicator can spend a large part of his or her time alone, neglecting the family and perhaps having little time left over for face to face interaction.” (student quoted by Robin Mason, as above)

 As Mason concluded, “Conferencing did not have a high enough profile on the course to be a medium for discussing course issues in depth” (p137)

Fundamentally, the people who liked computer-mediated conferencing liked it. It made sense as a supplement to other modes of interaction, especially amongst interested groups. But it was a long time before eLearning (as it became) became a standard offer at the OU, especially given the expense of providing modems and computer loans, and the cost of academic and support time spent responding online.

This was, of course, in line with the more theoretically grounded research writing at the time:

 “Although technology is important for any mediated activity, it cannot automate what is in reality a social encounter based on specific social practices. These social practices are unusually complex because of the difficulty of mediating organized group activity in a written environment. Failures and breakdowns occur at the social level far more than the technical level” (Feenberg in “Mindweave”, p28)

The message that keeps coming across is that this is difficult stuff. Not really difficult technically – at least, not in 2014 – but difficult conceptually. Interacting and learning in this way online is not “like” social media, any more than it is “like” a face-to-face conversation. It is something different. And, until a learner is used to it, it is something that can be very complex.

Networks, not work.

 “This message map analysis shows a complex web of interaction composed of many interconnected linkages. This visual mapping of the comment linkages supports reported observations that online discussions are not linear and that complex referencing occurs […] collaborative learning is predicated upon interaction; analyses of on-line courses indicate highly synergistic and interactive learning patterns. There is dynamic interaction and weaving of ideas” (Linda Harasim in “Mindweave”, pp56-57)

 We still don’t really understand the implications of this, despite the huge growth in social and learning analytics. I’ve seen so many diagrams that just demonstrate that a lot of people talked to a small number of people. We’re still staring at these images of networks as if they will reveal something about what makes them work.
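The “a lot of people talked to a small number of people” shape is easy to demonstrate on a toy example. Here is a minimal sketch in plain Python, with entirely invented participants and replies (none of this comes from DT200 or any real course archive): treat each reply as a directed edge between participants and count who attracts the attention.

```python
# A toy "message map" analysis: each reply is an edge (author -> replied_to).
# All names and exchanges below are invented for illustration.
from collections import Counter

replies = [
    ("ann", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("erin", "bob"), ("bob", "ann"), ("frank", "ann"),
    ("carol", "ann"), ("dave", "erin"),
]

# In-degree: how many replies each participant attracts.
in_degree = Counter(target for _, target in replies)

for person, count in in_degree.most_common():
    print(person, count)
# Most replies cluster on one or two participants, even in this tiny graph.
```

Run against a real discussion archive, a table like this tends to show exactly the pattern described above – a handful of participants absorbing most of the replies – without telling you anything about why the network works.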

You might think that this post is just another example of edtech nostalgia. But I’m not here to laugh at old dreams of the future. To me it is a salutary reminder that so much of the work has yet to be done. We’ve improved the technology, we have yet to improve our understanding of the underlying issues. As “open education” becomes a field of inquiry rather than advocacy, this is the unfinished business left to us by our predecessors.

 “Everything may be possible eventually through technology – but we should ensure that what is done through technology is what we want, no less in distance education as in other aspects of our lives.” (Tony Bates in “The Role of Technology in Distance Education”, p230)

(why a paleoconnectivism reader? well, originally we had some thoughts of launching a call for chapters for a book covering all this stuff. It may still happen. Most of what I have written here is taken from dusty old books retrieved from academic library clearances. Next time someone comes to relearn this I want them to have some chance of finding an artefact to work from. 80s and early 90s history is a bit of a blind spot for the internet, sadly…)

Book Launch: A New Order

I made a book!

You can get a real actual physical object, with pages and a cover and everything. It’s £10 plus whatever post and packing is to where you live. (I think it will also get on to the evil online bookstore of your choice, eventually)

You can also get a pdf version for your various ebook reading devices and apps. That one is free, and you’re just downloading it from me. Whilst it lacks the tangibility of the other version, all of the links do work and the other stuff is as near identical as you may expect.

There is also an ebook version, which will gradually be sucked out onto the big evil bookstores (Amazon’s Kindle Store, the iBookstore, Barnes & Noble NOOK bookstore, the Kobo bookstore…) over the back end of this year. This ebook version does not have any hyperlinks, because EPUBs suck. It’s free (as in £0.00), and free (as in CC-BY-SA).

The content is the least worst of the various posts I’ve done here over the last 4 years (so, of course you could just read most of the content here). Possibly more exciting is an exclusive introduction/reader guide from the man who started all this madness, Brian Lamb.

The superb cover image, and indeed the header image of this blog, have been created by the super-talented Rob Englebright.

A lot of this creation and publication happened at the UMW Hackathon, so big thanks to Jim, Martha, Tim, Ryan, Andy and the team for hosting us at DTLT (big fan). The “publishing books” track was primarily Audrey Watters and myself, so look out for her book (of her talks from this year, which have all been amazing).


OK, so shall we talk about ethics in games journalism?

This is a post about VERY BAD THINGS that are happening/have happened on the internet. Consider this a trigger warning, because I understand not everyone wants to, or can healthily, read about stuff like this. There are very few explicit words or images in the post, but there may be more if you follow the links (which I have kept to a safe minimum here, though you can read all of the horrible things you like if you look hard enough. Or see Audrey’s post.)

Male and “EdTech” readers – yes this post does concern you. You don’t get to write this off as “some horrible men are being horrible to some women”. This is our internet, this is our culture, this is our responsibility.

Gamergate is a thing, and some people are still trying to defend it. It’s a leaderless, structureless, and – quite possibly – lawless entity which seems to exist primarily to (a) threaten and humiliate women involved in the games industry or “games culture”, (b) threaten other women and men who speak up for the women that have been threatened, and (c) say “BUUUUT ethics in games journalism” when taken to task about the first two activities.

Threatening people is no way to win an argument. Defining or agreeing what the argument is about would be a first step, maybe.

The history of the “thing” is already well documented and goes something along the lines of “messy break-up whining goes viral, rape threats happen, but ethics in game journalism”. Turns out a woman may have slept with a man she met who works in the same industry as she works in (I know, right…), and the man was not responsible for writing a non-existent review of an award winning game she produced. Because ethics in games journalism.

Obviously this is the cue for people (and by people I mean a very small number of men) to get angry and start hurling accusations and threats about. As this is the internet and the internet loves drama, a whirlpool of journalism and journalism-flavour content is scattered all around the bits of the web that us normal folks might go on, thus drawing more people in to point out that threats of sexual violence are not ever a good thing and thus having similar threats hurled at them. Because ethics in games journalism.

This annoys me. Ethics in games journalism is actually an interesting thing to write about. Games, and gaming culture, are interesting. There are legitimate concerns about ethics in games journalism, although to be entirely accurate these concerns do not extend to the sex lives of game developers or game journalists.

The real ethical issue in games journalism concerns the huge amount of advertising and sponsorship income gaming publications get from people who make games.

Once there was a time called “the 90s”, and the Amiga was pretty much the gaming platform of choice. So there were loads of magazines that reviewed Amiga games, and in pre-internet times these reviews were the main way in which tedious game-obsessed schoolboys (like your humble scribe) could get information regarding which games (if any) they would want to spend their scarce funds on.

And these reviews were, in the main, awful. They reviewed unfinished games, they gave high scores to terrible games (because game reviewing was, and is, mostly about the scores – which [by ancient convention, the reasons for which are lost forever] are measured as a percentage against the platonic ideal of the best game ever.), and they used screen-shots that all looked suspiciously similar to each other.

The usual deal would be that an evil software publisher would sidle up (oh yes, SIDLE up) to an unscrupulous magazine and whisper “oh deary me, what am I going to do with this *huge advertising budget* for our new game. Which magazine should I buy ads in?”. At this point the magazine in question would adopt a certain position and offer to review said new game in glowing terms, even if it wasn’t (a) very good or (b) finished as long as they could have an “exclusive“.

Why? Because … yup… ethics in game journalism. Always has been.

Videogames magazines and videogames publishers nowadays exist solely as a mutual-support network aimed at squeezing money out of your pockets and into theirs. They know only too well that the days of games mags are numbered, so they have no interest in building reader loyalty, and hence no interest in integrity. All they want is to get as much cash out of you as possible before they die forever. And the best way of doing that is by hyping publishers’ games, artificially inflating readers’ enthusiasm, getting lucrative advertising from the publishers in return, and meanwhile cutting back on staff and budgets to the point that even reviewers naive enough to want to do their job properly simply don’t have the time or the resources for it.

- Stuart Campbell, probably around 2004.

Or how about giving an excellent score to a terrible game that was unfinished, using a version that bore no resemblance to the game that was released? In 1987.

Ethics. Journalism. Games. And actually a fair point, one that deserves criticism (as opposed to, say, threats of sexual violence).

At the time there was one magazine that wouldn’t arbitrarily give a dull game 73% because they didn’t want to annoy people, and that was Amiga Power (which pioneered new and exciting ways of annoying people).

It had a short-lived but still worthwhile “Amiga Power Policy of Truth”, which was an aspiration rather than a law:

 1. We won’t review unfinished games just to claim an exclusive.
 2. We don’t pander to games publishers – we say what we really think.
 3. We only use experienced, professional reviewers.
 4. We won’t bore you with mountains of technical-jargon-hardware tedium.
 5. We take games seriously, because you do too.

Because ethics in game journalism. Miraculously achieved without any rape threats at all.

-- intermission --
- "So, this isn't really about edtech at all is it. Just Kernohan doing his usual thing about some moderately interesting bit of recent history"
 - "Aye typical. Can't see any kind of edtech connection at all. Best get on and promote MOOCs/flipped classroom/iPads as some kind of educational panacea then..."
 - "Yes, thank god that so many edtech journalists will just blindly regurgitate any press release you hand them. Apart from a few of them, who always complain about violent misogyny on the internet which has nothing to do with edtech"
 - "No, absolutely nothing whatsoever. Just because someone had a bad experience interacting online, doesn't mean that interacting online isn't the best thing ever"
 - "Typical, bring things that aren't about selling shiny new disruptive ideas into edtech. Like that bloody Kernohan. I tell you what, I totally won't be downloading his hotly anticipated forthcoming ebook."
 - "No, me neither. I love fixed-width fonts though. Almost as good as being a man and not experiencing violent personal threats including the sharing of my name and address on the internet."
-- --

But is it just about the money? Or is there something else?

Have you ever (you the actual reader, not you the imagined reader voice that I just did that tricksy postmodern unreliable narrator thing with) heard of the New Games Journalism? It’s an idea a chap named Kieron Gillen had, which basically amounts to a post-modern turn for games journalism. This is way back in 2004 – so long, long after the Amiga was safely dead – but drew on the same kind of subjective player experience that they pioneered (apart from Your Sinclair). Alert readers will note that Gillen here manages to express this idea without threatening to violently sexually assault anyone.

Ethically, this was a very smart move as it emphasised the primacy of the individual writer’s experience. How does it feel for me to play this game? At worst, you got a blogger who had taken two junior aspirin and a can of shandy pretending he was Hunter S. Thompson. At best, you got something that functioned both as consumer journalism and as literature in its own right. (IPR note: I’d quote from Gillen’s “manifesto”, but it has such a bizarre custom licence on it that I am not sure I am permitted to.)

Game reviews as art? Some responded with the horror that you might expect when confronted with the idea that subjectivity is not to be denied but welcomed. For example, Ram Raider suggests that:

“The principle is that an NGJ article should centre around the writer and his experience. Taken at face value, this sounds quite sensible. Unfortunately, applied to an industry full of giant egos, this has resulted in a breed of articles that are more about the writer telling the world about himself.”

And, tellingly:

“Gillen’s not a bad guy when you meet him in the flesh, and it’s a shame to see his name brought to prominence with an issue that we can already see the community lashing out at.”

Really, people did make threats. And started alleging corruption and favouritism (but I don’t think the people involved had sex with each other. At least, I hope not). All down to ethics in game journalism. Or just wanting better game journalism. Or wanting to deal with the nastier bits of what was now connected life via the medium of video games and writing about them.

There was, inevitably, a backlash, and to characterise it I want to point at what Gary Cutlack was doing with UK:Resistance during this period. Here’s his take on the issues with “new games journalism”, of which – it is fair to say – he was not a fan. Cutlack was one of the earliest games bloggers (1), and the way his work descended into SEGA-related nostalgia and bitterness towards the end of the 15 years(!) of blog archives is oddly affecting.

Despite the “proper games journalism” stance in the article linked to above, I’d always read the site as being written in character as a parody of the “socially inept gamer” cliche. (Actually his twitter feed is still like that, except he doesn’t write much about gaming any more – shit, maybe he *really* is that depressed…)

It takes a special kind of man to post pictures of a Sonic the Hedgehog branded popcorn machine onto a wordpress blog. And to write about his growing disenchantment with gaming and game culture for an audience that had grown with him. But in the latter years he primarily focused on “reader submissions”, which reflected the concerns of the readers back on themselves – which became downright disturbing.

UK:R became harder for me to read as the hugely disturbing “new gaming culture” became entrenched in the comments. When he closed the site in 2011, I understood why. Newer, and angrier, expressions of this culture – such as Yahtzee Croshaw’s “Zero Punctuation” – moved the rape jokes from the comment section to above the line. Because ethics in game journalism. (Incidentally, this is the closest Croshaw has come to writing about GamerGate. I’d like him to write more.)

At his best, Croshaw is very funny indeed. Why he keeps adding the unfunny, disturbing and horrible bits is a mystery to me. He can clearly write well without them.

There are lots of games journalists out there – some of them are good writers, some of them are not, some of them are ethical, some of them are not. I’m naive enough to think that we are all publishers, and we all have the responsibility to write ethically and transparently, to write well, and not to use threats of sexual assault if someone disagrees with us.

If you are concerned about ethics in games journalism (or EdTech journalism, or political journalism) remember that in commenting and responding to the work of others YOU ARE A JOURNALIST. Be ethical.

(1) It’s not a blog, it’s a web site.

Graduate Employability and the New Economic Order

That Lawrie keeps asking me interesting questions. This post comes from one that he asked me recently. 

“A new publication issued today by the Higher Education Statistics Agency (HESA) sets out, for the first time, figures describing the First Destinations of students leaving universities and colleges of higher education throughout the UK.”

No, not today. The 9th of August 1996, when a still-damp-behind-the-ears HESA published the results of a ground-breaking survey concerning where students end up after they complete their degree. More than the rise (and rise) of fees, more than the expansion of the system, more (even) than the growth of the world wide web; the publication of these numbers has defined the shape and nature of modern higher education.

Before this time (and records are hazy here, without disturbing my local library for bizarre out-of-print 90s educational pamphlets from the National Archive) universities’ and colleges’ careers advisory services did their own surveys of graduate destinations, which were annually aggregated by the DfEE. Though this produced interesting data, national ownership across a relatively newly unified HE sector was clearly the way to integrity.

And also league tables.

Here at last was a metric that promised to convert investment in Higher Education into “real world” economic benefit. Beyond the vague professorial arm waving, and the lovely glowy feeling, some hard return on investment data.

We’re pre-Dearing here, so obviously Lord Ron and team had a thing or two to say in their 1997 report. Though being careful not to provide a “purely instrumental approach to higher education” (4.2), the report makes a number of gestures towards the need to encompass employer requirements in the design and delivery of HE courses. Some of these recommendations (4.14) are as stark and uncompromising as anything in Browne (or Avalanche):

  • above all, this new economic order will place a premium on knowledge. Institutions are well-placed to capitalise on higher education’s long-standing purpose of developing knowledge and understanding. But to do so, they need to recognise more consistently that individuals need to be equipped in their initial higher education with the knowledge, skills and understanding which they can use as a basis to secure further knowledge and skills;

“New Economic Order”, eh? Of course, I’ve gone over some of this history before, in particular the 20 year English habit of building new universities at the drop of a capitalist’s stovepipe hat. What was new in Dearing was the idea of embedding these values into a wider definition of what it means to be a university.

The Blunkett-led DfEE commissioned a report entitled “Employability: Developing a Framework for Policy Analysis” from the Institute for Employment Studies, which was delivered by Jim Hillage and Emma Pollard in 1998. (If the idea of a framework for policy analysis is ringing faint alarm bells in the ears of alert FOTA readers, then yes – the late 90s saw a certain Dr Barber influencing the development of education policy in England.)

What Hillage and Pollard do is provide three key elements of scaffolding to the burgeoning employability agenda in education (note: not solely HE):

  • A literature review, and definition of the term
  • A “framework” for policy delivery, to (yes) “operationalise” employability
  • Some initial analysis of the strengths and weaknesses of the various available measures of employability.

I’m very close to just quoting huge chunks of this report as it is such a perfect encapsulation of the time.

Their definition (p11)

You have to love “labour market efficiency”, don’t you?

Hillage and Pollard make an attempt to split the employability of an individual into a set of attributes (e.g. p21): "Assets" (knowledge, skills and attitudes), which are "deployed" (career planning and goals), then "presented" (interview and application). "Context" dangles off the end as a late admission that other things going on in the world, or in the life of an individual, can have a powerful effect.

Again very much of its time, the report is cautious but optimistic about the methods of measuring employability – noting that although "output measures" (such as our first destination survey) can be useful, the wider context of the state of the labour market needs to be taken into account.

“Intermediate indicators” (the possession of appropriate skills and knowledge) are easier to measure. You could read across to competency-led course design and the whole world of “learning outcomes” here.

The final indicator type analysed is "perceptual" – broadly, what do employers think of the "employability" of their intake? Again, context is key here, and there is an immediacy bias – in that the skills required to do a particular task (I'll call them "role skills") are separate from the wider concerns of the individual in being "employable" in a wider way.

But if this document has a theme, it is that the individual needs to take responsibility for their own employability. The learner is complicit in their own subservience to an economic and value-generation system, with the educator merely a resource to be drawn on in this process.

It is this model of education – now measured without qualification – that has come to dominate HE. It is a conceptualisation tied in with unquestioning institutional (and often academic) support of a neoliberal system. (A neoliberal system, I may add, that is looking none-too-healthy at the moment.) This is a model that is being problematised by Professor Richard Hall and others. And this is why (Lawrie) HE in England is markedly less political than in countries without a fully integrated and developed employability agenda.

Here's the 2011 White Paper: "To be successful, institutions will have to appeal to prospective students and be respected by employers" (14) and "We also set out how we will create the conditions to encourage greater collaboration between higher education institutions and employers to ensure that students gain the knowledge and skills they need to embark on rewarding careers" (3.2).

Good luck.

Yes, it's not #QAmageddon!

So; #QAmageddon remains the conversational topic of choice amongst wonk-kind, and merits a semi-interested "meh" from everyone else. For those just joining us, HEFCE sneaked out a fairly wide-ranging consultative review of quality assessment in HE and are inviting comments from all and sundry stakeholders, including (but not limited to) "the sector", the NUS and government. There's a (limited) FAQ and a letter to vice-chancellors from HEFCE. The QAA, as one might expect, have responded with a bit of history – Million+ have weighed in, as have the Russell Group.

Other stuff that you’ll be wanting to read includes the Wonkhe coverage (don’t skip the comments), Derfel Owen’s blog and this from Hugh Jones at Sweeping Leaves. The Times Higher Education coverage has not really added anything to the debate so far.

There seem to be two emerging explanations – either this is high HE agency politics, or a belated attempt to bring more transparency to the commissioning of external bodies to undertake a statutory duty. As an academic, you should care about neither of them.*

QA for HE teaching is one of those areas that makes less sense the more you look at it. Ostensibly, HEFCE have a statutory duty to assure the quality of provision in publicly funded universities. They employ the QAA to carry this out for them. But what the QAA actually does in this area is assess the quality of institutional teaching quality assurance processes. They look at documentation describing these processes, and check that outputs from these processes exist.

When you complain about being over-monitored, or having to do loads of administrative form-filling, or that innovation is hamstrung by compliance requirements – you are complaining about your institutional processes… which may or may not be staggeringly over-engineered, antiquated, unwieldy or just plain terrible. You are not complaining about the QAA, which is generally something that only your registrar can sensibly do.

QAA sets out their expectations regarding internal processes in the UK Quality Code for Higher Education, which you could see as a kind of checklist for UK quality managers. (There is even a checklist version of it if you are in a hurry.)

I’ll wait here for you, go and have a little look.

Right – it's not actually that bad, is it? The code sets out stuff that you'd probably want to do if you are a self-respecting university: keeping proper records, getting stuff validated in a useful way. Even if you wanted to start a co-operative university outside of the state system, you'd want to know who was studying there and what you were intending to teach them, and have some kind of an admissions system…

… so, when you respond to the HEFCE consultation, you could talk about how an over-enthusiastic interpretation of the quality code is engendering a cult of managerialism, and how academic staff are being swamped with data requests at the expense of actual academic stuff like teaching and research. It doesn't – on one level – matter if your institution is being critically examined by the QAA or an interested prospective student: both need to be left with the impression that the place is trustworthy, human and focused on supporting learning and discovery. Do existing quality processes – considered en masse – offer that impression?

You could talk about how a focus on process is a poor but necessary proxy for a focus on something as intangible as educational quality, but note that a centrality of process can have a negative effect on delivery. You could remind the institution that as a customer (and yes, you are a customer: you employ all the other people that work in an institution to help you be an academic. Even the Vice Chancellor and the Registrar. You spend the [overwhelming amount of the] proceeds of *your* labour on the support systems that form the non-academic component of your institution. They are yours – you are not theirs.) you want to see processes that support academia, not perpetuate institutional structures.

But above all, you could make a case for the relationship between the student and the academic being paramount. For a human-scale higher education that does not see interactions simply as data points. Yeah, you'll have to dress all of the hippy stuff up in management language – but this is an opportunity for radical change greater than anything in recent years.

As yet, we don't even know how to feed in. But we need to ensure that we can and do – and for an academic, simply being aware of (and keeping an eye on) this transparent process as it evolves is important. I'll try to highlight the opportunities on here in the weeks to come.


*The agency fight club interpretation explains, to an extent, the apparent suddenness of the announcement, and the way it has caught a lot of commentators by surprise. Though the contract between the QAA and HEFCE for teaching quality assessment in public universities is approaching one of its triennial review points, there is no expectation of a public consultation at this point.

Indeed, one carried out by HEFCE in 2012 pronounced the current arrangements broadly fit for purpose as a basis for meeting the emerging needs of the system. Para 4:

“General support was also expressed for our proposal to use the QAA’s existing method of Institutional Review as the basis of building a risk-based approach, given the success of this new method in ensuring rigorous, robust review which fully involves students, but is proportionate in regulatory terms.”

Basically, the "agency politics" line is predicated on the existence of a sudden and massive falling out between HEFCE and QAA management. It's not for me to comment on what may (or may not) be happening between the two organisations – I would only venture to suggest that this is not the way that similar disagreements between groups and agencies with overlapping interests have played out in the past. Washing one's dirty linen in public would be a very strange choice for HEFCE or the QAA to make (if, indeed, either organisation has soiled linen to deal with).

I lean towards the "transparency" explanation – which goes along the lines of HEFCE bedding into a new role as a purchaser of specific services on behalf of the sector, supported via a BIS funding line separate from the mainstream teaching funding allocation. You'd be wondering why BIS don't procure these services themselves – and I don't think you'd be the only one wondering that. The HEFCE argument here would be an old one – that as a "buffer body" with a unique understanding of the English HE sector it can get the best value for money by providing precisely what the sector needs.

It can demonstrate this value by being open and transparent in the procurement process. Even the panel that runs the consultation process that designs the tender documents that prospective service delivery agencies will apply to [breathes] is being openly constituted before our very eyes. Such transparency, very open, wow.

“You can’t always get what you want. But if you try sometimes well you just might find you get what you need”

"The VLE is dead" is not dead. The past month has seen posts from Peter Reed, Sheila MacNeill, and D'Arcy Norman offering the "real world" flip-side to the joyous utopian escapism of edtech Pollyanna Audrey Watters. Audrey's position – that the LMS (learning management system [US, rest of world])/VLE (Virtual Learning Environment, formerly Managed Learning Environment – MLE [UK]) constrains and shapes our conception of technology-supported learning (and that we could and should leave it behind) – is countered by the suggestion that the LMS/VLE allows for a consistency and ease of management in dealing with a large institution.

To me there are merits in both positions, but to see it as a binary is unhelpful – I don’t think we can say that the LMS/VLE is shaping institutional practice, or that institutional practice is shaping or has shaped the LMS/VLE. To explain myself I need to travel through time in a very UK-centric way, but hopefully with a shout-out to friends overseas too.

We start at the end – an almost-random infrastructure of tools and services brought into being by a range of academics and developers, used to meet local needs and supported haphazardly by a loose network of enthusiasts. It’s 1998, you’re hacking with (the then new) Perl 5, and your screensaver is SETI@home.

But how do we get the results of the HTML quizzes that you are doing for your students on an ~-space website (after having begged your sysadmin to let you use CGI) across to the spreadsheet where you keep your other marks, and/or to your whizzy new student records system that someone has knocked up in Lotus Notes?

  • Copy and paste
  • Keep two windows open
  • Maybe copy from a printout

What if there was some automagical way to make the output of one programme input into the other? Then you could spend less time doing admin and more time teaching (isn't that always the promise, but never the reality?)

Remember, this was before Kin Lane. We were not quite smart enough to invent the API at this time – that was a couple of years down the line. But the early work of the Instructional Management System project could easily have proceeded along similar lines.
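That "automagical" glue can be sketched in a few lines – and the sketch also shows why shared standards mattered. A minimal illustration in Python, with the formats and field names invented for the example (nothing here is from IMS itself):

```python
import csv
import io
import json

# Toy versions of the two incompatible outputs: the CGI quiz tool emits CSV,
# the Lotus-Notes-era records system wants JSON. Both shapes are invented.
quiz_output = "student_id,score\ns001,7\ns002,9\n"

def quiz_csv_to_records(csv_text):
    """Translate the quiz tool's CSV into the records system's shape."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{"id": row["student_id"], "mark": int(row["score"])}
            for row in reader]

print(json.dumps(quiz_csv_to_records(quiz_output)))
# Every pair of tools needs its own one-off translator like this one:
# n tools means up to n*(n-1) glue scripts, which is the combinatorial
# pain that interoperability standards (and, later, APIs) set out to solve.
```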

IMS interoperability standards specified common ways in which stuff had to behave if it had any interest whatsoever in working with other stuff. The founding of the project, by Educause in 1997, sent ripples around the world. In the UK, the Joint Information Systems Committee (JISC) commissioned a small project to participate in this emerging solution to a lack of interoperability amongst tools designed to support learning.

That engagement with IMS led to the Centre for Educational Technology Interoperability Standards… CETIS.

As I’ve hinted above, IMS could very easily have invented APIs two years early. But the more alert readers amongst you may have noticed that it is 1998, not 1997. So all this is ancient history. So why 1998?

In a story that Audrey hinted at during the CETIS 2014 conference – it's like she knew! – some of those involved in IMS were imagining an alternative solution. Rather than bothering with all these crazy, confusing standards, wouldn't it be much easier if we could get a whole educational ecosystem in a box? Like an AOL for the university. Everything would talk to everything else (via those same IMS standards), and you would have unlimited control and oversight over the instructional process. Hell, maybe you could even use aggregated student data to predict possible retention issues!

Two of those working for IMS via a consultancy arrangement at the time were Michael Chasen and Matthew Pittinsky. Sensing a wider market for their understanding of the area, they formed (in 1997) a consultancy company named Blackboard. In 1998 they bought CourseInfo from Cornell University, and started to build products based on their idea of a management system for learning.

The big selling point? It would allow courses to be delivered on the World Wide Web. Let's put a date on it: 29th April 1998.

In the UK, this development looked like the answer to many problems, and JISC began to lead a concerted drive to manage take-up of “instructional management systems”, or (as “instructional” is soo colonial) “managed learning environments”.

JISC issued a call for institutional projects in 1999. The aim of these projects was not simply to buy in to emerging "in a box" solutions, but to join up existing systems to create their own managed environment. Looking back, this was a typically responsive JISC move: there was no rush to condemn academics for adopting their own pet tools, merely an encouragement for institutions to invent ways of making this feasible on an increasingly connected campus.

JISC was, as it happened, undergoing one of their periodic transitions at the time, because:

“[…] PCs and workstations are linked by networks as part of the world wide Internet. The full impact of the potential of the Internet is only just being understood.”

One of the recommendations stated:

“The JISC […] finds itself trying to balance the desire to drive forward the exploitation of IT through leading edge development and pilot projects with the need to retain production services. […] At present about 20% of the JISC budget is used for development work of which less than a quarter is to promote leading edge development work. This is lower than in previous years. This run down of development work has been to meet a concern of the funding councils that the predecessors of the JISC were too research oriented. […] Given that the future utility of the JISC depends on maintaining UK higher education at the leading edge there should be more focus on development work.”

(sorry for quoting such a large section, but it is a beautifully far-sighted recommendation. For more detail on JISC’s more recent transition, please see the Wilson Review.)

So, there was an emphasis on homegrown development at the leading edge, and a clear driver to invest in and accelerate this – and there was funding available to support it. In this rich and fertile environment, you would imagine that the UK would have a suite of responsive and nuanced ecosystems to support academia in delivering technology-supported tuition. What happened?

Some may try to blame a lack of pedagogic understanding around the tools and systems being deployed. JISC commissioned a report from Sandy Britain and Oleg Liber of the University of Bangor in 1999: "A Framework for Pedagogical Evaluation of Virtual Learning Environments". By now (one year on), the UK language had shifted from MLE to VLE.

The report notes that as of 1999 there was a very low take up of such tools and systems. A survey produced only 11 responses(!), a sign of a concept and terminology that was as yet unfamiliar. And of course, institutions were being responsive to existing practice:

“Informal evidence from a number of institutions suggests that few are currently attempting to implement a co-ordinated solution for the whole institution, rather many different solutions have been put into operation by enterprising departments and enthusiastic individual lecturers. […] It may not be an appropriate model for institutions to purchase a single heavyweight system to attempt to cater for the needs of all departments as different departments and lecturers have different requirements.”

Like many at the time, Britain and Liber cite Robin Mason's (1998) "Models of Online Courses" as a roadmap for the possible development of practice. Mason proposed:

  • The “Content Plus Support Model”, which separated content from facilitated learning and focused on the content.
  • The “Wrap Around Model”, which more thoughtfully designed activities, support and supplementary materials as an ongoing practice around a pre-existing resource.
  • The “Integrated Model”, which was primarily based around student-led interaction with academic support, content being entirely created within the course.

This is an astonishingly prescient paper, which I must insist that you (re-)read. Now.

It concludes:

“Just as the Web turns everyone into a publisher, so online courses give everyone the opportunity to be the teacher. Computer conferencing is the ideal medium to realize the teaching potential of the student, to the advantage of all participants. This is hardly a new discovery, merely an adaptation of the seminar to the online environment. It is not a cheap ticket to reducing the cost of the traditional teacher, however. Designing successful learning structures online does take skill and experience, and online courses do not run themselves. It is in my third, “integrated model” where this distinction is most blurred, as it provides the greatest opportunities for multiple teaching and learning roles.”

This is a lesson that even the UK Open University (to whom Mason was addressing her comments) have struggled to learn. I leave the reader to add their own observation about the various strands of MOOCs with respect to this.

Britain and Liber, meanwhile, end with a warning:

"This […] brings us back to the issue of whether choosing a VLE is an institutional-level decision or a responsibility that should be left in the hands of individual teachers. It raises the question of whether it is possible (or indeed desirable) to define teaching strategy at an institutional rather than individual level."

A footnote mollifies this somewhat, noting that issues of interoperability and data protection do need to be considered by institutions.

In 2003, JISC undertook their first review of MLE/VLE activity. The report (prepared by Glenaffric Consulting) suggested that the initial enthusiasm for the concept had been tempered both by a general disenchantment with the potential of the web after the first dot-com bubble had burst, and by an understanding of the pressures of running what was becoming a mission-critical system. One key passage (for me) states:

“[A] tension is apparent between the recognised need for generally applicable standards for the sector, and the institutions’ need for systems that provide the functionality that they require for their specific business processes. In this context, witnesses were critical of the drive to impose a standards-based approach when the specifications themselves were not complete, or adequately tested for widespread […]”

The pressure to "get it right first time" outweighed the idea of building for the future, and it was into this gap that commercial VLEs (as a single product) stepped – offering a seemingly more practical alternative to making myriad systems communicate using rapidly evolving standards.

By 2003, only 13% of institutions did not use at least one VLE. By 2005, this had dropped to 5%, and by 2008 the question no longer needed to be asked, and the dominance of Blackboard within this market (through acquisitions, notably of WebCT) was well established.

But remember that the VLE emerged from a (perceived or actual) need to allow for interoperability between instructional and learning systems – a need amplified by funding and advice designed to future-proof innovative practice. We may as well ask why Microsoft became the dominant desktop tool. It just worked. It was there. And it became the benchmark by which other solutions were measured.

To return to my opening tension – I wonder if both institution and system have been driven to current norms by a pressure for speedy and reliable ease of use; to manage the growing administrative burden in a newly massified and customer-focused higher education.

Reliability. Standardisation, not standards-informed development. And the ever-flowing pressure for rapid and transformative change. Where did that come from?

And that is why we talk about politics and culture at education technology conferences. I saw her today, at the reception…



You’ll Never Hear Surf Music Again #altc #altc2014

“Strange beautiful grass of green
with your majestic silver seas
Your mysterious mountains I wish to see closer…”

What is social media like? Speaking at the 2014 UCISA conference, Clay Shirky put the collaborative structures that have been built up around web technology in a category of their own. He asked: Is [facebook] like other media? Is [facebook] like a table? Or is [facebook] like [facebook]?

It transpired that we are dealing with a new category. Shirky argues that as information technology moves deeper and deeper into the world of human communication, it allows users to draw on the data trails they create to develop meaningful insights into their lives and interactions.

Social media, in 2014, is more media than social. Every organisation has a person or a team, usually in the communications department, with a contractual remit to be “social”. There is a policy, not usually an entirely written one, that determines what constitutes “social” for other members of staff. Falling the wrong side of the line causes trouble. And believe that these lines are policed.

(Paul is always on about this...)

Just ask Thomas Docherty (a former Head of English at Warwick) about sharing and surveillance. At a conference celebrating the republication of "Warwick University Limited" – a book describing the levels of political surveillance that academic staff and students were subject to in the 1970s – he noted that:

”Academics and students, if interested in material research and learning, have to work in the shadows, in clandestine fashion”

At least, had he been present at the conference, he would have noted this. I quote from a letter he sent whilst forbidden to enter the campus or make contact with his students.

As things stand, we know very little about his suspension, other than what has been released by the institution, which reassures us that his trenchant and freely expressed political views and membership of the Council for the Defence of British Universities are not the reason for this unusual punishment. At the time of publication Thomas Docherty is still suspended (some say indefinitely), and has been for 240 days.

(image from the WarwickStudentsForDocherty Facebook group)

Writing about her experiences at Worldviews2013, Melonie Fullick noted:

"Those starting out in academic life need to receive the message, loud and clear, that this kind of "public" work [new ways of engaging those outside of academia, primarily social media] is valued. They need to know that what they're doing is a part of a larger project or movement, a more significant shift in the culture of academic institutions, and that it will be recognized as such. This will encourage them to do the work of engagement alongside other forms of work that currently take precedence in the prestige economy of academe."

Docherty is hardly the only example of an outspoken academic who has been censured by an institution, and there are many far, far more telling tales of social media and the way it reacts to outspoken opinions. I just use the example as it is a local one. But far more insidious is the kinds of self-censorship that many of us must participate in. “No religion or politics”, as the old saying goes.

But our employers (and ourselves) are not the only critical readers here. The networks themselves monitor and respond to the emotions and ideas we choose to express. The recent Facebook research on mood contagion, though welcome as open publication, reminds us just how much attention platforms pay to what we share – and, almost as a given, how valuable this information can be.

Witness also the controversy around the migration to Facebook Messenger on mobile platforms. The New York Times suggested the backlash was "part confusion, part mistrust". Really, users have been spoiling for a fight with Facebook for a long time: a misunderstanding of how Android permissions work (an application can record sound and take pictures, thus it needs to be allowed to use the microphone and camera…) feeds a building resentment of "move fast and break things". Which itself has become the less quotable "move fast with stable infra".

Couple this with the dense web of connections that can be built up around a single persona and we see the true cause of the Nymwars – far from improving online conversation, as Google claimed when improving YouTube comments, drawing activity together across numerous sites raises the value of this data. As our picture becomes more complete, we can be better understood by those who wish to understand us. To inform us. To sell to us. And to police us.

For the moment, an uneasy truce has been called. The real name is not required – the single identity remains. It seems hopelessly naive to think our real names could not be determined from our data if needed. By whoever feels the need to.

Compared to Facebook, we’ve always given twitter rather a free ride. But this too, with the introduction first of sponsored tweets and then of other tweets we may find interesting, becomes less about our decisions and more about our derived preferences. This is made explicit in the new onboarding process. Twitter in 2014 is a long way from twitter in 2007.

There have been the beginnings of a movement away from this total-spectrum sharing – platforms like Snapchat and WhatsApp connect people with their friends directly; the idea of the network comes through forwarding and very selective sharing. Networks like Secret and Whisper do away with the idea of "whole-person" media – anonymous "macros" (words + image) are shared based on location only.

(image: an example Secret post)

Though each will create a trail, these are not publicly viewable and are difficult to integrate with other trails. Shirky sees the creation of a trail as something that empowers the user – "If there is a behavior that matters to them, they can see it and detail it to change that behavior" – a position that tends towards the ChrisDancyfication of everything.

We use social media trails (and online activity, for that matter) like we use cloud chambers: to draw and assert links between events that are visible only in retrospect. It's a big shift from sharing altruistically to build connections, to sharing as a side-effect of self-monitoring.

I’ve rambled a little, but the central thesis I’m building here is:

(To be fair, it's difficult to get off Facebook...)


  • as social media users, we are becoming aware of the value of the aggregated data we generate;
  • our interactions with social media platforms are characterised by mistrust and fear – we no longer expect these platforms to use our data ethically or to our advantage;
  • we expect others to use what we share to our disadvantage;
  • so we share strategically, defensively, and using a lot of the techniques developed in corporate social media;
  • and emerging new media trends focus on either closely controlled sharing or anonymous sharing.

Shirky's position on the inexorable domination of the "social" clearly does not mesh with these trends – and this throws open the question of the place of social media in academia. Bluntly, should we be recommending to learners that they join any social network? And how should we be protecting and supporting those that choose to?

Social media has changed underneath us, and we need to respond to what social media is rather than what it was.

Alan (cogdog) Levine recently quoted from Frank Chimero:

"We concede that there is some value to Twitter, but the social musing we did early on no longer fits. My feed (full of people I admire) is mostly just a loud, stupid, sad place. Basically: a mirror to the world we made that I don't want to look into."

I’d add, for the reasons above, “dehumanising” and “potentially dangerous”.

Levine glosses this beautifully:

"Long long ago, in a web far far away, everything was like neat little home made bungalows stretched out on the open plain, under a giant expansive sky, where we wandered freely, exploring. Now we crowd among densely ad covered walkways of a shiny giant mall, never seeing the sky, nor the real earth, at whim to the places built for us."

He’s a man that uses social media more than nearly anyone I know, myself included. And now he deliberately limits his exposure to the noise of the influence he has. He develops his own work-arounds to preserve and foster the things he finds important. Because he (and we) cannot rely on social media to continue acting in the same way. You can’t rely on tagging. You can’t rely on permanence. You can’t rely on the ability to link between services. You can’t even rely on access.

Tony Hirst is one of the most talented data journalists I know. In his own words:

“I used to build things around Amazon’s API, and Yahoo’s APIs, and Google APIs, and Twitter’s API. As those companies innovated, they built bare bones services that they let others play with. Against the established value network order of SOAP and enterprise service models let the RESTful upstarts play with their toys. And the upstarts let us play with their toys. And we did, because they were easy to play with.

But they’re not anymore. The upstarts started to build up their services, improve them, entrench them. And now they’re not something you can play with. The toys became enterprise warez and now you need professional tools to play with them. I used to hack around URLs and play with the result using a few lines of Javascript. Now I need credentials and heavyweight libraries, programming frameworks and tooling.”

After facing similar issues – with syndication, stability, permanence, advertising – Jim Groom (and others) are experimenting with forms of “social media” that are platform independent. Known, the webmention protocol, and similar emerging tools stem from the work of IndieWebCamp – a distributed team dedicated to providing a range of alternatives to corporate social media. They work to the following principles:

  • your content is yours
  • you are better connected
  • you are in control

The first fits in nicely with ongoing work such as Reclaim Hosting, but for me the key aspect is control. One of the many nice aspects of these tools is that they are not year-zero solutions – they start from the assumption that integration with other (commercial) networks will be key, and that conversation there is as important as “native” comments. Compare Diaspora – which initially positioned itself as a direct alternative to existing networks (and is erroneously described in the press as a network where “content is impossible to remove“). With user-owned tools you own what you share, plus a copy of what is shared with you, and you have final control over all of this. Publish on your Own Site, Syndicate Elsewhere (POSSE).
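As a flavour of how lightweight these IndieWeb building blocks are: a webmention is just an HTTP POST of two form-encoded URLs – source (the page that links out) and target (the page being notified). A minimal sketch of the wire format, with hypothetical URLs; the real protocol also involves first discovering the endpoint the target advertises:

```python
# A sketch of the body a Webmention sender POSTs (as
# application/x-www-form-urlencoded) to the target's advertised endpoint.
# Both URLs below are hypothetical examples, not real sites.
from urllib.parse import urlencode

def build_webmention_body(source: str, target: str) -> str:
    """Form-encode the two parameters a Webmention endpoint expects."""
    return urlencode({"source": source, "target": target})

body = build_webmention_body(
    source="https://example.org/my-post",     # my page, which links out
    target="https://example.com/their-post",  # the page I am notifying
)
print(body)
# source=https%3A%2F%2Fexample.org%2Fmy-post&target=https%3A%2F%2Ftheir... (URL-escaped)
```

That body, POSTed to the endpoint, is the entire notification – which is why conversation can flow between independently hosted sites without a central platform in the middle.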

Of course, this doesn’t lessen the risk of openly sharing online – these risks stem from the kinds of corporations that employ us and that we entrust our data to. But it does help users keep control of what they do share. Which is a start.

But a start of what? We are already seeing weak signals that young people (indeed all users) are drifting away from social networks, almost as fast as those who hope to talk to them are adopting the same networks. The quantified self is moving towards the qualified self, as users begin to understand and game the metrics that they are supposedly using for their own purposes.

People are more complex than activity trails and social networks suggest: the care taken to present facets (or even to perpetuate the illusion of an absence of facets); the ways they find to get answers out of systems not set up to respond to questions.

Social media has changed. It’s the same tune, but a different song.

Ben Werdmuller (Known developer) suggests, in a recent post:

“The web is the most effective way there has ever been to connect people with different contexts and skills. Right now, a very small number of platforms control the form (and therefore, at least to an extent, the content) of those conversations. I think the web is richer if we all own our own sites – and Known is a simple, flexible platform to let people do that.”

In 2014 suspicion about the actions of the super-social media platforms has reached fever pitch. Are we approaching a proper social media backlash? What does this mean for teaching online, and do projects like Known offer another way?

“Your people I do not understand
And to you I will put an end
And you’ll never hear
Surf music again.”

(though the theme to Coronation Street became “Third Stone From The Sun“, which became “Dance with the Devil“, which became “I’m Too Sexy“…)

[EDIT: 23/09/14 – Times Higher Education (£) are reporting that Docherty’s suspension will end on 29th September,  269 days after it commenced. Warwick University (“university of the year”) have not made any comment regarding the reason for the suspension, or why it has ended, but it is understood that the disciplinary process will still continue. Because obviously he hasn’t been punished enough.]

Academia at the heart of the (complex adaptive) system – the new teaching quality enhancement.

Sneaking out while you were on holiday comes a very nice report commissioned from the Higher Education Research and Evaluation team at Lancaster by the Higher Education Academy on behalf of HEFCE. Entitled “The role of HEFCE in teaching and learning enhancement: a review of evaluative evidence“, this report draws on interviews focused primarily on the post-2003 White Paper enhancement activities that you may recall from an earlier post.

The underpinning questions can neatly be summarised as “should our post-Funding Council have a role in Teaching Quality Enhancement, and if so what should it be?”. And this comes down to some lovely stuff on theories of change, linked to policy instruments and mechanisms:

  • Contagion from good examples – pilots/beacons (eg CETLs)
  • Technological determinism – bid-and-deliver (eg FDTL/TLTP)
  • Resource-driven (rewards and sanctions) – formula funding (eg TQEF)
  • Rhetorical support from institutions – strategy-driven formula funding (eg early TQEF/SPS)
  • Professional imperative – the “professionalism of teaching” narrative (eg UKPSF, maybe NTFS)
  • Market driven – consumer empowerment (eg NSS & links through to Quality Assurance)

(freely adapted from figures 3.3 and 3.4 of the report, as I disagree with a few of the mappings)

One of the key criticisms levelled at previous enhancement work is the lack of clarity or consistency around these models of change and – perhaps more importantly – about the rationale for the choice of said model of change. But, as is also made clear, the sheer complexity both of the teaching system in English HE and of the system that exists to drive its enhancement makes such clarity more of a goal than an expectation.

“It is always tempting to make decisions based on a technical-rational understanding of change processes. However, we know that micro-political and macro-political processes as well as the robust defence of turf, careers, reputations and position mean that change is more often a process of ‘muddling through’ in a loosely-coupled way than a rational process of successive goal setting and achievement. It is clear that the situation depicted by complex-adaptive systems theory is closer to the reality of higher education in England than the picture painted by more rationalistic theories.” (pp26-27)

For me (as I think I have mentioned on a few previous occasions) the Von Hippel model of user-driven innovation neatly cuts through a lot of this as it supports systemic actors in hacking and optimising the reality of the system they perceive. On the ground this would look something like the late, lamented Jisc LTIG system of selective small to medium scale investments in interesting practice developments that could be scaled up and shared.

Of course, the difficulty is always in scaling up and sharing, as institutional differences militate against a lot of the easier gains from sharing practice. The trick that has always been missed is feeding back the wider picture of the issues individuals and teams are struggling with in order to support and evidence institutional adaptations (and indeed systemic adaptations, but at a point of mission divergence these are perhaps less likely). It is possible, even likely, that institutional adaptations would draw on project experience, but this would not be essential.

An explicitly iterative, user-focused (not “student”-focused – students are not the users of the systems that constrain learning; they experience a second-order detriment) intervention like this meets the report’s slightly pessimistic point that “building on the best of the past while attempting to rectify anomalies and deleterious practices is a strategy that has more chance of success than imposing completely new models“. It’s a strong punt on “bottom-up” rather than “top-down”, if you like. Or “bottom-up” driving “top-down”.

The elephant in the room is, of course, academic (and support) staff terms and conditions. Permanent austerity leaves staff attempting to do more (teaching, admin, outreach) with less (time, money, security, trust) – even in a time of relative institutional wealth. Fundamentally the most useful investment in the student experience any institution can make is an investment in happy, secure and trusted academic and support staff – who are then free to meet student needs in intelligent and individualised ways.

Many of the old faithful models of change are built upon a presupposition of academic institutions made up of reasonably permanent academics, who have both the time and the space to try new stuff. That presupposition collapses with an increasingly casualised and temporary workforce, coupled with a teaching funding model that seems primarily to exist to remove any sense of continuity or security, and multiplied by an empty-headed insistence on measuring all of the things (“continuous improvement” against defined targets, as if it were still the 50s and Taylorism were still a thing).

A Von Hippel-informed intervention based around individual actors within this system would likely develop a number of unexpected work-arounds that pose awkward questions. Why does the semi-automated workload management system suck so hard? Why do room allocation and IT support work against each other? Why do people have to put certain things on BlackBoard but not others? Why don’t module approval processes reflect the reality of module development processes? And so on.

Enhancing teaching may be as simple as allowing people the space and time to teach, and offering investment in individuals and teams who go beyond that. There’s no big reveal to that, no “Christmas Tree” of shiny fascinators. But it may just work.

So let’s look at the postulated “critical success factors” in the report:

  • has efficient and effective ways of establishing need and of measuring the real costs (including ‘hidden’ costs) and effects of interventions;

My model produces evidence of need as a part of the investment process. And “real costs” are neatly controlled.

  • once established, priorities are addressed consistently, with clear leadership, over extended periods of time and with consistent attention paid to long-term sustainability;

I’d honestly argue that this is a little bit top-down, other than an emphasis on empowering staff who teach as change agents and experimenters. But clearly some degree of attention paid to setting principles (not priorities) and sticking to them would be welcome.

  • makes best use of the particular specialisms and missions of the different bodies focused on enhancement by encouraging a ‘joined-up’ enhancement strategy;

Enhancement is a crowded space, and as I work in one of the bodies that spend time in this space I will say nothing other than: don’t forget about SEDA.

  • is inclusive of the student voice and collective student interests;

This is a tricky one, as collective student interests may not mesh with the “student voice” as caricatured in much public policy making. As the report notes (p11): “[…] the actual voices of students were missing in many policies and initiatives, that when students’ interests were discussed in them it was often on the basis of attribution rather than evidentially. This is of course true too of many government pronouncements: for example the white paper ‘Students at the Heart of the System’ (2011) deployed a model of student needs and interests, not the voices of actual students or their representatives.”

Basically we are fooling ourselves if we decide what student needs are based on a blunt survey instrument like the NSS (& lest we forget, 85% of students are satisfied with their overall experience, which is hardly a mandate for radicalism).

  • has adequate planning times and planning processes which made provision for engagement across the sector, based on a robust causal theory of change and mindful of usability characteristics;

This feels like the reverberations of a much older critique of enhancement activity around bidding processes. The wheel of fashion currently points away from such processes, but when things inevitably drift back that way it is clear that some kind of overall plan and change concept would be needed, ideally one that learnt from the worst excesses of the past whilst keeping the good stuff.

  • is nuanced enough to take account of different institutional missions and contexts in doing that;

Now this is where my Von Hippel suggestion has legs – some of the criticism levelled (fairly, I feel) at previous programmes around enhancement is that they are based around a policy maker’s assumption that will (at best) hold true in a minority of institutions. Sector mission differences are only going to increase unless Red Ed turns out to be a lot more red than I give him credit for. (Though I say that, the last person to nationalise Higher Education in any meaningful way was Mrs Thatcher.)

  • is effective in converting politicians’ sometimes unrealistic visions into realistic proposals. Is effective too in mitigating the effects of politicians’ predilection for big, high-profile, expensive projects involving ‘tape-cutting’ media events by reshaping them into effective innovations;

The “no more CETLs” clause. (although there was never a ministerial launch for the £315m CETLs programme). The quest for things to announce has been less of a draw in austerity Britain, with ministers preferring to announce cuts.

  • is able to effect changes beyond the ‘usual suspects’ to those deep in the heart of day-to-day teaching and learning, effecting a culture change across the system which incorporated a genuine commitment to evaluate practices, to address deficiencies and to build on successes.

Again, my Von Hippel model would work nicely here given a proper press launch. Surely a great cultural change would be for academics to start trying to do useful things, having the space and encouragement to do so.

Of course, we’d have to sort out teaching funding to do that properly…

HEFCE’s response is considered and offers what they feel are the key points of learning, all of which I can agree with to a certain extent:

  • A more strategic approach. (though I’d say that this needs to be strategic in the sense of considered and committed, rather than specifying specific changes at the outset)
  • Proper evaluation, and coherent planning based on this evidence.
  • Multi-agency (and multi-level) approaches. (here I note the need for serious programme-management firepower as these can be complex to implement)

[Postscript: if all this stuff sounds like your idea of fun HEFCE are looking for someone to make sense of the entire enhancement space, with a side-order of sorting out teaching funding more generally.]


Anyone want to buy a loan book?

So it appears that I can’t take a week’s summer holiday without someone coming up with a stupid Higher Education funding idea that won’t work. On this occasion, as on many others, it is David Willetts – member of Parliament for Havant and erstwhile Higher Education minister, and public cheerleader for the writings of your humble scribe.

Letting it all hang out on the FT Op-Ed page (and now, due to a paywall, Pastebin), Willetts argues that institutions should buy the government’s HE loan debt, effectively loaning fees to their own undergraduates for repayment on graduation. This, goes the argument, would maybe incentivise institutions into producing graduates who would earn more money.

A few issues there.

1. The coalition loan book smells funny. It has never shown any sign of producing the returns that the government once suggested it would. Even if you could offer a collateralised debt obligation to the university in question, eventually said institution would have to swallow a loss of anything up to a half of the face value. Banks won’t eat it, and, as the events of 2008 taught us, banks will eat almost anything.

2. Other products are available. If you are an English VC sitting on a spare few million, and fancy (for whatever reason) owning your own slice of medium term public debt, why not buy Gilts? The rate of return is both effectively guaranteed and half-way decent, unlike the student loan book. Even the government has been buying their own gilts via quantitative easing.

3. Most English universities do not have that kind of ready money – except for maybe two of them (go on, guess…). To get a piece of this oh-so-exciting loan action, your common-or-garden university would have to head for the bond markets to raise the required capital. Famed Shirley Bassey impersonator1 Dominic Shellard (also VC of De Montfort University) raised £120m via this route. However, on 15th April 2014, Moody’s changed DMU’s outlook from “stable” to “negative”, noting “a reduction of support for higher education in government policy as well as a reduction in the oversight exercised by HEFCE would hurt the sector’s credit profile.” Basically, DMU can get good bond prices because the assumption is that the English HE sector is financially healthy and robustly regulated. Which it was until about 2010; it now flounders without statutory regulation and with a RAB charge that is heading ever skywards.

4. Of course, the pachyderm in the parlour is that this isn’t the first time Willetts has punted this rubbish idea. Oh no. Way back in the early years of this decade myself and a certain Dr McGettigan spotted Two Brains’ keenness to apportion individual RAB charges to institutions for similar reasons – broadly to grow provision in subjects and institutions where graduates were deemed more likely to be able to repay loans.

It appears he was stalled in these ambitions at stage four of Sir Humphrey Appleby’s five-stage formula for stalling cabinet ministers:

“Stalling Cabinet Ministers: the 5-stage formula
1. The administration is in its early months and there’s an awful lot to do at once.
2. Something ought to be done but is this the right way to achieve it?
3. The idea is good but the time is not ripe.
4. The proposal has run into technical, logistic and legal difficulties which are being sorted out.
5. Never refer to the matter or reply to the Minister’s notes. By the time he taxes you with it face to face you should be able to say it looks unlikely if anything can be done until after the election.”

Happily, with a new – although as yet untried – minister in post it would appear that this idea, along with others of Willetts’ wizard wheezes, will be kicked into the long grass as Greg Clark (who he?) gets on with the single item on his to-do list: organising a proper cross-party review of HE funding before the election.

1. This *actually happened*, although entertainingly the YouTube version of the video is now private. Incidentally, DMU just happened to be one of the first UK institutions to enter the bond market, and I used to study there.


Don’t buy a national HE funding model until you’ve read this….

So Wings Over Scotland ask how much the Scottish “free tuition” policy costs annually. It’s a tricky and potentially loaded question, which I guess is why he asked it.

[CyberNat note: I’ve not got an “official” position on #indyref – basically it’s a matter for Scotland to decide and I’m staying out of it. Big chunks of both sides of my family are Scottish, and I don’t want to annoy any of them. And I know “Wings over Scotland” is controversial, however I liked Amiga Power so I’m happy to crunch a few numbers in return for the boundless joy that the Five Hardy Jokes have brought me over the years. ]

On one level, it’s fairly easy to give a number – the Scottish Funding Council spent £635,825,107 on tuition1 at Scottish Universities. That’s from table 1A in the link, and is the actual spend, rather than being based on the model (where an amount of funding is attached to each student based on the course they are studying and a few other variables).

But here’s where the question gets interesting. Is this more or less expensive than HE in England? Allow me to drop some science on those assembled:

To spend £635,825,107 on tuition, the Scottish government spend £635,825,107 from their budget for that year. They do this because they think investment in higher education leads to national prosperity.

BIS (in England) have done a pile of research into this – they list the national market benefits of HE as being something like: greater tax revenue, faster economic growth, greater innovation and labour market flexibility, increased productivity of co-workers (they mean that ALL workers benefit from working with HE-educated colleagues) and less public spending in other areas. They state, flat out, that 20% of English economic growth between 1982 and 2005 was due to having more graduates in the workforce.

(Why am I quoting English publications not Scottish ones? Well, the BIS one is the most recent one – and it is a literature review drawing on studies that look at HE all over the world. So I’m guessing these are applicable, in a general sense, globally).

However, in England we’ve decided to try to spend some of the financial gains from investing in HE in advance. When the English government wants to spend money on university tuition it has to start by lending most of it to students. It lent £15bn this year. (Then there are some extra bits it pays via HEFCE, depending on student course choices and suchlike.) Over the following 30 years it would get an unknown percentage of these student loans back as graduates get jobs. Current best reckoning is that it would get around 55%2 back at today’s prices (well down from the 70% return that was expected when the policy was launched) after 30 years.

With this in mind – let’s say both England and Scotland spent £100 a year on tuition; actually, England hand out much more per student than Scotland does. There’s been a lot of talk of a “Funding Gap” between English and Scottish HE (basically English HEIs getting more cash in any given year than comparable Scottish ones), which became the reason that Scottish institutions now charge fees to English students. But let’s say both England and Scotland spent £100 a year on tuition…

Scotland would spend £100. each. year. from. general. taxation. For argument’s sake, they get at least £100 of benefit for this spend from the enhanced economic growth and all the rest that comes from having £100 worth of graduate (in reality, it is a lot more than that, as described above).

England borrow £100 each year, and get £55 back in 30 years’ time. So, assuming they get the same £100 of benefit, they’ve already spent £45 of it. But! – a chunk of the money that £100 worth of graduate would otherwise have been pumping into the wider economy over those 30 years has been spent on fee loan repayments. So you will be wondering whether that has a negative effect on the overall benefit of HE spending on the economy?

The simple answer is that we don’t know precisely, but it looks likely. Much of the benefit coming from graduates back into the economy involves them spending more and paying more tax. Both of these activities are constrained by having to pay a loan back.

Some very smart folks at a think-tank called the Higher Education Policy Institute did a bunch of work when this policy was first announced in 2010, modelling all of this to see whether the policy would ever be cost effective. Their magic number was a 54% return – if the estimates of student loan repayments dropped below 54%, the funding system would never break even. Remember, it’s already more expensive than the Scottish model in any given year – this is about whether it would ever actually break even: if there would ever be a year in which more money comes in via loan repayments than is spent on new loans.

Working with the figures presented by BIS, HEPI figured out that this break-even point would be at least 30 years away, by which time we would – of course – all have flying cars. But as we edge closer and closer to the 54% return, this magical day is postponed before being cancelled entirely.

(Given that BIS are pretty much NEVER going to admit that projected return rates fall to this level, I’m going to make a punt and guess that they already have.)

If you think back a few paragraphs to that stuff about wider economic benefits and our pessimistic guess that £100 spent now gives us £100 of benefit after 30 years, £45 of this benefit is gone – forever – because the money was never paid back. The remaining £55 has to be used to shell out £100 in loans to that year’s students. And obviously £55 is a lot less than £100, so the government is spending money in that year which it doesn’t have. Austerity LOLs.
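The per-cohort arithmetic above can be sketched in a few lines (a deliberately crude toy model using the illustrative figures from this post – a flat £100 lent per cohort, repayments treated as a single undiscounted lump sum at today’s prices after 30 years; the function name and numbers are mine, not BIS methodology):

```python
# Toy model of the per-cohort sums above: lend `outlay` now, get
# `repayment_rate` of it back (at today's prices) after 30 years.
def unrecovered(outlay: float, repayment_rate: float) -> float:
    """The slice of one cohort's fee loans the government never sees again."""
    return outlay * (1 - repayment_rate)

at_launch = unrecovered(100, 0.70)  # the ~70% return assumed when the policy launched
current   = unrecovered(100, 0.55)  # the ~55% return of current best reckoning

print(f"Launch estimate:  £{at_launch:.0f} lost per £100 lent")
print(f"Current estimate: £{current:.0f} lost per £100 lent")
```

As the repayment estimate slides from 70% towards (and past) HEPI’s 54% break-even figure, the unrecovered slice grows – which is the whole bet going sour.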

The English system is basically a bet that the additional benefits to the wider economy from having graduates in it will outweigh the costs of servicing loans. If this bet doesn’t pay off, the government loses money.

Confused yet? It gets worse. One of the things that BIS (those useless, cretinous morons) want to do to bridge the gap between fee loan income and outgoings in the early years of the system is to SELL the loan book. This means that some mates of theirs in the private sector give the government a pile of money now for the right to receive the repayments as they come in via Student Finance England. This is another bet, with the private sector buyer paying much less than the face value of the loans based on how much money they reckon they will get back.

So, someone might buy £100 worth of student loans for £30, then sit back and watch the repayments roll in. This means the government gets £30 now, and NOTHING in 30 years. And what does it spend the £30 on? That’s right, MORE STUDENT LOANS, which can then be sold for less than their face value.
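Whether that sale price is any good for the taxpayer depends on what the repayment stream is worth in today’s money. A hedged sketch, using the illustrative figures above (£55 eventually repaid on £100 of face value, a £30 sale price) plus two assumptions of my own – repayments arriving in equal annual instalments over 30 years, discounted at a hypothetical gilt-like 3%:

```python
# Compare holding the loan book (a trickle of repayments over 30 years)
# against selling it today. The £55 / £30 figures are the post's toy
# numbers; the 3% rate and equal-instalment profile are assumptions.
def pv_of_instalments(total: float, rate: float, years: int) -> float:
    """Present value of `total` repaid in equal annual instalments."""
    instalment = total / years
    return sum(instalment / (1 + rate) ** t for t in range(1, years + 1))

hold = pv_of_instalments(total=55, rate=0.03, years=30)
sell = 30.0  # cash in hand today

print(f"Hold the book: ~£{hold:.0f} in today's money; sell it: £{sell:.0f}")
```

Under these crude assumptions holding comes out around £36 in today’s money – more than the £30 sale price, which is the point: a sale swaps a larger long-run stream for a smaller lump sum now, which then gets lent straight back out again.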

You may be getting the impression that English HE funding is a byzantine, nightmarish cross between a Ponzi scheme and the seedier end of the Las Vegas Strip. And that Scotland is well out of it. (Alert readers may also be suspecting that the Scottish system is also cheaper (both in the short term and the long term) and offers more benefit to the wider economy.)

So, returning to our initial question regarding whether Scottish spending on HE is more or less expensive than English spending on HE, the answer is almost certainly that it is less expensive. It is definitely less expensive within any given year, and very likely to be less expensive in the long term.

[If you want to read more about the crazy, Andrew McGettigan’s book is a good place to start]

1 – note this is just direct spending on tuition, and doesn’t include stuff like research funding. Incidentally – this is nonsense, as I’m sure was pointed out at the time.
2 – this is usually expressed as being a 45% “RAB” charge on the loans, “RAB” meaning Resource Accounting Budget. But I expressed it as a 55% repayment rate both because it is easier for the lay reader to follow and because I didn’t want to make the inevitable joke about Rab C Nesbitt3
3 – the word “joke” here used in a very loose sense. As it was by the writing staff of the BBC’s “regional” comedy, Rab C. Nesbitt