9 things to watch out for in 2015

So after an unaccountably excellent attempt at predicting the key news stories in education technology and higher education policy for this year, I feel compelled to have another stab – and suggest what 2015 may hold. I should note I’ve just spent some time #ConferenceCrashing at the superb SRHE2014 conference, and many of the ideas currently buzzing around my head come from conversations I had there.

1. Education policy in politics - I’m not going to win any points by predicting a UK general election next year, or an unusual result that is likely to mark a decisive shift away from the two-party politics that have dominated the country since the Second World War. Neither will it have escaped many people’s notice that none of the seven significant parties (eight if you count the Liberal Democrats) contesting seats have a clear policy to address the now widely recognised deficiencies in funding processes, quality assurance processes and legislation that make HE such a spectacular mess at the moment.

What I am predicting is HE policy being a point of clear distinction between parties. Unlike 2010, where everyone waited for the Browne Review, there is space now to generate policy positions that will reveal a lot about what kind of country each party believes we need to be. Access or elitism, internationalism or isolationism, economic engine or radical heart? The expanding network of great UK HE policy blogs (not least Wonkhe.com and Critical Education) will be a huge part of this national debate.

2. Academia against the institution – Battle lines are increasingly being drawn between students, academics, support and ancillary staff on the one hand, and institutional leaders and senior management on the other. Campaigns like #3cosas, #copsoffcampus and the myriad #freeeducation protests have taken the lead in challenging managerialism and the pursuit of cost savings above the welfare of human beings. The heart-breaking story of Professor Stefan Grimm at Imperial, the funding-target-driven layoffs at Warwick, and the bizarre saga of Professor Thomas Docherty (again at Warwick… seriously, what is happening there…) are bellwethers for a wider culture of fear and control that has made working at a UK HE institution a series of compromises and an ever-expanding job that eats into your health and your family life.

I’ve heard too many stories of senior institutional managers out of control and out of touch; the saga at Plymouth is as yet the most visible, but there is a lot more to come out from institutions of all types. Our union (UCU) has a huge role to play in seeing that light and political heat are focused on this unfortunate tendency, and I think 2015 will be the year when many of these stories come to light. The struggles of academics and support staff for fair pay and fair conditions are liable to take longer, but with students and (increasingly) public opinion on their side we should see some movement on this too. Expect more strikes, more protests, more hard questions asked of institutions – and hopefully some answers.

3. W(h)ither the MOOC – I’ll come straight out and predict that at least one major platform will either close entirely or move away from offering free and accessible online courses in 2015. (I know Udacity has kind-of-mostly done so already, but I mean another one.) Investors have waited and waited for the disruptive moment that MOOCs promised, and I don’t think they will wait another twelve months without advocating some kind of sustainable business model.

There is a lot of work to be done around accredited online instruction, and I predict that institutional offers will take up some of the latent demand for low-cost courses that the MOOC experiment has revealed. But these courses will compete on quality and value, rather than price.

4. Teaching quality enhancement metrics - the ongoing HEFCE work on “learning gain” was a surprise to many when it was announced this year, and the findings may prove to be some of the most significant policy drivers in teaching quality enhancement next year. “Learning and Teaching” has had a difficult time over the last few years with the rise of the vocabulary of the “Student Experience”, and learning gain looks like a way of stifling the remainder still further with a faux-scientific focus on quantitative measures. Just this morning, Professor Richard Hall issued another of his barnstorming communiqués – I demand you all watch the “dashboarding” video and read the text carefully.

Organisations like the SRHE and ALT (both of which I intend to join next year, having been hugely impressed with their work this year) may be the two major vehicles of dissent to this agenda, and the combination of the theoretical rigour of the former with the pragmatism and history of the latter will be powerful.

So I predict that: learning gain will inform the key policy arguments about learning technology in 2015, and that we will see a welcome collaboration between SRHE and ALT in response.

5. Independent researchers - (no Martin, I’m not saying “Guerilla Researchers“!) I am an independent researcher, and so are most of the people who read these posts and work in these areas. Grants and projects are now hard to come by, institutional support for non-income-generating research is increasingly limited – and the likely funding decisions linked to the REF will limit this support further.

Like it or not, much of the significant work on education technology and education policy will be done by people in their own time, and with little or no external funding. My prediction here is that independent researchers in a number of non-science fields will begin to organise themselves for mutual support and benefit.

6. Authenticity - One of the most interesting sessions I sneaked into at the SRHE conference used the language of Queer Theory to examine various aspects of academic life. Though the vocabulary and conceptual framework were not familiar to me, the feeling in the room was incredible. We were talking about real lived experiences, not as data points but as artefacts on their own that could not be challenged or reduced to fit a pattern. And it was powerful.

Much of the wider cultural debate about austerity has shifted from measurement to the recounting of experiences – government ministers can argue about statistics all day, but when greeted with the actuality of a life lived (or a life lost, all too often) it is more difficult to dismiss. Many of the most powerful arguments made about the condition of academia in 2015 will not be framed in financial or statistical language. They will be pure, beautiful and true.

7. Students as ______ ? – The “student as consumer/customer” arguments are largely played out in the UK. Clearly students are paying, and have always paid, with their time and attention as well as their money. What we’ve not seen yet is a proper attempt to define the relationship of the student with their institution, with their subject and with their tutors in language that both encompasses and moves beyond the transactional language beloved by our government.

Sometime in 2015 we will see the development of a proper position that sees the consumer aspect of these interactions as one part of a very complex whole. And this will help us design institutions and processes that will support the entirety of the student experience – away from the “customer always knows best” reductions of the way the NSS has been implemented.

8. Uncapturing the lecture – It seems that lecture capture is capturing everything! Coupled with the increasing prevalence of the mandated deposit to the VLE, it seems we have reduced the lecture to an artefact rather than celebrating it as a performance. Journalists and dubious consultants line up to describe the lecture as dead, deficient or just plain dull. And this language is parroted and amplified by those looking to sell the content that is intended to replace it.

In musical terms, we can see the required .ppt as the score, the capture as a recording. But the live, interactive and responsive experience of the lecture (and lecture-style teaching techniques, just to be pedagogically neutral here) is of far greater value than any of the ways we have of capturing it. I predict a resurgence of the lecture – as outreach, as destination and as the cornerstone of the higher education experience.

9. Collaborative tools – I’ve struggled to find an actual education technology to highlight, because so much of edtech this year has been a glossy restatement of Taylorism and Skinnerism – a retreat to the very worst of instrumental education (or “skills delivery”, to use the argot of the times).

But the things I do see that I like are the tools that enable distributed collaboration. Ward Cunningham’s Smallest Federated Wiki (popularised in my PLE by the ever-amazing Mike Caulfield) is one such example – a very different one that I perhaps understand a little more is Known. As John Wilbanks’s superb keynote address at OpenEd14 impressed upon me the need for tools for collaborative research, so Kin Lane‘s advocacy opened my eyes to the possibilities and the concepts embodied by GitHub (cue Pat Lockley eyeroll, as he’s been banging on about this to me for years).

So in 2015 the technologies that will impress us most will be collaboration tools of various purposes, returning perhaps to Tim Berners-Lee’s original conceptualisation of a web that is readable, writeable and editable.

9 things to watch in 2014 – redux

I’m usually not one to brag, but round about this time last year I did a bit of future-gazing and knocked out “9 things to watch in 2014”, and a quick glance around suggests a fair measure of accuracy.

  • Virtual reality – Facebook bought Oculus Rift, Google Cardboard happened, and celebrity sweary-keynote giver Donald “Lectuuurrres” Clark jumped on the bandwagon so hard it creaked
  • Algorithmic policy and the knowledge worker shift - well, I still think it is coming, but maybe we didn’t see as much this year in HE as I expected. In the public sector, at least. Business strategy and marketing strategy are heavily algorithm-driven. But I’m going to score this as a near miss.
  • Data literacy - We sure need it. Even Forbes now think so.
  • Personal data trails – I’ve seen many, many presentations and posts about personal data anxiety and the effect on students. Uppermost I would suggest Catherine Cronin’s ALT-C keynote. But I’ll also highlight this from Ben Goldacre, and this contextualisation to HE from Brian Kelly.
  • Corporate courses - just today I saw FutureLearn, apparently with sincerity, highlighting the experience of global megacorp Vouchercodes.co.uk on one of their courses. Coursera are all over this, as Audrey points out. FutureLearn even use their courses to train their own staff, with a textbook 80% drop-out rate (MOOC story of the year for me)
  • Open Classrooms – Phonar (and PhonarNation) continue to go from strength to strength, as do the other courses I mentioned last year. I thought we’d see more press about this, but on reflection this is stuff that happens under the radar of the PR/journalism nexus. So I’ll score this one as a near miss too.
  • Challenges to institutions – #copsoffcampus. Any number of university occupations, protests and demos, but the Warwick stuff (great year for Warwick, Times Newspapers University of the Year 2015!) really brought into focus the disconnect between management on the one hand, and students and academics on the other.
  • Effectiveness metrics - Oh god. And how. How many people need to die or leave HE before this stops? I guess the REF results next week will offer some clue.
  • More funding chaos - Andrew McGettigan’s work on funding concerns around the new breed of private HE has been one of the stories of the year. And although PG loan access is welcome (as was the end of the UG cap before it) we still have no idea how to pay for either, and we enter the election campaign with HE funding looking likely to be a major political issue.
  • (User data bubble – this was a bonus, and longer term. But we are on our way. )

I award myself 7.5 out of 9 – roughly an 83% accuracy rate. Predictions for next year will follow in the next few days.

Now what kind of a guru are you, anyway?

Heroism against Humanity

This is a disconnected train of thought set running after watching “The Imitation Game”, reading a blog post about historic Russian cybernetics by Mark Johnson, and attending a workshop on Text Data Mining.

Something seems to have happened recently to change our cultural understanding of the nature of heroism. Broadly speaking, our initial idea of a hero was someone who was more humane than your average human, someone who could do great deeds as an expression of what humanity could become. If you want that in a statement: “an imagination of what a more powerful humanity could be”.

What I’m arguing is that a contemporary statement defining heroism would be “an imagination of what a power beyond humanity could be”. Something in between the old idea of a god, and big data science.

In “The Imitation Game” there is a curious twist to the story, almost as a false ending, after the team break the Enigma code (the story of the process, incidentally, plays back the five-people-in-a-room-doing-crosswords model of Bletchley – which speaks of a need to replay the popular conception rather than to challenge it. But that’s another matter, and one that Dr Sue Black takes up admirably in her review.). The team realise that they have all of the data they need to understand and predict Axis forces’ activity, but are narrowly stopped from saving millions of lives by Alan Turing throwing a phone at the floor to remind them that the primary concern was to ensure that the breaking of Enigma was kept secret.

What follows is a jaw-dropping re-telling of the course of world war two almost as a stage-managed process, managed between a statistical analysis of the likely consequences of acting on certain information, MI6 subterfuge and even a bit of Soviet espionage.

None of this, even more jaw-droppingly, is actually true. It was an invention for narrative purposes. Policies around the use of decoded information (which incidentally happened many times, to Enigma and other protocols, before Turing’s team’s Polish-inspired breakthrough with the Bombe) were in existence even before the start of the Second World War, and this was handled by an entirely separate department (Hut 4, for completeness’ sake… I’ve stood in it).

So for narrative reasons the mere fact of Alan Turing’s mathematical and mechanical ingenuity – a fine and true story, but one that needs to be seen in the context of other work at the time, not least the contributions of Bill Tutte, Tommy Flowers and many others in breaking the Lorenz cipher – was not enough. He also had to make difficult ethical decisions outside of mere human considerations such as saving lives. Much of the latter hand-wringing is about the use of this god-like power brought about by the analysis of a large dataset.

And this is a very modern portrayal of heroism indeed – making decisions with ramifications beyond life and death for a higher good, based on a machine-like grasp of the entirety of a data set and its implications. Think Doctor Who. Think (the modern) Sherlock. Think Batman, Iron Man/Tony Stark, Lucy, George Clooney’s character in Gravity… take your pick.

Think Sebastian Thrun. Think Sal Khan.

Think Mark Zuckerberg. Daphne Koller.

The common thread is an ability to think beyond human concerns, to transcend individual interactions to see truths and answers in the sum of those interactions.

So what does it mean to live in a world where our heroes are those who understand the general rather than the specific, the reality of the data rather than the reality of experience?

Which brings me to the Soviet attempts to employ cybernetics to build the ideal socialist state. As Mark Johnson notes:

“In 1959, Anatoly Kitov proposed to the Kremlin that a computer system was developed to manage the whole Russian economy providing real-time feedback on production. This ambitious request was rejected, although it remained a long-held dream of Kitov and other scientists: a programming language called ALGEM (a variant of ALGOL-60) was developed to assist in the realisation of this economic management system.”

As Mark recounts, huge swathes of the Soviet computer programme were abandoned as the Kremlin decided to standardise on IBM!

This prefigures the Chilean Project Cybersyn, which was a far better known (and more advanced) attempt at the same goal: to direct the use of resources for the good of the workers.

In the west we currently live in a civilisation where resources are distributed by algorithm, but these support the profits of merchants (and not yer bourgeoisie neither) rather than the welfare of workers, as any dairy farmer will happily explain to you.

And in academia we are already a long way into a similar disruption, as regular expressions are capable of reading and excerpting from more academic literature than any single human being. The UK government has already invested £73 million in “big data” research, and the development (and sharing) of datasets for future mining has become a huge component of research grants in all fields.
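For a sense of how crude this kind of machine “reading” can be while still being useful at scale, here is a toy sketch – the pattern and the sample abstract are invented purely for illustration:

```python
import re

# An invented sample abstract, standing in for a paper retrieved by a mining pipeline.
abstract = (
    "Learning analytics has grown rapidly. "
    "We examine data mining of student records. "
    "Ethical questions remain open."
)

def excerpt(text, keyword):
    # Naively split into sentences, then keep those mentioning the keyword.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if re.search(keyword, s, re.IGNORECASE)]

print(excerpt(abstract, r"\bdata\b"))
```

Run over millions of papers, even something this blunt out-reads any human – which is precisely the point.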

There’s a palpable, almost childlike, delight in the scale of the research that will be possible in the future. But it is only the funding conditions of the present that mean the high volume and low quality of academic research are unmanageable by human eyes. For years we have used policies that have given us more and more data, so it is only natural that we turn towards the development of tools to manage and use it.

It is already very difficult to receive funding or demonstrate impact for a research project based on existing literature or data. Academic fields, old and new, cry out for annotated bibliographies, literature reviews and meta-analyses. But we don’t have the political will to fund them. Or, more accurately, to fund people to conduct them.

But we will, it seems, fund people to build machines to do similar things.

The “heroism” here is a faster, cheaper means of conducting academic research via automation. The question – and the one which several teams worldwide grapple with – is this: for the benefit of the workers, or the merchants?

Towards a Paleoconnectivism Reader #opened14

This is a complement to Jim Groom’s notes from our joint presentation (sadly missing one Brian Lamb) at OpenEd14. There’s a lot more stuff I want to write about from that conference, and from the awesome UMWHackathon I was lucky enough to participate in afterwards. But this is a start.

On birth myths

Last year in Park City we were honoured to be able to hear Audrey Watters speaking about the apocalyptic preoccupations of the culture that has grown up around education technology.

Our work here today is a look at the other end of the mythological journey – the birth myths of open education. We know them well – Sebastian Thrun inventing massive open online learning in 2011, George Siemens inventing massive open online learning in 2008… MIT (and/or the Hewlett Foundation) inventing sharing learning materials in 2002…

Birth myths, even more so than apocalyptic narratives, are ahistorical. They tie in with a phallogocentrism of the concept of creation as a single act by a single person (generally a man…) rather than a whole set of pre-existing conditions and preoccupations.

Paleoconnectivism is an attempt to recontextualise our current work in looking at the pre-creation history of the concepts and interests we share. It’s an attempt to begin to clear the way for a literature, a research base that connects with other work in cognate fields.

As George Siemens wrote recently:

 “I can’t think of a trend in education that is as substantive as openness that has less of a peer reviewed research base. Top conferences are practitioner and policy/advocacy based. Where are the research conferences? Where are the proceedings?”

We could add – where are the roots in the fields that openness sprang from? Where are the connections to long standing work in copyright reform, education studies, communication studies, philosophy?

Larry Lessig and the First World War (a worked example)

What were the causes of World War 1?

That’s right, the cause of World War 1 was ethics in games journalism.

Or at least, ethics in journalism. The power of the fourth estate.

On the first day of this conference, Larry Lessig talked about “tweedism”, the idea that the interests of those who funded politics would always prevail over the choice of politics offered to the electorate. His analysis omitted the power of journalism of all forms to shape politics, and even to start wars.

Alfred Harmsworth began his career writing for Tit-Bits. This was a UK periodical that collected the best of other journalism from around the world, based on reader recommendations and occasionally reader contributions, and presented it in weekly issues. [Students of the history of copyright will note a parallel with the late c18th journals like Mathew Carey’s American Museum, which excerpted UK-copyrighted scientific materials and republished them in the largely (at the time) lawless US. Which was the way the US became a superpower, and is another story that I also didn’t get to tell last year.]

Basically, it was Reddit.

He moved on from there to found what was essentially Quora, a periodical called “Answers To Correspondents” where readers could write in to ask or answer questions of and for other readers. This quickly became a hugely popular publication, and the profits from this enabled him to buy and found a range of UK newspapers including the Times, the Daily Mirror and – most terrifyingly – the astoundingly popular Daily Mail in 1896. He became ennobled – Lord Northcliffe.

Throughout the early 1900s, all of these papers pursued a belligerent and, frankly, xenophobic line against the rival European power of Germany, using their near-blanket control of public opinion to force more and more hawkish policymaking from the government of the time.

One of the few papers he didn’t control, the Star, noted:

 “Next to the Kaiser, Lord Northcliffe has done more than any living man to bring about the war”

During the war his papers brought down the British Government of Asquith over an alleged shortage of munitions, and had David Lloyd George installed as minister for munitions in the following coalition government. When Lloyd George became Prime Minister in 1916, Northcliffe turned down a proffered ministerial post and was made Director of Propaganda.

Not Lessig’s “green power”, not the power of popular opinion – something else. The curated and managed mass opinion used to shape policy. (Even now, it is widely considered that the Daily Mail receives and prints more readers’ letters than any other UK paper). Somehow this all feels very modern, and very relevant as we consider popular resistance to a more progressive agenda. And, though I loved Lessig’s presentation, this was an aspect of policy making that his analysis missed.

The Sheer Pace of Change (back to edtech)

One means of shaping popular opinion is to emphasise the sheer pace of change. Again, Audrey touched on this last year – but consider this from Martin Bean of FutureLearn and the UK Open University:

 “Perhaps the most difficult thing for those of us in higher education to get to grips with is the sheer pace of change”

He’s right, in a way. Things change so slowly. Old battles are refought, old divisions redrawn. Old ideas are lost and, perhaps, rediscovered.

“Educational institutions, too, are expected to change themselves so they can somehow be one step ahead of (or just catch up with) where people already are. Resistance to change is presented as resistance to what is natural and inevitable, like fighting a rising tide or an avalanche (yes, these are the same metaphors used in MOOC-hype articles – no coincidence). Universities are depicted as recalcitrant in the face of changing external circumstances, the latest of which is the ascent of the digital” – Melonie Fullick

There is a vested interest in a fast rate of change, and the interest comes from – as always – people with things to sell. Education is more like a glacier than an avalanche. Change is slow, but relentless and final – carving fissures in the landscape that remain long after the reasons are forgotten.

The Time of the Cyclops (in the country of the blind…)

Martin Bean worked for the Open University in the UK, an institution that began as the “University of the Air” – shaped by and inspired by technology.

 “Between church and lunch I wrote the whole outline for a University of the Air.” – Harold Wilson

 As the University charter  sets out:

“The objects of the University shall be the advancement and dissemination of learning and knowledge by teaching and research by a diversity of means such as broadcasting and technological devices appropriate to higher education, by correspondence tuition, residential courses and seminars and in other relevant ways, and shall be to provide education of University and professional standards for its students and to promote the educational well-being of the community generally”

The OU has both a remit to, and a history of, experimenting with new technologies. FutureLearn is one example; another is Cyclops, which was designed in the late 70s and used in trials until the mid 80s. It extended the then-contemporary use of phone conferencing, and was seen as a less technical alternative to the full-on CoSY web-conferencing (multiple-email-list) action in stuff like DT200, which we’ll come to later.

No-one appears to have recorded what – if anything – Cyclops stands for. My best guess is Control Your Class Like Orthodox ProfessorS.

Mike Sharples is now Pedagogic Lead at FutureLearn, but he was also one of the key team at the OU working on Cyclops. Here are some notes from a presentation about it he gave in 2009.

Students preferred it to the alternatives… so why isn’t it used now? His framework for evaluation works at three different levels – Micro, Meso and Macro: usability, usefulness, efficiency.
  • Micro layer – worked at this level! Familiar system – like an overhead projector, true WYSIWIS, students operated it with no training.
  • Meso layer – tutors adapted it to their teaching style, tutor station with graphics pad.
  • Macro layer – matched students’ needs, wrong business model, saved student travel costs, but increased OU costs for facilitator and line charges.

A familiar attempt to capture student attitudes at the time is detailed in Bates’ 1984 book, The Role of Technology in Distance Education:

[Image: table of student attitudes to Cyclops, from Bates (1984)]

And from a longer paper [McConnell, David and Sharples, Mike, “Distance Teaching by Cyclops: An educational evaluation of the Open University’s telewriting system”, British Journal of Educational Technology, vol 14 issue 2 (May 1983)]:

[Image: excerpt from McConnell and Sharples (1983)]

Precisely why adding graphics to telephone teaching would make it more effective is not discussed in any of the literature I am able to find. What the telephone teaching added to distance learning was the connection with others, and although early work focused on content, the key was the connection.

Elsewhere in the 80s education technology literature (specifically in the Robin Mason-edited “Mindweave“) researchers were clear that further work should draw on fields that study human communication. For example:

 “Finally, in the user arena, we need to continue to do, and to make use of, fundamental work on the characteristics and processes of human communication, at the individual (cognitive and psycho-affective) level as well as on the social (group interaction and cooperative working) level” (Peter Zorkoczy in “Mindweave”, p262)

The student experience research I cited earlier suggested that a visual focus of attention was one of the primary benefits that the Cyclops system could offer. But is all digital content just a “visual focus of attention”? Some pretty lights to look at whilst the learning happens elsewhere?

#DT200 is your new #4LIFE

 “It could be argued that the inherent pedagogical characteristics of CMC are independent of whether it is used in a distance or campus-based environment. They revolve around two very important features of the medium:

* it is essentially a medium of written discourse, which nevertheless shares some of the spontaneity and flexibility of spoken communication

* it can be used as a powerful tool for group communication and for co-operative learning” (Anthony Kaye in “Mindweave”, p10)

Computer Mediated Communication (via tools like Guelph’s CoSY system) was the big noise in the early-mid 80s, with the OU’s own DT200 of legend being one of the first courses to use such a system with (comparatively) inexperienced distance learners.

This was the first time the OU had used CMC as a primary means of supporting learning. Opinions of students were, at best, mixed:

 “A series of questions about the convenience of electronic communications was included in the questionnaire for the course database. These show that about 60-70% of students returning questionnaires found [CMC] less effective for contacting their tutor, getting help, socializing and saving time and money in travelling” (though there were methodological issues around survey timing) (Robin Mason in “Mindweave”, p123)

 “There seem to be a lot of people with axes to grind, particular things which interest them which they put into the conference which aren’t really relevant to the course at all. Sometimes they are interesting to read, but it is pretty much pot luck – you don’t know what you will get out of them” (student quoted by Robin Mason, as above)

“Before we started I had naïve visions of vast amounts of stimulating conversation going on […] By and large this has not happened and I have learnt that electronic communication is both hard work and time consuming. There is also concern about social isolation produced by the new technology, the electronic communicator can spend a large part of his or her time alone, neglecting the family and perhaps having little time left over for face to face interaction.” (student quoted by Robin Mason, as above)

 As Mason concluded, “Conferencing did not have a high enough profile on the course to be a medium for discussing course issues in depth” (p137)

Fundamentally, the people who liked computer-mediated conferencing liked it. It made sense as a supplement to other modes of interaction, especially amongst interested groups. But it was a long time before eLearning (as it became) became a standard offer at the OU, especially once the expense of providing modems and computer loans, and the contributions towards academic and support time spent responding online, were added up.

This was, of course, in line with the more theoretically grounded research writing at the time:

 “Although technology is important for any mediated activity, it cannot automate what is in reality a social encounter based on specific social practices. These social practices are unusually complex because of the difficulty of mediating organized group activity in a written environment. Failures and breakdowns occur at the social level far more than the technical level” (Feenberg in “Mindweave”, p28)

The message that keeps coming across is that this is difficult stuff. Not really difficult technically – at least, not in 2014 – but difficult conceptually. Interacting and learning in this way online is not “like” social media, any more than it is “like” a face-to-face conversation. It is something different. And, until a learner is used to it, it is something that can be very complex.

Networks, not work.

 “This message map analysis shows a complex web of interaction composed of many interconnected linkages. This visual mapping of the comment linkages supports reported observations that online discussions are not linear and that complex referencing occurs […] collaborative learning is predicated upon interaction; analyses of on-line courses indicate highly synergistic and interactive learning patterns. There is dynamic interaction and weaving of ideas” (Linda Harasim in “Mindweave”, pp56-57)

 We still don’t really understand the implications of this, despite the huge growth in social and learning analytics. I’ve seen so many diagrams that just demonstrate that a lot of people talked to a small number of people. We’re still staring at these images of networks as if they will reveal something about what makes them work.
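The “lots of people talked to a small number of people” pattern is easy to see in the numbers themselves, before any diagram is drawn. Here is a minimal, purely illustrative sketch – the reply log, names, and the “tutor as hub” shape are invented for the example, not drawn from any real course data:

```python
from collections import Counter

# Hypothetical reply log from a discussion forum: (author, replied_to) pairs.
# Entirely invented data for illustration.
replies = [
    ("ann", "tutor"), ("bob", "tutor"), ("cat", "tutor"),
    ("dan", "tutor"), ("ann", "bob"), ("eve", "tutor"),
    ("bob", "ann"), ("fay", "tutor"),
]

# In-degree: how many replies each participant received.
in_degree = Counter(target for _, target in replies)

# Share of all replies captured by the single most-replied-to participant.
total = sum(in_degree.values())
top_person, top_count = in_degree.most_common(1)[0]
print(top_person, top_count / total)
```

In this toy log one participant receives three-quarters of all replies – which is all that many of those elaborate network visualisations are really telling us.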

You might think that this post is just another example of edtech nostalgia. But I’m not here to laugh at old dreams of the future. To me it is a salutary reminder that so much of the work has yet to be done. We’ve improved the technology, we have yet to improve our understanding of the underlying issues. As “open education” becomes a field of inquiry rather than advocacy, this is the unfinished business left to us by our predecessors.

“Everything may be possible eventually through technology – but we should ensure that what is done through technology is what we want, no less in distance education as in other aspects of our lives.” (Tony Bates in “The Role of Technology in Distance Education”, p230)

(why a paleoconnectivism reader? well, originally we had some thoughts of launching a call for chapters for a book covering all this stuff. It may still happen. Most of what I have written here is taken from dusty old books retrieved from academic library clearances. Next time someone comes to relearn this I want them to have some chance of finding an artefact to work from. 80s and early 90s history is a bit of a blind spot for the internet, sadly…)

Book Launch: A New Order

I made a book!

You can get a real actual physical object, with pages and a cover and everything, from lulu.com. It’s £10 plus whatever post and packing is to where you live. (I think it will also get on to the evil online bookstore of your choice, eventually)

You can also get a pdf version for your various ebook reading devices and apps. That one is free, and you’re just downloading it from me. Whilst it lacks the tangibility of the other version, all of the links do work and the other stuff is as near identical as you may expect.

There is also an ebook version, which is currently on Lulu.com and will gradually be sucked out onto the big evil bookstores (Amazon’s Kindle Store, the iBookstore, Barnes & Noble NOOK bookstore, the Kobo bookstore…) over the back end of this year. This ebook version does not have any hyperlinks, because EPUBs suck. It’s free (as in £0.00), and free (as in CC-BY-SA).

The content is the least worst of the various posts I’ve done here over the last 4 years (so, of course you could just read most of the content here). Possibly more exciting is an exclusive introduction/reader guide from the man who started all this madness, Brian Lamb.

The superb cover image, and indeed the header image of this blog, have been created by the super-talented Rob Englebright.

A lot of this creation and publication happened at the UMW Hackathon, so big thanks to Jim, Martha, Tim, Ryan, Andy and the team for hosting us at DTLT (big fan). The “publishing books” track was primarily Audrey Watters and myself, so look out for her book (of her talks from this year, which have all been amazing).

 

OK, so shall we talk about ethics in games journalism?

This is a post about VERY BAD THINGS that are happening/have happened on the internet. Consider this a trigger warning, because I understand that not everyone wants to, or can healthily, read about stuff like this. There are very few explicit words or images in the post, but there may be more if you follow the links (which I have kept to a safe minimum here, though you can read all of the horrible things you like if you look hard enough. Or see Audrey’s post).

Male and “EdTech” readers – yes this post does concern you. You don’t get to write this off as “some horrible men are being horrible to some women”. This is our internet, this is our culture, this is our responsibility.

Gamergate is a thing, and some people are still trying to defend it. It’s a leaderless, structureless, and – quite possibly – lawless entity which seems to exist primarily to (a) threaten and humiliate women involved in the games industry or “games culture”, (b) threaten other women and men who speak up for the women that have been threatened, and (c) say “BUUUUT ethics in games journalism” when taken to task about the first two activities.

Threatening people is no way to win an argument. Defining or agreeing what the argument is about would be a first step, maybe.

The history of the “thing” is already well documented and goes something along the lines of “messy break-up whining goes viral, rape threats happen, but ethics in game journalism”. Turns out a woman may have slept with a man she met who works in the same industry as she works in (I know, right…), and the man was not responsible for writing a non-existent review of an award winning game she produced. Because ethics in games journalism.

Obviously this is the cue for people (and by people I mean a very small number of men) to get angry and start hurling accusations and threats about. As this is the internet and the internet loves drama, a whirlpool of journalism and journalism-flavour content is scattered all around the bits of the web that us normal folks might go on, thus drawing more people in to point out that threats of sexual violence are not ever a good thing and thus having similar threats hurled at them. Because ethics in games journalism.

This annoys me. Ethics in games journalism is actually an interesting thing to write about. Games, and gaming culture, are interesting. There are legitimate concerns about ethics in games journalism, although to be entirely accurate these concerns do not extend to the sex lives of game developers or game journalists.

The real ethical issue in games journalism concerns the huge amount of advertising and sponsorship income gaming publications get from people who make games.

Once there was a time called “the 90s”, and the Amiga was pretty much the gaming platform of choice. So there were loads of magazines that reviewed Amiga games, and in pre-internet times these reviews were the main way in which tedious game-obsessed schoolboys (like your humble scribe) could get information regarding which games (if any) they would want to spend their scarce funds on.

And these reviews were, in the main, awful. They reviewed unfinished games, they gave high scores to terrible games (because game reviewing was, and is, mostly about the scores – which [by ancient convention, the reasons for which are lost forever] are measured as a percentage against the platonic ideal of the best game ever), and they used screen-shots that all looked suspiciously similar to each other.

The usual deal would be that an evil software publisher would sidle up (oh yes, SIDLE up) to an unscrupulous magazine and whisper “oh deary me, what am I going to do with this *huge advertising budget* for our new game. Which magazine should I buy ads in?”. At this point the magazine in question would adopt a certain position and offer to review said new game in glowing terms, even if it wasn’t (a) very good or (b) finished, as long as they could have an “exclusive”.

Why? Because … yup… ethics in game journalism. Always has been.

Videogames magazines and videogames publishers nowadays exist solely as a mutual-support network aimed at squeezing money out of your pockets and into theirs. They know only too well that the days of games mags are numbered, so they have no interest in building reader loyalty, and hence no interest in integrity. All they want is to get as much cash out of you as possible before they die forever. And the best way of doing that is by hyping publishers’ games, artificially inflating readers’ enthusiasm, getting lucrative advertising from the publishers in return, and meanwhile cutting back on staff and budgets to the point that even reviewers naive enough to want to do their job properly simply don’t have the time or the resources for it.

- Stuart Campbell, probably around 2004.

Or how about giving an excellent score to a terrible game that was unfinished, using a version that bore no resemblance to the game that was released? In 1987.

Ethics. Journalism. Games. And actually a fair point, one that deserves criticism (as opposed to, say, threats of sexual violence).

At the time there was one magazine that wouldn’t arbitrarily give a dull game 73% because they didn’t want to annoy people, and that was Amiga Power (which pioneered new and exciting ways of annoying people).

It had a short-lived but still worthwhile “Amiga Power Policy of Truth”, which was an aspiration rather than a law:

1. We won’t review unfinished games just to claim an exclusive.
2. We don’t pander to games publishers – we say what we really think.
3. We only use experienced, professional reviewers.
4. We won’t bore you with mountains of technical-jargon-hardware tedium.
5. We take games seriously, because you do too.

Because ethics in game journalism. Miraculously achieved without any rape threats at all.

-- intermission --
- "So, this isn't really about edtech at all, is it? Just Kernohan doing his usual thing about some moderately interesting bit of recent history"
- "Aye, typical. Can't see any kind of edtech connection at all. Best get on and promote MOOCs/flipped classroom/iPads as some kind of educational panacea then..."
- "Yes, thank god that so many edtech journalists will just blindly regurgitate any press release you hand them. Apart from a few of them, who always complain about violent misogyny on the internet which has nothing to do with edtech"
- "No, absolutely nothing whatsoever. Just because someone had a bad experience interacting online, doesn't mean that interacting online isn't the best thing ever"
- "Typical, bringing things that aren't about selling shiny new disruptive ideas into edtech. Like that bloody Kernohan. I tell you what, I totally won't be downloading his hotly anticipated forthcoming ebook."
- "No, me neither. I love fixed-width fonts though. Almost as good as being a man and not experiencing violent personal threats including the sharing of my name and address on the internet."
-- --

But is it just about the money? Or is there something else?

Have you ever (you the actual reader, not you the imagined reader voice that I just did that tricksy postmodern unreliable narrator thing with) heard of the New Games Journalism? It’s an idea a chap named Kieron Gillen had, which basically amounts to a post-modern turn for games journalism. This is way back in 2004 – so long, long after the Amiga was safely dead – but drew on the same kind of subjective player experience that they pioneered (apart from Your Sinclair). Alert readers will note that Gillen here manages to express this idea without threatening to violently sexually assault anyone.

Ethically, this was a very smart move as it emphasised the primacy of the individual writer’s experience. How does it feel for me to play this game? At worst, you got a blogger who had taken two junior aspirin and a can of shandy pretending he was Hunter S. Thompson. At best, you got something that functioned both as consumer journalism and as literature in its own right. (IPR note: I’d quote from Gillen’s “manifesto”, but it has such a bizarre custom licence on it that I am not sure I am permitted to.)

Game reviews as art? Some responded with the horror that you might expect when confronted with the idea that subjectivity is not to be denied but welcomed. For example, Ram Raider suggests that

“The principle is that an NGJ article should centre around the writer and his experience. Taken at face value, this sounds quite sensible. Unfortunately, applied to an industry full of giant egos, this has resulted in a breed of articles that are more about the writer telling the world about himself.”

And, tellingly:

“Gillen’s not a bad guy when you meet him in the flesh, and it’s a shame to see his name brought to prominence with an issue that we can already see the community lashing out at.”

Really, people did make threats. And started alleging corruption and favouritism (but I don’t think the people involved had sex with each other. At least, I hope not). All down to ethics in game journalism. Or just wanting better game journalism. Or wanting to deal with the nastier bits of what was now connected life via the medium of video games and writing about them.

There was, inevitably, a backlash, and to characterise it I want to point at what Gary Cutlack was doing with UK:Resistance during this period. Here’s his take on the issues with “new games journalism”, of which – it is fair to say – he was not a fan. Cutlack was one of the earliest games bloggers(1), and the way his work descended into SEGA-related nostalgia and bitterness towards the end of the 15 years(!) of blog archives is oddly affecting.

Despite the “proper games journalism” stance in the article linked to above, I’d always read the site as being written in character as a parody of the “socially inept gamer” cliche. (Actually his twitter feed is still like that, except he doesn’t write much about gaming any more – shit, maybe he *really* is that depressed…)

It takes a special kind of man to post pictures of a Sonic the Hedgehog branded popcorn machine onto a wordpress blog. And to write about his growing disenchantment with gaming and game culture for an audience that had grown with him. But in the latter years he primarily focused on “reader submissions”, reflecting the concerns of the readers back on themselves – something that became downright disturbing.

UK:R became harder for me to read as the hugely disturbing “new gaming culture” became entrenched in the comments. When he closed the site in 2011, I understood why. Newer, and angrier, expressions of this culture – such as Yahtzee Croshaw’s “Zero Punctuation” – moved the rape jokes from the comment section to above the line. Because ethics in game journalism. (Incidentally, this is the closest Croshaw has come to writing about GamerGate. I’d like him to write more.)

At his best, Croshaw is very funny indeed. Why he keeps adding the unfunny, disturbing and horrible bits is a mystery to me. He can clearly write well without them.

There are lots of games journalists out there – some of them are good writers, some of them are not, some of them are ethical, some of them are not. I’m naive enough to think that we are all publishers, and we all have the responsibility to write ethically and transparently, to write well, and not to use threats of sexual assault if someone disagrees with us.

If you are concerned about ethics in games journalism (or EdTech journalism, or political journalism) remember that in commenting and responding to the work of others YOU ARE A JOURNALIST. Be ethical.


(1) It’s not a blog, it’s a web site.

Graduate Employability and the New Economic Order

That Lawrie keeps asking me interesting questions. This post comes from one that he asked me recently. 

“A new publication issued today by the Higher Education Statistics Agency (HESA) sets out, for the first time, figures describing the First Destinations of students leaving universities and colleges of higher education throughout the UK.”

No, not today. The 9th of August 1996, when a still-damp-behind-the-ears HESA published the results of a ground-breaking survey concerning where students end up after they complete their degree. More than the rise (and rise) of fees, more than the expansion of the system, more (even) than the growth of the world wide web; the publication of these numbers has defined the shape and nature of modern higher education.

Before this time (and records are hazy here, without disturbing my local library for bizarre out-of-print 90s educational pamphlets from the National Archive) universities’ and colleges’ careers advisory services did their own surveys of graduate destinations, which were collated annually by the DfEE. Though this produced interesting data, national ownership across a relatively newly unified HE sector was clearly the way to integrity.

And also league tables.

Here at last was a metric that promised to convert investment in Higher Education into “real world” economic benefit. Beyond the vague professorial arm waving, and the lovely glowy feeling, some hard return on investment data.

We’re pre-Dearing here, so obviously Lord Ron and team had a thing or two to say in the 1997 report. Though being careful not to provide a “purely instrumental approach to higher education” (4.2), the report makes a number of gestures towards the need to encompass employer requirements in the design and delivery of HE courses. Some of these recommendations (4.14) are as stark and uncompromising as anything in Browne (or Avalanche).

  • above all, this new economic order will place a premium on knowledge. Institutions are well-placed to capitalise on higher education’s long-standing purpose of developing knowledge and understanding. But to do so, they need to recognise more consistently that individuals need to be equipped in their initial higher education with the knowledge, skills and understanding which they can use as a basis to secure further knowledge and skills;

“New Economic Order”, eh? Of course, I’ve gone over some of this history before, in particular the 20 year English habit of building new universities at the drop of a capitalist’s stovepipe hat. What was new in Dearing was the idea of embedding these values into a wider definition of what it means to be a university.

The Blunkett-led DfEE commissioned a report entitled “Employability: Developing a Framework for Policy Analysis” from the Institute for Employment Studies, which was delivered by Jim Hillage and Emma Pollard in 1998. (If the idea of a framework for policy analysis is ringing faint alarm bells in the ears of alert FOTA readers, then yes – the late 90s saw a certain Dr Barber influencing the development of education policy in England.)

What Hillage and Pollard do is provide three key elements of scaffolding to the burgeoning employability agenda in education (note: not solely HE):

  • A literature review, and definition of the term
  • A “framework” for policy delivery, to (yes) “operationalise” employability
  • Some initial analysis of the strengths and weaknesses of the various available measures of employability.

I’m very close to just quoting huge chunks of this report as it is such a perfect encapsulation of the time.

Their definition (p11)

You have to love “labour market efficiency”, don’t you?

Hillage and Pollard make an attempt to split the employability of an individual into a set of attributes (eg p21): “Assets” (knowledge, skills and attitudes), which are “deployed” (career planning and goals), then “presented” (interview and application). “Context” dangles off the end as a late admission that other things going on in the world, or in the life of an individual, can have a powerful effect.

Again very much of its time, the report is cautious but optimistic about the methods of measuring employability – noting that although “output measures” (such as our first destination survey) can be useful, the wider context of the state of the labour market needs to be taken into account.

“Intermediate indicators” (the possession of appropriate skills and knowledge) are easier to measure. You could read across to competency-led course design and the whole world of “learning outcomes” here.

The final indicator type analysed is “perceptual” – broadly, what do employers think of the “employability” of their intake? Again context is key here, and there is an immediacy bias – in that the skills required to do a particular task (I’ll call them “role skills”) are separate from the wider concerns of the individual in being “employable” in a wider way.

But if this document has a theme, it is that the individual needs to take responsibility for their own employability. The learner is complicit in their own subservience to an economic and value-generation system, with the educator merely a resource to be drawn on in this process.

It is this model of education – now measured without qualification – that has come to dominate HE. It is a conceptualisation tied in with institutional (and often academic) support of a neo-liberal system without question. (A neoliberal system, I may add, that is looking none-too-healthy at the moment.) This is a model that is being problematised by Professor Richard Hall and others. And this is why, Lawrie, HE in England is markedly less political than in countries without a fully integrated and developed employability agenda.

Here’s the 2011 White Paper: “To be successful, institutions will have to appeal to prospective students and be respected by employers” (14) and “We also set out how we will create the conditions to encourage greater collaboration between higher education institutions and employers to ensure that students gain the knowledge and skills they need to embark on rewarding careers” (3.2).

Good luck.

Yes, it’s not #QAmageddon!

So, #QAmageddon remains the conversational topic of choice amongst wonk-kind, and merits a semi-interested “meh” from everyone else. For those just joining us, HEFCE sneaked out a fairly wide-ranging consultative review of quality assessment in HE and are inviting comments from all and sundry stakeholders, including (but not limited to) “the sector”, the NUS and government. There’s a (limited) FAQ and a letter to vice-chancellors from HEFCE. The QAA, as one might expect, have responded with a bit of history – Million+ have weighed in, as have the Russell Group.

Other stuff that you’ll be wanting to read includes the Wonkhe coverage (don’t skip the comments), Derfel Owen’s blog and this from Hugh Jones at Sweeping Leaves. The Times Higher Education coverage has not really added anything to the debate so far.

There seem to be two emerging explanations – either this is high HE agency politics, or a belated attempt to bring more transparency into the commissioning of external bodies to undertake statutory duty. As an academic, you should care about neither of them.*

QA for HE teaching is one of those areas that makes less sense the more you look at it. Ostensibly, HEFCE have a statutory duty to assure the quality of provision in publicly funded universities. They employ the QAA to carry this out for them. But what the QAA actually does in this area is to assess the quality of institutional teaching quality assurance processes. They look at documentation describing these processes, and check that outputs from these processes exist.

When you complain about being over-monitored, or having to do loads of administrative form-filling, or that innovation is hamstrung by compliance requirements – you are complaining about your institutional processes… which may or may not be staggeringly over-engineered, antiquated, unwieldy or just plain terrible. You are not complaining about the QAA, which is generally something that only your registrar can sensibly do.

QAA sets out their expectations regarding internal processes in the UK Quality Code for Higher Education, which you could see as a kind of checklist for UK quality managers. (There is even a checklist version of it if you are in a hurry.)

I’ll wait here for you, go and have a little look.

Right – it’s not actually that bad, is it? The code sets out stuff that you’d probably want to do if you are a self-respecting university: keeping proper records, getting stuff validated in a useful way. Even if you wanted to start a co-operative university outside of the state system, you’d want to know who was studying there and what you were intending to teach them, and have some kind of an admissions system…

… so, when you respond to the HEFCE consultation, you could talk about how an over-enthusiastic interpretation of the quality code is engendering a cult of managerialism, and how academic staff are being swamped with data requests at the expense of actual academic stuff like teaching and research. It doesn’t – on one level – matter if your institution is being critically examined by the QAA or an interested prospective student; both need to be left with the impression that the place is trustworthy, human and focused on supporting learning and discovery. Do existing quality processes – considered en masse – offer that impression?

You could talk about how a focus on process is a poor but necessary proxy for a focus on something as intangible as educational quality, but note that a centrality of process can have a negative effect on delivery. You could remind the institution that as a customer (and yes, you are a customer: you employ all the other people that work in an institution to help you be an academic. Even the Vice Chancellor and the Registrar. You spend the [overwhelming amount of the] proceeds of *your* labour on the support systems that form the non-academic component of your institution. They are yours – you are not theirs.) you want to see processes that support academia, not perpetuate institutional structures.

But, moreover, you could make a case for the relationship between the student and the academic being paramount. For a human-scale higher education that does not see interactions simply as data points. Yeah, you’ll have to dress all of the hippy stuff up in management language – but this is an opportunity for radical change greater than anything in recent years.

As yet, we don’t even know how to feed in. But we need to ensure that we can and do, and for an academic, simply being aware of (and keeping an eye on) this transparent process as it evolves is important. I’ll try to highlight the opportunities on here in the weeks to come.


 

*The Agency fight club interpretation explains, to an extent, the apparent suddenness of the announcement, and the way it has caught a lot of commentators by surprise. Though the contract between the QAA and HEFCE for teaching quality assessment in public universities is approaching one of the triennial review points, there is no expectation of a public consultation at this point.

Indeed, one carried out by HEFCE in 2012 pronounced the current arrangements broadly fit for purpose as a basis for meeting the emerging needs of the system. Para 4:

“General support was also expressed for our proposal to use the QAA’s existing method of Institutional Review as the basis of building a risk-based approach, given the success of this new method in ensuring rigorous, robust review which fully involves students, but is proportionate in regulatory terms.”

Basically, the “agency politics” line is predicated on the existence of a sudden and massive falling out between HEFCE and QAA management. It’s not for me to comment on what may (or may not) be happening between the two organisations – I would only venture to suggest that this is not the way that similar disagreements between groups and agencies with overlapping interests have played out in the past. Washing one’s dirty linen in public would be a very strange choice for HEFCE or the QAA to make (if, indeed, either organisation has soiled linen to deal with).

I lean towards the “transparency” explanation – which goes along the lines of HEFCE bedding into a new role as a purchaser of specific services on behalf of the sector, supported via a BIS funding line separate from mainstream teaching funding allocation. You’d be wondering why BIS don’t procure these services themselves – and I don’t think you’d be the only one wondering that.  The HEFCE argument here would be an old one – it is a “buffer body” that has a unique understanding of the English HE sector that can get the best value for money by providing precisely what the sector needs.

It can demonstrate this value by being open and transparent in the procurement process. Even the panel that runs the consultation process that designs the tender documents that prospective service delivery agencies will apply to [breathes] is being openly constituted before our very eyes. Such transparency, very open, wow.

“You can’t always get what you want. But if you try sometimes well you just might find you get what you need”

“The VLE is dead” is not dead. The past month has seen posts from Peter Reed, Sheila MacNeill, and D’Arcy Norman offering the “real world” flip-side to the joyous utopian escapism of edtech Pollyanna Audrey Watters. Audrey’s position – that the LMS (learning management system [US, rest of world])/VLE (Virtual Learning Environment, formerly Managed Learning Environment – MLE [UK]) constrains and shapes our conception of technology-supported learning (and that we could and should leave it behind) – is countered by the suggestion that the LMS/VLE allows for a consistency and ease of management in dealing with a large institution.

To me there are merits in both positions, but to see it as a binary is unhelpful – I don’t think we can say that the LMS/VLE is shaping institutional practice, or that institutional practice is shaping or has shaped the LMS/VLE. To explain myself I need to travel through time in a very UK-centric way, but hopefully with a shout-out to friends overseas too.

We start at the end – an almost-random infrastructure of tools and services brought into being by a range of academics and developers, used to meet local needs and supported haphazardly by a loose network of enthusiasts. It’s 1998, you’re hacking with (the then new) Perl 5, and your screensaver is SETI@home.

But how do we get the results of the HTML quizzes that you are doing for your students on an ~-space website (after having begged your sysadmin to let you use CGI) across to the spreadsheet where you keep your other marks, and/or to your whizzy new student records system that someone has knocked up in Lotus Notes?

  • Copy and paste
  • Keep two windows open
  • Maybe copy from a printout

 What if there was some automagical way to make the output of one programme input into the other? Then you could spend less time doing admin and more time teaching (isn’t that always the promise, but never the reality?)
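In modern terms, that “automagical” glue is a few lines of scripting joining one system’s export to another’s import. Here is a minimal sketch of the idea – the filenames, column names, and sample records are all invented for illustration, since the original CGI quiz and Lotus Notes systems are long gone:

```python
import csv
import io

# Invented sample data standing in for the CGI quiz output and the marks
# spreadsheet; the column names are assumptions, not from any real system.
quiz_output = "student_id,quiz_score\ns01,7\ns02,9\n"
marks_sheet = "student_id,essay_mark\ns01,62\ns02,58\n"

def merge_marks(quiz_csv: str, marks_csv: str) -> dict:
    """Join the two CSV extracts on student_id into one record per student."""
    merged = {}
    for row in csv.DictReader(io.StringIO(marks_csv)):
        merged[row["student_id"]] = dict(row)
    for row in csv.DictReader(io.StringIO(quiz_csv)):
        merged.setdefault(row["student_id"], {}).update(row)
    return merged

print(merge_marks(quiz_output, marks_sheet)["s01"])
```

Nothing clever is happening here – which is rather the point. The hard part in 1998 was not the join; it was getting any two systems to agree on a common format to join on.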

Remember, this was before Kin Lane. We were not quite smart enough to invent the API at this time; that was a couple of years down the line. But the early work of the Instructional Management System project could easily have proceeded along similar lines.

IMS interoperability standards specified common ways in which stuff had to behave if it had any interest whatsoever in working with other stuff. The founding of the project, by Educause in 1997, sent ripples around the world. In the UK, the Joint Information Systems Committee (JISC) commissioned a small project to participate in this emerging solution to a lack of interoperability amongst tools designed to support learning.

That engagement with IMS led to the Centre for Educational Technology Interoperability Standards… CETIS.

As I’ve hinted above, IMS could very easily have invented APIs two years early. But the more alert readers amongst you may have noticed that it is 1998, not 1997. So all this is ancient history. So why 1998?

In a story that Audrey hinted at during the CETIS 2014 conference – it’s like she knew! – some of those involved in IMS were imagining an alternative solution. Rather than bothering with all these crazy, confusing standards, wouldn’t it be much easier if we could get a whole educational ecosystem in a box? Like an AOL for the university. Everything would talk to everything else (via those same IMS standards), and you would have unlimited control and oversight over the instructional process. Hell, maybe you could even use aggregated student data to predict possible retention issues!

Two of those working for IMS via a consultancy arrangement at the time were Michael Chasen and Matthew Pittinsky. Sensing a wider market for their understanding of the area, they formed (in 1997) a consultancy company named Blackboard. In 1998 they bought CourseInfo from Cornell University, and started to build products based on their idea of a management system for learning.

The big selling point? It would allow courses to be delivered on the World Wide Web. Let’s put a date on it: 29th April 1998.

In the UK, this development looked like the answer to many problems, and JISC began to lead a concerted drive to manage take-up of “instructional management systems”, or (as “instructional” is soo colonial) “managed learning environments”.

JISC issued a call for institutional projects in 1999. The aim of these projects was not simply to buy in to emerging “in a box” solutions, but to join up existing systems to create their own managed environment. Looking back, this was a typically responsive JISC move: there was no rush to condemn academics for adopting their own pet tools, merely encouragement for institutions to invent ways of making this feasible on an increasingly connected campus.

JISC was, as it happened, undergoing one of their periodic transitions at the time, because:

“[…] PCs and workstations are linked by networks as part of the world wide Internet. The full impact of the potential of the Internet is only just being understood.”

One of the recommendations stated:

“The JISC […] finds itself trying to balance the desire to drive forward the exploitation of IT through leading edge development and pilot projects with the need to retain production services. […] At present about 20% of the JISC budget is used for development work of which less than a quarter is to promote leading edge development work. This is lower than in previous years. This run down of development work has been to meet a concern of the funding councils that the predecessors of the JISC were too research oriented. […] Given that the future utility of the JISC depends on maintaining UK higher education at the leading edge there should be more focus on development work.”

(sorry for quoting such a large section, but it is a beautifully far-sighted recommendation. For more detail on JISC’s more recent transition, please see the Wilson Review.)

So, there was an emphasis on homegrown development at the leading edge, and a clear driver to invest in and accelerate this – and there was funding available to support it. In this rich and fertile environment, you would imagine that the UK would have a suite of responsive and nuanced ecosystems to support academia in delivering technology-supported tuition. What happened?

Some may try to blame a lack of pedagogic understanding around the tools and systems that were being deployed. JISC commissioned a report from Sandy Britain and Oleg Liber of the University of Wales, Bangor in 1999: “A Framework for Pedagogical Evaluation of Virtual Learning Environments“. By now (one year on), the UK language had shifted from MLE to VLE.

The report notes that as of 1999 there was a very low take-up of such tools and systems. A survey produced only 11 responses (!) – a sign of a concept and terminology that were as yet unfamiliar. And of course, institutions were being responsive to existing practice:

“Informal evidence from a number of institutions suggests that few are currently attempting to implement a co-ordinated solution for the whole institution, rather many different solutions have been put into operation by enterprising departments and enthusiastic individual lecturers. […] It may not be an appropriate model for institutions to purchase a single heavyweight system to attempt to cater for the needs of all departments as different departments and lecturers have different requirements.”

Like many at the time, Britain and Liber cite Robin Mason’s (1998) “Models of Online Courses” as a roadmap for the possible development of practice. Mason proposed:

  • The “Content Plus Support Model”, which separated content from facilitated learning and focused on the content.
  • The “Wrap Around Model”, which more thoughtfully designed activities, support and supplementary materials as an ongoing practice around a pre-existing resource.
  • The “Integrated Model”, which was primarily based around student-led interaction with academic support, content being entirely created within the course.

This is an astonishingly prescient paper, which I must insist that you (re-)read. Now.

It concludes:

“Just as the Web turns everyone into a publisher, so online courses give everyone the opportunity to be the teacher. Computer conferencing is the ideal medium to realize the teaching potential of the student, to the advantage of all participants. This is hardly a new discovery, merely an adaptation of the seminar to the online environment. It is not a cheap ticket to reducing the cost of the traditional teacher, however. Designing successful learning structures online does take skill and experience, and online courses do not run themselves. It is in my third, “integrated model” where this distinction is most blurred, as it provides the greatest opportunities for multiple teaching and learning roles.”

This is a lesson that even the UK Open University (to whom Mason was addressing her comments) have struggled to learn. I leave the reader to add their own observation about the various strands of MOOCs with respect to this.

Britain and Liber, meanwhile, end with a warning:

“This […] brings us back to the issue of whether choosing a VLE is an institutional-level decision or a responsibility that should be left in the hands of individual teachers. It raises the question of whether it is possible (or indeed desirable) to define teaching strategy at an institutional rather than individual level.”

A footnote softens this somewhat, noting that issues of interoperability and data protection do need to be considered by institutions.

In 2003, JISC undertook their first review of MLE/VLE activity. The report (prepared by Glenaffric Consulting) suggested that the initial enthusiasm for the concept had been tempered both by a general disenchantment with the potential of the web after the first dot-com bubble had burst, and by an understanding of the pressures of running what was becoming a mission-critical system. One key passage (for me) states:

“[A] tension is apparent between the recognised need for generally applicable standards for the sector, and the institutions’ need for systems that provide the functionality that they require for their specific business processes. In this context, witnesses were critical of the drive to impose a standards-based approach when the specifications themselves were not complete, or adequately tested for widespread application.”

The pressure to “get it right first time” outweighed the idea of building for the future, and it was into this gap that commercial VLEs (a single product) stepped, offering a seemingly more practical alternative to making myriad systems communicate using rapidly evolving standards.

By 2003, only 13% of institutions did not use at least one VLE. By 2005, this had dropped to 5%, and by 2008 the question no longer needed to be asked, and the dominance of Blackboard within this market (through acquisitions, notably of WebCT) was well established.

But remember that the VLE emerged from a (perceived or actual) need to allow for interoperability between instructional and learning systems – a need amplified by funding and advice designed to future-proof innovative practice. We may as well ask why Microsoft became a dominant desktop tool. It just worked. It was there. And it became the benchmark by which other solutions were measured.

To return to my opening tension – I wonder if both institution and system have been driven to current norms by a pressure for speedy and reliable ease of use: a way to manage the growing administrative burden in a newly massified and customer-focused higher education.

Reliability. Standardisation, not standards-informed development. And the ever-flowing pressure for rapid and transformative change. Where did that come from?

And that is why we talk about politics and culture at education technology conferences. I saw her today, at the reception…

 

 

You’ll Never Hear Surf Music Again #altc #altc2014

“Strange beautiful grass of green
with your majestic silver seas
Your mysterious mountains I wish to see closer…”

What is social media like? Speaking at the 2014 UCISA conference, Clay Shirky put the collaborative structures that have been built up around web technology in a category of their own. He asked: Is [facebook] like other media? Is [facebook] like a table? Or is [facebook] like [facebook]?

It transpired that we are dealing with a new category. Shirky argues that as information technology moves deeper and deeper into the world of human communication, it allows users to use the data trails they create to develop meaningful insights into their lives and interactions.

Social media, in 2014, is more media than social. Every organisation has a person or a team, usually in the communications department, with a contractual remit to be “social”. There is a policy, not usually an entirely written one, that determines what constitutes “social” for other members of staff. Falling the wrong side of the line causes trouble. And believe me, these lines are policed.

(Paul is always on about this…)

Just ask Thomas Docherty (a former Head of English at Warwick) about sharing and surveillance. At a conference celebrating the republication of “Warwick University Limited” – a book describing the political surveillance that academic staff and students were subject to in the 1970s – he noted that:

“Academics and students, if interested in material research and learning, have to work in the shadows, in clandestine fashion.”

At least, had he been present at the conference, he would have noted this. I quote from a letter he sent whilst forbidden to enter the campus or make contact with his students.

As things stand, we know very little about his suspension, other than what has been released by the institution, which reassures us that his trenchant and freely expressed political views and membership of the Council for the Defence of British Universities are not the reason for this unusual punishment. At the time of publication Thomas Docherty is still suspended (some say indefinitely), and has been for 240 days.

(image from the WarwickStudentsForDocherty Facebook group)

Writing about her experiences at Worldviews2013, Melonie Fullick noted:

“Those starting out in academic life need to receive the message, loud and clear, that this kind of “public” work [new ways of engaging those outside of academia, primarily social media] is valued. They need to know that what they’re doing is a part of a larger project or movement, a more significant shift in the culture of academic institutions, and that it will be recognized as such. This will encourage them to do the work of engagement alongside other forms of work that currently take precedence in the prestige economy of academe.”

Docherty is hardly the only example of an outspoken academic who has been censured by an institution, and there are many far, far more telling tales of social media and the way it reacts to outspoken opinions. I just use the example as it is a local one. But far more insidious are the kinds of self-censorship that many of us must participate in. “No religion or politics”, as the old saying goes.

But our employers (and ourselves) are not the only critical readers here. The networks themselves monitor and respond to the emotions and ideas we choose to express. The recent Facebook research on mood contagion, however welcome its open publication, reminds us just how much attention platforms pay to what we share – and, almost as a given, how valuable this information can be.

Witness also the controversy around the migration to Facebook Messenger on mobile platforms. The New York Times suggested the backlash was “part confusion, part mistrust“. In reality, users have been spoiling for a fight with Facebook for a long time: a misunderstanding of how Android permissions work (an application that can record sound and take pictures needs to be allowed to use the microphone and camera…) feeds a building resentment of “move fast and break things”. Which itself has become the less quotable “move fast with stable infra“.
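For illustration, those permissions are declared up front in an app’s manifest – a minimal sketch, with a hypothetical package name. On Android at the time, the whole list was presented to the user in bulk at install time, which is exactly what alarmed people:

```xml
<!-- Hypothetical messaging app: to attach photos and voice notes it must
     declare the camera and microphone permissions, which (in 2014) Android
     showed to the user as an all-or-nothing list at install time. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.messenger">
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
</manifest>
```

Reading that list cold, “can record audio at any time” sounds far more sinister than “lets you send voice notes”.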

Couple this with the dense web of connections that can be built up around a single persona and we see the true cause of the Nymwars – far from improving online conversation, as Google claimed when improving YouTube comments, drawing activity together across numerous sites raises the value of this data. As our picture becomes more complete, we can be better understood by those who wish to understand us. To inform us. To sell to us. And to police us.

For the moment, an uneasy truce has been called. The real name is not required – the single identity remains. It seems hopelessly naive to think our real names could not be determined from our data if needed. By whoever feels the need to.

Compared to Facebook, we’ve always given twitter rather a free ride. But this too, with the introduction first of sponsored tweets and then of other tweets we may find interesting, becomes less about our decisions and more about our derived preferences. This is made explicit in the new onboarding process. Twitter in 2014 is a long way from twitter in 2007.

There have been the beginnings of a movement away from this full-spectrum sharing – platforms like Snapchat and WhatsApp connect people with their friends directly, and the idea of the network comes through forwarding and very selective sharing. Networks like Secret and Whisper do away with the idea of “whole-person” media – anonymous “macros” (words + image) are shared based on location only.

(image: an example Secret post)

Though each will create a trail, these are not publicly viewable and are difficult to integrate with other trails. Shirky sees the creation of a trail as something that empowers the user – “If there is a behavior that matters to them, they can see it and detail it to change that behavior” – a position that tends towards the ChrisDancyfication of everything.

We use social media trails (and online activity, for that matter) like we use cloud chambers: to draw and assert links between events that are visible only in retrospect. It’s a big shift from sharing altruistically to build connections, to sharing as a side-effect of self-monitoring.

I’ve rambled a little, but the central thesis I’m building here is:

(To be fair, it’s really difficult to get off Facebook…)

  • As social media users, we are becoming aware of the value of the aggregated data we generate.
  • Our interactions with social media platforms are characterised by mistrust and fear. We no longer expect these platforms to use our data ethically or to our advantage.
  • We expect others to use what we share to our disadvantage.
  • So we share strategically, defensively, and using a lot of the techniques developed in corporate social media.
  • And emerging new media trends focus on either closely controlled sharing or anonymous sharing.

Shirky’s position on the inexorable domination of the “social” clearly does not mesh with these trends – and this throws open the question of the place of social media in academia. Bluntly, should we be recommending to learners that they join any social network? And how should we be protecting and supporting those that choose to?

Social media has changed underneath us, and we need to respond to what social media is rather than what it was.

Alan (cogdog) Levine recently quoted from Frank Chimero:

“We concede that there is some value to Twitter, but the social musing we did early on no longer fits. My feed (full of people I admire) is mostly just a loud, stupid, sad place. Basically: a mirror to the world we made that I don’t want to look into.”

I’d add, for the reasons above, “dehumanising” and “potentially dangerous”.

Levine glosses this beautifully:

“Long long ago, in a web far far away, everything was like neat little home made bungalows stretched out on the open plain, under a giant expansive sky, where we wandered freely, exploring. Now we crowd among densely ad covered walkways of a shiny giant mall, never seeing the sky, nor the real earth, at whim to the places built for us.”

He’s a man who uses social media more than nearly anyone I know, myself included. And now he deliberately limits his exposure to the noise of the influence he has. He develops his own work-arounds to preserve and foster the things he finds important. Because he (and we) cannot rely on social media to continue acting in the same way. You can’t rely on tagging. You can’t rely on permanence. You can’t rely on the ability to link between services. You can’t even rely on access.

Tony Hirst is one of the most talented data journalists I know. In his own words:

“I used to build things around Amazon’s API, and Yahoo’s APIs, and Google APIs, and Twitter’s API. As those companies innovated, they built bare bones services that they let others play with. Against the established value network order of SOAP and enterprise service models let the RESTful upstarts play with their toys. And the upstarts let us play with their toys. And we did, because they were easy to play with.

But they’re not anymore. The upstarts started to build up their services, improve them, entrench them. And now they’re not something you can play with. The toys became enterprise warez and now you need professional tools to play with them. I used to hack around URLs and play with the result using a few lines of Javascript. Now I need credentials and heavyweight libraries, programming frameworks and tooling.”
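The shift Hirst describes can be sketched in a few lines. The endpoint URLs and token below are hypothetical placeholders, not any real service’s API; the point is simply the contrast between an open URL you could hack together by hand and a request that requires registered-app credentials:

```python
# A sketch of "then vs now" for public APIs. All URLs and tokens are
# invented for illustration; no network request is actually sent.
from urllib.request import Request
from urllib.parse import urlencode

def old_style_request(screen_name):
    """Circa 2007: hack a URL together in the address bar, no credentials."""
    params = urlencode({"screen_name": screen_name, "count": 10})
    return Request(f"https://api.example.com/1/statuses.json?{params}")

def new_style_request(screen_name, bearer_token):
    """Now: the same data requires an OAuth-style bearer token for a
    registered application before the service will talk to you."""
    params = urlencode({"screen_name": screen_name, "count": 10})
    req = Request(f"https://api.example.com/2/statuses.json?{params}")
    req.add_header("Authorization", f"Bearer {bearer_token}")
    return req

open_req = old_style_request("psychemedia")
auth_req = new_style_request("psychemedia", "AAAA-hypothetical-token")
```

The first request is self-describing and shareable; the second cannot even be tried out without first negotiating keys from the platform – which is precisely the barrier to casual play that Hirst laments.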

After facing similar issues – with syndication, stability, permanence, advertising – Jim Groom (and others) are experimenting with forms of “social media” that are platform independent. Known, the webmention protocol, and similar emerging tools stem from the work of IndieWebCamp – a distributed team dedicated to providing a range of alternatives to corporate social media. They work to the following principles:

  • your content is yours
  • you are better connected
  • you are in control

The first fits in nicely with ongoing work such as Reclaim Hosting, but for me the key aspect is control. One of the many nice aspects of these tools is that they are not year-zero solutions – they start from the assumption that integration with other (commercial) networks will be key, and that conversation there is as important as “native” comments. Compare Diaspora, which initially positioned itself as a direct alternative to existing networks (and is erroneously described in the press as a network where “content is impossible to remove“). With user-owned tools you own what you share plus a copy of what is shared with you, and you have final control over all of this. Publish on your Own Site, Syndicate Elsewhere (POSSE).
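The webmention protocol mentioned above is pleasingly simple: once you have published a reply on your own site, you notify the site you linked to with a single form-encoded POST carrying `source` and `target` URLs. A minimal sketch – the URLs are invented, and a real sender would first discover the endpoint from the target page’s Link headers or HTML:

```python
# A sketch of a Webmention notification (IndieWebCamp protocol).
# All URLs are hypothetical; the request is built but never sent.
from urllib.parse import urlencode
from urllib.request import Request

def build_webmention(endpoint, source, target):
    """Build the POST that tells target's site that source (a post on
    your own site) links to it. The body is just two URL parameters."""
    body = urlencode({"source": source, "target": target}).encode("utf-8")
    req = Request(endpoint, data=body, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req

req = build_webmention(
    "https://example.org/webmention",            # discovered endpoint
    "https://myownsite.example/reply-to-post",   # my POSSE'd reply
    "https://example.org/2014/original-post",    # the post I replied to
)
```

The receiving site then fetches `source`, verifies that it really does link to `target`, and can display the reply as a comment – conversation without a central platform.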

Of course, this doesn’t lessen the risk of openly sharing online – these risks stem from the kinds of corporations that employ us and that we entrust our data to. But it does help users keep control of what they do share. Which is a start.

But a start of what? We are already seeing weak signals that young people (indeed, all users) are drifting away from social networks, almost as fast as those who hope to talk to them are adopting the same networks. The quantified self is moving towards the qualified self, as users begin to understand and game the metrics that they are supposedly using for their own purposes.

People are more complex than activity trails and social networks suggest. Witness the care taken to present facets (or even to perpetuate the illusion of an absence of facets), and the ways people find to get answers out of systems not set up to respond to questions.

Social media has changed. It’s the same tune, but a different song.

Ben Werdmuller (Known developer) suggests, in a recent post:

“The web is the most effective way there has ever been to connect people with different contexts and skills. Right now, a very small number of platforms control the form (and therefore, at least to an extent, the content) of those conversations. I think the web is richer if we all own our own sites – and Known is a simple, flexible platform to let people do that.”

In 2014, suspicion about the actions of the super-social media platforms has reached fever pitch. Are we approaching a proper social media backlash? What does this mean for teaching online, and do projects like Known offer another way?

“Your people I do not understand
And to you I will put an end
And you’ll never hear
Surf music again.”

(though the theme to Coronation Street became “Third Stone From The Sun“, which became “Dance with the Devil“, which became “I’m Too Sexy“…)

[EDIT: 23/09/14 – Times Higher Education (£) are reporting that Docherty’s suspension will end on 29th September, 269 days after it commenced. Warwick University (“university of the year”) have not made any comment regarding the reason for the suspension, or why it has ended, but it is understood that the disciplinary process will still continue. Because obviously he hasn’t been punished enough.]