“The VLE is dead” is not dead. The past month has seen posts from Peter Reed, Sheila MacNeill, and D’Arcy Norman offering the “real world” flip-side to the joyous utopian escapism of edtech Pollyanna Audrey Watters. Audrey’s position – that the LMS (learning management system [US, rest of world])/VLE (Virtual Learning Environment, formerly Managed Learning Environment – MLE [UK]) constrains and shapes our conception of technology-supported learning (and that we could and should leave it behind) – is countered by the suggestion that the LMS/VLE allows for a consistency and ease of management in dealing with a large institution.
To me there are merits in both positions, but to see it as a binary is unhelpful – I don’t think we can say that the LMS/VLE is shaping institutional practice, or that institutional practice is shaping or has shaped the LMS/VLE. To explain myself I need to travel through time in a very UK-centric way, but hopefully with a shout-out to friends overseas too.
We start at the end – an almost-random infrastructure of tools and services brought into being by a range of academics and developers, used to meet local needs and supported haphazardly by a loose network of enthusiasts. It’s 1998, you’re hacking with (the then new) Perl 5, and your screensaver is SETI@home.
But how do we get the results of the HTML quizzes that you are doing for your students on an ~-space website (after having begged your sysadmin to let you use CGI) across to the spreadsheet where you keep your other marks, and/or to your whizzy new student records system that someone has knocked up in Lotus Notes?
What if there were some automagical way to make the output of one program the input of the other? Then you could spend less time doing admin and more time teaching (isn’t that always the promise, and never the reality?).
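For the sake of argument, imagine the quiz script appended one line per attempt to a flat log file (the format, filenames, and field layout here are entirely hypothetical). The 1998-era “integration” was a glue script, something like:

```shell
# Hypothetical quiz log: one pipe-separated line per attempt.
printf 'alice|q1|7\nbob|q1|5\n' > quiz.log

# Turn it into the comma-separated form a marks spreadsheet can import.
awk -F'|' 'BEGIN { OFS = ","; print "user,question,score" }
           { print $1, $2, $3 }' quiz.log > marks.csv

cat marks.csv
```

One fragile, hand-rolled converter per pair of systems: exactly the kind of point-to-point plumbing that interoperability standards were meant to replace.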
Remember, this was before Kin Lane. We were not quite smart enough to invent the API at this point; that was still a couple of years down the line. But the early work of the Instructional Management System project could easily have proceeded along similar lines.
IMS interoperability standards specified common ways in which stuff had to behave if it had any interest whatsoever in working with other stuff. The founding of the project, by Educause in 1997, sent ripples around the world. In the UK, the Joint Information Systems Committee (JISC) commissioned a small project to participate in this emerging solution to a lack of interoperability amongst tools designed to support learning.
That engagement with IMS led to the Centre for Educational Technology Interoperability Standards… CETIS.
As I’ve hinted above, IMS could very easily have invented APIs two years early. But the more alert readers amongst you may have noticed that our story is set in 1998, not 1997, and that all of this is ancient history anyway. So why 1998?
In a story that Audrey hinted at during the CETIS 2014 conference – it’s like she knew! – some of those involved in IMS were imagining an alternative solution. Rather than bothering with all these crazy, confusing standards, wouldn’t it be much easier to get a whole educational ecosystem in a box? Like an AOL for the university. Everything would talk to everything else (via those same IMS standards), and you would have unlimited control and oversight over the instructional process. Hell, maybe you could even use aggregated student data to predict possible retention issues!
Two of those working for IMS via a consultancy arrangement at the time were Michael Chasen and Matthew Pittinsky. Sensing a wider market for their understanding of the area, they formed (in 1997) a consultancy company named Blackboard. In 1998 they merged with CourseInfo, a course-management startup spun out of Cornell University, and started to build products based on their idea of a management system for learning.
The big selling point? It would allow courses to be delivered on the World Wide Web. Let’s put a date on it. 29th April 1998.
In the UK, this development looked like the answer to many problems, and JISC began to lead a concerted drive to manage take-up of “instructional management systems”, or (as “instructional” is soo colonial) “managed learning environments”.
JISC issued a call for institutional projects in 1999. The aim of these projects was not simply to buy in to emerging “in a box” solutions, but to join up existing systems to create their own managed environments. Looking back, this was a typically responsive JISC move: there was no rush to condemn academics for adopting their own pet tools, merely encouragement for institutions to invent ways of making this feasible on an increasingly connected campus.
JISC was, as it happened, undergoing one of their periodic transitions at the time, because:
“[...] PCs and workstations are linked by networks as part of the world wide Internet. The full impact of the potential of the Internet is only just being understood.”
One of the recommendations stated:
“The JISC [...] finds itself trying to balance the desire to drive forward the exploitation of IT through leading edge development and pilot projects with the need to retain production services. [...] At present about 20% of the JISC budget is used for development work of which less than a quarter is to promote leading edge development work. This is lower than in previous years. This run down of development work has been to meet a concern of the funding councils that the predecessors of the JISC were too research oriented. [...] Given that the future utility of the JISC depends on maintaining UK higher education at the leading edge there should be more focus on development work.”
(Sorry for quoting such a large section, but it is a beautifully far-sighted recommendation. For more detail on JISC’s more recent transition, please see the Wilson Review.)
So, there was an emphasis on homegrown development at the leading edge, and a clear driver to invest in and accelerate this – and there was funding available to support it. In this rich and fertile environment, you would imagine that the UK would have a suite of responsive and nuanced ecosystems to support academia in delivering technology-supported tuition. What happened?
Some may try to blame a lack of pedagogic understanding around the tools and systems being deployed. JISC commissioned a report from Sandy Britain and Oleg Liber of the University of Wales, Bangor in 1999: “A Framework for Pedagogical Evaluation of Virtual Learning Environments”. By now (one year on), the UK language had shifted from MLE to VLE.
The report notes that as of 1999 there was a very low take-up of such tools and systems. A survey produced only 11 responses (!), a sign of a concept and terminology that were as yet unfamiliar. And of course, institutions were being responsive to existing practice:
“Informal evidence from a number of institutions suggests that few are currently attempting to implement a co-ordinated solution for the whole institution, rather many different solutions have been put into operation by enterprising departments and enthusiastic individual lecturers. [...] It may not be an appropriate model for institutions to purchase a single heavyweight system to attempt to cater for the needs of all departments as different departments and lecturers have different requirements.”
Like many at the time, Britain and Liber cite Robin Mason’s (1998) “Models of Online Courses” as a roadmap for the possible development of practice. Mason proposed:
- The “Content Plus Support Model”, which separated content from facilitated learning and focused on the content.
- The “Wrap Around Model”, which more thoughtfully designed activities, support and supplementary materials as an ongoing practice around a pre-existing resource.
- The “Integrated Model”, which was primarily based around student-led interaction with academic support, content being entirely created within the course.
This is an astonishingly prescient paper, which I must insist that you (re-)read. Now.
“Just as the Web turns everyone into a publisher, so online courses give everyone the opportunity to be the teacher. Computer conferencing is the ideal medium to realize the teaching potential of the student, to the advantage of all participants. This is hardly a new discovery, merely an adaptation of the seminar to the online environment. It is not a cheap ticket to reducing the cost of the traditional teacher, however. Designing successful learning structures online does take skill and experience, and online courses do not run themselves. It is in my third, “integrated model” where this distinction is most blurred, as it provides the greatest opportunities for multiple teaching and learning roles.”
This is a lesson that even the UK Open University (to whom Mason was addressing her comments) have struggled to learn. I leave the reader to add their own observation about the various strands of MOOCs with respect to this.
Britain and Liber, meanwhile, end with a warning.
“This [...] brings us back to the issue of whether choosing a VLE is an institutional-level decision or a responsibility that should be left in the hands of individual teachers. It raises the question of whether it is possible (or indeed desirable) to define teaching strategy at an institutional rather than individual level”
A footnote mollifies this somewhat, noting that issues of interoperability and data protection do need to be considered by institutions.
In 2003, JISC undertook their first review of MLE/VLE activity. The report (prepared by Glenaffric Consulting) suggested that the initial enthusiasm for the concept had been tempered both by a general disenchantment with the potential of the web after the first dot-com bubble had burst, and by an understanding of the pressures of running what was becoming a mission-critical system. One key passage (for me) states:
“[A] tension is apparent between the recognised need for generally applicable standards for the sector, and the institutions’ need for systems that provide the functionality that they require for their specific business processes. In this context, witnesses were critical of the drive to impose a standards-based approach when the specifications themselves were not complete, or adequately tested for widespread [...]”
The pressure to “get it right first time” outweighed the idea of building for the future, and it was into this gap that the commercial VLE (as a single product) stepped, offering a seemingly more practical alternative to making myriad systems communicate using rapidly evolving standards.
By 2003, only 13% of institutions did not use at least one VLE. By 2005, this had dropped to 5%, and by 2008 the question no longer needed to be asked, and the dominance of Blackboard within this market (through acquisitions, notably of WebCT) was well established.
But remember that the VLE emerged from a (perceived or actual) need to allow for interoperability between instructional and learning systems – a need amplified by funding and advice designed to future-proof innovative practice. We may as well ask why Microsoft came to dominate the desktop. It just worked. It was there. And it became the benchmark by which other solutions were measured.
To return to my opening tension – I wonder if both institution and system have been driven to current norms by a pressure for speedy and reliable ease of use: a way to manage the growing administrative burden in a newly massified and customer-focused higher education.
Reliability. Standardisation, not standards-informed development. And the ever-flowing pressure for rapid and transformative change. Where did that come from?
And that is why we talk about politics and culture at education technology conferences. I saw her today, at the reception…