Learning gain, again

Perceptions of the future of teaching quality monitoring have come a long way since I last wrote about HEFCE’s strange fascination with quantifying how much students learn at university. A full consultation concerning the ongoing review of QA processes detonated in late June, swiftly followed by the summer’s all-consuming speculative think-piece generator, the TEF.

Today – alongside the announcement of 12 year-long institutional projects to “pilot” a bewildering range of metrics, e-portfolios, skills assessments and pig entrail readings – HEFCE released the research conducted for them by RAND Europe. Interestingly, RAND themselves are still waiting for a “co-ordinated concurrent release with other publication outlets”.

(screengrab: 13:45BST, 21/09/2015)

The report itself does have a rushed feel to it – shifting typography, a few spelling, grammatical and labelling howlers – which is unusual given the generally high quality of HEFCE research. And why would RAND label it as “withdrawn”? But I’ve heard from various sources that the launch was pre-announced for today at some point late last week, so – who knows.

We started our journey with an unexpected public tendering exercise back in June 2014, though this is also shown as being launched in May of the same year. The final report, according to the contract viewable via the second link in this paragraph, was due at the end of October 2014, making today’s publication nearly a year behind schedule.

So over a year of RAND Europe research (valued at “£30,000 to £50,000”) is presented over 51 variously typeset pages, 10 pages of references (an odd, bracketless variant of APA, if you are interested) and 5 appendices. What do we now know?

RAND set out to “explore[…] the concept of learning gain, as well as current national and international practice, to investigate whether a measure of learning gain could be used in England.”

They conclude [SPOILERS!] that the purpose to which learning gain is put is more important than any definition, that there is a lot of international and some UK practice of varying approaches and quality, and that they haven’t got the faintest idea as to whether you could do learning gain in the UK – but why not fund some pilot studies and do some more events.

Many of the literature review aspects could have been pulled off the OECD shelf – Kim and Lalancette (2013) covers much of the same ground for “value added” measures (which in practice includes much of what RAND define as learning gain, such as the CLA standardised tests and the Wabash national study), and adds an international compulsory-level analysis of practice.

Interestingly, the OECD paper notes that “[…] the longitudinal approach, with a repeated measures design often used in K-12 education, may not be logistically feasible or could be extraordinarily expensive in higher education, even when it is technically possible” (p9) whereas RAND are confident that “Perhaps the most robust method to achieve [comparability of data] is through longitudinal data, i.e. data on the same group of students over at least two points in time” (p13).
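It’s worth being concrete about what that “most robust method” actually entails. Here is a minimal sketch (in Python, with entirely invented scores and student names – my own illustration, nothing from either report) of the repeated-measures design both documents describe: test the same students at two points in time and treat the paired difference as the gain. The OECD’s logistical worry falls straight out of the matching requirement.

    import statistics

    # Hypothetical matched scores for the same five students at entry (t1)
    # and after a year of study (t2) -- invented numbers, purely illustrative.
    scores = {
        "student_a": (52, 61),
        "student_b": (48, 55),
        "student_c": (67, 66),
        "student_d": (59, 72),
        "student_e": (44, 50),
    }

    # Per-student learning gain is the paired difference, which is only
    # computable because each student appears at both time points.
    gains = [t2 - t1 for (t1, t2) in scores.values()]

    print(f"mean gain: {statistics.mean(gains):.1f}")
    print(f"sd of gains: {statistics.stdev(gains):.1f}")

    # Any student missing from the second sitting drops out of the pairing:
    # that attrition is the cost the OECD flags for this design in HE.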

The recommendation for a set of small pilot studies, in this case, may appear to be a sensible one. Clearly the literature lacks sufficient real world evidence to make a judgement on the feasibility of “learning gain” in English higher education.

By happy coincidence, HEFCE had already planned a series of pilots as stage two of their “learning gain” work! The “contract” outlines the entire plan:

“The learning gain project as a whole will consist of three stages. The first stage will consist of a critical evaluation of a range of assessment methods and tools (including both discipline-based and generic skills testing), with a view to informing the identification of a subset that could then be used to underpin a set of small pilots in a second stage, to be followed by a final stage, a concluding comparative evaluation. This invitation to tender is solely concerned with the first stage of the project – the critical review” (p5)

So the RAND report has – we therefore conclude – been used to design the “learning gain” pilot circular rather than to generate recommendations for ongoing work? After all, the circular itself promised the publication of the research report “shortly” in May 2015 (indeed, the PDF metadata from the RAND report suggests it was last modified on 27 March 2015, while the text states that drafting concluded in “mid-January”) – and we know that the research was meant to inform the choice of a range of methods for piloting.

The subset comprising “standardised tests, grades, self-reporting surveys, mixed methods and other qualitative methods” that was offered to pilot institutions does echo the categorisation in the RAND report (for example, section 6.3.2, the “Critical Overview”, uses the same headings).

However, a similar list could be drawn from the initial specifications back in May 2014:

  • Tools currently used in UK institutions for entrance purposes (e.g. the Biomedical Admissions Test) or by careers services and graduate recruiters to assess generic skills
  • Curriculum-based progress testing of acquisition of skills and knowledge within a particular discipline
  • Standardised tests, such as the US-based Collegiate Learning Assessment (CLA), the Measure of Academic Proficiency and Progress (MAPP) and the Collegiate Assessment of Academic Proficiency (CAAP).
  • Student self- and/or peer-assessed learning gain
  • Discipline-based and discipline-independent mechanisms
  • Other methods used by higher education providers in England to measure learning gain at institutional level
  • International (particularly US-based) literature on the design and use of standardised learning assessment tools in HE […]
  • Literature on previous work on learning gain in UK HE
  • UK schools-based literature on the measurement of value-added (p7)

In essence, RAND Europe have taken (again, let us be charitable) 10 months to condense the above list into the five categories presented in the HEFCE call for pilots. (The pilots themselves were actually supposed to be notified in June 2015, though they seem to have kept things a carefully guarded secret until Sept 16th, at least. Well done, Plymouth!)

It is unclear – though it seems unlikely – whether teams developing institutional bids had sight of the RAND report during the bid development process. And it is doubly unclear why the report wasn’t released to a grateful public until the projects were announced.

But the big question for me is: what was the point of the RAND report into learning gain?

  • It didn’t (appear to) inform HEFCE’s plan to run pilot projects. There were already plans to run pilots back in 2014, and while the categories of instrument types use “RAND language”, they could equally have been derived from the original brief.
  • It was released at the same time as successful bids were announced, and thus could not (reasonably) have contributed to the design or evidence base for institutional projects. (Aside: I wonder how many of these pilots have passed through an ethical review process.)
  • It didn’t significantly add to the OECD’s 2013 understanding of the literature in this area. It referred to 6 “research” papers (by my count) from 2014, and one from 2015.
  • There was a huge parallel conversation about an international and comparable standard, again by the OECD, during the period of study. We (in England) said “goodbye” as they said “AHELO”, but would it not have made sense to combine background literature searches (at least) with an ongoing global effort?

Though I wouldn’t say I started from a position of unabashed enthusiasm, I have been waiting for this report with some interest. “Learning gain” (if measured with any degree of accuracy and statistical confidence) would be the greatest breakthrough in education research in living memory. Drawing a measurable and credible causal link between an intervention or activity and the acquisition of knowledge or skills: it’s the holy grail of understanding the education process.
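To see why, consider the naive calculation that any “learning gain” measure invites: compare mean gains between students who did and didn’t experience some intervention. A hypothetical sketch (invented numbers, my own illustration – nothing from the report):

    import statistics

    # Hypothetical gain scores (exit minus entry) for two self-selected
    # groups of students -- invented data, for illustration only.
    gains_with_intervention = [9, 7, 12, 6, 10]
    gains_without = [5, 8, 4, 6, 3]

    diff = (statistics.mean(gains_with_intervention)
            - statistics.mean(gains_without))
    print(f"difference in mean gain: {diff:.1f} points")

    # The headline number looks like an effect, but without random
    # assignment it is correlation: motivated students may simply have
    # opted in. That gap is exactly the causal link still missing.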

There’s nothing in this report that will convince anyone that this age-old problem is any closer to being solved. Indeed, it adds little to previous work. And reading between the lines of the convoluted path from commission to release, it is not clear that it meaningfully informed any part of HEFCE’s ongoing learning gain activity.

All told, a bit of a pig’s ear.

7 thoughts on “Learning gain, again”

  1. “‘Learning gain’ (if measured with any degree of accuracy and statistical confidence) would be the greatest breakthrough in education research in living memory”

    Ahh… memory… now there’s a question! I guess we could just see if students can remember stuff… They must have learnt it if they can remember it! Easy to measure… maybe give them a quick examination!

    1. It’s the best proxy we’ve had for years. Now “learning gain” changes the game by suggesting… different examinations that test the memory of different things.
