So does Audrey Watters. At least, that was one of the strangest accusations made about our double-header keynote on day two of #opened13 (a conference I intend to document more fully in due course). Followers of the followers will no doubt have seen the multimedia that I foisted on the audience instead of a proper keynote with inspirational pictures.
But – yes, numbers = bad. Here are two stories I saw on Twitter today.
First up, another one of those millionaire rockstars looking to fix broken education with broken business analytics. Paul Tudor Jones is a hedge-fund manager who never learnt anything at all from his educational experience. But still, he feels that education – when properly business-ified – is a path out of poverty for millions. An early quote, describing his workspace, tells us a lot:
His computer projects his fund’s market positions onto the wall, blinking when any share changes price, its overall performance channeled into a moving graph (which just so happens, as with most days for the past few decades, to be pointing up). “I sit here and watch these all day,” says Jones, describing his daily work routine.
So here we have a man who makes things happen by watching numbers. Guess what he wants to do to the US public school system? Go on, guess.
Jones says his goal is to get the U.S. educational system in the top [PISA?] quartile of developed countries in the next ten years. Twenty years from now he wants the U.S. at number one. (The irony: Jones says he himself got very little out of his own education, with the exception of a journalism class. “My B.A. in economics was zero help in my profession today.” Instead, he says that countless hours playing games during his school years–poker, chess, backgammon–were the experiences that “prepared me for what I do today.”)
Jones says this will entail a “vertically integrated approach at getting all stakeholders in this–the parents, students and teachers–to acknowledge the problem, then get involved in this transformation,” by working with schools, teachers and parent-teacher associations in applying the Robin Hood [the honest-to-god actual name of his charitable foundation] method of best practices. That means, among other things, longer school days and years, better teacher and principal training and true evaluation and accountability.
Vertically. Integrated.
Just when I thought the day couldn’t get any better, it transpires that Michael Barber and his friends at Pearson have been busy repackaging deliverology into the idea of efficacy. You can play along at home, either with a printout or by sharing your innermost educational secrets with a multinational publishing company.
Barber, writing for the new Pearson “Open For Learning” [no really!] group on LinkedIn, says:
As we all know, this is an urgent challenge. Every child needs a high quality education, and we must do everything we can to provide this for all.
So to provide this, everyone’s favourite educational publisher wants to standardise the collection of learning (output) metrics so it can use graphs and such to prove that it has successful and useful products. As opposed to, I suppose, asking educational professionals whether they work. There’s a video, which features Michael Barber talking over some mournful-sounding piano and strings. (I should have patented that.)
The tool itself draws heavily on the deliverology approaches of traffic-lights (those special red/red-amber/amber-green/green ones) and trajectories, and is a sterling example of Barber selling the same 20-year-old discredited business process to someone else. I take my hat off to you, sir.
So, in both of these cases we have people who, by their own admission, would rather deal with numbers and measurements than with people. Trying to make education better.
I believe that metrics-first approaches like these are flawed for the following reasons:
- Lazy hypothesising. If you start from the premise that a larger number is better, you are buying into a whole bunch of implicit and often under-theorised assumptions. We need to look at the actual measures themselves: what do they mean, what do they tell us, and what do they not tell us?
- Poor quality data. Education is not, and never will be, standardised. This is why serious educational researchers take a lot of care in choosing samples, and attempt to make them representative. You’d think that “big data” would be better than this, but larger samples tend to be self-selecting (e.g. learners that completed, or were entered for, a certain test) – there’s a toy sketch of this after the list. These, and other, artefacts in large data sets need to be identified and compensated for, because…
- Incomplete presentation. A graph on an infographic tells you nothing at all about education without accompanying contextualisation, methodology, and highlighted anomalies. “But policy-makers are too busy to read all that” comes the response: frankly, if people are unwilling to engage properly with data they should not be policy-makers. It is very tempting just to look at a line-graph and choose the tallest line, but this is not policy-making; this is shape-matching. And so many visualisations tell you nothing more than “we have lots of data”.
- An end to argument. You can’t argue with data. Well, you can, but to do so you need a set of conceptual and critical tools that are beyond the grasp of many (pupils, parents…) who interact with education every day. I can sit here on my smug little blog till the cows come home and pick apart data, but I’ve benefited from a lengthy and expensive education and work in a role that gives me time to think about and investigate such things. If you start throwing numbers around as if they were facts, you are disenfranchising a large number of people whose voices need to be heard in these conversations.
- Comparison. If you give someone two sets of data, the temptation is for them to munge the two together in some way so that they can be compared. Serious data scientists know how hard this is; policy-makers think you can just plot everything on the same axis and make valid comparisons (the second sketch below shows one classic way this goes wrong).
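For the sceptical, here is a toy sketch in Python of the self-selection problem mentioned above. The numbers and the drop-out model are entirely invented, so treat this as purely illustrative:

```python
import random

random.seed(42)

# 1,000 simulated learners, each with an "ability" score out of 100.
# Weaker learners are more likely to drop out before the final test.
learners = [random.gauss(50, 15) for _ in range(1000)]
completers = [ability for ability in learners
              if random.random() < ability / 100]

print(f"mean ability, all learners:    {sum(learners) / len(learners):.1f}")
print(f"mean ability, completers only: {sum(completers) / len(completers):.1f}")
# The second number is flattering nonsense: the sample selected itself,
# which is exactly what happens when "big data" means "whoever sat the test".
```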
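And for the comparison point, a second invented example: Simpson’s paradox, the classic demonstration that munging two data sets together can reverse the answer. One school beats the other in every cohort, yet loses on the combined figure that would end up on the infographic:

```python
# Pass rates as (passes, entries) for two made-up schools, by intake cohort.
school_a = {"selective intake": (81, 87), "open intake": (192, 263)}
school_b = {"selective intake": (234, 270), "open intake": (55, 80)}

def rate(passes, entries):
    return 100 * passes / entries

def combined(school):
    passes = sum(p for p, _ in school.values())
    entries = sum(n for _, n in school.values())
    return rate(passes, entries)

for cohort in school_a:
    print(f"{cohort}: A {rate(*school_a[cohort]):.0f}% "
          f"vs B {rate(*school_b[cohort]):.0f}%")  # A wins both cohorts
print(f"combined: A {combined(school_a):.0f}% "
      f"vs B {combined(school_b):.0f}%")           # ...yet B "wins" overall
```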
Nobody – I repeat: nobody – is saying that quantitative approaches to research are invalid, but I am saying that such research should be done with the appropriate safeguards so the results can be used to make high-quality decisions. All too often we see nothing but context-less graphs and tables of test results, and in the wrong hands these are more dangerous than any weapon you care to name.
I don’t hate numbers, but I do love people. The individual experience is the most valuable measure of educational effectiveness we have, but it tells us very little about how effective that same education may be for others. What it does tell us, though, is reliable and worth engaging with. We owe it to the world to end our fascination with big data and start engaging with real and messy reality.
I like Jones’ second quote. It sounds to me like he’s saying knowledge of economics does not help him, but knowing how to game a system does. I’m sure gaming the system will benefit education every bit as much as it benefited our economy.