So the Pearson Summit happened last week, at the Fontainebleau, Miami. (Next week in the same room: Michael McDonald.)
And while I salute the guys at Pearson for opening #pearsonsummit for the world to see – seriously, guys, you may have gone a bit far.
You want the wireless code? It’s online.
You want to see who was in which team building group? Aye.
The official summit Spotify playlist? Rock on.
You want to register to attend? Gotcha. (seriously, it would have been worth it, they got iPads)
You want to sign the Pearson Pledge? I can’t help you with what it was (but I’m guessing something to do with efficacy…), but do sign away. And check out the tins of treacle, Jerry Javelina, the Pearson Bird and numerous people called “Dave”.
If you want to get stuck in, the seekrit official iPad app is out there. Have a dig. Let me know what you find.
And of course the first day was live streamed for your viewing pleasure, and is still up for you to enjoy now.
Plenty’o’lols for all, but what did we learn from the experience (other than someone is making a mint selling iPad apps to Pearson, which is kind of ironic given the whole LA schools thing…)?
Having listened in to the first day, it feels like the last roll of the dice. To say that efficacy – the in-house, deliverology-esque way of figuring out whether their stuff works in the classroom – is being pushed hard is an understatement. They mean it, maaaaan. FOTA favourite Sir Michael Barber plays the bewildering part of process evangelist, met often with incomprehension as he attempts to solve decades-old problems in education research with a series of surveys used to generate metrics.
Briefly: education is riven with confounding variables. You get the chance to do studies, but you need to take into account that each data point refers to a particular learner in a particular context – you could cite the textbook as a key input, or what the student had for breakfast. And these problems do not go away with the bigness of the data. PISA and Pearson's closely linked Learning Curve are similarly useless for anyone other than politicians and policy-makers gazing at made-up league tables.
So that’s the publisher value proposition – with Pearson stuff (increasingly EdTech rather than boring old books) students can learn more betterer. And look, they have data and graphs to prove it.
We know the graphs will be nonsense (this is, after all, Michael Barber) – but institutional managers don’t. And institutional managers don’t talk to the likes of us.
The Pearson Efficacy tools and guidance are, again, out there in the open on the web. I cannot think of a more urgent task for educational research, or for education journalism, than to understand and critique them, in terms and in places that senior managers can understand.
The patient unpicking of the MOOC hysteria by the community I like to convince myself I am part of has been useful in this way. But that was a dry run for diving, intelligently and thoughtfully, into the morass of data and ideas that constitutes Pearson Efficacy.
Or Pearson will have graphs, we won’t have graphs, and the reasons that we don’t will not be enough to hold back the tide.
[Edited to fix some links – Cheers Tony – 08/02/15]