I’m still thinking about (and digesting) the “Decoding Organisation” book that the good folks of TRU decided to buy me. But this quote from another source made me sit up in recognition:
In the early 1970s [sociologists] carried out a large survey of superintendents, principals, and teachers in San Francisco school districts. The initial reports indicated that something was amiss in these organizations. Reforms were announced with enthusiasm and then evaporated. Rules and requirements filled the file cabinets, but teachers taught as they pleased and neither principals nor superintendents took much notice. State and federal money flowed in and elaborate reports went forth suggesting compliance, but little seemed to change in the classrooms. Studies of child-teacher interactions in the classroom suggested that they were unaffected by the next classroom, the principal, the district, the outside funds, and the teacher training institution.
Or so quotes Robin Hanson, from a paper he can’t be bothered to cite properly(!). He sees this as a “dictator-like teacher autonomy”: “Schools are designed to, and do, stifle student imaginations. So why would we care much if teacher imaginations get stifled in the process? Do we care if prison guard imaginations gets stifled?”.
So far, you may think, so standard edutech ‘disrupt all of the things!’ talk – though the main pull-quote has lovely implications for the analysis of the university as a chaotic organisation, which is what I’m currently warming up to do. But Robin Hanson is an interesting chap and worthy of further consideration. He takes most of the credit (pdf) for one of the strangest ideas in US foreign policy in the last 10 years.
First coming to light in 2003, the Policy Analysis Market (PAM) was an audacious attempt to harness the power of the free market in order to identify likely terrorist threats. Participants (who needed to show no evidence of expertise in foreign policy, or indeed of their identity) could bet (and win) actual money on a range of likely acts. This approach seems to sit neatly between crowd-sourcing and rewarding “informants” in a traditional intelligence industry manner.
It is easy, and indeed was easy, to write this idea off as a right-wing fantasy – the invisible hand of the market solves everything. And that, coupled with a hefty dollop of ‘nothing is more serious than the safety of our nation’ rightist hand-wringing, is pretty much what happened – sparking a scandal so great that it even caused John Poindexter (of Iran-Contra fame) to resign. So with all of this media frenzy, the actual research project (and it was only a research project) never got started.
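For the curious: the market mechanism Hanson is best known for designing, and which he has advocated for exactly this kind of thin, many-outcome market, is his logarithmic market scoring rule (LMSR). Whether PAM would have run on precisely this rule I won’t claim, but as a minimal sketch (parameter names mine; `b` is the liquidity parameter) it looks something like:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Hanson's LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i -- the market's implied probability."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def trade_cost(quantities, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i: C(q') - C(q)."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

The appeal for a PAM-style market is that the automated market maker always quotes a price, so even a lone anonymous “informant” with private information can trade – and move the implied probability – without waiting for a counterparty.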
I’d started thinking about PAM again after reading Mike Smithson’s analysis of political punditry versus the (UK!) betting markets during the US election:
Throughout the long night of the White House race the most striking feature for the punters was how the betting markets were much faster responding to events and the information available than any of the so-called pundits.
Again, lots of anonymous predictions come closer to the mark than a smaller number of “expert” ones, and the offer of reward to predictors leads to the possibility of non-open information being used (“cheating”, as academics would call it).
And “cheating” is what the press began to call it too, in relation to activity on Coursera’s massive open online courses. A large number of participants, with varying levels of expertise, competed to answer non-trivial questions. And clearly some used “forbidden” information to do so.
Now Coursera is not so much a new model of education as a tool to produce test data in order to draw quantitative conclusions on every aspect of educational performance. [just realised while writing this: I'm also describing a traditional university in maybe 3-5 years' time]. This approach (at least, on this huge scale) was pioneered by Candace Thille’s team within Carnegie Mellon’s OLI project.
Where PAM and political predictions via betting markets actively hope for “cheats” in order to gain better quality data, Coursera and OLI are looking for an honest failure to predict correctly in order to improve what I can only, in this context, describe as market intelligence products (or as I would once have called them, lectures). As Andrew Ng (co-founder of Coursera) described:
[...] While reviewing answers to a machine learning assignment, [I] noticed that 2,000 users submitted identical wrong answers to a programming assignment. A k-means clustering analysis revealed that the student errors originated with switching two lines of code in a particular algorithm. This information was used to improve the underlying lecture associated with the [...]
[a useful academic counter-example here would be Galaxy Zoo]
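Ng doesn’t publish the details of that pipeline, but the idea is easy to caricature. Assuming (my assumption, not his) that each submission’s output can be encoded as a small numeric vector, a toy k-means like the one below will surface a dominant wrong-answer cluster – the 2,000 swapped-lines submissions would all land at the same point:

```python
def kmeans(points, k, iters=20):
    """Toy k-means over answer vectors (tuples of floats).

    Deterministic init: the first k distinct points become the
    starting centroids, so no random seed is needed."""
    centroids = []
    for p in points:
        if p not in centroids:
            centroids.append(p)
        if len(centroids) == k:
            break
    clusters = {}
    for _ in range(iters):
        # assign each point to its nearest centroid (squared distance)
        clusters = {i: [] for i in range(k)}
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # move each centroid to the mean of its members
        centroids = [
            tuple(sum(dim) / len(members) for dim in zip(*members))
            if members else centroids[i]
            for i, members in clusters.items()
        ]
    return centroids, clusters
```

On synthetic data – a spread of roughly-correct answers plus a large pile of identical wrong ones – the biggest cluster is exactly the shared bug, which is the “interesting outlier” the lecture team wants to see.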
Now the casual reader (hello both!) will probably be wondering what I am getting at here! It’s clear that both PAM and Coursera/OLI, whilst ostensibly set up for widely differing reasons, are really looking for what you might call the “interesting outlier” in order to improve and expand upon the intelligence resources provided by in-house experts. It was Pauli who famously remarked of an uninteresting paper that it was “not even wrong” – my suspicion is that for both examples a “right” answer is “not even wrong” and thus uninteresting.
And the top quotation on teacher autonomy – is the subtext not that it is impossible to get good quality comparable data on teaching methods whilst classroom practices are so varied?
But – finally, and chillingly – a university substitute that is actually hoping for wrong answers from students? That raises far deeper ethical questions than PAM ever did.
[edtech diaspora postscript: Hanson is clearly sensible enough to read - and cite - Martin Weller]
[further reading postscript: And Hanson maintains a great archive of PAM related materials on a dedicated corner of his web presence]