This is a disconnected train of thought that started running after watching “The Imitation Game”, reading a blog post by Mark Johnson about historic Russian cybernetics, and attending a workshop on text data mining.
Something seems to have happened recently to change our cultural understanding of the nature of heroism. Broadly speaking, our initial idea of a hero was someone more humane than the average human, someone who could do great deeds as an expression of what humanity could become. If you want that in a statement: “an imagination of what a more powerful humanity could be”.
What I’m arguing is that a contemporary statement defining heroism would be “an imagination of what a power beyond humanity could be”. Something in between the old idea of a god, and big data science.
In “The Imitation Game” there is a curious twist to the story, almost a false ending, after the team break the Enigma code. (The telling of that process, incidentally, replays the five-people-in-a-room-doing-crosswords model of Bletchley – which speaks of a need to replay the popular conception rather than to challenge it. But that’s another matter, and one that Dr Sue Black takes up admirably in her review.) The team realise that they have all the data they need to understand and predict Axis activity, but are narrowly stopped from saving millions of lives by Alan Turing throwing a phone at the floor to remind them that the primary concern was to keep the breaking of Enigma secret.
What follows is a jaw-dropping re-telling of the course of the Second World War almost as a stage-managed process, coordinated between statistical analysis of the likely consequences of acting on certain information, MI6 subterfuge and even a bit of Soviet espionage.
None of this, even more jaw-droppingly, is actually true. It was an invention for narrative purposes. Policies around the use of decoded information (decoding which had incidentally happened many times, to Enigma and other protocols, before Turing’s team’s Polish-inspired breakthrough with the Bombe) were in existence even before the start of the Second World War, and the matter was handled by an entirely separate department (Hut 4, for completeness’ sake… I’ve stood in it).
So for narrative reasons the mere fact of Alan Turing’s mathematical and mechanical ingenuity – a fine, and true, story, but one that needs to be seen in the context of other work at the time, not least the contributions of Bill Tutte, Tommy Flowers and many others in breaking the Lorenz cipher – was not enough. He also had to make difficult ethical decisions beyond mere human considerations such as saving lives. Much of the latter hand-wringing is about the use of this god-like power brought about by the analysis of a large dataset.
And this is a very modern portrayal of heroism indeed – making decisions with ramifications beyond life and death for a higher good, based on a machine-like grasp of the entirety of a data set and its implications. Think Doctor Who. Think (the modern) Sherlock. Think Batman, Iron Man/Tony Stark, Lucy, George Clooney’s character in Gravity… take your pick.
Think Sebastian Thrun. Think Sal Khan.
Think Mark Zuckerberg. Daphne Koller.
The common thread is an ability to think beyond human concerns, to transcend individual interactions to see truths and answers in the sum of those interactions.
So what does it mean to live in a world where our heroes are those who understand the general rather than the specific, the reality of the data rather than the reality of experience?
Which brings me to the Soviet attempts to employ cybernetics to build the ideal socialist state. As Mark Johnson notes:
“In 1959, Anatoly Kitov proposed to the Kremlin that a computer system was developed to manage the whole Russian economy providing real-time feedback on production. This ambitious request was rejected, although it remained a long-held dream of Kitov and other scientists: a programming language called ALGEM (a variant of ALGOL-60) was developed to assist in the realisation of this economic management system.”
As Mark recounts, huge swathes of the Soviet computer programme were abandoned as the Kremlin decided to standardise on IBM!
This prefigures the Chilean Project Cybersyn, which was a far better known (and more advanced) attempt at the same goal: to direct the use of resources for the good of the workers.
In the West we currently live in a civilisation where resources are distributed by algorithm, but these algorithms support the profits of merchants (and not yer bourgeoisie neither) rather than the welfare of workers, as any dairy farmer will happily explain to you.
And in academia we are already a long way into a similar disruption, as regular expressions can read and excerpt from more academic literature than any single human being could. The UK government has already invested £73 million in “big data” research, and the development (and sharing) of datasets for future mining has become a huge component of research grants in all fields.
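To make the point concrete, here is a minimal sketch of the kind of pattern-based excerpting described above – a toy, not any actual mining tool, with an invented function name and sample text – showing how little code it takes to pull every sentence mentioning a term from a body of literature:

```python
import re

def excerpt_sentences(text, term):
    """Return the sentences in `text` that mention `term` (case-insensitive)."""
    # Naive sentence split: break on . ! or ? followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text)
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

# Illustrative "abstract" to excerpt from (invented for this sketch).
abstract = ("Enigma traffic was decoded at Bletchley Park. "
            "The Bombe automated the search for rotor settings. "
            "Later work targeted the Lorenz cipher.")

print(excerpt_sentences(abstract, "bombe"))
```

Scaled over millions of papers rather than three sentences, this is the mechanical reading the funding is chasing.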
There’s a palpable, almost childlike, delight in the scale of the research that will become possible. But it is only the funding conditions of the present that have made the high volume and low quality of academic research unmanageable by human eyes. For years our policies have given us more and more data, so it is only natural that we now turn to developing tools to manage and use it.
It is already very difficult to receive funding or demonstrate impact for a research project based on existing literature or data. Academic fields, old and new, cry out for annotated bibliographies, literature reviews and meta-analyses. But we don’t have the political will to fund them. Or, more accurately, to fund people to conduct them.
But we will, it seems, fund people to build machines to do similar things.
The “heroism” here is a faster, cheaper means of conducting academic research via automation. The question – the one that several teams worldwide are grappling with – is this: is it for the benefit of the workers or the merchants?