Well, he would say that, wouldn’t he?

This is – in the vaguest sense – a response to Martin Weller’s thoughts on “Open Education and the un-enlightenment” that I’m putting here just to stop people speculating that I’ve been replaced by a robot that only talks about citations and Yacht Rock. In deference to the post-fact world, I am not citing authorities in this post*.

Are we living in a “post-fact” world? And is this a new development, or a more prominent iteration of a previously identified trend?

From Brexit to Corbyn, Trump to Sanders, media commentators have been bashing us over the head with a half-brick labelled “post-fact” or “the new populism” as a way to give us a handle on a world in which the arbiters of information are continually under suspicion.

This itself represents a failure of higher education, as there already exists a theoretical toolkit to deal with this very state of affairs, and it has been widely taught at undergraduate level since at least the late 80s.

Post-modernism is what a group of academics in the latter half of the C20th decided to label our growing cultural distrust of the grand narrative, coupled with cultural-theory-informed (and, to a lesser extent, post-structuralist) approaches to analysing reality using techniques developed to discuss fiction.

What this has resulted in is a conspiracisation of reality – where explanations from those in positions of power are distrusted, and the tools and tropes of fiction are used to propose plausible alternate possibilities. If you like, it is the scientific method with the basis in scholarly literature removed and the concept of falsifiability abandoned.

I’m going to use the idea of the conspiracy theory as a way to understand what is going on, and I am using the term without an implicit value judgement. To me, a conspiracy theory:

  • Is a narrative of resistance – it implicitly distrusts received wisdom from any external source. It puts the case of a group party to hidden truth against a (postulated) far better resourced group that actively hides this truth, on behalf of an unaware mass population.
  • Is non-falsifiable – counterfactuals are simply distrusted rather than incorporated, and significant counterfactuals are seen as a means to “suppress the truth”. Attack or ridicule is seen as an admission of culpability.
  • Is coherent – it sets up and positions itself with relation to dichotomous relationships, it attempts to explain a wide range of activity in a unified and non-contradictory way. (Conspiracy theories often use tools from fiction, such as plot structures, tropes, motivational fallacies and structural critiques of culture – in order to do this)
  • Is evangelical – ideas are designed to be spread memetically, codes and systems of reference are used to identify others and cohere a group culture.
  • Is plastic – the theory narrative will shift to encompass new ideas, or align with other theories. If a component of a theory is thoroughly repudiated, it is routed around.

Open Education itself is a conspiracy, if you like. Evil publishers are profiting from the unequal distribution of information, where they do take steps to address this they are “openwashing”. They do this because they hate learning where a profit is not made. If we attain a critical mass, open education will replace the textbook publishing industry.

If you are sitting there thinking “who, us?”, ask yourself what information would falsify your belief in open education. What information would falsify your other beliefs?

So I’m making a semiotic shift by proposing another way to think about the “post-factual” within work on post-modernity, cultural theory and (in particular) the study of conspiracy theories – and I’m suggesting we also examine our own practice and beliefs with these tools.

In essence, we live in a culture that loves to tell itself stories – comprised of similarly prolix sub-cultures. The sub-culture that constructs the most compelling narrative becomes the dominant culture, and then other sub-cultures attempt to develop narratives in opposition that attempt to displace this.

Stories can be compelling without being true. And the fact that people choose to base their lives around compelling stories is neither unusual nor concerning. As a sub-culture (which I’m going to go ahead and put all of us gathered here in, though not on an exclusive mono-cultural basis** – structuralism! wheeee!) “the elite” privileges certain forms of “truth” within a narrative based on a deliberately developed high standard of proof.

I like high standards of proof, because I like being confident that other members of the “elite” sub-culture (that’s you, dear reader) will validate my contribution to a narrative. This is why it is so horribly hard to write this post without using references or appeals to authority – I have to get over the idea that the esteemed Prof Weller is going to trash my contributions.

Others do not have to, or indeed intend to, appeal to the “elite” sub-culture in constructing or contributing to a narrative. This doesn’t mean that proof or authority isn’t used, just that these may not be in a form that we are used to dealing with or responding to.

If we want to understand why we keep losing arguments (getting to the nub of the matter) we need to get better at understanding how these arguments work and how strategies to win them work. Or we need to come up with another form of argument that works for us better than it does for other people. Or we need to get better at widening our little group to include other people.

___________________________________________

* if you must, Fredric Jameson’s “Postmodernism, or, The Cultural Logic of Late Capitalism” is a useful starting point, David Aaronovitch’s “Voodoo Histories” is good on the nature of conspiracy theories, Hélène Cixous’ “Le Rire de la Méduse” is an underpinning set of ideas on multiple cultural narratives that more people should read, and any decent UK Cultural Studies anthology would be worth a look for a grounding in the ideas of the field.

** cos I’m an articulate extroverted middle-class able-bodied white heterosexual cis male western European with a university education that participates on a well-paid basis in an information economy – although this makes me FUCKING AWESOME at being a member of the “elite” it does not describe everyone who may subscribe to “elite” values here. Which in itself is a pretty brutal critique of “elite” culture…


Citation standard governance structures, for fun and profit.

(the bulk of this text is from a paper I wrote to support the work of the DCIP. I’m sharing it here in case anyone else finds it useful. All glaring issues and inaccuracies are my fault alone, please leave a note and I will update.)

Briefly, a citation is an in-text link to a reference in a list of references at the end of a work. Though there are some systems that focus on citations (Harvard, Vancouver…) or references (ANSI/NISO, ISO/BS) only, within commonly used systems it is more usual to see coverage of both aspects alongside more general “style guide” material.

Many styles were developed around the requirements of particular publishers or journals, but have since expanded into widely used guidance. Some have been heavily commercialised, others are available to view online for free. There’s an argument to be made about open accessibility to what are, in essence, gateways to academic publishing – but here my focus is on openness in the sense of transparency. How, and why, are changes made to citation/referencing rules?

The bulk of this post is in the form of a list of citation/reference styles, alongside an indication of where they are currently used and the way they are administered. You’ll see (broadly) four categories of style administration:

  • Developed on behalf of a publisher or professional body by a specifically hired external author/editor.
  • Developed on behalf of a professional body or publisher(s) by a committee or other individual/group drawn from that body.
  • Developed by a standards organisation.
  • Unmaintained/consensus.

In the short term, if you wanted to improve or modify mainstream citation practice you would go via the two major standards organisations. Both – it could be argued – are overdue for updates, and the mechanisms by which such an approach could be made are transparent and clearly defined. Both the NISO/ANSI and ISO/BS standards are likely to be relied on in the refinement of subject-area guidance and, at a secondary level, journal-specific guidance. This would not be a speedy process, but with concerted lobbying it may be possible to achieve a wide coverage for any changes in around five years.

However, there are two major obstacles to overcome. The first would be the near-impossibility of seeing complete coverage. Whilst the convergence of requirements towards a small set of standards has been an ongoing trend, there are many journals that – for unique reasons of specialism, or through sheer obstinacy – will continue to mandate specific presentational methods. These may include, but are not limited to, modifications of mainstream standards, previous versions of mainstream standards, or entirely distinct and unique methods. Short of contacting each “outlier” journal directly there would be no means of achieving complete coverage.
The second major obstacle concerns the likely development of research metrics over the next ten years. James Wilsdon’s “The Metric Tide” is simply the most prominent example of a trend away from an uncritical acceptance of citation-count based metrics – newer methods of analysis, such as semantometrics, examine contextual information gleaned from the position and sentiment of a citation. Citation (as opposed to reference) practice is primarily based on academic custom – changing ingrained habits could be very difficult indeed, and journals would likely be reluctant to depart from existing norms even if the “canonical” documentation of these norms was altered.

Those citation/reference methods in full

International Standards – these primarily deal with references, and may either be used directly by journals or inform the ongoing development of other style guides. As the projects of international standards organisations, these are openly constituted committees which are explicitly open to question and suggestion via well documented routes.

  • ANSI/NISO Z39.29 (last updated 2005) covers bibliographic references. This standard underlies other styles and is also used directly by PubMed/Medline. Note that JATS (another NISO standard) supports the XML markup of references in a number of styles. The standard is managed by committee/working group and suggestions are welcome via standard NISO contact details (nisohq@niso.org).
  • ISO/BS 690:2010 (last updated 2010) is titled “Information and documentation – Guidelines for bibliographic references and citations to information resources”. In the UK it is sometimes cited as the “Harvard British Standard”. The standard is managed by the ISO Identification and Description Committee (ISO/TC 46/SC) which can be contacted easily via the details on that page. ANSI provides the secretariat, and confusingly the named secretary (Todd Carpenter) works for NISO.

Citation styles – these could best be described as “conventions” rather than standards, though many have spawned wider style guides. In these latter cases, an invited editor will draw on other style guides in an attempt to be as inclusive as possible without being needlessly complex. The post by RD Harper on the “Chicago” process is instructive here on methods – note that “Chicago” and “Turabian” are aimed, as complete style guides, primarily at students. The “Vancouver” method is another outlier in that it has a close association with the ICMJE, with the maintenance of a committee alongside an invited author and strong links to the NCBI style guide.

  • Chicago – the Chicago Manual of Style is based on what is widely accepted as the “Chicago” method of citation (primarily author/date within parenthesis, but there is also a footnote variant). The 16th edition of the manual (published in 2010) is managed by the University of Chicago Press and is aimed at general/student use. Russell David Harper was the invited editor, and offers an interesting perspective on the process of developing a style guide.
  • Turabian – A Manual for Writers of Research Papers, Theses, and Dissertations is a variation on the Chicago style [author/date, and footnote]. The 8th edition (published 2013) is also aimed at a general/student audience. It corresponds to the 16th edition of the Chicago Manual of Style, and each edition is managed by an invited editor. Originally developed by Kate Turabian, a former graduate school dissertation secretary at the University of Chicago, more recent editions have been managed by a range of editors and the 8th edition was updated by the “University of Chicago Press editorial staff”.
  • Oxford – The New Oxford Style Manual (sometimes referred to as the new “Hart’s Rules”) is known for a footnote/endnote citation style, though the manual also covers references. The current edition is the 3rd, which was published in 2016 and incorporates the 2014 version of “Hart’s Rules”. Anne Waddingham was editor-in-chief of the 2014 edition of Hart’s Rules; she is currently a freelance editorial consultant.
  • Harvard citation is a surname-and-date-in-parenthesis method that refers to a common practice rather than a given publication. There is no central authority and “Harvard” does not constitute a full style guide. There is some doubt as to its origin (though an 1881 paper by Edward Laurens Mark is often given as the source), but it is not owned, and indeed is explicitly disowned, by Harvard University. This BMJ article offers a partial history of the practice.
  • Vancouver is closely associated with the work of the ICMJE, with their “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals” often given as an authority. It is an ordinal number parenthetical citation method. It links to the confusingly-titled NCBI publication Citing Medicine for referencing practice. The recommendations were updated in 2015, and are managed by an ICMJE sub-committee, with the standard ICMJE email address given as a point of contact. “Citing Medicine” complies with NISO Z39.29 and names Karen Patrias as the (invited) lead author of the 2007 second edition.

Legal citation very much exists within a separate world, and two common examples are included here for completeness only. The OSCOLA, with an invited author and a clear mechanism for feedback, contrasts with the very commercial and closed auspices of the Bluebook.

  • OSCOLA (the “Oxford University Standard for Citation of Legal Authorities”) is the main UK standard for legal citation. The current edition is the fourth (2012) and the guide is updated “every two-to-three years”. It is managed by an invited editorial team, and invites feedback via oscola@law.ox.ac.uk.
  • Bluebook – “The Bluebook: A Uniform System of Citation” is the major US legal citation and reference guide, and has been developed on a commercial basis by four major US legal journals. The most recent edition (the 20th) was produced in 2015. Concerns about cost and access have led to the independent, openly accessible Indigo Book, which offers a compatible style guide.

Major subject area guides – the APA and the MLA are two of the most widely used citation and referencing standards, used respectively across the majority of social sciences and arts/humanities subjects. These are style guides in the fullest extent, covering every aspect of academic writing marginalia. The development of style guides is undertaken within the committee or secretariat structure of the society in question.

Other subject area style guides – generally much sparser than the “big two”, these are generally developed as a function of journal publication, and have more in common with guidance for submission. It can be difficult to find details of authorship, though the AMA and ACS (as the two largest in this category) are clearer about processes. At the other end, we are dealing with what are basically journal-specific guidance notes, and at this level it may be easier to encourage journals to adopt a standardised system where possible.

  • IEEE – The Institute of Electrical and Electronics Engineers’ “Editorial Style Manual”, and parallel “Citation Reference”, are widely used in engineering disciplines. These appear to have been last updated in September 2009 by Deborah Graffox, who – at the time – worked for the IEEE press.
  • APS – the American Physical Society “Online Style Manual” is undated and appears unmaintained, though it mentions a previous edition in 2003. It is very difficult to see who was responsible for developing it – I would suspect the APS journals team, but the guide is not mentioned in their advice for authors, which instead points to two contradictory journal-specific guides that do not refer to the online style manual.
  • AMA – The AMA Manual of Style reached a 10th Edition in 2007. AMA is widely used in medical research. It operates an email address for feedback (stylemanual@jamanetwork.org) and is developed by an AMA authorial committee.
  • ACS – “The American Chemical Society Style Guide: Effective Communication of Scientific Information” was last updated in 2006 (the 3rd edition). It was developed at the time by two invited co-authors: Anne M. Coghill and Lorrin R. Garson.

There are many other citation/reference style guides (I’ve heard numbers in excess of 2,000 bandied about) at subject, publisher and journal level. But these are the main offenders.

Extreme citation and the birth of the web.

[in which the citation practices of the inventor of the worldwide web and the co-founder of OpenLibHums are compared – and I attempt to interest people in data citation.]

Tim Berners-Lee’s “Information Management: A Proposal” is “gray” literature (an internal memo at CERN, never formally published), yet nonetheless one of the most influential and highly cited documents in history. However, in all of the published academic work devoted to this short memo, Sir Tim’s citation style has never received the critique it deserves.

Until now.

In-text citation broadly comes in two-and-a-half flavours: either the entire reference appears within a footnote or endnote (Oxford style), or a parenthetical key – using ordinal numbers (Vancouver) or author surname and date plus an identifier if needed (Harvard) – directs the reader to the appropriate part of a list of references at the end of the work.
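To make the flavours concrete, here’s a toy sketch (mine, not from any style guide, with made-up key values) of how the same work might be keyed in each:

```python
# Toy sketch: one work rendered in each of the flavours above.
work = {"author": "Weller", "year": 2011, "ordinal": 1}

harvard = f"({work['author']}, {work['year']})"  # surname-and-date key
vancouver = f"[{work['ordinal']}]"               # ordinal-number key
oxford = "¹ M. Weller, The Digital Scholar (Bloomsbury, 2011)."  # full reference in a footnote

print(harvard, vancouver, oxford, sep="\n")
```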

Sir TBL’s innovation was to provide citations as short text, combining the ease of location of Vancouver with the immediacy of Harvard. Many of his references are to resources where an author name is not present, or where significant additional text is required alongside a (broadly ACM-style) reference.

For instance, [HYP88] refers to a special edition of “Communications of the ACM” entitled “Hypertext on Hypertext” written in Hyperties syntax and sold on floppy disk, containing the full text of eight papers from the July edition of Comm ACM.
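You could knock up keys like this yourself – a speculative sketch, where the three-letters-plus-two-digit-year pattern is my inference from the memo’s reference list rather than anything Sir Tim documented:

```python
# Speculative sketch of [HYP88]-style keys: a salient word truncated to
# three letters, plus a two-digit year. Pattern is my guess, not a
# documented scheme.
def short_key(salient_word: str, year: int) -> str:
    return f"[{salient_word[:3].upper()}{year % 100:02d}]"

print(short_key("Hypertext", 1988))  # -> [HYP88]
```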


(Not quite the first e-Journal, though: that was Richard H. Zander’s Flora Online in 1987, available by subscription on disc, and free via a BBS. This was primarily a directory of source code for academic software in botany.)

But back to TBL’s citation style. A citation works on two levels: as an immediate aide-mémoire for those cognisant of the current state of a given field (a surname and a year being enough to indicate the work in question), and as a way for those less familiar with the field to find the article in question (by looking it up in a list of references). Seen this way, his method is a neat way of achieving both in a field where resources may be well known but citation is not a situational norm.

As is, of course, hypertext – combining three levels of information that can indicate a source: the semantic context of the surrounding text, the raw URL viewable via a mouseover, and (hopefully) the full resource – or at least a means of obtaining it – via a link.

Kathleen Fitzpatrick (@Kfitz on Twitter) considers the place of the academic citation in a world of hypertext in a recent article for the LA Review of Books. As the lead editor of the most recent edition of the venerable MLA Handbook, she is well placed to reflect:

All of our current citation formats were invented for a print-based universe, in which each book or article gave the impression of standing alone. Bibliographic notes and markers connect these many individual texts into a broader, ongoing conversation. But now that we live in a world in which no text need be an island, in which scholarly publications are increasingly delivered digitally and so can be literally interconnected via links and embeds, it is reasonable to ask whether citations are still necessary.

Her conclusion draws on the need for a future scholar to understand not just the name and authorship of the work referred to, but also the precise version of said work:

[P]ublications and other cultural objects are no longer quite as fixed in format as they were, and their very malleability may heighten the importance for future scholars of knowing precisely which version today’s researcher consulted.

A problem Martin Eve will know well, from his examination of multiple versions of David Mitchell’s Cloud Atlas. It turns out there are two substantially differing versions of the text of the novel, stemming from the UK and US editions and differentiable by place of publication and ISBN.

We have unique identifiers for texts in the form of ISBNs but we have become complacent about assuming that all editions are equal on first publication. When we write of ‘Cloud Atlas’, to what are we referring? Is it the textual edition cited in the bibliography? Nominally, yes, but more often the assumption is that we mean this to refer to the ur-structure, the named entity of a text that is ‘the novel’.

(If you are wondering how he dealt with the two versions of the text within the paper – which he references as “Mitchell, D (2008). Cloud Atlas. London: Sceptre” and “Mitchell, D (2008). Cloud Atlas. New York: Amazon Kindle” – he uses ‘P’ for paper and ‘E’ for e-book in discussion, with what amounts to occasional full references in text as citations(!): the former as “(Mitchell, Cloud Atlas [Sceptre, 2008], ‘P’)”, while cunningly never referring to the latter in the text of the paper – though I’d guess it would be “(Mitchell, Cloud Atlas [Amazon Kindle, 2008], ‘E’)”.)

Clear?

So neither links nor citation/referencing are ideally adapted to dealing with such issues – although Tim Berners-Lee might have elegantly sidestepped all this by using [CLOUDATLASPRINT08] and [CLOUDATLASEBOOK08] in text 🙂

This is bad news for those submitting to a journal which rigidly applies a citation/reference style (which is most of them). And it is not really an issue that EndNote, Zotero or Mendeley (those masters of arcane styles) can help us with either.

The reason I am concerning myself with such arcana is that – yes, I’m really boring – the citation of research data (of which there may be multiple versions) presents similar problems, except all the time rather than in rare cases like Cloud Atlas. With a paper comparing sensor readings taken milliseconds apart, conventional citation is just not going to cut it – which of course is why we use unique identifiers (though there is an issue in indicating relations between datasets, which is where PCDM comes in…). It’s another reason why we cannot simply bring traditional citation processes over to datasets.

A meander into institutional designation

[For those jumping up and down and shouting “The HE&R bill!” at me – patience. It’s not law yet, and I’ll get to it at the end.]

So just what is a “higher education institution” in the UK? Section 71 of the 1992 F&HE act currently defines a HEI as one (or more) of these three things:

  • a university1
  • a higher education corporation
  • a designated body, designated by the Secretary of State where a full-time equivalent institutional enrolment number for courses of higher education exceeds 55 per cent. of its total full-time equivalent enrolment number.

Point three gives us the clue that an institution will not be a HEI just because it delivers higher education. Indeed, even if it delivers only HE, it will not become a HEI unless designated as such by the Secretary of State.

At this point, it may also apply for teaching and/or research degree awarding powers, currently via the QAA. And also for university status (currently via the QAA and Privy Council). And also for institutional designation for HEFCE funding and student fee loans. And it may submit an access agreement to OFFA so it can charge fees up to the maximum amount. Or, in any of those cases, it may decide not to. But it needs to be a HEI in order to even think about doing these things (unless it happens to be the Archbishop of Canterbury).

Prior to achieving HEI status, an institution will have one or more designated courses (eligible for student fee loans and, if applicable for the subject of study in question, HEFCE funding). The degree courses it delivers will be validated by a HEI partner, sub-degree courses may instead be validated by Pearson.

If you want to know the status of a particular institution as regards these various attributes, all of this is reflected on the HEFCE register of HE Providers. As long time readers of “Followers…” may recall.

The HE&R Bill takes some steps to clear some of this mess up, notably combining HEI status, HEFCE funding designation, T/RDAP and University title into a single process, and making the access agreement a mandatory component.

It took a while for this to occur to me as well – but all of these lists, designations and attributes have one thing in common: they refer to the “teaching” end of Higher Education. So what’s the deal for research?

Of course, there is nothing stopping anyone from conducting research into anything they like in any way they choose. Indeed, this may be paid for by any number of bodies for no other reason than they think that the institution/organisation in question might be quite good at it.

The two jewels in the research funding crown, however, are QR funding and research council funding. How does an institution get access to both edges of “dual support”?

QR funding is the stuff that is linked to the REF – you have to be in the REF and get the required score in order to get the money. It’s up to each institution to decide whether to enter the REF and to work out whether the return via QR plus the reputational benefit is equal to the effort put in.

You have to be an institution designated to be eligible for HEFCE funding in order to reap the rewards of QR. Obviously if you want the support for research degrees that HEFCE may provide alongside this, you need to be eligible to award them.

The entry requirements for the REF itself are surprisingly hard to find. The long and terrifying list of FAQs doesn’t shed any light, and my best guess from the language in the guidance is that only “Higher Education Institution” status is required, as per the F&HE Act 1992 Section 71 definition at the top of this post (and as far as I am aware, the Archbishop of Canterbury didn’t submit to REF2014). No private providers submitted to REF2014 (not even the University of Buckingham), but I can’t find anything in the documentation to say that they couldn’t.

There is one main anomaly. The Institute of Cancer Research has designated courses at postgraduate level via the University of London and is not a HEI, but it DOES submit to the REF. I’m also unsure of the status of the Institute of Zoology (claims to be HEFCE funded, submitted to the REF, but isn’t on the HEFCE register – I think it hides under UCL somewhere).

And Writtle College (shortly to become Writtle University College – many congratulations!) wouldn’t have had TDAPs (or University College status) at the start of the REF 2014 period. However they were designated as an HEI in 1994 so were eligible to submit to the REF from that point.

So you’d think that HEI status (or being the Institute of Cancer Research) may also be your path to research council funding – and you’d be wrong.

There are three groups of institutions eligible for research council funding:

  • Institutions in receipt of grant funding from a UK HE funding council (like HEFCE) can apply to any research council. Though the wording is unclear, this means designation at an institutional level (thus eligibility for HEFCE QR in England).
  • Long-term established research institutes can apply to any research council. These are places where the research councils have made a long-term investment, and where they are a primary funder.
  • Independent Research Organisations (IROs) can apply to one or more research councils as agreed in either a “responsive” (whenever they decide to) or “managed” (with permission on each occasion) mode.

To become the latter, there are some fairly detailed eligibility criteria. An IRO needs to be a charity or similar, a legal entity not owned by business or held primarily for research purposes by any part of the public sector, and it needs to have the internal capacity to conduct research. Even beyond this, the bar for IRO designation is high, and an assessment is made for a five-year period (automatically renewed if funding is awarded) by the Grants Governance Committee across all research councils, even where designation is sought for a single research council.

Though IRO status is a high bar, it is not as high as the designations currently offered by HEFCE – certainly organisations as diverse as the Tate, RAND Europe and the Institute for Fiscal Studies currently hold IRO status but would absolutely not be eligible to become a HEI.

But so what? you may ask.

Remember that the QR funding “Research England” side of HEFCE becomes a part of UKRI (the research councils umbrella body) as a part of the HE&R Bill. Why would one sub-committee of an organisation run a completely different set of eligibility criteria to all of the others? Why would an institution need to have OfS designation to get QR funding, but not to get a research council project? Why would one part of UKRI run a competition only for a subset of the institutions funded by all the others?

The thrust of modern HEI policy has been on simplification of regulatory structures – what would make Research England the exception?

If you are that rare breed, a research manager in an FE college strongly committed to research (that probably delivers some HE on the side), I think you should be having a careful read of that IRO document…


1. [What’s the difference between a University and a University College? Size. A University has to, since 2012, have more than 1,000 FTE students. Before 2005 you needed 4,000 FTE and RDAPs, with the latter requirement lost at that point.]

FOTA Brexit Nonsense Update 3 – oh god no

(part one) (part two)

This is the post I hoped I wouldn’t be writing – but welcome to Boris Johnson’s “sunlit meadows“.


We’ve been and gone and done it, and now it is all about the consequences. Already today we’ve seen the UK lose a triple-A agency rating, stock markets plummet, and the pound hit a 31-year low against the dollar (good thing George Osborne calmed the markets this morning, eh?).

Meanwhile, every pathetic racist in the land feels emboldened to make life miserable and uncomfortable for everyone that looks even mildly foreign. Scotland wants out of the UK if it means leaving the EU, as does Gibraltar, as does Northern Ireland.

The wheels are already coming off the Vote Leave promises, there are widespread reports of “buyer’s remorse” amongst leave voters, and the heroes of the hour (Gove, Johnson Major, Farage, Carswell) are conspicuous by their absence.

Both major political parties are currently imploding, with Cameron’s replacement due to start in September and Corbyn’s any time from tomorrow.

And this is T+4 on our “independence day”.  In Book of Revelation terms (which is really the only viable comparison we have) we still haven’t got to the end of the letters to the seven churches – and this with the unlikely figure of Nick Clegg as St John of Patmos.

There are so many questions, and so few answers, I feel justified in stepping into the bounds of probabilistic interpretation whilst (hopefully) dropping some constitutional science on those assembled.

What almost certainly won’t happen…

A rerun of the referendum. Yes – there were lies told. But movements to legally challenge the referendum result, and/or require a new one (perhaps on modified rules), are doomed to failure. The reason is simple: the referendum wasn’t legally binding. Parliament is sovereign, so this and all other UK referendums (with the exception of the 2011 AV referendum, which had specific provision in this regard) are advisory only. That said, it is pretty clear advice that would be politically difficult to ignore…

A speedy negotiation and withdrawal. To leave the EU, parliament has to empower the Prime Minister of the day to enact Article 50 of the Treaty on European Union (as amended by the Lisbon Treaty). We know now that we won’t have a prime minister (to all sensible intents and purposes) in post until September at the earliest. And even then, both houses of parliament would have to agree (by majority) to grant this power.

As both houses have significant majorities that are opposed to leaving the EU, and as a person who may well become prime minister (I know, I’m sorry…) has already expressed a desire to take things slowly, this would be no mean accomplishment. And though the EU may demand speedy resolution, not least to settle the uncertainty that will likely affect economies around the world, they have no power to force Britain to invoke Article 50.

Those who voted to “take control” may reflect here that this is possibly the last point we would have control over our relationship with Europe, so given our complete lack of preparedness to negotiate time is very much not of the essence.

Anarchy in the streets. Angry citizens on both sides will march, chant and campaign. Millions of people will sign entirely useless petitions. Some idiots will be violent and deeply unpleasant; they will be (rightly) arrested and imprisoned. But as a population looks to the government to protect them from the consequences (devaluation, economic slowdown, job losses…) of the referendum, it is unlikely to simultaneously enact full communism/a neo-fascist junta.

What probably won’t happen…

Brexit. Boom. I went there. For the reasons given above, I would be hugely surprised if Britain ever uses Article 50. And as there are no other ways to leave the EU, that would very much be that. Practically nobody in government wants to actually leave the EU – even Boris has been clear all along that he wants to use a vote to leave as a bargaining tool.

As we progress through a long, painful, summer it is likely that public mood will tend towards “bregret” as the bad news continues. A new PM in September may opt to seal the deal via a snap election won on an explicit anti-brexit stand – possibly with some concessions from the EU to sweeten the deal.

Pariah lawmaking. As the international community attempts to navigate what are already difficult, delicate geo-political and economic waters, the need to punish and isolate the UK is unlikely to be forefront in their minds. The remorseless logic of capital, and the “function” of the market in identifying and pricing in risk, will make things far worse for us Brits – punitive policymaking and sanctions from other nation-states or international groupings are very unlikely.

However, there is a small chance that, if the brexit vote sparks a wave of similar referendums in Europe, the EU may wish to make an example of the United Kingdom – though I’d guess it is more likely that we would already be a cautionary tale thanks to the invisible hand.

What probably will happen…

A UK general election in 2016. Yup – I heard you liked politics so I put some politics on your politics. A general election (already hinted at by Cameron and others) is our grand brexit get-out-of-jail-free card, and an incoming prime minister is likely to want a more recent “remain” mandate to counter the “leave” vote.

A new political landscape. Both major parties are fractured and in disarray – the Tory “leave” right has far more in common with UKIP than with the modernising “remain” wing. It’s been the longest relationship break-up in history but I’d be surprised to see an intact Conservative party as the clamour to remain grows.

Ditto, alas, Labour. Corbyn’s determination to hang on till the bitter end highlights the difference between the Labour left and the centrists, who themselves probably have more in common with the Liberal Democrats or – indeed (whisper it) – the Tory centre.

What almost certainly will happen…

More chaos: of the economic, cultural and political varieties. Also sporting. Despite these reading almost like predictions no-one has any idea what will happen or when. It will be an unsettled summer reflected in financial markets and political paralysis.

Scottish independence. Sorry, England, but we’re not going to get away with this again. Even without a brexit, it’s clear to everyone that Scotland and England are on very different trajectories, and that Scotland can no longer trust a Westminster Government to act in its best interests.

Bonus round

Stuff that I am interested in but I have literally no answers to yet includes:

  • The very dodgy legal ground that depriving individuals of their EU citizen rights would be on. We generally don’t restrict people’s rights (these days, he says, drawing a veil unconvincingly over our nasty imperial past) unless they are criminals.
  • Effects on European Central Bank investment: despite us not being in the Eurozone the ECB invests heavily in British industry. Will it continue to during this interregnum?
  • The now legendary British Bill of Rights and the UK leaving the ECHR. I’m assuming Brexit panic will give the government the chance to finally abandon this insane idea, but I also have a horrible feeling it may be a sop to ardent “give me my country back” folks in the event of a non-brexit. Please no.

FOTA Brexit Nonsense Update 2

(the second in a short series)

It’s been a quarter of a year since I last turned white pixels black with the aim of getting some sense out of the ongoing EU referendum debate. In the intervening time, both sides of the debate have opted to abandon theatre and scaremongering, simply stating facts soberly, and with the barest minimum of interpretation, in order that the Great British Public can make an informed choice come June 23rd.

Or, as it turns out, not.

At this stage, most sensible humans have largely tuned the whole thing out – paying attention briefly to amusing asides like the chap failing to burn the EU flag. Or Bpoplive.

As each side describes the other as desperate… desperate… (it’s the insult du jour, folks – the worst thing is being seen as wanting to win so badly you tell your supporters what they want to hear, be it that Grandma needs to be put in a home or that it’s all a massive gubmint conspiracy) a cavalcade of desperate – in many senses – voices are wheeled out for our edification.

There are people in high places in Britain who believe in all seriousness that a word or two from popular crowdpleasers like George Osborne and Michael Gove would affect the thinking of the populace. There are those who consider Kate Hoey and Jeremy Corbyn compelling public speakers.

Any debate that pits Jeremy Clarkson against our own Donald-Trump-with-a-Latin-a-level man of the people Alexander Boris de Pfeffel Johnson should by rights be one to savour, but when something as compellingly awful as Boris’ various attempts at alternate future journalism are met with nary a shrug, we have to consider what else is going on.

The purpose of the Referendum, lest we forget, is to heal a decades-old rift in the Conservative party. Blue blood, blue Audi and blue rinse must come together as one.

And we already know it won’t work. The referendum has already failed before a single vote is cast.

Cameron has been promising a referendum for more than 10 years, both to fend off the forces of senile xenophobia in the form of UKIP and his own fractious backbenches – now it appears he’ll lose either way. Meanwhile the rest of the country looks on in an appalled fascination, contemplating arrant nonsense about either TTIP or mass immigration, depending on personal preference.

If you’d indulge me in a little critical theory, the various competing meta-narratives are only congruent in brief moments of intertextuality – with the sheer immanence of the spectacle itself a reflexive attempt to unify a fractured discourse. Or, as we say on Teesside: I canna be doin’ wi’ this bag o’ doyles, I swear down.

Two tribes

A wonderful survey by YouGov back in March demonstrated that it was possible to stereotype each voting tendency with a high degree of confidence.

If you are over 50, are of social groups C2, D and E, live in Yorkshire, East Anglia or the West Midlands, and have no formal education beyond the age of 16 – you are more likely to vote leave. Whereas if you are under 39, are of social groups A, B and C1, live in London, Scotland or Wales, and have a university degree – you are more likely to vote remain.

One can, and nearly everyone does, read their own prejudices into these gaps. There are equally profound party splits, with Conservative and UKIP voters more likely to vote leave, and Green, Liberal Democrat, SNP and Labour supporters likely to vote remain. (Incidentally, men and women are both split around equally on the subject.)

[Image: YouGov chart of leave/remain support broken down by demographic group]

But though it is clearly an interesting survey and one worthy of further study, it has exacerbated the already deep split in the electorate. It is not one ideology about the world and Britain’s place in it against another, it has become the old versus the young, the rich versus the poor, the university graduate against the labourer. Dangerous stuff.

How much?

By now, even the most avid news-avoider must be aware that the £350 million a week figure being bandied about by leave campaigners is nonsense – it’s the equivalent of saying that a pint costs £20 because that’s the size of the note you gave the barmaid, whilst ignoring the change that you get back and, indeed, the value of the beer alongside the other delights of the pub.

In gross terms we are talking about a membership fee of £250m/week. In net terms our national contribution is £120 million a week, including our rebate, other EU spending in the UK and the amount we count as part of our international aid spending.
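If you want the sums spelled out, here’s some back-of-envelope arithmetic using the rounded figures above – my illustration, not an official breakdown:

```python
# Back-of-envelope weekly sums, in £ millions, using the rounded figures
# quoted above. Illustrative only – not official statistics.
headline_claim = 350                          # the "leave" campaign figure
membership_fee = 250                          # gross fee actually paid
rebate = headline_claim - membership_fee      # ~100, which never leaves the UK
returned = 130                                # EU spending in the UK plus aid offset (implied)
net_contribution = membership_fee - returned  # ~120, the net figure above
print(rebate, net_contribution)
```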

Even this £120 million does not include any calculation of the other benefits we get from EU membership, such as an increase in foreign direct investment (the estimate made by the linked paper suggests that this outweighs the net membership costs by a factor of 10!). Or the savings that transnational companies make due to unified trading standards. Or… well, you can take it from here.

Coming over here, running our essential services…

I don’t even want to talk about immigration at this point – it’s depressing.

I could write paragraphs on how much the NHS benefits from migration, how reliant many of our industries are on low-paid migrant labour, how economic migrants work hard doing the jobs we don’t want to or are unable to do, from Professor of Theoretical Chemistry to central London barista, whilst paying far more in taxes than they ever receive in social security.

It wouldn’t make the blindest bit of difference.

Neither would a few hundred words on the plight of Syrian refugees, asking little more than the right to live in a house that hasn’t been bombed – leaving the world they had built lives and families in with literally nothing, forced to start afresh, drawing on only the skills (often including fluent English) and determination they themselves offer. As boats began to attempt to land on the east coast, one of our most popular politicians was most scared of human corpses despoiling our beaches.

If you are voting to leave to stem the flow of immigration, you can be assured that it won’t work. Actually – if you are lucky, it won’t work… we need the skills of foreign nationals to manage our country.

This is only going to get worse. There will be more Brexit nonsense, and less decent reporting with which to make up your mind. Here are a few of my favourite academic (because most readers of this blog tend to be academics) sources of analysis:

The always excellent LSE blogs have a European Policy portal, which covers more of the soft social science end of the debate. CEPR run VoxEU, which is great for economic analysis. Oxford have an excellent international politics blog. Based at King’s College, the “UK in a Changing Europe” blog is a fantastic initiative. “The Conversation” have an EU referendum portal; as with the previous example this is generally an easier read, though not always of the same high quality.

Good luck.


The Pocket and the Politician

In a more general context, a key question that still needs to be addressed is the origin of the [long-range correlations], which are common in a variety of systems. The underlying system must be sufficiently complex, described by a nonlinear differential equation (or many of them), and there must be a proper amount of feedback. However, the origins might largely variate from system to system, and it is difficult to generate universal models that could qualitatively describe, e.g., heart-beat intervals, magnetoconductance oscillations, and drumming intervals in the same footing.

(Räsänen et al, 2015)

So as the message discipline and the mask of respectability falls away, we might view this as a Democratic opportunity. People who think that words matter – that they should be used responsibly and not to manipulate people through subtle emotional cues embedded in euphemisms and dysphemisms – can celebrate the loss of [Frank] Luntz’s influence.

(Daily Kos, 2015)

Frank Luntz, and men like him (in the UK you could look at Philip Gould or Peter Mandelson, there’s also Lynton Crosby, Karl Rove,  Jim Messina … ) can be seen as the last of a dying breed of political messaging specialists or “spin doctors”. The great, devastating political campaigns of the 90s and 00s were successful only in their own terms – to the outside observer they led to a parade of “machine politicians” who sought power by surrendering ideals.

Luntz (and the others) worked by means of a focus group. Thousands of hours of recorded conversations gave them an insight into terms and language that “played well”, often the language that later appeared on billboards and in interviews started in the mouth of an ordinary member of the public – the return via the ears and eyes was orchestrated precisely to bring about a “resonance” based on the repetition of language already perceived as “common sense”.

[Image: a campaign poster reading “It’s not racist to impose limits on immigration”, unofficially amended in another hand]

In other words, phraseology such as that used in the image above (“It’s not racist to impose limits on immigration”: in that case explicitly rendered in a “personal” hand – though amended, unofficially, in another) reinforces what was identified as an underlying pulse of popular discourse.

Repetition amplifies the sentiment – but it will never feel entirely natural. And the power of repetition relies on exact repetition, requiring a huge amount of message discipline. The latter has become a politico-industrial pseudo-science – devoted to the idea of communication without any of the communicative (empathetic) aspects.

So these nuggets of distilled phraseology are seen as a way to make a minimum viable impression on a carefully selected target market. Though the use of “found” phrases (from research or focus groups) is common, these are generally decided on centrally within an organisation before being fed out to often-nonplussed adherents and staff.

[Image: Labour’s “controls on immigration” mug]

Effective? Possibly, in the short term. But, as the rise of Trump (and, indeed, Sanders) and the continuing bewildering relevance of Boris Johnson assert, perhaps an idea that suffers from the attentions of competing narratives of popular influence.

The theory of message discipline discussed both the quantity and quality of messages – not only must messages be carefully aligned to the language of the target group, but they must be presented uncluttered with other messages. (Lyotard fans should be pricking up their ears round about now). A focus on a few simple messages striates the communicative space, but very broadly and with significant liminal possibilities for a demagogue to exploit. The larger, and “broader” the grouping, the less likely a message discipline approach can capture the full spectrum of opinion and emotion, and the more likely that an off-message individual can find underutilised resonances to exploit.

This year’s GOP primaries demonstrated not just one (Trump), but a number (Cruz, Carson…) of counter-message candidates who were able to exploit a distrust of such a poorly-expressed and tightly constrained narrative from an “establishment” (itself a loaded, and counter-message, term). In Britain, Johnson’s opportunistic and self-centred embrace of Brexit can be seen as a similar attempt to capitalise on years of counternarrative positioning as bumbling, off-message Boris. Ditto the unexpected and unpredictable rise of Jeremy Corbyn.

Those of you who read the first quote, above, and maybe the underlying paper (and you should!) may wonder where precisely I am going with this. The Trump phenomenon as anti-establishment posture has been done to death (alas not literally) all over the popular press. But I daresay none of them have considered fractal patterns within the hi-hat part of a Michael McDonald track in this context.

Jeff Porcaro is a machine. Seriously – it was his brother Steve who suggested the use of samples to power the legendary Linn LM-1 drum machine, and Jeff himself learned to programme one (notably and unmistakably on George Benson’s 1981 “Turn Your Love Around”). Basically any early 80s LA pop record that has ever made you think “wow… those drums…” – that was Jeff.

Four years before his untimely death in a bizarre gardening accident, Jeff recorded a hugely influential drum instructional video. Here he talks about his hi-hat part in the track Räsänen et al discussed – “I Keep Forgettin’”.

If you have any kind of a musical background you may now pick your jaw back up off the floor.

What I want to note here is both the fluid and utterly mesmeric way he can place any technique on any subdivision of the bar, effortlessly, every time – and the way he makes it sound so natural that you can’t help but move. Drummers are generally either technical players or groove monsters; Porcaro’s feel defined the early 80s as he managed to be both.

Last year (yes, I just said that so I could say “it’s been a year since they went away, Räsänen et al…”) a team of researchers analysed the timing and volume of that single-handed 16th-note hi-hat part and deduced that it very clearly isn’t as exact as it initially sounds. Here are the numbers…

[Figure 3 from Räsänen et al (2015), journal.pone.0127902.g003: fluctuations in the hi-hat timing across the A and B sections of the song]

The A parts (the intro and verse) tend to slow down, the B parts (the chorus) speed up – both very slightly, but still measurably. This is done for musical reasons, to accentuate changes of mood in the song. Despite this you can still see a periodicity in the shorter spikes representing a “pushed” accent on the same 16th note of every two-bar phrase. This is the “long range correlation” which connects the precision of a virtuoso with the undeniable groove of a human being.
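If you fancy seeing how a periodicity like that shows up in interval data, here’s a minimal numpy sketch on synthetic intervals – the paper uses far more sophisticated detrended fluctuation analysis, so treat this as a crude stand-in of my own:

```python
# Crude sketch (mine, not the paper's method): autocorrelate the deviations
# of synthetic 16th-note hi-hat intervals to reveal a two-bar periodicity.
import numpy as np

rng = np.random.default_rng(0)
ivals = rng.normal(loc=0.125, scale=0.002, size=64 * 16)  # 16ths at ~120bpm, jittered
ivals[12::32] -= 0.010  # a "pushed" accent on the same 16th of every two-bar phrase

dev = ivals - ivals.mean()
ac = np.correlate(dev, dev, mode="full")[len(dev) - 1:]  # autocorrelation, lags 0..N-1
print(np.argmax(ac[1:64]) + 1)  # strongest short lag: 32, i.e. one two-bar phrase
```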

Jeff could very easily have programmed the same part in, indeed the George Benson track above uses a broadly similar feel. But if he had, these micro-fluctuations in timing and power would be lost and the track would feel very different.

And no-one told him precisely what to play – he had a feel which he interpreted in his own way to the benefit of the song.

Message discipline could be compared to the hypothetical use of the drum machine, the human effect is lost even though it can be closely simulated by expert programmers. Any movement, organisation or political party that designs in message discipline designs out the fluidity and freedom that allows for a virtuosic interpretation of values and ideals to the detriment of wider goals. You get the precision, but what people really react to is the pocket – not a place where you hold a message but where a message gently holds you.

Baby’s first semantic blog citation metrics #oer16

Note: If you want to see semantometrics done for real, have a read about the Jisc-supported work by Petr Knoth and Drahomira Herrmannova. My approach is loosely inspired by their research – which I’ve been fortunate enough to see presented on a couple of occasions – but in terms of reliability, technical finesse and generally having a clue I’m very much a Robbie Dupree to their Doobie Brothers. I should also be clear that what follows is my own efforts entirely.

In all honesty, and if we look at Vivien Rolfe’s superb systematic reviews of open education, we’d have to conclude that literature in the area leaves a fair bit to be desired. Faced with this I’ve heard it said on a number of occasions that “the good stuff is in the blogs”, and I decided that the time had come to test this.

If open education blogs do have academic merit, I would expect them to be cited in the more traditional literature, both around the subject each covers and further afield. This might seem circular – but as there are clearly gaps in the literature one might reasonably expect blog posts to be filling these.

For the purposes of this experiment (as presented at OER16) I looked at five blogs that I felt were consistently high quality, and that I had seen frequently referenced in conference presentations and similar:

  • David Wiley’s blog
  • George Siemens’ blog
  • Connectivism.ca
  • Martin Weller’s blog
  • Audrey Watters’ blog

Now the “scholarly graph” (the ways in which publications are interconnected by a network of citations) is notoriously hard to mine, unless you happen to be Thomson Reuters or are in a position to give them money. I am neither, so I needed to use the tools I had available to me, which are necessarily incomplete and variable in coverage.

Google Scholar is far from perfect, but it does let me search for references in a round-about way. What I did was search for the root domain of the blog as a text string, to generate a corpus of literature that (most likely) included a link or citation to a specific page of the blog. I then went to export the details of the query and found that a simple export to .csv is not possible. You can export individual records to a small selection of bibliographic software, but not the results as a whole.

I eventually found the marvellously named “Harzing’s Publish or Perish”, which appears to use some Tony Hirst-esque page-scraping magic to automate what would otherwise be a laborious task. I cleaned out duplicate records and self-blog citations (pages from the blog itself being returned, which happened quite a lot for popular individual posts), then rendered to the spreadsheet available here. (I also had a special issue with David Wiley – hey, don’t we all! – as many early papers cited his Open Publication License as a means of licensing their work. I got round this by searching only for citations to his “blog” directory.)
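For anyone wanting to replicate the cleaning step, here’s a rough pandas sketch – the filename and column names are my guesses, not the tool’s documented export schema:

```python
# Rough sketch of the cleaning step. "Title" and "URL" column names, the
# filename, and the example domain are assumptions for illustration.
import pandas as pd

BLOG_DOMAIN = "opencontent.org"  # one corpus per blog domain

df = pd.read_csv("pop_export.csv")
df = df.drop_duplicates(subset="Title")
# Drop self-blog citations: records that are pages on the blog itself.
df = df[~df["URL"].fillna("").str.contains(BLOG_DOMAIN, regex=False)]
df.to_csv("cleaned.csv", index=False)
```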

[I also searched for “edupunk” just out of interest – as this was a 2008 term coined to deal with a range of activities that included blogging instead of formal publication. To be honest, it didn’t tell me very much other than that the majority of publications on edupunk are in Spanish.]

Publish or Perish also returns a bunch of those research power statistics that occasionally come up in conversation – and I was tickled to play with the h-index for each corpus of papers I returned. The h- (or Hirsch) index is generally seen as an author-level metric, but it can be calculated to characterise any group of papers. In this instance it gives a reasonable measure of the kind of influence that the papers citing each blog have.
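For the uninitiated: the h-index of a group of papers is the largest h such that h of them have at least h citations each. A minimal sketch, with made-up citation counts (the real numbers per blog follow below):

```python
# Minimal h-index: the largest h such that h papers have >= h citations each.
def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

print(h_index([45, 30, 22, 12, 7, 3, 1]))  # -> 5
```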

  • David Wiley – 23
  • George Siemens – 38(! – this is very high for an education related subject)
  • Connectivism.ca – 29
  • Martin Weller – 20
  • Audrey Watters – 12

All of these are – if you are the kind of person who cares about your h-index – pretty respectable showings. In particular it should be noted that Audrey is a journalist (and a damned fine independent one whom you should support!) who makes no pretence of writing academic research or comment.

[I should note here that I also looked at commonality between pairs of blogs – is any particular pair likely to be cited together? The short answer is yes – people who cite George Siemens’ blog also tend to cite David Wiley’s blog. The data is on the spreadsheet.]
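(The pairwise check is easy to sketch, too – here the sets are hypothetical stand-ins for the titles of papers citing each blog:)

```python
from itertools import combinations

# Hypothetical citing corpora; in practice these come from the cleaned
# Publish or Perish exports described above.
citing = {
    "Wiley":   {"Paper A", "Paper B", "Paper C"},
    "Siemens": {"Paper B", "Paper C", "Paper D"},
}

for a, b in combinations(citing, 2):
    shared = citing[a] & citing[b]
    jaccard = len(shared) / len(citing[a] | citing[b])
    print(f"{a} & {b}: {len(shared)} shared papers (Jaccard {jaccard:.2f})")
```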

My next step was to find the five most common words used in the titles of the papers citing each blog, and compare these to the top five words used in the blog itself (using the standard list of English stop-words built into Voyant Tools). A proper look at this topic might take more words from each source, and employ a weighting based on the ratios between word counts, but at this point it was Sunday evening before the conference.
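A rough equivalent of that term extraction looks like this – with a toy stop-word list standing in for Voyant’s much longer built-in one:

```python
import re
from collections import Counter

# Toy stop-word list; Voyant's built-in English list is far longer.
STOPWORDS = {"the", "a", "an", "and", "of", "in", "for", "on", "to", "with"}

def top_terms(texts, n=5):
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return [word for word, _ in counts.most_common(n)]

print(top_terms(["OER and open textbooks", "MOOCs and open pedagogy"]))
# ['open', 'oer', 'textbooks', 'moocs', 'pedagogy']
```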

[Chart: the five most common words in citing-paper titles alongside the five most common words in each blog]

It’s possible to do a manual sort to look for any interesting patterns – in this case what struck me is how often citing papers talked about technology-related issues or *shudder* MOOCs, whereas the blogs were more likely to consider students or use terms like OER.

[Chart: blog terms matched against citing-paper titles]

I then decided to compare titles to the list of common terms from the blog in question automatically, using a simple Excel formula – I averaged, across the citing papers, an indicator of whether each title contained at least one of the common terms, to create an index of “semantic prediction”. Basically, a higher number (with 1 being the highest possible) means that the terms commonly used in the blog are also likely to be found in the titles of papers citing it (a minimal sketch of the calculation follows the list). Here’s how that stacks up:

  • Wiley – 0.70
  • Siemens – 0.77
  • Connectivism – 0.58
  • Weller – 0.58
  • Watters – 0.12
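A minimal sketch of that Excel calculation, rebuilt in Python with hypothetical inputs (note it matches substrings, so “open” also catches “openness” – much like Excel’s SEARCH):

```python
def semantic_prediction(titles, common_terms):
    """Fraction of citing-paper titles containing at least one common term."""
    terms = [t.lower() for t in common_terms]
    hits = [any(term in title.lower() for term in terms) for title in titles]
    return sum(hits) / len(hits)

score = semantic_prediction(
    ["Reusing OER in distance education", "Analytics at scale", "Open pedagogy"],
    ["open", "oer", "learning", "education", "students"],
)
print(round(score, 2))  # 0.67 - two of three titles use a common blog term
```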

Now Petr Knoth and Drahomira Herrmannova postulate that a greater semantic distance between the citing paper and the cited resource implies a greater contribution to knowledge – works that bridge disparate areas of research could be seen as more valuable, as new knowledge has been synthesised via a newly formed connection (kinda rhizomatic, wouldn’t you say?)

By this measure Audrey’s work can be said to have the greatest contribution to knowledge, with David and George much more likely to be cited in the field in which they write. Whether this tells us anything meaningful in the grand scheme of things is open to question, not least because of the general shonkiness of my methods – this being just a first look which has much scope for improvement.

It also made me think about Cameron Neylon‘s concerns about our poor understanding of the nature of citation. Citation is one of those things that seems simple at first thought, but has a huge layer of social and cultural practices built on top of it. Unless we have a better understanding of the reasons for each citation (as quote source, acknowledgement, attribution of ideas or methods, cultural norm in a domain of research…) we can’t really assume that all citations carry equal weight – though most serious citation metrics do exactly that. Some of this may be categorisable using Knoth and Herrmannova’s deep text analysis alongside a carefully designed system of categories and indicators (another reason I am watching that project, and other citation experiments, with huge interest).

So – to answer my initial questions:

  • is all the good stuff in the blogs? There is clearly a lot of good stuff in blogs, which is frequently cited by literature that itself is highly cited. I’d love to look at other areas of research with similarly important blogs to compare – any suggestions would be welcome!
  • are blogs cited only within the domains in which they write? Broadly no, though some blogs are more often cited in the domains with which they most closely identify than others. Though Audrey’s blog citations showed a low level of semantic prediction, this may be because there were fewer citations overall, and I would like to refine the metric to account for this (possibly via some form of sampling?)
  • is this interesting enough to look at further? Absolutely!

Here are the slides from Viv’s and my presentation at OER16.

The Chain

In the 90s, actuaries (and actuarial science) were at a bit of a low ebb, as all kinds of public and private institutions had issues with pension liabilities. The actuaries – whose job it was to make detailed predictions of the likely long-term demands on these funds and compare these to what the funds actually held – had taken a decades-long jag of Panglossian assurance that all was for the best in the best of all possible worlds.

One of the odder things they did was to allow fund managers to claim that equities (stocks and shares) were more valuable than any other kind of investment (say government bonds, commodities, derivatives…). This was an accounting convention which allowed “profits” from returns on these equities (dividends or profits from sales, and gains in realisable value during the inflationary years) to be booked several years in advance – thus reducing the net liabilities of the pension funds.

By the mid-90s the flaws in this approach were beginning to show – and by the time of the equities crash in the early 00s (that’d be the dot-com bubble bursting) it was clear that something had to be done. Pension funds began de-risking – selling equities (at a loss) and using the proceeds to buy less risky products with smaller, but near-guaranteed, returns.

Or so you’d think.

What actually happened in many cases was that – having been burned by equities, and egged on by the interests of the defined-benefit pension-holders themselves – pension fund managers sought newer forms of investment that could offer a comparable return at a lower risk. And a product was there to meet their needs.

David X Li, another actuary, began his financial career investigating ways of quantifying risk. Drawing on the copula function (specifically Gaussian copulas) he came up with a way of predicting correlation between financial securities. Basically, he could turn the correlation between the likelihood of mortgage holder A defaulting and the likelihood of mortgage holder B defaulting into a single number – simple enough for investment bankers to understand.
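The gist of the trick, in toy form – illustrative numbers only, with scipy’s bivariate normal standing in for Li’s machinery:

```python
from scipy.stats import norm, multivariate_normal

p_a, p_b = 0.05, 0.05  # assumed marginal default probabilities for A and B
rho = 0.3              # the single correlation number

# Gaussian copula: map each default probability onto a standard-normal
# threshold, then read the joint default probability off a bivariate
# normal with correlation rho.
thresholds = [norm.ppf(p_a), norm.ppf(p_b)]
joint = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf(thresholds)

print(joint)      # noticeably larger than...
print(p_a * p_b)  # ...the 0.0025 you'd get if the defaults were independent
```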

Even though Li specifically warned against it, this (comparatively) simple output from a (largely poorly understood) formula became the driving force of the market for collateralised debt obligations – CDOs. (Basically, a CDO piles a whole bunch of debt into a big box, and then slices it into segments rated at different levels of risk. You can buy a low rate of return with little risk of default (the “senior tranche”), or a high rate of return with a much larger risk of default.) Li’s formula supported the generation of synthetic CDOs (sometimes CDO-squared), wherein the more-difficult-to-sell lower tranches were put into new boxes and resliced into tranches, thus generating new “senior tranches” that could be sold.
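And a toy version of the tranching itself – a one-factor simulation with made-up numbers, just to show how correlated defaults can eat through a pool from the bottom up:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_scenarios, n_loans, p_default, rho = 20_000, 100, 0.05, 0.3

# One-factor Gaussian model: a shared "market" factor drives correlation.
market = rng.standard_normal((n_scenarios, 1))
own = rng.standard_normal((n_scenarios, n_loans))
defaults = np.sqrt(rho) * market + np.sqrt(1 - rho) * own < norm.ppf(p_default)

# Assume total loss on default, so pool loss is the fraction defaulting.
pool_loss = defaults.mean(axis=1)

# Equity tranche absorbs the first 3% of losses, mezzanine the next 7%;
# the "senior tranche" only suffers once 10% of the pool is gone.
print(f"Senior tranche impaired in {(pool_loss > 0.10).mean():.1%} of scenarios")
```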

And that – it has to be said – ended badly for the pensions of the world.

So where next for your savings and pensions? What about something that combined the volatility and risk of equity with the anonymity, fungibility and algorithmic obfuscation of the CDO? What if we could combine these features with an entire lack of regulation or state backing, and – just for fun – a huge user base in illegal activity? Ready to fill your boots?

Such a product already exists.

There is significant global debate as to what species of financial instrument a “blockchain-derived currency” really is – a debate that has implications for taxation and licensing by governments.

  • Certainly it is tradeable against other currencies, with a widely fluctuating exchange rate – though it is backed neither by commodities nor a government/central bank.
  • With no intrinsic value, and no value backed by a known trustworthy party, it is clearly not itself a commodity, although it behaves as such.
  • It is issued by what could be considered a decentralised autonomous organisation in return for (computational) work conducted – work that is essential to the continuing viability of the organisation. But unlike equity it offers no voting rights or investor protections.
  • There is even some discussion as to whether or not it is a derivative.

What blockchain-derived currencies do offer is a lower administrative cost for transactions, where these transactions are simple. For on-network transactions there are no direct costs – though obviously the conversion to another currency at either end has a cost, and the “hidden” costs of power, connectivity and processing time are not factored in.

In terms of security, blockchains prioritise anonymity and encryption over direct trust. Indeed, “trust” is something of a dirty word – the industries that blockchain is slated to disrupt are generally those based on the need for trust.

Considering, say, a bank: it offers some anonymity (it shouldn’t disclose the contents of my account to anyone without my permission), and some transparency (I am able to read about the strategy of the bank at my leisure, and monitor activity via publications and meetings). But the main reason to use a bank is trust – I can (generally) assume that a bank will not do anything to jeopardise the money I have invested in it, and if this trust is misplaced I can (generally) assume that a central bank or nation state will ensure that my losses are minimised.

By amping up the anonymity (it is impossible for anyone to know the content of my account, or link it to me) and transparency (all transactions are publicly listed), blockchain technology removes the need for trust – for all I know (or indeed anyone knows), Satoshi Nakamoto could be anyone or anything, and the other blockchain participants could be thieves and scoundrels, or possibly even libertarians. The Ayn Rand forum accepts bitcoin payments – *shrug*

Bitcoin – to use the most common example – is a deflationary currency. There will only ever be 21m bitcoins: as transactions become more widespread, the intention is to stop rewarding “mining” (the generation of the cryptographic hash that is used to publicly record a transaction) directly with coins, and instead allow miners to receive transaction fees. This change will happen automatically and was designed into the bitcoin system.
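The 21m figure falls straight out of that halving schedule – a quick sketch (approximate, as the real protocol works in integer satoshis):

```python
# Block reward starts at 50 BTC and halves every 210,000 blocks.
reward, total = 50.0, 0.0
while reward >= 1e-8:  # stop below one satoshi, the smallest unit
    total += 210_000 * reward
    reward /= 2

print(f"{total:,.4f}")  # 20,999,999.9976 - just shy of the famous 21m cap
```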

So – like CDOs – a smart algorithm has removed the need for me to trust anyone, and – like the old accounting convention around equities – an implication of guaranteed growth and return is undermined by deflationary reality. All of this is governed by the design of the algorithm – requiring (ironically) that we trust only in the wisdom of an anonymous coder that the various variables and correspondences have been set correctly.

Cue myriad screams of “this sounds AWESOME, how can we apply it to edtech?”

When Silicon Valley sneezes, edtech catches a cold about three-to-five years later. So if you’ve recovered from the “year of the MOOC” (yep, every year since 2012, it seems) then brace yourself for the first of many years of the blockchain! Hurrah!

Blockchain has provided a handily fashionable hook on which to hang increasingly moribund ideas like micro-credentialing and badges. Of course, we have to actually trust the issuers of the badges in question, ensure that badges can be rescinded if this trust breaks down on either side, and despite this somehow have a means to incentivise miners to keep growing the chain.

It’s fair to say that the blockchain is as poor a fit for micro-credentials as learning itself is – though this has not stopped various initiatives to combine the two. Despite this, there are genuinely interesting uses of blockchain technology which may have educational resonance: Ethereum is one that I see as being worth keeping an eye on more generally, and Mark Johnson is (as usual) thinking rich and stimulating thoughts.

Audrey Watters is researching this area in more detail, and I would highlight her ongoing struggles as a means of staying abreast of the tide of distributed anonymised ledger-based nonsense that this year will bring.


FOTA Brexit nonsense update 1

I’ve a horrible feeling that there will be a few of these over the coming months – primarily because so much nonsense is going to be talked and so few facts will be checked. 

The practice of calling a referendum (or plebiscite) to resolve major political questions via a direct expression of democratic will has a surprisingly brief history within UK government. Only two UK-wide referenda have ever been held: the first was the 1975 vote on membership of the European Community (“Do you think the United Kingdom should stay in the European Community (Common Market)?”), and the second was the 2011 vote on changes to the voting system for general elections (“At present, the UK uses the “first past the post” system to elect MPs to the House of Commons. Should the “alternative vote” system be used instead?”).

A third referendum will be held on 23rd June this year, again on European membership (“Should the United Kingdom remain a member of the European Union or leave the European Union?”).

So, in UK politics we tend to call nationwide referenda when:

  1. one or more parties are hopelessly split at an existential level; and/or
  2. the issue at stake is so obscure and complex that only a very small number of wonks will understand it.

In the past everyone has taken the opportunity to air whatever prejudices they happen to have before the status quo option prevails and a significant number of UK politicians enter a decades-long sulk. The political mainstream will then take the result as a mandate to carry on doing whatever they were doing anyway, and the whole thing will be a massive waste of time, money and air.

There’s no real reason to suppose anything different will happen this time. But it’s fun to take the opportunity to nail some common myths being bandied about, and hopefully add a small leavening of fact to the huge number of words that will be deployed for very little purpose.

Myth 1 – “Brussels Bans X”

Easiest one first. Starting in the 80s – initially in one Boris Johnson’s columns for the Telegraph – it became fashionable to make up outright lies about “barmy Brussels” and supposed EU “diktats” or “laws” that stopped honest, brave, British folks from doing – well, anything. Because only about 100 people in the UK actually pay any attention to what the EU does on a day-to-day basis, these stories are repeated and embellished rather than fact-checked. The whole genre exists primarily as an insight into the often vivid imaginations of UK journalists.

The London office of the EU has painstakingly refuted pretty much all of these stories (latterly the preserve of the Express, Mail and Sun) on their superb “Euromyths” blog. Here’s an A-Z list covering everything from 1992 to 2015. It’s at once impressive in scope, and depressing as to how much nonsense has been passed off as news – and how much the heirs to Boris (and Boris himself) continue to dissemble. You can follow their day-to-day struggle on the blog.

Myth 2 – “… unelected bureaucrats…”

I mentioned “barmy Brussels” above – to any lover of British tabloid journalese the full phrase is “barmy Brussels bureaucrats”. The idea of the EU as the last refuge of the old meddling, lazy civil servant stereotype is difficult to shake off.

Allow me to drop some legislative process on those assembled. Like our own dear Westminster government, the EU has two legislative chambers, a presidential role and a civil service. However, in pretty much every way the EU is more democratic and more accountable than the Westminster equivalent.

  1. The Lower Chamber. In the UK we have the “House of Commons”, full of our representatives (MPs) that we vote for every five years. In the EU we have the “European Parliament”, full of our representatives (MEPs) that we vote for every five years (we do ruin this somewhat by voting for UKIP people who take all the expenses they can but don’t do any actual work). In Westminster there are political parties who generally vote as a bloc and ensure the government gets its way. In the European Parliament there are loose groupings which change in every parliament, but MEPs generally vote independently.
  2. The Upper Chamber. In the UK we have the “House of Lords”, made up of people who are appointed primarily by dint of their birth, penchant for arse-licking, or job (if they happen to be a Bishop in the Church of England). It’s probably the least democratic legislative body – outside of actual dictatorships – in the entire world. By contrast, the EU has the Council of Ministers (not the European Council or the Council of Europe), which is made up of the ministers of state from each member country with responsibility for the topic in question. (So if the topic was, say, immigration, the UK would send Theresa May. Yikes.) This reflects the actual government of each country, voted for by popular vote.
  3. The Presidential Role. In the UK we have the Queen (gawd bless her) as the titular head of state. She basically waves at things, and rubber-stamps laws passed by parliament. Once a year she makes a speech (written by “her” government) that sets out what “her” government will do. The only way you get to be the UK head of state is to be born to the previous one. The EU has the “European Council“, which is made up of the heads of the constituent state governments – so David Cameron in our case – plus a full-time president (Donald Tusk, at the time of writing) chosen by its members. The European Council acts as the leadership of the EU; you become a member by leading the government of an EU member country.
  4. The Civil Service. Dead easy – we have the Civil Service, the EU has the European Commission. Both help the legislative bodies and leadership in drafting and implementing decisions. The UK Civil Service is led by the Head of the Civil Service, supported by Permanent Secretaries (24), and employs some 447,000 people. The European Commission is led by a president, supported by a College (28 senior staff), and employs a little over 23,000 people.
  5. Other bits. The UK has a Supreme Court, the EU has a Court of Justice. The UK has an Audit Office, the EU has a Court of Auditors. The UK has a central bank that controls the pound, the EU has a central bank that controls the euro.

So to summarise, the EU has the same legislative and executive structures as the UK, except the EU ones are – in general – more democratic. Indeed, a recent ERS investigation into the state of EU democracy concluded with a set of recommendations… for Westminster!

Myth 3 – TTIP

If you’re on any form of social media, or if you read George Monbiot in the Guardian you’ll have heard something of these trade agreement negotiations. These are some things you think you know about TTIP:

  • It’s secret
  • It’s an EU plot
  • It’s a way to destroy democracy, sell off the NHS, and make kittens look sad.
  • If we leave the EU then TTIP will go away, somehow.

None of these things, in the grand tradition of EU journalism in the UK, is actually true.

Here’s the EU TTIP Twitter account – have a read down it. (That’s just one of many ways they are communicating – here’s the main website, here’s the complete set of negotiating texts, and here’s a statement from the chief negotiating officer on the last (12th) round of talks. There’s even a Snapchat channel (!)) So it’s only a secret in the sense that very few people are reading it.

As for being an EU plot, you may have spotted that the EU is negotiating on our behalf with the US. We can see, read, and comment on the EU position – a recent speech by EU Trade Commissioner Cecilia Malmström suggests that these comments are taken into account in developing the EU position. For instance, concerns about secret (ISDS) courts are addressed in the EU position – they won’t be secret, they’ll be live-streamed and led by an actual real judge. Concerns about public sector bodies like the NHS are addressed via negotiated exemptions.

The alternative (“Brexit”, or “flexit”) position is that something very like TTIP would be negotiated between the UK and the US, and the UK and the EU (and the UK and Canada). Rather than the fairly sensible Malmström (a centrist, vaguely New Labour-ish Swedish politician), these negotiations would be led by Sajid Javid – a man who freely admits to the central place Ayn Rand’s “The Fountainhead” has played in his life. For those who want to preserve the NHS as we know it, it would seem that the EU negotiating team would get a far better deal.

There’s so much more to go into, but in general you are more likely – it seems to me – to be misinformed by our press than by the EU.