OECD attacks English HE funding model

Andreas Schleicher. You may remember his name from Andy Westwood’s superb wonkhe post about the ongoing use of an “OECD endorsement” as to the sustainability of the current English HE funding system.

Despite Andy’s solid debunking of this canard, drawing on the actual words of Dr Schleicher himself (briefly: Schleicher initially referred to the previous funding regime, and later wrote a personal blog post claiming that we had one of the best systems that include student fee loans), Johnson Minor’s first speech also refers to:

a transformed financial situation; as the OECD says, we are one of the only countries in the world to have found a way of sustainably funding higher education.

Of course Schleicher is not the OECD, and is not making a pronouncement on behalf of the OECD – he is stating his personal opinion, just as I am doing here. I can understand why BIS speechwriters make the elision (it’s common among people who don’t really understand social media) but it is not correct.

That’s the story so far.

During the summer I’ve been enjoying Pearson’s Tumblr entitled “If I were secretary of state for education…”. No, really, I have. The schtick is that they ask a bunch of edu-policy luminaries (including David Blunkett, a former SoS) what they would do if they were minister for education. And publish it on Tumblr, because publishing things is hard.

I’ll be honest, Michael “Dr Target” Barber opining that he would “not tinker with structures or get in the way of successful schools doing what they want to do” was my favourite initially, with AC Grayling‘s call to end the “closed shop of higher education” a close second.

But then I read Andreas Schleicher’s contribution. Sure, there’s the expected madness about increasing class sizes and claiming that “Google knows everything”. But just look at his final point:

Leaders in high performing school systems seem to have convinced their citizens to make choices that value education more than other things. Chinese parents invest their last money into the education of their children, their future. Britain has started to borrow the money of its children to finance its current consumption. I would work hard with my fellow Secretaries to change that.

“Britain has started to borrow the money of its children to finance its current (educational) consumption” – this sounds suspiciously like a reference to HE funding – where the future income of our children is used to pay for their current educational consumption. And Schleicher would work hard to change that.

Sure – it’s tenuous. But no more so than BIS’s claims to OECD blessing on our expensive and ill-conceived funding method.

Principles – if you don’t like ’em, we have others

Just in case people find it useful, this is a worked example of responding to the “principles” section that tends to crop up in policy consultations. Any policy idea, even the most nakedly arbitrary ideological nonsense, will have “principles” – because where would we be if we didn’t have principles?

(who said “Top Shop”?)

While it may seem to be a fairly innocuous set of “motherhood-and-apple-pie” stuff that pretty much everyone would nod through, these principles serve both to frame and to constrain the debate in and around the paper that follows. The majority of consultation responses will concentrate on the later questions, as these directly impinge on institutional or organisational activity or commitment – and most people who wade through stuff like this are paid to do so by said organisation or institution.

But as I’m responding on my own account I’m more concerned with the assumptions that underpin the consultation, and less concerned with any interim projections of likely effects. This means that I’m hyper-vigilant (almost comically so) to the nuances in phrasing and meaning within these short and apparently uncontroversial statements.

There is huge value in making an independent and personal response to a consultation – and I would encourage all wonks and wonks-in-training to have a crack at a couple (HEFCE QA would be a good one; also have a look at the BIS student loan repayment threshold freeze if you fancy getting stuck into a bit of finance). It’s a great personal learning exercise, and it can sometimes have a positive effect on national policy-making.

[for the avoidance of any doubt, what follows is an excerpt from a personal response to the QA consultation, that explicitly does not reflect the views of any organisation, grouping, political party or secret society. It is presented in the public domain (cc-0), so you may reuse it without citation if you wish]

Question 1: Do you agree with our proposed principles to underpin the future approach to quality assessment in established providers?

I have responded to each principle in turn.

  1. Be based on the autonomy of higher education providers with degree awarding powers to set and maintain academic standards, and on the responsibility of all providers to determine and deliver the most appropriate academic experience for their students wherever and however they study.

This principle attempts to address the new complexity of the institutional landscape in this area. Broadly “providers of HE” may or may not be approved “HE providers” – with or without institutional undergraduate and/or research degree awarding powers – and may or may not hold the title “university” (and may or may not have tier 4 sponsor status).

For the purposes of academic quality assurance it is not clear why a distinction is drawn here between “HE providers with degree awarding powers”, and “all providers”. For the latter, the designation process already requires that a particular course meets “quality” criteria via the QAA Higher Education Review and consequent annual monitoring. This process explicitly examines the ability of any provider to manage quality and academic standards.[1] The principle should surely be (as was the case until very recently) that all HE should be delivered to the same academic standards and assured to the same academic standards wherever it is delivered.

The use of “autonomy” in one case and “responsibility” in the other also exacerbates this artificial divide. The current system of QA requires that all HE delivery is supported by an institutional system that manages and ensures academic quality and academic standards and this principle should be defended and maintained.

2. Use peer review and appropriate external scrutiny as a core component of quality assessment and assurance approaches.

A purely internal system of scrutiny would not be fit for purpose in ensuring the continued high standard of English HE provision. Though internal institutional monitoring (both data-led and qualitative) will support the maintenance of standards, the “gold standard” is comparability with peers and adherence to relevant national and global requirements. The existing QAA Higher Education Review process (which is common to existing providers and new entrants) directly ensures that peers from across the sector are involved in making a judgement on institutional quality assurance and quality assessment processes.

3. Expect students to be meaningfully integrated as partners in the design, monitoring and reviewing of processes to improve the academic quality of their education.

The key here is a “meaningful” integration, beyond mere committee membership. Academic staff at all levels should also have a role in designing, monitoring and reviewing processes – this would be a key factor in developing processes that are genuinely useful in ensuring a quality academic experience for students without an unreasonable institutional burden.

As James Wilsdon noted in “The Metric Tide”[2], “The demands of formal evaluation according to broadly standardised criteria are likely to focus the attention system of organisations on satisfying them, and give rise to local lock-in mechanisms. But the extent to which mechanisms like evaluation actually control and steer loosely coupled systems of academic knowledge is still poorly understood.” (p87)

It is therefore essential that both internal and external systems of quality assurance take into account the well-documented negative effects of a metrics-driven compliance-based culture, and it would appear that a meaningful integration of students, academic staff and support staff into the design as well as the delivery of these processes would be an appropriate means to do this.

4. Provide accountability, value for money, and assurance to students, and to employers, government and the public, in the areas that matter to those stakeholders, both in relation to individual providers and across the sector as a whole.

This principle should be balanced very carefully against principle (a), above. Assessment of “value for money”, in particular, should be approached with care and with greater emphasis on longer-term and less direct benefits than are currently fashionable. The risk of short-term accountability limiting the ability of academia to provide genuinely transformational and meaningful interventions in the lives of students and society as a whole is implicit within the current model of institutional funding, and a well-designed system of QA should balance rather than amplify this market pressure.

5. Be transparent and easily understood by students and other stakeholders.

It is difficult to argue against this principle, though simplicity must be balanced with a commitment to both academic and statistical rigour. HEFCE will doubtless remember the issues with over-simplified NSS and KIS data leading to a misleading and confusing information offer to prospective students, as documented in some of the HEDIIP work around classification systems[3] – and should also note the findings of their own 2014 report into the use of information about HE provision by prospective students.[4]

6. Work well for increasingly diverse and different missions, and ensure that providers are not prevented from experimentation and innovation in strategic direction or in approaches to learning and teaching.

It is important here to draw a distinction between experimentation and innovation in learning and teaching practice, which is a central strength of UK HE as evidenced by a substantial body of literature and practice, and experimentation and innovation in institutional business models.

The former should be encouraged and supported, with specific funding offered to individual academics and small teams with the ability to innovate in order to meet existing or emerging learner or societal needs. Funding and opportunity for research into Higher Education pedagogy and policy are severely limited, and in order that experimentation can be based on sound research further investment is needed. Organisations such as the ESRC, SRHE, Higher Education Academy, BERA, SEDA, Jisc and ALT should be supported in addressing this clear need.

The latter should also be encouraged and supported, but the risk to students and the exchequer is far greater here and this should be mitigated and managed carefully. Recent activity in this area has demonstrated risks around the needs of learners being insufficiently met, risks around accountability for public funds, risks around investment being diverted from core business, and risks around reputational damage for the sector as a whole. In this area experimentation should be evidence-based, and the exposure of learners and the exchequer to the negative consequences of experimentation should be limited.

7. Not repeatedly retest an established provider against the baseline requirements for an acceptable level of provision necessary for entry to the publicly funded higher education system, unless there is evidence that suggests that this is necessary.

Recent research conducted for HEFCE by KPMG concluded that the majority of the “costs” associated with quality assurance in HE come from poorly-designed and burdensome processes at an institutional level, and multiple PSRB engagements. As such, it is difficult to make an argument to limit national engagements as the data and materials will most likely be collected and prepared regardless.

Interim engagement could focus on targeted support to reduce the internal cost of QA activity via expert advice on designing and implementing systems of assurance, and optimising institutional management information systems (MISs). The QAA and Jisc would be best placed to support this – and engagements of this nature would provide much greater savings than simply limiting the number of external inputs into institutional processes.

Of course, QAA support for PSRBs in designing and implementing robust yet light-touch reviews would be a further opportunity for significant savings.

8. Adopt a risk- and evidence-based approach to co-regulation to ensure that regulatory scrutiny focuses on the areas where risk to standards and/or to the academic experience of students or the system is greatest.

Again, it is difficult to argue against this – though a definition of co-regulation (I assume this refers to the totality of sector QA to include national, institutional and subject area specific processes) would be beneficial. Risk monitoring should primarily focus on responsiveness in order to encompass unpredictable need, especially as relates to business model innovation.

9. Ensure that the overall cost and burden of the quality assessment and wider assurance system is proportionate.

This principle should explicitly refer to the overall cost and burden of QA and assurance as a whole, rather than just national processes. The KPMG report was clear that the majority of costs are linked to institutional data collection and PSRB-related activity, and it is here that the attention of HEFCE should be primarily directed.

10. Protect the reputation of the UK higher education system in a global context.

HEFCE and the QAA should continue to work with ENQA, EQAR and INQAAHE, to ensure that the global QA context is paramount in English and UK assurance activity.

11. Intervene early and rapidly but proportionately when things go wrong.

This should continue as is currently the case, with HEFCE (as core and financial regulator), QAA (as academic quality assurance specialists), UCU (as staff advocate) and both OIA and NUS (as student advocates) working together to identify and resolve issues.

13. Work towards creating a consistent approach to quality assessment for all providers of higher education.

Consistency of approach is less important than consistency of academic standards, and as such this principle appears to work in opposition to principle (5). QA approaches at an institutional level should be adaptable to identified needs amongst a diversity of providers and activity.

[1] https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/450090/BIS-15-440-guidance-for-alternative-higher-education-providers.pdf

[2] http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/The,Metric,Tide/2015_metric_tide.pdf

[3] http://www.hediip.ac.uk/subject_coding/

[4] http://www.hefce.ac.uk/pubs/rereports/Year/2014/infoadvisory/Title,92167,en.html

(if anyone is interested in my responses to the remaining questions, I’d be happy to share. Do leave a comment or send a twitter DM)

First the tide rushes in. Plants a kiss on the shore…

I’m genuinely at a loss to describe how good James Wilsdon’s report of the independent review of the role of metrics in research assessment and management (“The Metric Tide“) is. Something that could so easily have been a clunky and breathless paean to the oversold benefits of big data is nuanced, thoughtful and packed with evidence. Read it. Seriously, take it to the beach this summer. It’s that good.

It also rings true against every aspect of the academic experience that I am aware of – a real rarity in a culture of reporting primarily with an ear on the likely responses of institutional management. Wilsdon and the review team have a genuine appreciation for the work of researchers, and recognise the lack of easy answers in applying ideas like “impact” and “quality” to such a diverse range of activity.

Coverage so far has primarily centred on the implications for research metrics in REF-like assessments (the ever-eloquent David Colquhoun and Mike Taylor are worth a read, and for the infrastructure implications Rachel Bruce at Jisc has done a lovely summary) but towards the end of the report come two chapters with far-reaching implications that are situated implicitly within some of the more radical strands of critique in contemporary universities. Let it be remembered that this is the report that caused none other than the Director of Research at HEFCE to suggest:

What if all UK institutions made a stand against global rankings, and stopped using them for promotional purposes?

(which was unexpected, to say the least).

Chapters 6 (“Management by metrics”) and 7 (“Cultures of counting”) are a very welcome instance of truth being spoken to power concerning the realities of the increasing binary opposition between academic staff and institutional management via the medium of the metric. Foregrounded by Wilsdon’s introductory mention of the tragic and needless death of Stefan Grimm, the report is clear that the use of inappropriate and counter-productive metrics in institutional management should not and cannot continue.

Within this cultural shift [to financialised management techniques], metrics are often positioned as tools that can drive organisational financial performance as part of an institution’s competitiveness. Coupled with greater competition for scarce resources more broadly, this is steering academic institutions and their researchers towards being more market-oriented.

Academics should have a greater control over their own narrative (the report laments the outsourcing of performance management to league tables and other commercially available external metrics), and this narrative should not be shaped by the application of inappropriate metrics. The “bad metrics prize” looks an excellent way to foreground some of the more egregious nonsense.

Fundamentally, the purpose of a higher education institution should not be to maximise its income – it should be to provide a sustainable and safe environment for an academic community of scholars. That’s pretty much straight out of Newman, but in 2015 it feels more like a call to arms against an environment focused on competition for funding.

“A decision made by the numbers (or by explicit rules of some other sort) has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand for impartiality and fairness. Quantification is a way of making decisions without seeming to decide. Objectivity lends authority to officials who have very little of their own.” [T.M. Porter]

With an uncompromisingly honest epigraph, Chapter 7 lays the blame for this state of affairs firmly at the door of poor-quality institutional management. Collini, Docherty and Sayer are cited with tacit approval for perhaps the first time in an official HEFCE report.  Broadly, the report argues:

  • That managers use metrics in ways that are not backed up by what the metric actually measures.
  • That managers use metrics in a way that is heavy-handed, and insensitive to the variety implicit in university research.

Institutional league tables and Journal Impact Factors (JIFs) receive particular criticism as being opaque, inappropriate and statistically invalid. But it is noted that managers use these indicators (the preferred term) in order to absolve themselves from making qualitative decisions that are open to accusations of bias and secrecy.

Many academics are complicit in this practice, arguing either for transparency or from a perceived advantage to themselves over their peers. This wider cultural issue is seen as being outside of the scope of this report, and being only sparsely documented, but this boundary prompts the obvious question: which report will focus on these wider issues? (the Wilsdon report does call for more research into research policy – to me this could and should be extended to a call for urgent research into higher education policy and culture more generally.)

A section on “gaming” metrics, and one on bias against interdisciplinary research, rehearse what is currently widely known about these practices (no mention of Campbell’s Law!) and again call for an expansion of the evidence base. I know that much work under the collective umbrella of SRHE and BERA over the years has touched on these issues, and perhaps both organisations and others[1] need to plunder their archives and ensure that what evidence has been presented can be re-presented in an openly readable form.

It’s clear that the RAE/REF has had an impact: on the type of research conducted, where it is published and how it is built upon. This influence has already been noted and used in a welcome way with the recent requirements on open access. But as well as adding new stipulations, the older ideas about status and quality that underpin the REF (and for that matter, peer assessment) need to be examined and reconsidered.

If it is impossible to stop people producing research in the image of the REF requirements, maybe we need to change the requirements in order that interesting research is produced. But, as is noted, many of the constraining factors are applied at an institutional or departmental level – and it is these multiple nanoREFs that are likely to have the greatest day-to-day impact on the research-active academic. These require local management practice changes, rather than national policy changes, to become less painful and it is perhaps time to consider intervening directly rather than using levers designed to drive up research quality.

The goal of “reducing complexity and cost” within research policy is a commendable one, and I am sure few will be waving the flag for the current labour-intensive system of assurance and assessment: the “gold standard” is a heavy one, and we should investigate lightening the load wherever quality would not be affected. The Wilsdon review argues, cogently, that the trend towards quantitative tools is already having significant adverse effects, and indicates that efficiency may not be the only goal we need to keep in mind. As such, it is a major contribution to the ongoing health of academia and (perhaps) the first mainstream indicator of a wider resistance to poorly applied metrics in all areas of university life. Designers of teaching metrics should take careful note.



[1] For example: Kernohan, D. and Taylor, D.A. (2003) “What Is The Impact Of The RAE?”, New Era in Education 84(2), pp. 56-62 […]

Territorial Pissings

It seems that the look of the summer for HE policy makers is a way of monitoring and assuring teaching quality via a set of data-driven metrics.

First up, HEFCE’s ongoing quality assurance consultation stepped up a notch with one of their old-fashioned “early” consultations on the principles that would underpin a new system of institutional QA.  Spectacularly failing to demonstrate that this would be more efficient or provide better results than the current model [read the KPMG report for maximum LOLs], and monstered by everyone from Wonkhe to the Russell Group, HEFCE took to their own blog to defend the proposals a mere 48 hours later.

A day later, it was the turn of BIS – with a speech from new HE bug Johnson Minor at Universities UK. He offered what the tabloid end of the HE press would call a “tantalising glimpse” of the future Teaching Excellence Framework (manifesto bargaining fodder if ever I saw it), drawing a hurried HEFCE response clarifying that their tremendously important two-day-old QA proposals were different from, though linked to, the emergent TEF.

Learning Gain is the wild card in this mix. If it worked properly it would be the single biggest breakthrough in education research of the last 100 years. It won’t work properly, of course – but it will burden students and staff with meaningless “work” that exists for no other reason than to generate metrics that will punish them.

None of this, of course, has any appreciable impact on actual students, but the idea of them being “the heart of the system” underpins everything. Remember, undergraduates, you may never see your tutor as she’s preparing another data return for one of these baskets, but it is all for your benefit. Somehow.

For all the differentiation, it is difficult to slip an incomplete HESES return between the two sets of proposals as they currently stand. Deciding to look at a basket of output measures as a way of assuring and/or enhancing quality is the epitome of an idea that you have when you’ve no idea – the artistry and ideology come in with selecting and weighting the raw numbers (as Iain Duncan Smith so ably demonstrated today).
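To make that point about selecting and weighting concrete, here’s a minimal sketch (Python, with entirely invented institutions and scores – nothing here comes from any real dataset) showing how the same raw metrics produce different “quality” rankings depending on the weights a policymaker chooses.

```python
# Toy illustration: the same raw metrics, two different weightings, two
# different "winners". All institutions and scores are invented.

metrics = {
    "Poppleton": {"nss": 0.90, "retention": 0.70, "employment": 0.60},
    "Uni of Life": {"nss": 0.65, "retention": 0.85, "employment": 0.88},
}

def composite(scores, weights):
    """Weighted sum of normalised (0-1) metric scores."""
    return sum(scores[m] * w for m, w in weights.items())

weightings = {
    "student-voice heavy": {"nss": 0.6, "retention": 0.2, "employment": 0.2},
    "outcomes heavy": {"nss": 0.2, "retention": 0.3, "employment": 0.5},
}

for label, weights in weightings.items():
    ranking = sorted(metrics, key=lambda name: composite(metrics[name], weights), reverse=True)
    print(f"{label}: {' > '.join(ranking)}")
```

The data never changes; only the weights do – and that is where the artistry and the ideology live.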

Future generations of policy makers are limited to tweaking the balance to return the answers that they expect and require, whilst the sector itself focuses on gameplaying rather than experimentation in an innumerate “improve-at-all-costs” world. And the students? Well, the NSS results are going up. Aren’t they?

William Davies, in “The Happiness Industry”, writes about the financialisation of human interaction – the replacement of human voices with a suite of metrics that can be mapped to known responses. This is basically akin to Phillips’s Economic Computer, with a flawed model of cause and effect wielded for particular policy goals, and controlling the lives of millions. The advent of social media allows us all to have a greater voice in policy making – at precisely the time that policy making as we know it is disappearing.

Both the TEF and the new QA model advance the “dashboard model” of policy analysis, and a managerial rather than leaderly approach to institutional management – neither exposes the important assumptions that underpin the measurements. Sure, it’s fun to watch the emerging turf war between BIS and HEFCE – and it is fun to read the guarded snark of the Russell Group – but we’re really seeing poor-quality policymaking disguised by a whiff of big data.

 

So, how big a deal is unfunded research in UK HE?

Those of you who’ve worked with me will know that I have the habit of asking “interesting” questions. Nearly always at exactly the wrong time, and in a way that entirely derails what we are meant to be focusing on. This is one of them. [DISCLAIMER: Shoddy, beer-fuelled data analysis follows. BIG CAVEAT: Don’t phone, it’s just for fun.]

Research policy making is often flawed because we don’t take account of the vast number of academics who research without external funding, often without institutional knowledge and occasionally in direct contradiction to their contract of employment. I keep saying this in conversations about research policy, and eventually people ask how big a deal this really is. No-one knows.

There are huge implications, of course. Everything from article processing costs, to data storage, to library use and lab consumables is based on a calculation of how much research is being done in a given department/institution/system of HE and what the cost implications are likely to be. It worries me that at a national level, the assumptions are all too often based on directly (grant) funded research (and the connected idea that you could just bash any additional costs onto the project budget or gouge out a load more overhead costs).

Like all the best questions, this one is a bit of a rabbit hole.

  • There are differences between institutions in the way they handle cost assumptions, between discipline areas, and in the amount of money that may be available as grants (pause for Education researchers to chuckle ruefully).
  • There’s QR, and the varying ways that it may or may not filter down into actual budgets for actual researchers (and the fact that most subject areas in most institutions barely get enough for a round of drinks)
  • There’s different academic contract types, and the assumptions that may be based around them.
  • There are academics on teaching-only contracts researching in their own time using their own money. There are people who have left the sector altogether (or were never in it in the first place) who are still producing amazing research.
  • And there are any number of organisations, offering any number of grants for any amount of money, with no single way of identifying all of them.

There’s basically a whole PhD in getting a reliable answer to this question (a response that many of my questions elicit… maybe I should do one of those one day…). I could use something as sweet as RIOXX and open access publications, or data coming out of something like academia.edu (hmmm…) or ORCID profiles (yay!). Each of those has strengths and drawbacks, but you could mash it up with some survey data and stuff from HESA and get an interesting answer. But for the moment, how about a quick’n’dirty approximation?
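Before the quick’n’dirty version, for what it’s worth, here’s roughly what the ORCID route might look like – a sketch only, and one that assumes the ORCID public API’s v3.0 “fundings” endpoint behaves as documented. Coverage is patchy (most academics record nothing there), so the absence of funding records on a profile is weak evidence at best.

```python
# Sketch: does a given ORCID profile list any funding items?
# Assumes the ORCID public API v3.0 /fundings endpoint (no auth needed for
# public data). Many researchers never populate this section, so an empty
# response tells you little - treat this as illustrative, not definitive.
import requests

def has_recorded_funding(orcid_id: str) -> bool:
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/fundings",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return len(resp.json().get("group", [])) > 0

# Example, using the long-standing test record from the ORCID documentation:
print(has_recorded_funding("0000-0002-1825-0097"))
```

Anyway – back to the quick’n’dirty approximation.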

In 2013 the Centre for Business Research at Cambridge University, along with the UK Innovation Research Centre produced a report for BIS entitled “The Dual Funding Structure for Research in the UK: Research Council and Funding Council Allocation Methods and the Pathways to Impact of UK Academics”. What a title!

It was a strange report, attempting to analyse the link between research performance (as measured by the beloved RAE/REF), research funding methods (basically grant or no grant) and research motivations. It drew on an earlier, richer 2010 survey supporting a report into Knowledge Exchange – and I was largely drawn to it because it has a very large, very representative sample (balanced for gender, subject, seniority… not for contract type – too few teaching-only – but you can’t have everything) of 22,170 academics employed in UK HE in the summer of 2009.

It’s a really nice data set but – if you had only read the earlier report – you wouldn’t know that it could shed any light on our question. The 2013 report re-interrogated the same data, but mashed in details from the Research Councils of any grant that the surveyed academics may have held at the time of the survey.

In the summer of 2009 there were 179,040 academics in the UK (giving us a healthy sample size of a little over 12%). In that year the research councils were the biggest providers of UK research grants, offering around £1.3bn worth of grants.

The 2013 report [section E3] notes that QR is closely correlated both with research council grants and with other grants.

It is also worth noting the useful summary of research funding volume by source in section E2. It draws on HESA data, which is also used to generate the figures and charts in one of my favourite HEFCE publications, the “Guide to Higher Education”. Here’s the 2009 version, which has figures for each strand of research funding for that year on page 29.


Way down on page 89 of the 2013 CBR report, we see the first instance of our sample mapped to grant activity: of the 22,170 surveyed academics, 18,972 did not have a research council grant (86%). We can draw this out to a broad subject area level using the data from the same question (a quick check of the arithmetic follows the list):

  • Arts and Humanities: of 3,674 academics in this area 3,234 (88%) did not have a research council grant.
  • Sciences: of 11,270 academics in this area 9,120 (81%) did not have a research council grant.
  • Social Sciences: of 7,226 academics in this area 6,583 (91%) did not have a research council grant.
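Those percentages are just the ratios of the counts as quoted – here’s a quick sanity check, using the figures exactly as above:

```python
# Sanity check of the percentages quoted above, using the counts as reported
# on p89 of the 2013 CBR report. (The subject-level "no grant" counts sum to
# 18,937, a whisker under the overall 18,972 - figures left exactly as quoted.)
counts = {
    "Arts and Humanities": (3_674, 3_234),
    "Sciences": (11_270, 9_120),
    "Social Sciences": (7_226, 6_583),
    "All surveyed": (22_170, 18_972),
}

for area, (total, without_rc_grant) in counts.items():
    print(f"{area}: {100 * without_rc_grant / total:.0f}% without a research council grant")
# -> 88%, 81%, 91% and 86% respectively
```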

Now we know that research council (RC) grants are very strongly correlated with QR income, and that QR income is strongly correlated with other sources of income at an institutional and departmental level. We’ve not, sadly, got strong enough data to do this at an individual level to get an idea of the prevalence of non-research council grants.

But we can look at the proportionality of each form of grant income, based on the HEFCE/HESA data alluded to earlier in the report. If we ignore QR for the moment, RC is the single largest source of UK research funding.

Comparatively:

  • Charities made grants to the value of 61% of RC grants
  • Central & Local Government/Public Sector made grants to the value of 47% of RC grants
  • UK industry made grants to the value of 18% of RC grants
  • Other sources of grants were to the value of 44% of RC grants. (all of this in 2009)

So, as all of that is about 169% of the value of RC grants that year (or £2.3bn if you’d rather), our best case assumption is that a completely different set of academics got grants in the same kind of numbers as that amount of RC funding (worst case is, of course, that the same academics hoovered up these grants too – which is actually more likely given how concentrated research funding tends to be).
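For the record, the same sum using the rounded shares above comes out at about 170% of the research council total, or around £2.2bn – close enough to the 169% and £2.3bn quoted, which presumably reflect the unrounded HESA figures:

```python
# Rough reconstruction of the "everything else vs research councils" sum,
# using the rounded shares listed above.
rc_grants = 1.3e9  # research council grants, 2009 (£)

other_vs_rc = {
    "Charities": 0.61,
    "Government / public sector": 0.47,
    "UK industry": 0.18,
    "Other sources": 0.44,
}

total_share = sum(other_vs_rc.values())
print(f"Non-RC grants ≈ {total_share:.0%} of RC grants")       # ≈ 170%
print(f"≈ £{rc_grants * total_share / 1e9:.1f}bn in total")    # ≈ £2.2bn
```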

In a thrilling parallel to my policy making career till about 2012, I’m now going to allocate a proportion of this £2.3bn of grants to our sample of academics, assuming that they are similar in size (unlikely) and similarly distributed across broad subject areas (very, very, unlikely).

Therefore:

  • In arts and humanities, I allocated a grant to 1184 more academics. So 68% are still unfunded.
  • In sciences, 5784 extra academics got a grant, leaving 49% with no grant at all.
  • In social sciences, I awarded 1730 other academics a grant, so now only 76% are still unfunded.
  • In total 61% of academics are unfunded in our best case scenario.

Remember we agreed that we felt that things were liable to tend towards the worst case (where the same academics got most of these other grants). In that world:

  • In arts and humanities, I allocated a grant to 304 more academics. So 80% are still unfunded.
  • In sciences, 3634 extra academics got a grant, leaving 68% with no grant at all.
  • In social sciences, I awarded 1087 other academics a grant, so now only 85% are still unfunded.
  • In total 75% of academics are unfunded in our worst case scenario.

So, it appears (even given the insane approximations and dubious rounding) I can be certain that significantly more than half of all academics are not in receipt of external funding for their research. The true figure is likely to lie somewhere between 61% and 75% of academics. Sobering stuff.
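If you want to play with the assumptions yourself, here’s a parameterised version of the same back-of-the-envelope sum. The “overlap” knob (the share of non-RC grant money assumed to go to people who already hold a research council grant) and the one-grant-per-academic allocation rule are my own crude assumptions, so treat it as a way of bracketing the answer rather than a reproduction of the exact figures above.

```python
# Parameterised back-of-envelope estimate of the unfunded share.
# "overlap" = assumed share of non-RC grant value going to academics who
# already hold a research council grant; the allocation rule (one grant per
# academic, in proportion to grant value) is deliberately crude.

def unfunded_share(total, without_rc, non_rc_vs_rc=1.7, overlap=0.0):
    rc_holders = total - without_rc
    extra_grants = rc_holders * non_rc_vs_rc           # non-RC grants, in "academic" units
    newly_funded = min(without_rc, extra_grants * (1 - overlap))
    return (without_rc - newly_funded) / total

total, without_rc = 22_170, 18_972   # all surveyed academics / those without an RC grant
for overlap in (0.0, 0.5, 0.9):
    print(f"overlap {overlap:.0%}: {unfunded_share(total, without_rc, overlap=overlap):.0%} unfunded")
```

With the overlap set to zero this lands at around 61%, in line with the best case above; push it towards 60 per cent or so of non-RC money going to existing grant-holders and you get into the region of the 75% worst case.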

A note on Canadian Trademark law as it applies to the U of Guelph “OpenED” mark

Clint Lalonde has been investigating a strange state of affairs in which it appears the University of Guelph has been restricting the use of the word “OpenED” via a trademark. Brian Lamb has been amplifying this.

Here’s the Canadian IP office page for the OpenED mark (#0922139), which cites the University of Guelph as the applicant.

You’ll see the mark has been advertised, but not registered. In Canadian IP law, the advertisement stage is a chance for “opposition” to be expressed. This is normally a period measured in months, unless an opposition has been registered. In this case the advertisement happened in December 2013.

An institutional legal team could probably order further details on the case.

In the relevant issue of the Trademark Journal (where the mark has to be advertised), the following text appears under “OpenED”:

“The Registrar hereby gives public notice under subparagraph 9(1)(n)(ii) of the Trade-marks Act, of the adoption and use by University of Guelph of the badge, crest, emblem or mark shown above.”

Subparagraph 9(1)(n)(ii) of the Act reads:

“No person shall adopt in connection with a business, as a trade-mark or otherwise, any mark consisting of, or so nearly resembling as to be likely to be mistaken for […] any badge, crest, emblem or mark […] of any university […] in respect of which the Registrar has, at the request of Her Majesty or of the university or public authority, as the case may be, given public notice of its adoption and use”

So what we are really dealing with here is not a trademark but a public notice of the adoption and use of the “OpenED” mark – such that no business can register it as a trademark. Or in Canadian IP legal jargon, a “prohibited mark”.

Here’s a useful article on Prohibited Marks in Canadian IP law, and here is another.

A prohibited mark exists indefinitely and restricts the use of the mark without the permission of the public body concerned (in this case Guelph). The only redress would generally be a judicial review, which is expensive and tricky.

For me, judicial comments around Kirkbi AG v Ritvik Holdings Inc would appear to apply in this case (though the case refers to trademarks rather than prohibited marks):

“the [Trademarks] Act clearly recognizes that it does not protect the utilitarian features of a distinguishing guise. In this manner, it acknowledges the existence and relevance of a doctrine of long standing in the law of trade-marks. This doctrine recognizes that trade-marks law is not intended to prevent the competitive use of utilitarian features of products, but that it fulfils a source-distinguishing function. This doctrine of functionality goes to the essence of what is a trade-mark.”

As in the cited case (the “Lego” case) it would appear that “OpenED” is a common contraction of “open education” which (as with the design of the studs on Lego bricks) is a utilitarian feature of a product, allowing for interoperability, rather than a source-distinguishing function. Nothing in the “OpenED” prohibited mark refers to a distinct and source-specific feature of open education materials from Guelph.

I’m not yet sure how one could contest a prohibited mark other than via a judicial review, but I’ve contacted the Canadian IP office to find out. One means may be to apply for a trademark that infringes the prohibited mark and let the review process sort it out – and, if awarded the mark, immediately publish an undertaking not to use it to restrict the actions of others.

“No-one puts flowers on a flower’s grave”

(or: The Left is broken, someone should do something)


No-one talks about work.

Or, more accurately, no-one talks about the absence of work.

For more than a century, the role of the left has been to protect the worker against his employer. Workplace rights, workplace safety, universal healthcare and education, the minimum wage, universal pensions and out-of-work benefits.

All these things exist because employers would not provide them if left to their own devices. The state has historically acted as a counterbalance to the interests of employers on behalf of the workers. Employers want to get the maximum value out of workers, the state intervenes in order that workers are protected.

It turned out that, for a long time, improving the conditions of workers helped increase profit (though, as Marx wryly notes in Capital, every positive change for workers was bitterly fought against by business owners).

Then came the productivity crisis.

Productivity is defined by economists as the ratio of units of output per unit of input. It is the value added by a production process – if you took wages and raw material costs and structural costs and suchlike, how much greater in worth is what you end up with?
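As a toy illustration of that ratio (all figures invented, and using the standard framing in which wages are paid out of the value added rather than netted off it):

```python
# Toy labour-productivity calculation. All figures invented.
output_value = 1_000_000        # value of what gets produced in a year (£)
intermediate_inputs = 400_000   # materials, energy, bought-in services (£)
hours_worked = 40_000

value_added = output_value - intermediate_inputs
print(f"Labour productivity ≈ £{value_added / hours_worked:.0f} of value added per hour")
# -> £15 per hour; productivity growth means this number rising year on year
```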

In the early 21st century the gradual year-on-year productivity increase faltered and stopped. This builds on the productivity paradox that accompanied the first wave of computerisation. Not only did improving worker conditions not improve productivity, not only did technology not improve productivity, nothing else appeared to improve productivity either.

Serious people like the Bank of England, NIESR and the IMF are “puzzled” by the way that productivity has stayed flat throughout the sub-prime loan crisis, across multiple national economies and employment sectors.

In a parallel trend, we see the increasing potential for automation in the workplace and beyond. Machines can lift more, crunch more and iterate more than people can – they are the likely source of the next wave of productivity. So why, in the medium-to-long term, pay people?

This is what a convincing offer from a left-wing party should be based on. The technology I have hinted at already exists and is already beginning to be used in a whole range of sectors. Fundamentally, the current growth in employment is in areas where people are (still) cheaper to use than machines. The service sector will likely last longer than most, but this is not a long-term bet I’d want to be making.

The best possible future involves an alternative to waged work – most likely to be something along the lines of the universal citizen’s income. The alternative is pretty much starvation as a route to depopulation.

Left-wing parties are more likely to have the ideological scaffold to support a managed retreat from waged work. The Green Party in the UK already has this as a worked-out policy. But any politician talking about growth, jobs or state support without mentioning this oncoming storm is not to be trusted.

And educators? Educators are currently teaching children who will live as adults under this system (or will die off, if you’d rather). What are they teaching them, and what value will this be after the end of waged work?

These are the discussions I’d like to be having.

[Update: some other useful resources and commentary coming in via twitter….

Tony Hirst pointed me to “Humans Need Not Apply” – a short documentary produced by The New Aesthetic.

Mike Caulfield has written extensively on this topic, and linked to a very interesting graph in a post nearly a year ago. (I don’t take it personally – we’re *all* a year behind Mike Caulfield…)

Graham Attwell, writing in the superb Pontydysgu, suggests that an increase in automation will lead to a need for higher skills in the workforce.

Anya Kamenetz linked to Ben Schiller’s marvellously headlined survey of advances in job automation: “Yes, robots really are going to take your job and end the american dream“]

So, who won?

Nationalism won.

Whether of a progressive or a nostalgic hue, the arguments of local exceptionalism have won out over the cold logic of globalisation.

It would be a mistake – and one made both by the mainstream media and in the wounded anger of the old left – to see Conservatives as being all the same. Despite the millions poured into the campaign as corporate donations, despite the rhetoric around “the party of the rich”, the party sinks or thrives on its local associations.

Lynton Crosby’s campaign played into the preoccupations of the old and the frightened who make up this dwindling band – above all economic stability and sovereignty. This was a counter-factual set of arguments: even George Osborne had given up on the actuality of austerity by 2012, and David Cameron had committed himself to making the argument to stay in the European Union. All of the major campaign announcements, from the rail fare caps and the mild but unmistakable anti-corporate rhetoric to the extra money for the NHS, addressed the concerns of this party member demographic.

New Conservative candidates were, for the most part, drawn from a pool of local activists – the list was notable for an absence of SpADs and party researchers. The election day story a few of us spotted – on the Tory campaign database issues – was less of a difficulty than we expected, and may even have been a positive factor. This was about local knowledge, often personal knowledge, of a constituency. The lack of central campaign control via VoteSource and centrally imposed candidates, far from being a disaster, may well have been key.

Most coverage so far (it’s a little after lunch on Friday 8th) has portrayed the SNP landslide in Scotland as an entirely separate trend – not so. Scottish nationalism is also a movement built on a network of local activists. After the recent and intensely fought referendum campaign, these are activists with a close understanding of their constituencies. The difference is the direction of the trend. The local Conservative party membership is shrinking and will continue to shrink. Though this is a minor triumph of old-fashioned local Conservatism, it may well be the last – and the major triumph of the SNP will likely be much longer lived.

So two of the three major parties in Westminster are there on the strength of MPs with particularly strong links, and strong accountability, to their local party. Far, far more difficult to control – especially when it comes to controversial issues that play against local party concerns – than the phalanx of machine politicians all parties have previously been criticised for.

Ed Miliband, to be fair to him, has invested a great deal of time and effort into growing and sustaining the party base. Labour has a growing membership and a base of young activists which – whilst not as spectacular as the growth of the SNP and the Greens – is encouraging. He’s spoken about localism, and here (as on many things) his instinct has been right. This time the national party didn’t trust local activists to campaign without central control. Maybe next time will be different – but don’t the left always say that?

A very quick prediction of the work of the second Cameron administration.

  • The contradictions inherent within the Conservative party, and even within the manifesto itself, will become increasingly apparent.
  • Cameron’s own unpopularity with local party associations will be damaging. His “one nation” philosophy will be against the mood of the party.
  • It will be difficult for him to pass legislation and make arguments that are pro-corporate and supra-national. The EU referendum is an obvious example. Rebellions will be even more common than in the last government – and actual or threatened defections to UKIP may be a factor in this.

It is too early to think about the future of Labour and the left – suffice to say I feel Ed Miliband was wrong to resign at this stage as I think he would have been well placed to do the difficult job of opposing the government effectively whilst leading the required internal debate. I’m sure I’ll write more about this in the weeks to come.

I would, however, predict another election earlier than 2020.

On (Higher) Education Research

It was round about the time that Hack Education Weekly News started to feel like it was repeating itself. It was when conferences recurred, and keynotes were duplicates. How could we move on, how could we build…

Or perhaps it is best to see it in the “disruptive” movement. That one press release. Not the “education is broken” one, the other one. The one with the numbers, and the projections. Where we sneer a little (maybe), and say that it isn’t proper research, and that it is unethical.

Proper ethical research in online education is hard to find, because it is increasingly hard to do. If you are reading this you are probably already painfully aware that there is precious little research funding available for education, less for higher education research, and practically none for online higher education research.

A lot of great research is still being done in this area, but not as anyone’s day job. It’s evenings and weekends for those working in academia… and (increasingly) for those who no longer (or never have) worked in an institution.

Working without a grant, or without expectations, can be liberating (though only if you don’t think too hard about your unpaid academic labour). Without the need to report regularly, or to demonstrate impact to a schedule.

Without the need for ethical approval.

Sadly, there are some things in research practice that do need what only access to university systems can provide. Sometimes you need to prove you are part of an institution in order to get a grant, or speak at a conference. Sometimes you want access to an academic library, or to institutional data. Sometimes you do want to make sure your research meets ethical guidelines.

There have always been independent researchers, in many fields. Indeed the early modern era was awash with them. But it was also awash with terrible, unethical research practices. Part of the process of localising most research within an institutional structure was to provide a solid ethical and methodological basis for ongoing research – providing for better research, and kinder research.

Education – in particular – has suffered from a certain dialogue with ethical and methodological parallels drawn by those with a scientific mindset. Is “experimenting” on students and learners ethical? Can it even tell us anything about learning that is generalisable? Serious researchers can answer these sophomoric questions, but there are so few serious researchers left.

And the game is almost up.

Enterprises as diverse as HEFCE and Pearson are designing new forms of what is, effectively, education research. Ignoring the old verities in the simple pursuit of a comparable data point.

The word “crisis” is overused. But in this case it is, I think, justified.

So what now?

I see a space for what I’m going to call an “open ethics”. A peer ethical (and methodological, because so many ethical issues in research are just clumsy and unfamiliar methodology) review panel, conducted transparently and openly.

It should be possible to draw on educational research expertise. It may even be possible to pay these experts, by offering a similar service to the wilder air-quotes “research” of our friends in Silicon Valley and Central London in order to subsidise the independent and semi-detached academics who are pushing research forward.

No one benefits from the avalanche (haha!) of poor quality and dubiously ethical research in education today. The occasional gems we find just highlight the greyness of the slurry, and it is the latter that dominates the op-ed pages and thinkpieces.

I just wanted to put this idea out there to see what happened next. And to see where (if anywhere) there was support.

UK General Election 2015 for the confused and apathetic

When it comes to general elections, people tend to turn to the biggest political geek they know and ask them to explain it. For a lot of people I know, this means that they ask me. So this is a basic post on how to (or how I try to) read the election, how it works and what might happen. Though I have my own political views and preferences, I’ve left them out of this post.

If you’ve read this far, REGISTER TO VOTE. That’s a link to the official site, you have to do it before the 20th April. Even if you were registered to vote at the same address last time round, you still need to register, the process has changed. You can do it online, it takes five minutes.

Done that? No? Go back and do it, then read on. I’m serious, I’m banning you from the rest of this post unless you are registered to vote.

Part 1 – general elections for the terrified

As a newly registered UK voter, your next question is probably “who should I vote for?”. Forget everything you thought you knew about UK political parties and do the Vote for Policies quiz. This is both faster than reading all the party manifestos (which, let’s face it, you weren’t going to do) and more helpful for you in deciding who to vote for.

Politics is not football. You do not have to vote for the same people as you always have, or that your friends and family vote for. If the “Vote for policies” quiz outcome feels very odd to you, try also the Political Compass test which will tell you where you sit on the political spectrum and which parties are closer to the way you feel about the world.

At this stage, you probably have a party in mind. It would be a good idea to spend some time reading about them, possibly even reading their manifesto. Here’s a list of links to PDF versions of the manifestos of the biggest parties.

If you’ve chosen another party, you will need to check whether they are running in your constituency, because you can’t vote for them otherwise. (Or you might be in Northern Ireland, which has different parties that I know next to nothing about!) No matter what anyone else tells you, it is fine to vote for a smaller party if they care about the same things you do.

The manifesto is, effectively, a promise that each party makes concerning what they will do if they are the government. Without exception, they are long, tedious documents and very few people read any of them, so feel free to skim or just read the parts that interest you. Manifestos are not legally binding contracts, especially when it comes to a coalition government (as we will see later in this post).

But politics in the UK is, fundamentally, local. You’ve most likely had a load of leaflets through your door already, and if you are anything like me, you’ve read none of them. Despite the general election being a national (UK-wide) election, you are electing someone to stand up for the bit of the UK that you live in (called your constituency).

To find details of your constituency, look for the place you live on Wikipedia, and the constituency page is linked to from that page. You need the “UK Parliament” constituency – usually in a box on the right-hand side of Wikipedia pages. The constituency page will tell you a little bit about the constituency, who was elected there the last few times, and who is standing for election there this year. Another useful resource is electionleaflets.org, which indexes leaflets used by local candidates and can be a useful way of finding out where they stand on particular local issues without going through the bin.

Having done all this, all you have to do is stroll up to your local polling station and complete a voting slip. You give your name and address, they give you a slip, you fill it in privately, put it in the sealed box and that’s you done. The location of your local polling station will be on your voting card, or if you’ve lost it – don’t worry if you have, you don’t need it to vote – you can call your local election office to find out where your polling station is. It will be near your home, rather than your place of work, and will be open from 7am to 10pm.

After 10pm, all the votes are taken to a big hall somewhere in your constituency, and counted. You are allowed to go and watch this, but there’s no reason for you to do so unless you like looking at tired people shuffling paper. Some time in the early hours of the morning a result will be announced (it’ll be live on TV for those who care), and the person with the most votes is your Member of Parliament.

Part 2 – but who is going to win?

No one.

Seriously, no one. All of the indicators that politics nerds like me care about (and that I’ll tell you about later) suggest that no one party will win enough constituencies to have more seats than all the other parties put together, so no party will be able to form a majority government.

Majority governments have good points and bad points – they’re what we’ve had in the UK for most of our history, and they mean that the manifesto of the winning party is likely to be (mostly) implemented. But a majority government usually means that no other party gets a real say in how the country is governed.

There are three other kinds of government, and it is likely we will have one of these (or some combination of these) resulting from the 2015 election.

A coalition government is what we have now, where two or more parties agree on enough issues that they can form a government together. These are largely stable, and are common in Europe and elsewhere, but have been rare in UK parliamentary history.

A minority government is when one party has to convince at least some people from some of the other parties to vote for their ideas on each and every thing they try to do. It is at huge risk from a “confidence vote”, which is where another party suggests that Parliament has no confidence that the government can safely govern.

A confidence and supply arrangement is when one party agrees to support another in terms of supply (voting for the budget) and confidence (voting with the other party if there is a “confidence” vote). Other than that, it is the same as a minority government, just a little bit more stable.

So having found a party you support, and read about the promises that it has made in their manifesto, there is literally no need for you to pay any attention to the rest of the campaign (seriously! although some people like that kind of thing, you are not a bad person for being bored with it) EXCEPT FOR paying attention to what parties are saying about working together. If there is a party you don’t like (for whatever reason) and a party you do like says it is willing to work with them in one of the three ways above, you might want to rethink your vote. Or you might not, again – up to you.

Here is what I can tell, so far, about what might happen after May 7th.

The polls suggest that we will have a hung parliament, where no one party wins enough seats to form a majority. They suggest that the two largest parties will be Labour and the Conservatives in terms of share of the national vote – which, because of quirks in the UK electoral system, means that Labour will have slightly more seats than the Conservatives, but not enough more to form a government.

The polls predict that the next biggest parties will be (in order of the number of seats they will hold) the Scottish National Party (SNP), UKIP, the Liberal Democrats (LDs), Plaid Cymru (PC) and the Green Party.

[if you are interested in polls and polling, two good places to start are Electoral Calculus and the UK Polling Report. Polling is a far from exact science, and it is not statistically safe to extrapolate from a single poll or even a summary of polls to an exact result. Some people think that the betting markets (the sum of all the bets that people put on the election) are also a good method of prediction, if you are interested in this start at Political Betting]
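If you’re curious how those poll-to-seat projections work, the classic method is “uniform national swing”: take each party’s change in national vote share since the last election, apply it to every constituency, and see who comes out on top. Here’s a minimal sketch with invented constituencies (real projections, Electoral Calculus included, use rather more sophisticated variants of this idea):

```python
# Minimal "uniform national swing" seat projection with invented data.
# Real models adjust for region, incumbency, tactical voting and so on;
# this just shifts every constituency by the national change in vote share.

last_national = {"Lab": 0.29, "Con": 0.36, "LD": 0.23, "Other": 0.12}  # roughly 2010
poll_national = {"Lab": 0.34, "Con": 0.34, "LD": 0.08, "Other": 0.24}  # an invented poll

constituencies = {
    "Poppleton North": {"Lab": 0.40, "Con": 0.35, "LD": 0.20, "Other": 0.05},
    "Poppleton South": {"Lab": 0.30, "Con": 0.42, "LD": 0.22, "Other": 0.06},
}

swing = {party: poll_national[party] - last_national[party] for party in last_national}

for seat, shares in constituencies.items():
    projected = {party: share + swing[party] for party, share in shares.items()}
    winner = max(projected, key=projected.get)
    print(f"{seat}: projected winner {winner}")
```

Small national movements can flip a lot of seats at once, which is one reason seat projections jump around far more than the headline vote shares do.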

We also know what each party has said about working with other parties.

Neither Labour nor the Conservatives have said that they want to work with any other parties, both are still hoping to win an overall majority. Most of the calculations that political geeks are making are based around the practicalities of forming a majority, and the expressed preferences of the smaller parties.

The Liberal Democrats have said that they will work with either Labour or the Conservatives in a coalition. They reckon they can temper what they see as the excesses of both parties. However, the LDs are likely to lose a lot of seats this year and are unlikely to win enough for a coalition with either of the main parties to command a majority. The LDs have also ruled out working with UKIP in any way, but have not expressed a preference regarding the SNP, PC, or the Greens.

The SNP, Plaid Cymru and the Greens have all indicated that they would support a Labour-led government on an issue-by-issue basis which may (I think) extend as far as a confidence and supply agreement. They have all ruled out working with a Conservative-led government and/or with UKIP but I’m not aware that they have ruled out working with the LDs. This is the most significant grouping, as Labour working with the SNP would work out as a parliamentary majority. Plaid Cymru and the Greens are both likely to have only a very small number of seats in the new parliament, but would still be keen to be involved.

UKIP have ruled out working with Labour in any way. Their stance towards the Conservatives keeps changing, but I could see them at least supporting a Conservative-led government on an issue-by-issue basis. But it is unlikely (on current polling) that UKIP will win a large enough number of seats to make a Conservative-led government possible.

Based on the above and on current polling, it is most likely that Labour will lead the next government with the support of the SNP and others (which could include any of the other main parties with the exception of the Conservatives and UKIP). A Conservative-led government supported by the LDs, or a Labour-led government supported by the LDs are the only other plausible outcomes (again, based on current polling). The Guardian Poll Projection is, for me, the best visual way of understanding this.

So the post-election government will involve at least two parties being able to implement at least some of their manifesto. Precisely what form a new government will take will be hammered out largely behind closed doors between the morning of the 8th May and the morning of the 18th May (when Parliament will formally re-open and the business of government will start again.)

Finally, 99% of everything you will read about the election will be biased towards one party or another. Take everything with a huge pinch of salt, do research yourself on issues that interest you (fullfact.org is a great starting place). Above all, vote out of hope and not fear, and vote for what you believe in.