Has the academic spring sprung?
Posted by Thomas Butts, on 23 April 2012
Two weeks ago the Guardian newspaper, the safe port of call for most left-leaning liberal academics in the UK, devoted its entire front page to the rise of open access publishing in what it called ‘The Academic Spring’. For those of us working at the coalface, whilst this exposure was and is entirely welcome, it feels a little premature. Can we really compare the open access movement to the Arab Spring? And what would constitute an ‘academic’ spring anyway? Much of the article’s emphasis was on the move by the Wellcome Trust to jump on the ‘academic spring’ bandwagon ostensibly begun by the Fields Medal-winning Tim Gowers, the Rouse Ball Professor of Mathematics at Cambridge.
Gowers delivered a blog post on 21st January protesting at the exorbitant practices of the Dutch publisher Elsevier, not least of which are the exceedingly high subscription rates they charge universities. In essence, the argument runs as follows: public money funds scientific research by academics, that research is submitted to learned journals for free and peer-reviewed by academics for those same journals for free, and then the same research is sold back to academics in the form of huge subscription charges, ultimately paid with public money. This situation is not just unsustainable but immoral. It is an argument that has received much support both within and outside the scientific community, and it has been at least partially responsible for Wellcome’s announcement that all research funded by them must be freely available six months after publication. The thinking goes that this will encourage academics to publish their work in open access journals and platforms, and it just so happens that Wellcome’s own new open access journal, eLife, is about to start accepting submissions.
Please don’t misunderstand me; there is much to be said for open access publishing and for large funding bodies throwing their weight behind it. Likewise, the movement that Prof. Gowers has spawned to reform the business models of huge publishing companies such as Elsevier (almost 10,000 academics have signed up to www.thecostofknowledge.com, the advocacy site taking on Elsevier) is without question in the long-term interests of science and the academic enterprise.
The Arab Spring, though, was (and hopefully is) a movement that has ousted (and hopefully will oust) repressive military dictatorships across the Middle East. To couch the debate over publication business models in the scientific world in the language of this outpouring of popular will seems to me a bit misplaced. Such as it is, the academic spring is not a movement directed at individual governments but at international business practices – in that sense it has more in common with the Occupy movement than the Arab Spring. It is not an undirected and unpredictable public protest movement, but a quiet and deliberate articulation of objection to a single company, in adherence to a well thought through and principled position. It is, then, certainly academic. I’m just not sure it has sprung yet.
All the emphasis on the type of publishing market that would best serve science has, in my view, distracted from a much more fundamental issue that undermines the scientific enterprise in the 21st century: the existence (or rather the perception of the existence) of a market in ideas. The idea that the fruits of research are quantifiable pervades current thinking. They are not. In a sense this perception parallels (at least in the UK) the move by successive governments to treat education more generally as a market, in which consumers (families) ought to have a choice between products (schools and universities). In terms of science, governments, funding bodies and universities, in that order, are responsible for this. The notion that it is possible to define the ‘outcome’ of scientific research within a short period of time has pervaded recent thinking about the distribution of scientific money, particularly in the UK but also, I think, everywhere else. Indeed, the idea that measuring such outcomes or their impact (I hate that word!) is an excellent way to judge the quality of an educational institution is almost unquestioningly accepted, it seems, by those in power. I would argue that measuring the outcome of research is not only inappropriate: it is impossible.
And this is where we come back to the business model of scientific publication. Basing the funding of the scientific enterprise, and of the people who carry it out, on publication outcomes is as short-sighted as it is prevalent. The rise of open access publishing may help, but the real problem is that there is no alternative to judging people by the cachet of their publications. In a world where the amount of scientific literature continues to spiral upwards, and where financial pressures mount on governments, both in allocating their science budgets and in assessing how to do so, the appeal of using statistics and metrics to short-cut good judgement is perhaps inevitable. Likewise, the insistence of funding bodies on concrete, direct consequences of the research they fund, demonstrable to their political paymasters, is understandable. Finally, the duty of universities to play the game and jump through whatever hoops are necessary in order to maximise their income from both government and competitive research grants is self-evident. The problem with all of these things is that they, and the system they constitute, rest upon a fundamental philosophical flaw: the belief that it is possible to rank scientific research. There is no such thing as a market of ideas unless you give them a monetary value and sell them to somebody. Yet no alternative has been articulated by anyone, despite the growing recognition of the inherent problems in the structure of science. The real academic spring has not yet sprung.
A very well-balanced piece, Thomas!!
“-spring”.
It’s the new “-gate”.
Very interesting read.
We had a discussion related to this earlier this year; for those who are interested, here’s the link:
https://thenode.biologists.com/society-journals-and-the-research-works-act/
“… a fundamental philosophical flaw: that it is possible to rank scientific research”.
I do not agree entirely: with the benefit of retrospect, it can actually be quite easy (e.g. the works of Newton on Physics vs. the works of Newton on… Alchemy!).
What I would say is that it is “extremely-difficult-if-not-impossible to rank scientific research with automated metrics in the short term”. Which is exactly the system we have in place.
However, there is a need to allocate resources in the short term and therefore to rank science in the short term, even in anticipation. And for that there is no substitute for our peers’ judgement, as has been put much more compellingly than I possibly could elsewhere:
http://backreaction.blogspot.co.uk/2008/07/we-have-only-ourselves-to-judge-on-each.html
Therefore we need not only to abandon metrics but also to re-establish a peer review system in which it is not so easy to simply trash other people’s research. And there is strong opposition to moving in that direction, not only from Governments, Funding Bodies and Universities but also from a large enough part of the Academic Establishment.