BiO Editorial – Publishing in the biomedical sciences: if it’s broken, fix it!
Posted by Jordan Raff on 23 October 2012
To mark Open Access Week (October 22-28), the Node is reposting a recent editorial in Biology Open (BiO) by BiO Editor-in-Chief Jordan Raff. Please leave your feedback in the comments.
During my short time as Editor-in-Chief of Biology Open (BiO), I’ve come to realise that publishing in the biomedical sciences is entering a period of profound change, the likes of which none of us has experienced before. The present system is under sustained attack and, although many scientists are probably unaware of this, there seems little chance that it will survive in its current form. In this Editorial, I want to share what I’ve learned over the past year and explain why I think change is inevitable. As in all things scientific, I will probably be wrong in detail, but I hope these thoughts will stimulate you to think about these issues and how we might influence them. I am convinced they will have an enormous impact on us all.
My assertion that the present system will inevitably change might seem the wishful thinking of a new Editor of a new journal. But I believe several factors have combined to create a perfect storm that will drive change. At the heart of the problem is that, although the public funds much of our research, we have to pay to access most of the published results. This is because we scientists usually give the copyright to our data to the publishers. Although it is true that most members of the public don’t want to access these data, I’m a member of the public, and I need access because it is essential for my research. It is unacceptable that I (in my case through my institution) have to pay large amounts of money to private publishers for this privilege when the publishers do not pay anything for the research.
Many publishers argue that they impart significant ‘added value’ to the published work by organising the peer review process, editing manuscripts, and distributing the journals. This argument may have had merit in the past, but it does not today; modern web-based publishing methods mean that the costs of producing and distributing journals cannot possibly justify the exorbitant price of most journals or the high profit margins of some of the biggest publishers (http://bit.ly/jordanref1; http://bit.ly/jordanref2). Moreover, the most valuable part of the services provided by publishers is peer review, which is provided free by scientists.
Why then has the present system, so obviously flawed, survived for so long? I think the most important reason is that the impact-factor-led hierarchy of journals has provided a simple mechanism for ranking a scientist’s worth, and this system is now so embedded in our culture that we believe we cannot function without it. Few scientists have the time to read and understand someone else’s papers anymore, and the convenience of the journal hierarchy means we don’t have to: we all understand that a paper published in a high-impact journal must be ‘better’ than one published in a lesser journal. Scientists, funding agencies, and the various bodies that hire and promote us have all adopted this simple system, even though most scientists realize that it is flawed and, ironically, often feel unfairly treated by it. Still, most of us seem to have accepted that the system generally gets things about right and ensures that modern biological science works as a meritocracy. I will argue below that the system does nothing of the sort and that, worryingly, it is now actually distorting and impeding the scientific enterprise.
The overwhelming emphasis on publishing in top journals largely explains why Open Access publishing failed to break the stranglehold of the top journals when it was first championed in the 1990s. Although several Open Access journals have been successful, they have not displaced the handful of journals, such as Nature, Cell, and Science, at the top of the hierarchy. Many life scientists understand the perverse economics of the present system, and have supported the idea and goals of Open Access, but few of us have had the courage to stop trying to publish in the top journals (or the lower ranking sister journals that they have spawned). We were simply too scared of the negative impact that it would have on our careers. And we were right to be scared. Funding agencies and employers are still obsessed with the impact factor of the journals we publish in.
So why am I convinced that the system will change? One reason is that journalists and politicians have started to notice the absurdity of the present system. There have been scathing articles in the main sections of high-profile newspapers such as The Guardian (http://bit.ly/jordanreference3) and the New York Times (http://bit.ly/jordanreference4), a major report on the publishing system by the UK Parliament [House of Commons Science and Technology Committee (2011). Peer Review in Scientific Publications. http://bit.ly/jordanreference5], and the US Congress has recently discussed several bills that would promote or restrict Open Access publishing. All this activity has increased general awareness of the problems with the current system, but I am not naïve enough to believe that this alone will drive meaningful change.
More important will be the growing unease with the present system from within the science community itself. The idea that the worth of a publication should be judged by the impact factor of the journal it is published in has long been discredited (http://bit.ly/jordanreference6) (Editorial, 2005; Seglen, 1997), mainly because the citation impact of individual papers correlates poorly with the impact factor of the journal they are published in (http://arxiv.org/abs/1205.4328). Moreover, journals can and do artificially manipulate their impact factor, and the data from which a journal’s impact factor is calculated are not freely available; independent attempts to reproduce a journal’s impact factor have failed (Rossner et al., 2007). Perhaps surprisingly, many politicians and bureaucrats seem to be ahead of scientists in recognising the weaknesses of the present system. Several funding agencies and government bodies around the world now explicitly advise against the use of journal impact factors in the assessment of an individual’s research performance (http://bit.ly/jordanreference7; http://bit.ly/jordanreference8). Thus, remarkably, it is we scientists who are most responsible for maintaining the current system, and this is why it will be we scientists who ultimately have to bring about change.
The real reason that I am so confident that change will come is not often discussed but, in my view, it is the most important: our current obsession with impact factor is actually damaging science. The scientific method is well established: propose a hypothesis and design experiments to test it. Crucial to the success of this approach is that the scientist should be neutral about the outcome of the experiment. This is important because it is well known that we human beings have a strong bias toward seeing what we want to see in all sorts of contexts, and this can confound the interpretation of any experiment. This is not fraud; it is simply human nature, and it is why we try to perform experiments ‘double blind’.
In practice, it is often difficult to perform experiments blind, and I suspect that the vast majority of us usually don’t do it. But the present publication system puts enormous pressure on students and postdocs to get the ‘right’ result, especially when performing the experiments demanded by referees, and particularly if the right result means acceptance of the paper in a high-impact journal, which can often mean the difference between getting a job or not. This kind of pressure is dangerous. As scientists, we readily understand how incentives can distort political, financial and many other systems, yet we seem blind to the potential dangers in our own system.
If the current system is no longer fit for purpose, how do we go about replacing it? Perhaps the most important job will be to find better ways of judging ‘good’ science. I’ll discuss some possibilities and the potential role of journals such as BiO in a future Editorial. In the meantime, I would like to hear your views on any of the points I have discussed.
Changes are already underway, and it is essential that we scientists act to ensure that these are the right changes. As an example, the UK Government recently announced that all Government-funded research will have to be published in a fully Open Access format within the next two years. It is unclear how this will be implemented, but I believe there is a real risk that it will be done in a way that maintains the profits of the largest publishing companies without addressing the fundamental distortions of the present system. This would be a disaster. I urge you to get involved.
The copyright issue has surprised me, because the good literary journals do not ask for copyright – it remains with the writer. Even when the journal pays for publishing your article, it mostly asks for non-exclusive rights. I wonder why scientists don’t care much about this – I would think that their citations would increase if more people could access their papers.
I wonder what the consequences of disrupting the publishing hierarchy would be – because the top publishing groups don’t just publish scientific articles, but also publish articles on various science-related issues. They bring highlights and changes into focus. They have an established community of readers.
I wouldn’t say a hierarchy based on merit is bad, but yes – the present hierarchy is undiscriminating (in publishing articles from established scientists) and dubious, peer review is flawed and a horror for starting scientists. But will the solution be to correct the issues with this system or to bring in a new system? (Open access publishing might have its own flaws – how many scientists can afford the Open Access fees some of these journals charge?)
I am very surprised that topics like the ones Jordan has raised here have received so little attention and such a muted response from the community. Do we really care so little? Are we so busy beavering away to improve our impact factor (IF) that we cannot rise to what he says and comment on it? Maybe this explains the heart of the problem, which is an apathy to change the system. The article is very explicit about topics – particularly the misuse and abuse of the impact factor as a proxy for quality – that all of us discuss avidly in private; and yet, when they are put on the table, we refrain from giving our views, which is like saying that we are happy with the system. Really? Why can’t we come out and discuss these issues openly?
I agree with Jordan that the system is no longer fit for purpose or, maybe more appropriately, that it has developed its own purpose. The evolution of publishing over the last twenty years has seen a reversal of roles: where scientists once used publishing to air and discuss their results and thoughts, publishers now use scientists to develop their business and, on the way, determine the paths and modes of scientific endeavour. There is some good science coming out of this but, more often, a lot of damage has been done and is being done to our enterprise. Of course, situations like this are not reached by default. We, the community, contribute to it by going with the flow, and the muted response to this piece from Jordan is an indictment of the current situation.
It is strange that issues like Open Access, which have more to do with the journals than with us, receive so much attention and have led to some significant and important changes in the system. And yet the system itself, the impact factor and, more significantly, the peer review process (more about this in future postings) do not get the same attention. If they did, and if we could move together, maybe we could change the system.
To take on Jordan’s point and try to get something going: things will change, but maybe more slowly than he thinks. I would love to be wrong here. In my view there are three forces slowing down change. The first is that the current generation of scientists has grown up with the system as it is – in fact it has made the system – and this attitude passes on to the next generation. It is this generation, the current postdocs and PhD students, who are finding that the system lets them down, and therefore it is this generation that has the choice (we also do, and should exercise it) to carry on as things are or to prepare the way for the next generation, which might implement the changes. A second important force is our submission to the system. As long as we are happy to invest 10 or 14 months (and growing) to get a publication in one of those High IF (HIF) journals, the system will not change. As long as we agree to the ever-increasing, and sometimes absurd, demands of reviewers and editors, and are happy to invest all this time to improve a paper 5 or 10% in order to get that publication, instead of trying to publish in lower IF (LIF) journals, the system will not change and, as is already happening, the LIFs will demand as much as the HIFs. Finally, as long as those who decide jobs and fellowships use the publication brand rather than the science of the individual as a guide for selection, the system will not change.
In the end it is an odd realization that, at a time when the internet has changed the music and book industries, and has revolutionized commerce, politics, and interactions between people, making all of them more democratic, all we have done with it is to make the system that fuels our job more cumbersome and its fabric less helpful to us. We have created a mesh in which we, as a community, are drowning, while the journals flourish by making it more difficult for us to do what we do. I can see that the quality of science cannot be decided by how many clicks your paper has, but it is also true that the current system is not working as it should and that, as Jordan says, it is in need of change.
It would be helpful if, as starters, more of us would enter into debates like the one proposed here as a way to get the ground ready for real change.
This is a nice well argued piece, and there are many other similar pieces out there. But, how to resist? Somehow we have to change the policy of those who assess us, and dethrone the bureaucrats who determine so much of our scientific opportunities. These bureaucrats and politicians have fallen in love with meretricious bibliometrics and devalued our evaluations, which were based on knowledge and experience and not on phoney numbers.
In our research we adopted a policy in 1996 of not even trying to publish in the 3 big journals – not that I can claim we would have succeeded had we tried. But it was expensive: we lost some funding because our papers were “low profile”, and some of our findings were republished 4 years later and our credit was stolen. So I can’t say what we did has paid off, except for us, as we didn’t spend most of our time fighting big journals to wangle our papers in but instead were able to get on with research at the bench.
Thanks for a very interesting view. One aspect of the gold open-access model of science journal publishing that I think is potentially problematic, but not debated a great deal, is that the publisher (profit or non-profit) will inherently want to serve the customer. Traditionally, with a subscription model, this worked fairly well because the customer was the reader (or ‘end-user’ in publisher jargon), and readers presumably wanted the best product for their money, so it was in the publisher’s interest to help craft a good journal, with rigorous peer review, additional editorial comment, discussions, reviews etc. Now, with gold OA, the author becomes the paying customer, and I believe there is an inbuilt tendency to give the customer what they want: publication. This, I believe, inherently biases against the application of peer review, which is why the new ‘mega journal’ model simply requires that papers be methodologically sound. The consequence is that the embattled researcher now has a rapidly expanding and unfiltered mass of literature to assess. Journal Impact Factors have utility, in my opinion, for signalling to the reader which journals are more likely to give a useful yield within a finite timespan.
Disclosure: I am a journals publisher for Elsevier.
This is bollocks, to use a robust English word. The publishers have often raised the spectre of open access journals publishing without peer review. Indeed, this is about the only argument they can use to support their rapacious behaviour. A single paper from Elsevier’s Tetrahedron Letters will cost you nearly $40: NONE of this goes to the authors, who have, in the great majority of cases, done the research supported by money from the taxpayer.
Anyone who has published, or who has tried to publish, in PLoS Biology will know that peer review can be just as rigorous as any traditional journal, and far more rigorous than that of many I could name.
On the other hand I would not go to the wall defending peer review. The vast majority of papers are cited once and then vanish without trace. That is why there are so many bad journals. The world would be a better place without them, and if Elsevier, Wiley et al simply closed shop tomorrow then I do not think that science would be any the poorer.
There is simply no argument for publishing in a journal that charges for access. I also think that we should simply refuse to do anything to help conventional journals continue; we should actively encourage their editorial boards to resign (indeed some have, bravo!) and refuse to referee. With help from Rich Roberts I have a template of a letter which anyone can copy: see http://caseybergman.wordpress.com/2011/12/02/just-say-no/.
On the other hand, “traditional journals” and funding bodies and academic employers playing the IF game encourage the submission of over-hyped work.
The idea that HIF journals help us focus on the literature worth reading is also slightly patronising. The fact is that there are relevant papers being published in PLoS ONE and other journals with a low IF, or outside the IF game altogether. We need to keep track of them anyway, which we can do with things as simple as a Google search or alerts. It’s been years since I last used emailed tables of contents to keep up with my field. And I usually end up deciding for myself whether I like a paper or not.
This piece misses the real problem. The problem is not glamour journals; the real problem is peer review. How many of Einstein’s 300 papers were peer reviewed? Only one, and it was rejected. Please, my scientific colleagues, read this great post by Michael Nielsen about the myths of peer review, a sadistic system in my view:
http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/
Good to see some discussion here, and I think there are some good points raised all round – including the question of whether an ‘author pays’ model opens up the market to unscrupulous ‘vanity’ publishers making money from publishing large volumes of papers that haven’t been through a rigorous editorial or reviewing process.
My personal view is that we are inexorably moving towards an OA model, so we’re going to have to figure out how to make it work! But if you believe that selective journals have a value to the community (which unsurprisingly I do), then we need to make sure that OA models provide them with the funds to maintain editorial standards.
I also think that Jordan’s article, as well as the various comments, brings up a bigger issue here: what matters most to authors when choosing where to publish their work? And is this the same as what SHOULD matter most?!
To my mind, there are three major factors that have been introduced by Jordan, Alfonso and Peter:
1. Journal Impact, by which I don’t just mean IF (although that’s clearly a depressingly important parameter), but also community reach and broader reputation.
2. Author Experience – which takes into account some of the issues both Jordan and Alfonso raise about how the peer review process works.
3. Accessibility of the published article – i.e. whether it’s OA or not.
So, Node readers, how much do each of these factors influence your choice of journal? We’d love to find out, so join in the discussion!
[Disclosure: I’m the Executive editor of Development]
The comment of Andrew Miller is difficult to understand and, if representative of the editorial side of the argument, clearly highlights why there is a problem. The comment misses the point that Jordan tried to make.
In general, this obsession of publishers with OA is strange. To paraphrase the Clinton campaign slogan: “it’s the peer review, stupid!” If, as a community, we dedicated as much space and time to discussing peer review and IF, maybe we could make some progress in improving the system. At the moment it is all happening by default, and the only moves that take place are led by the journals. Peter Lawrence is right (and my colleagues in the lab have reminded me) that there is a lot out there on these matters, but it is mostly a sort of ‘whining’ and there is very little by way of possible actions. It would be good if we found a way to, at least, get some feedback on the relevant issues. The questions that Katherine Brown poses at the end are a good start, and this forum a good one, since just about 90% of the people who read it have (I am sure) expressed some view about both peer review and IF. So, I agree with Kat: what do you think? What would you suggest should be done? Remember that THE JOURNALS DEPEND ON US and not the other way around.
One journal which, in my opinion, is actually doing something – listening to the scientists and implementing positive actions to improve the system – is EMBO J (http://www.nature.com/emboj/about/process.html). It would be good if you had a look at this and, if you feel that it is positive, say so here. With a bit of luck we can push more journals to implement some of these measures as a start.
But, above all, let us move on from OA and onto the part of the problem that concerns us most, and do not leave your thoughts about Peer Review and the larger issue of IF for the tea room.
I agree with Prof. M-A that the topics of peer review and impact are the most important aspects here, but the OA journal model does impinge strongly on them and, in my opinion, biases against at least the first.
I would suggest we encourage experimentation with journal types and importantly give authors freedom to choose, uninhibited by mandates. If we permit a mixed field of journal types (traditional and new) I think the best will be sustained naturally.
I was very pleased to read Jordan’s constructive editorial earlier in the year, and I am even more so now that I see his comments have stirred a little debate. I agree with Katherine Brown and Alfonso that we need to separate different aspects here that are not necessarily related.
On one hand is the OA issue. As has been discussed already here and elsewhere (http://www.the-scientist.com/?articles.view/articleNo/31858/title/Opinion–Academic-Publishing-Is-Broken-/), this issue comes down to ‘where does the (public) money go?’. Journals such as Development are not OA, but the re-investment of the money in the community – I believe – justifies the cost. This is not always the case. Moreover, this can also affect the science. Some of us are lucky enough to work at institutions which cover access to a vast number of journals, but those who don’t can end up finding themselves in the paradox of having to pay to access their own papers. The ability of the general public (including undergrad students at small institutions, for instance) to access the scientific literature could be the topic of a different debate. However, let me just mention that the ‘Big Three’ – which are the ones that most often feature in the news, are meant to have the broadest readership, and are supposed to be the main showcase of scientific production – are far from free to access beyond the Table of Contents.
On the other hand, there is the IF. In my (very) short academic experience I have gotten the impression that we have developed a tendency to do science – or experiments – in order to get a publication into a ‘HIF’, and not in order to get an answer to our question. I see a dangerous risk of tailoring our research to (A) suit the requirements of a certain journal, rather than to (B) find the right answer. Intuition says A and B should be the same. Yet I’m not sure this is the case. I really hope some of the more experienced minds involved in this debate will correct me on this point.
Now, I do think the solution to the IF issue is up to us. First, as Peter Lawrence proposes, it is up to us to choose to publish in journals whose scope and readership are most suited to the work, regardless of their ranking, or which are OA, or which endorse a fairer review process – such as EMBO J – etc. It would be most important that leaders in the field got equally involved in this. Second, it is up to those who hold chairs in review panels and selection committees (or even in higher Houses) to really push for abandoning the IF and for finding alternative methods to assess scientists’ research. We are scientists after all – shouldn’t we be able to develop better indices? I agree that we need an easier way to assess someone’s value than reading through a career’s worth of papers, but doing this by assessing the journal’s value makes no sense. I think the use of the citation index is a step in the right direction (although it won’t work for recent publications). Perhaps implementing a standardized post-publication peer review process would also be useful.
Lastly, I just want to pose a question – perhaps a wild one, perhaps absurd: would a non-anonymous peer review process help to promote a more open, healthy discussion about our work? Would the reviewers be able to have tangible input into the work, and perhaps receive credit for suggesting the key experiment? And would the authors get less indiscriminate criticisms, and, maybe, new collaborations?
Thanks to all the people who have contributed to this discussion. It’s a pity that not more people joined the debate, but my general feeling is that the community does not think the system will change in the short term. I think that a very important issue is that, as Jordan put it very clearly, having papers in HIF journals is a guarantee of good jobs and good grants. As Jordan says, “funding agencies and employers are still obsessed with the impact factor of the journals we publish in”. These funding agencies and employers are us, so it means that an important part of the community thinks this is a good way to evaluate ourselves. As long as this is the main measuring stick with which evaluations are done, there is little hope that things will change. The questions I hence ask other scientists/evaluators/panel members are: How do you evaluate people when it’s time to assign positions and grants? Why is having papers in HIF journals so important? What else do you take into account, or think it would be good to take into account, when evaluating scientists at different stages of their careers?
Eduardo Moreno is right: Nielsen’s piece (http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/), though a bit lengthy, is indeed a good analysis of the peer review system from three years ago, and it still applies. It wanders into a couple of side topics but provides some useful facts that deserve to be better known. But……
I am still surprised by the lack of engagement and the ease with which we ride out the current situation despite the fact that, in private, we complain about it. The issue at the moment is not to belittle the system – this is easy – but to put on the table suggestions for a better system that represents our interests as a community and not those of a few selected individuals and journals.
The main problem is that we have a hypertrophied scientific system and are running a publication device that was created for a very different, much smaller scenario with a different sociology (the first part of Nielsen’s article is a good review of this). And of course, not surprisingly, the system does not work; some things do not scale. As a consequence a selection is applied, so that the size of the final product remains more or less the same as it was before; the selection is applied by those who are in a position to apply it and, of course, it works in favour of their interests.
The facts are well known, and I want to repeat: what we need are solutions. I also want to encourage people to look at the steps that EMBO J has begun to take towards some solutions with what it calls a ‘transparent review process’. This alone will not solve the problem created by the combination of the large size of the current scientific output, the fact that a great deal of the work is (technically) pretty good, and the fact that much (grant money and job security) depends on a publication; we need a better system that takes all this into account and develops a way to allow Science – XXI century (not XX century) Science – to continue making essential contributions to Society. Herein lies the challenge. In the meantime we shall continue to contend with the strange mixture of pop-idol/glamour magazine culture that we have, which highlights some good pieces of work but hides many more and certainly favours a certain kind of science (capitals or lower case on this word are deliberate).
For those of you attending the ASCB meeting next week, Jordan Raff will be discussing this same topic (open access publishing) at the Company of Biologists booth (stand 1303) on Monday December 17, at 9:30 AM. I’ve been told there will also be cookies!
To respond to Michael Ashburner’s comment (some way up in the thread):
There are ‘predatory’ OA publishers out there that exploit the OA model in what I think most people would agree is entirely inappropriate – see this recent article in Nature: http://www.nature.com/news/predatory-publishers-are-corrupting-open-access-1.11385
The publishers on the “Beall list” produce journals you’ve probably never heard of and would never consider submitting to, but they do point to potential problems with the model.
But of course it’s possible for OA journals to have a rigorous peer review system, and PLoS Biology is a great example of that. But PLoS Biology wouldn’t be financially viable (with its current charges) if it weren’t propped up by PLoS One, which makes money through its high acceptance rate and the sheer volume of work it publishes. So if a selective journal wants to go fully OA and turn a profit (or even break even), the fees it would have to charge would be significantly higher than what you currently see.
What it means in practice is that the authors of a published article are paying not only for the costs of taking their article through peer review and production, but also the costs incurred by rejecting the articles that don’t make it through. Is this really much fairer?!
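(A back-of-the-envelope sketch of that arithmetic, with invented numbers – none of these figures come from a real journal:)

    def required_fee(accept_rate, cost_per_submission, production_cost):
        # Each accepted article must absorb the handling costs of the
        # rejected ones: submissions handled per acceptance = 1/accept_rate.
        return cost_per_submission / accept_rate + production_cost

    # Invented figures: $500 to handle each submission through review,
    # $1500 to produce an accepted article, 10% acceptance rate.
    print(required_fee(0.10, 500, 1500))  # -> 6500.0 per published article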
And the ‘the taxpayer has the right to read the output from the work they support’ argument doesn’t quite work for me either: most journals make their content freely accessible after 6 months or a year anyway – isn’t this sufficient for the general public?
Despite these arguments, I’m not actually against OA – it has clear advantages for the academic community in terms of content accessibility (particularly in a world where data- and text-mining are so important) – but I’m not convinced that the arguments in favour are quite as clear as its proponents would make out. When all labs (or at least those in countries with mature scientific communities) have money set aside from their grants or their core funding to pay realistic fees for OA publication, without placing restrictions on which journals the lab can use those fees to publish in, then we’ll be in a good position to move to an OA world. Not sure we’re there yet, though…
Plus, I’d like journals like Development (which does, incidentally, have an OA option) still to be able to turn a profit. We make good use of our subscription money to fund our charitable activities, and I hope people would agree that this is a worthwhile service to the community. So I do hope that you won’t follow Michael’s advice and desert Development!
Adding to Katherine’s point:
A problem with the so-called ‘gold’ model of open access is the profusion of new junk journals appearing that will publish almost anything as long as the author pays the fee.
The most spectacular example of this is the mathgen system, which automatically generates plausible-looking but complete nonsense mathematics papers. In September this year, one of these was accepted by a journal! See http://thatsmathematics.com/blog/archives/102
Re Professor Ashburner’s robust comments: I wasn’t saying OA journals do not employ peer review, but that the model they use inherently biases against it. PLOS Biology is an exception: it applies a traditional review approach because it is financially supported by PLOS ONE, which just requires that papers be scientifically sound.
Andrew, you are simply wrong: PLoS Biology used “traditional” peer review from Day 1, well before PLoS One came on the scene. And please explain what you mean by the words “inherently biases against” – it seems to be a completely ex cathedra statement with no evidential basis.
Michael – yes PLoS Biology used traditional peer review from day 1, but it wasn’t financially sustainable then: it relied on huge financial support. And PLoS in fact only became self-sustaining last year. The point is that running a selective journal is expensive!
Dear all,
(A bit of a long post; actual suggestions are numbered at the end so feel free to skip the initial analysis and ranting)
I agree that the problem is peer review, and with Alfonso’s analysis that this is due to hypertrophy, which leads to arbitrary hyper-selection. Most editors these days have a rejection target of 70-80% BEFORE review. A rejection without review or feedback is not constructive or useful in any way. Furthermore, because the pressure for rejection continues even after review, arbitrary/unfair rejections still happen at this stage, or ridiculously long reviewing processes (up to a calendar year) ensue, since the editors implement each negative reviewer’s suggestion to the bitter end, perhaps in the hope of getting rid of a few more manuscripts. In turn, this makes editors prefer a reduced circle of consistently negative or picky reviewers over constructive ones. (These are not difficult to find, since everybody is so angry, frustrated and resentful about reviewing, and anonymity seems to encourage nastiness.) Finally, add to this the ludicrous pursuit of ‘novelty’ by many journals (they believe publishing ‘primary reference articles’ improves their relative citation count, impact factor and hence advertisement revenue) and you have a reviewing process led by ‘maximum arbitrary negativity’.
In summary: peer review has been subverted, from a method to improve papers constructively and help publication, into a method that provides excuses for rejection and stops publication. It is now a series of hurdles to overcome before we can disseminate our work. It no longer serves its original purpose and must be overhauled.
We can change this. Earlier in the year, a successful boycott by thousands of scientists (me included) forced Elsevier to change its policies regarding library charges. Therefore, other types of boycott (call it embargo if you prefer) can be successful as well.
Either the dissemination method we use (publication in journals) must change, or the peer review model must change. Following the non-scaling argument of Alfonso, I do not believe that the informal ‘web pre-publication + open community review’ strategy employed by physicists and mathematicians will work for us. They are communities of hundreds/thousands; we in the life sciences are tens of thousands.
Suggestions (in order of radicalism):
1) Don’t send your paper to a journal where it can end up with a non-academic (junior staff) editor. He/she did not make it as a PI and managed to publish only 1-2 papers in his/her career. What kind of credential is that for judging other people’s publications, or for overriding the opinions of ‘important’ negative reviewers?
2) Submit your papers only to PLoS One (or similar). It has a very decent impact factor of 4.5 if you are concerned.
3) The one-stop shop, like that implemented by Peerage of Science (http://www.peerageofscience.org/). Pros: you only submit your paper once to one place, and reviewers’ comments can be reviewed too. Importantly, you submit anonymously. Cons: this particular scheme is run by a for-profit company, and the “reviewing of reviewers” seems a bit convoluted. A not-for-profit, simplified scheme could be set up.
4) Find or start a journal that ‘does not respect reviewer anonymity’; or simply, sign your reviews. This will quickly make everybody ever so polite and constructive, and perhaps make a “reviewing of reviewers” unnecessary. It will also expose favouritism, cronyism, networks, mafias, etc., and will force the editors to work hard to find unbiased reviewers. It will be fascinating!
5) Formal open submission to servers run by open-access publishers. These manuscripts would be deemed ‘submitted’ and could be quoted as such in your CV, accessible to your employers and funders. After 6-12 months (which is the time it takes to see your article published these days), only those articles that have passed a certain threshold in number of accesses, downloads, and citations would be deemed ‘accepted’, formatted and published in the journal; the rarely or never accessed ones would be deleted from the ‘submission’ server (see the sketch after this list for the kind of rule I mean). No lengthy debates run by the extroverted few are necessary; just quiet, anonymous, widespread interest from your peers. Comments and suggestions would still be possible, but would not become the criteria for publication.
6) Run naked down your corridor shouting “peer review is evil! peer review made me mad!” Then stop whining, find another job (if you can) and start enjoying life again (if you still can).
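(To make suggestion 5 concrete, here is a minimal sketch of the kind of threshold rule it describes, in Python. The field names and threshold values are invented purely for illustration; they are not taken from any real system.)

    from dataclasses import dataclass

    @dataclass
    class Submission:
        accesses: int    # abstract/page views during the waiting window
        downloads: int   # full-text downloads during the same window
        citations: int   # citations accrued while on the server

    def decide(sub, min_accesses=500, min_downloads=100, min_citations=2):
        # Suggestion 5 as a rule: after 6-12 months on the open server, a
        # manuscript clearing every usage threshold is 'accepted' (formatted
        # and published); otherwise it is removed from the server.
        # All three thresholds here are invented for illustration.
        passed = (sub.accesses >= min_accesses
                  and sub.downloads >= min_downloads
                  and sub.citations >= min_citations)
        return "accept" if passed else "remove"

    print(decide(Submission(accesses=800, downloads=150, citations=3)))  # accept
    print(decide(Submission(accesses=90, downloads=10, citations=0)))    # remove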
As someone who spent several years as a ‘non-academic (junior staff) editor’ who ‘did not make it as a PI’, I feel the need to step in and defend what I prefer to think of as ‘professional editors’. No, I didn’t make it as a PI, but I never wanted to: my skills were and are better tailored to an editorial job. You don’t need to be actively doing great science to be able to recognise it. And while academic editors in general do a great job, so do many professional editors. In my 3 years at EMBO J, I estimate that I read around 1500 submissions, plus a whole load more published papers in order to judge those submissions, and I discussed several thousand more with my fellow editors. I think this gave me a pretty good idea of what was going on in the field, and what was likely to meet with positive reviews from referees.
At no journal I’ve been associated with have I seen editors trying to hit ‘targets’ to reject papers. I’d much rather be accepting papers than rejecting them, but if a team of editors are convinced that a paper will not make it through peer review, then I do think it’s better to return the paper after 3 days than a month: everyone complains about the time taken to go through peer review, so is a fast answer really such a bad thing?
The sole job of a professional editor is to select what he/she/the team thinks is the best science for their journal – there are no potential conflicts of interest and no other demands on their time. I don’t think that’s such a bad thing. Having worked for journals with both professional and academic editor models, I can say they are very different, both have their advantages and disadvantages, but professional editors are certainly NOT “failed scientists”.
Yes, there are problems with the peer review system, and yes we can do more to improve it – but I’ll save my thoughts on that for another comment, or perhaps even a new post!
Thanks for your reply, Katherine. Firstly, apologies if I have caused offense. But, if we are to move on, I think it is time to be frank. My comment is an opinion that many scientists will voice in private, so I felt it needed to be said in public too. I have had dealings with good and informed professional editors, and bad dealings with misinformed academic editors. My own opinion stems mostly from self-criticism. I was scientifically unqualified to judge papers right after my PhD, and only after becoming a PI did I realise the full human cost of journals’ rejections. Even now, if I spend too much time away from the lab/bench, my expectations of what is experimentally possible/doable in a given time frame become unrealistic. So, altogether, I am afraid I have to stick to my guns that, on average, I would rather have a paper dealt with by academic editors than by professional editors; but only as a matter of the lesser evil. If you keep going down my list of suggestions, you will see that where I want to go is further than this: to a system where referees and editors will not be used, or will not have decisive power.
What is wrong with peer review could be encapsulated in your post:
“The sole job of a professional editor is to select what he/she/the team thinks is the best science for their journal”
We have to give up this system where a few individuals (referees plus editors, whether professional or academic, and however well intentioned or trained) decide what is best for publication, and best for science. It is ludicrous that we have a community of tens of thousands and yet a minority of a few hundred has such power. The revelation of the XXI century is how the web is democratising access to information and allowing mass decision-making. Surely, instead of trying to shoehorn a large community through a system that was not designed for, and cannot cope with, such numbers, we should have the imagination to design a system that actually uses these numbers, takes strength from them and integrates as many people as possible – as in my suggestion number 5 (which could also incorporate like/dislike voting, as this and other forums have). You seem to have your own suggestions for improving peer review; please contribute them.
Again, apologies for blunt language. Some of us are getting very frustrated.
No offense caused, Juan Pablo – you’re entitled to your own opinion, and you’re certainly not the only one who thinks that way.
I will be writing a post about peer review sometime over the next couple of weeks, but my ideas are certainly less radical than what you’re proposing. And I just don’t agree that anonymised peer review is such a bad system – it’s certainly not perfect, but I do believe that it still has an important place in the science dissemination process. In any case, more on this when I have the time to compose my thoughts!
It is very nice to see a debate going on (even amongst a very select crowd!).
I agree that there are actually two different debates going on here: (1) the OA debate and (2) the IF/peer review debate – which could be described as a “how do we judge a scientist’s worth?” debate.
The OA debate is interesting, complicated and now has a momentum of its own – so I won’t comment on it further here (although I re-iterate that it is crucial we scientists remain engaged).
The more important debate for scientists, however, is (2). This is inextricably linked to scientific publishing (but not to OA) because the IF-led hierarchy of journals IS our current system for judging a scientist’s worth. I don’t think this is the journals’ fault but, as Alfonso argues, they can certainly exploit it for commercial gain (and so will fight very hard to maintain the journal hierarchy in any new model of scientific publishing).
Most scientists agree that the current system is seriously flawed, but many also think it is the least bad option; it has delivered a reasonable approximation of a meritocracy for many years. This view needs to change! It is now so hard to publish in the top journals, while the rewards for publishing in them are so high, that this system is seriously distorting the scientific process. I know many good scientists who understand this, but feel there is little they can do about it and so have no option but to play the game – and I would include myself here. As Peter says, there is a price to be paid by those who opt out of this system.
One can consider whether the problem with the present system is peer review – i.e. if peer review were perfect, then would the present system actually be a good one (irrespective of whether the journals are OA or not)? The answer is probably yes (but I maybe haven’t thought this through enough). Therefore, efforts to improve peer review are very worthwhile, and I also applaud the efforts being made by The EMBO J. in this area.
The problem is that while peer review can be dramatically improved, it can never be perfect. If this is the case, then the real challenge for us scientists is to come up with a better system for judging a scientist’s worth. I have thought about this a lot and have yet to come up with a good solution. We need metrics, as I can’t imagine we will suddenly find the time to read and understand lots of other people’s papers. Yet almost all metrics one can think of are relatively poor indicators of genuine scientific quality. Moreover, if any metric were to become accepted as a standard, it would doubtless be open to ‘gaming’.
The best system I’ve been able to think of is a citation-based system where one can compare all papers that were published in the same year (or month) with the same key words. So, for example, my paper published in 2010 with the key words “Drosophila” and “centrosome” might have 5 citations, and this would put it in the 40th percentile of all papers published in 2010 with those key words. This is obviously a measure of impact rather than quality, but I wouldn’t mind seeing this sort of number as I think I could judge whether the key words chosen were reasonable and have a feel for what the number means (and I could check it was true quickly with the right tools). All self-citations would have to be rigorously removed! Any thoughts?
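(A minimal sketch of how such a percentile could be computed, assuming you already have the citation counts – self-citations removed – for every paper sharing the year and key words; fetching those counts from a citation database is left out, and the cohort numbers below are invented for illustration.)

    from bisect import bisect_left

    def citation_percentile(my_citations, peer_citations):
        # Percentile rank of a paper's citation count among all papers
        # published in the same year with the same key words.
        # peer_citations: citation counts for the cohort, with
        # self-citations already removed, as suggested above.
        ranked = sorted(peer_citations)
        below = bisect_left(ranked, my_citations)  # peers cited strictly less
        return 100.0 * below / len(ranked)

    # Invented cohort: a 2010 "Drosophila" + "centrosome" paper with
    # 5 citations against ten peer papers from the same year.
    cohort = [0, 1, 2, 3, 6, 7, 8, 10, 12, 30]
    print(citation_percentile(5, cohort))  # -> 40.0 (40th percentile)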
As Eva mentioned, I’m going to be at the BiO booth at the ASCB meeting next week, so I would be delighted to talk about this stuff with anyone who is interested.
Just a very quick suggestion, Jordan. An easy way to gauge the relative worth of a paper is to compare its number of citations with the average number of citations for the journal it was published in, i.e. the journal’s impact factor (citations/IF). This makes it very easy to see whether a particular paper is outperforming its journal, or whether part of its impact is being ‘borrowed’ from the journal. This metric is easy to use even for time-pressed committees.
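(The ratio itself is a one-liner; a minimal sketch with invented numbers:)

    def journal_normalised_impact(paper_citations, journal_if):
        # A paper's citations divided by the impact factor of the journal it
        # appeared in. Above 1 suggests the paper outperforms its journal;
        # below 1, that part of its visibility is 'borrowed' from the journal.
        return paper_citations / journal_if

    # Invented numbers for illustration.
    print(journal_normalised_impact(40, 10.0))  # 4.0: outperforming a HIF journal
    print(journal_normalised_impact(3, 10.0))   # 0.3: trading on the journal's name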
‘Gaming’ (let’s all submit to low IF journals!) would be avoided by also giving consideration to the total number of citations; different studies show that papers in HIF journals get more citations than those in low IF ones, so people would still try to go for HIFs. Note that IF could be replaced by circulation numbers, or accesses/downloads of a journal website.
I wonder if an open debate/workshop at a BSDB/BSCB meeting or similar would be a good idea? (Notwithstanding your brave appearance at the BiO booth at the ASCB meeting.)
To reply further to Michael Ashburner: the ‘inherently biases against’ comment is a hypothesis relating to the nature of the business model. Since OA journals rely on fees from authors, not readers, publishers will reduce barriers for their customers, and one such barrier is critical review. To quote Paul’s comment above: “A problem is the profusion of new junk journals appearing that will publish almost anything as long as the author pays the fee”. Peer review is a hassle that many publishers will either water down or bypass, or leave to ‘post-publication review’, which generally doesn’t work. This is not to say we shouldn’t experiment in these areas, permitting a mixed economy.
PS The hypothesis that gold OA journals are easier to publish in than traditional subscription journals is testable: one could measure manuscript rejection rates for a random selection of both journal types. I would wager that the article rejection rate is higher for subscription titles. High or low author fees would also have a bearing: high-rejection-rate journals could be sustained by higher author fees.
…and what does the rejection rate have to do with scientific quality?
One correction of a common misconception. It is true that PLoS ONE ‘just’ looks at whether a paper is scientifically sound, but 1) one wishes that many HIF papers were scientifically sound (as many a journal club can show) and 2) papers in PLoS ONE ARE NOT automatically published: in all the cases I have reviewed and published, they have required further experiments and re-review, i.e. just like in any other journal. Like many people who look at PLoS ONE, I know that there are many good and some very good papers in there. It would be helpful if people would stop thinking of PLoS ONE as a place where you pay and you publish; it ain’t!
As for OA, like Jordan (whose replies back up what he says) and many others, I repeat: it is about peer review.
As an academic editor for PLoS One I have to agree with Alfonso: I certainly reject more of the papers I edit than I accept, precisely because many submissions to that journal are not scientifically sound. The reason might be the hope that open access means automatic publication, but it clearly does not.
However, one issue concerning peer review that I feel has not been raised sufficiently is that we biologists seem to have misunderstood the role of the review process. In contrast to, say, physics, biological reviewers seem to treat being cleverer than the authors by devising extra experiments as some kind of sport.
This typically results in gigantic delays trying to please the reviewers, while the added value is usually minimal. IMO the review for any journal should focus exclusively on scientific soundness and internal consistency and maybe, for self-declared high-end journals, novelty.
This is particularly relevant in the times of electronic publishing. The limited number of articles published per issue, and hence the high editorial rejection rates, increasingly look like an artificial limitation of a resource, and thus an abuse of a market monopoly that would be illegal in most other contexts. I suspect that for many HIF journals this is the ONLY reason for the continued existence of print issues.
Nevertheless, as can be seen by just having a quick look at the contributors, talking about boycotts of commercial journals is much easier for tenured scientists. Given the way current faculty search committees work I certainly cannot afford not to try to squeeze a Nature or Science (or other HIF) paper out of my next story, regardless of the commercial interests of the publishers.