
What’s the future of peer review?

Posted by Katherine Brown, on 3 January 2013

Jordan Raff’s recent Biology Open editorial on the future of publishing, posted on the Node, sparked quite a debate in the comments section. Much of that discussion focussed on perceived problems with the peer review system in scientific publishing. Particularly with the rise of journals like PLoS One and BiO, it seems that authors are increasingly dissatisfied with the time and effort – and with the sometimes cryptic decision-making – involved in publishing in more selective journals (journals whose selection criteria include some measure of ‘conceptual advance’ or ‘general interest’). I promised in one of my comments to Jordan’s post to write in more detail about my take on these issues, and what Development is trying to do to alleviate community concerns with the peer review process. So, here goes…

To start with what is perhaps an obvious point: one of the key aims of the peer review process is to improve the submitted paper, and in the vast majority of cases, I think it does just that – the final accepted version of a manuscript tends to be both scientifically more sound and easier for the reader to understand than the original submission. Importantly, peer review – whether it’s of the more selective or the purely technical kind – provides some kind of quality assurance stamp on a published paper: although erroneous and fraudulent papers do end up being published, I’m sure there are far fewer of them in the public domain as a direct result of the peer review process.

However, that’s not to say that the system is perfect, because it certainly isn’t. Particularly with the rise of supplementary information, it’s all too easy for referees to ask for a ‘shopping list’ of experiments, many of which can be peripheral to the main story of the paper. And all too easy for editors to simply pass on those referee reports without comment – either because they’re too busy to go through the reports in sufficient detail to figure out what the really important points are, or because they don’t have the specialist knowledge to pass those judgments (which, after all, is why we need referees in the first place!). With a few tweaks to the system, we can do better than this.

For a selective journal, which Development unashamedly is, I think the key is to encourage referees to focus their reports on two things:

1. What’s the significance of the paper and why should it be of interest to the journal’s readership?

2. Do the data adequately support the conclusions drawn, or are there additional experiments necessary to make the paper solid?

With clear answers to those two questions in hand, it should be much easier for editors to decide firstly whether the paper is in principle suitable for the journal (spelled out in the answer to question 1), and secondly what the authors need to do for potential publication (the experiments given in response to question 2). It should get rid of that long list of ‘semi-relevant’ experiments (that aren’t really pertinent to question 2), and it should make decisions much more definitive. There’s nothing worse than going through 2-3 rounds of extensive revision only for an editor to decide that the paper’s not worth publishing after all (something that, incidentally, Development is good at avoiding: around 95% of papers that receive a positive decision after the first round of review are published in the journal). Having a clearer (and shorter!) list of necessary revisions should help to avoid such situations.

I’m not a radical and I think it’s evolution not revolution of the system that’s required here. But I (and we at Development) do want to improve things. To this end, we’re looking at ways of changing our report form to reflect the aims laid out above. It might seem like a small step, but I genuinely believe that it could be a valuable one in easing the path to publication.

Moreover, I don’t think that the more radical alternatives work – various possibilities have been proposed and tested, but success is thin on the ground. Deposition in pre-publication servers and community commenting works very well in the physical sciences, but not in the biological sciences – as trials by Nature (see here and here) have demonstrated. Post-publication commenting could be a valuable addition to peer review, or even an alternative to it, but it just hasn’t taken off: I looked at a random issue of PLoS Biology from 2012 and of the 17 papers published, only 3 had comments, none of which were particularly substantial. Open peer review – where referees sign their reports – would be great in an ideal world, but whenever I ask an audience if they’d be happy to sign their report if they were reviewing a paper for a top name in their field who might in turn be reviewing their next grant application, the vast majority opt to stay anonymous. It’s a competitive world out there, and scientists (like everyone else) hold grudges. Double-blind peer review – where the authors are also anonymous – might have some benefits in terms of reducing potential referee or editor bias, but it’s not easy to implement, and in most cases the referees will know who the authors are in any case.

So given the limitations of the alternatives, I believe that most journals will continue to operate some form of traditional peer review for the foreseeable future, and I don’t think this is a bad thing. That’s my opinion, but we also want to hear your views on this. What most frustrates you about the whole publishing process? Would a more streamlined review process like the one I’ve suggested help? What else can we do to make the system better?

Finally, though, there’s one thing that always comes to mind when I hear people complaining about the review process. You as authors are also the reviewers (or if you aren’t yet, then you one day will be) – meaning that you’re the ones giving ‘unreasonable’ lists of experiments to other people. It’s easy to pick holes in a paper, but harder to recognise when the authors have already done enough. So when you put on your reviewing hat, remember how you felt about the anonymous hyper-critical reviewer of your own paper so you don’t risk turning into one of them!


Categories: Discussion

7 thoughts on “What’s the future of peer review?”

  1. A small comment on a big topic, but when reviewing for Development we are given an initial tick box for accept or reject. What’s the reasoning behind this? Do others find it helpful? It’s relatively rare that a paper is an absolute yes or no on initial submission…and I almost never know which one I want to tick, regardless of the quality of the paper!

    1. As to why the form was set up that way, I don’t really know, since it was done before my time here. Basically I agree with you that it’s not terribly helpful – either for referees or editors, and it’s one of the things we’re hoping to change when we re-vamp the report forms!

  2. It would be good if we engaged in the important discussion that Katherine Brown has opened up here: what is it that we want from the peer review process? What kind of peer review do we want?
    She proposes an emphasis on two questions:

    1. What’s the significance of the paper and why should it be of interest to the journal’s readership?

    2. Do the data adequately support the conclusions drawn, or are there additional experiments necessary to make the paper solid?

    Here are some first thoughts on this matter.

    There is nothing new or surprising in what she proposes, as this is, particularly point 2, exactly what we all want. It is also what we expect as authors. However, if we have to remind ourselves of this, something is going wrong. What this means is that editors need to be reminded of their job and to be selective and opinionated with the reviewers, and not just with the authors. Sure, it takes time, but we all deserve the effort.

    On the necessary or unnecessary experiments (see http://www.nature.com/news/2011/110427/full/472391a.html, if you haven’t), it would be good if the editors were more involved in the actual evaluation of the reviews. I can hear Katherine telling me that they do, but I (and certainly others) have ample evidence to the contrary. More often than not, regular authors (let us not forget that not all authors are equal in the eyes of some editors) find themselves with a list of experiments to do and editors who are not interested in engaging in a discussion. When one tries to explain to an editor why an experiment is not necessary or relevant, the editor is likely to reply that the reviewer is a trusted expert in the field… as if the author were not, right?

    But there is a way around this, and it should be appended to Katherine’s two points: one single round of review. This already exists at EMBO (see: http://www.nature.com/emboj/about/process.html) and, if you want to be progressive, here you have a huge first step to change the system: adopt it. A ‘single round of review’ (no sending the manuscript back to the reviewers), with a prominent role for the editor in the decision, must be a trend in the editorial world. The excuse that editors do not know the field well enough to make a decision only raises the question of why the EMBO J editors evidently do.

    On the thorny issue of anonymity of the reviewing process, social and political history should make us suspicious of a system that thrives and relies on anonymous criticisms (which often, given the value of certain publications, make or break careers). The fact that people want to remain anonymous has little to do with competition and more with the fact that the system is contrived and intellectually pernicious. Of course, this is not going to change quickly, but it would be good if we admitted what is really behind anonymous reviewing at the moment. If we cannot stand by what we say and put our names to it, perhaps there is something not quite right in it. It is this side of the process that allows the long lists of experiments to go unchecked back and forth between reviewers and authors.

    If we are not going to change the anonymity issue, there is a small change which could transform the peer review process: to have a word limit on the reviews and the replies. This would lead everybody to focus on the essentials and would help the editors do their job better. A word limit would separate the wheat from the chaff and would certainly focus the mind on the adjective ‘necessary’ in front of ‘experiments’. We have word limits in grant applications and reviews, so how come we have not put them in place where we most need them? This, I feel, would be a massive step forward. Would anybody dare? What do you think?

    So, to summarize, I would complement the two issues raised by Katherine:

    1. What’s the significance of the paper and why should it be of interest to the journal’s readership?

    2. Do the data adequately support the conclusions drawn, or are there additional experiments necessary to make the paper solid?

    with two additional ones:

    3. We need a ‘one reviewing round only’ process in which, as in the pioneering work of EMBO J, the editor makes the decision without sending the manuscript back to the reviewers.

    4. Write the reviews within a word limit, and reply to them in a similar manner.

    As I said in the thread on Jordan Raff’s editorial: we have to engage in this discussion. The alternative is an increasingly cumbersome system in which science comes second to a complicated and opaque decision-making process.

  3. A couple of comments to Alfonso’s points…

    The complaint that editors often don’t get involved in the evaluation of reviews is a reasonable one. It’s true that editors (at many journals) often use ‘form letters’ and don’t necessarily spell out what needs to be done in a revision. In general, I would encourage editors to pick out the most critical points that need addressing, and spell them out in their decision letters, but I do recognise that this doesn’t always happen. To me, though, part of the solution to this lies with the author. If you ask an editor whether certain experiments are necessary/sensible for a revision (ideally way BEFORE you actually submit the revision!), then the editor will look at the issue in detail, and in my experience will usually give a much more useful response than Alfonso suggests. A well-argued email addressing concerns with the referees’ reports can be very helpful, and (while I suspect that Development’s editors may not thank me for saying this given the additional workload it could generate!), I’d encourage authors to engage with the editors when they have questions, rather than just grumbling in their labs.

    Secondly, Alfonso has misunderstood EMBO’s policy. It’s one round of REVISION, not one round of REVIEW. Accepting papers without a second round of review does happen at all journals, EMBOJ and Development included, and it would be great if this could happen more. But papers at EMBOJ often get sent back to referees for re-review, and it’s often necessary so that the editor can be sure that newly added experiments are sound. What EMBOJ is very good at avoiding is a second major round of revision – any final revisions following a second round of review tend to be just of the presentational type. I applaud this, and while our statistics aren’t quite as good here at Development, it’s something we’re improving.

    1. With regard to the first answer by Katherine, I (and surely I am not alone here) have done what she suggests many times. The reply varies, but usually it is of the kind: the reviewer rules. An extreme form of this happened to my group with one editor of Development a few years ago. We got reviewers asking for too many experiments, and I decided to do exactly what Katherine suggests and appeal to the common sense of the editor, who was (and is) perfectly qualified to evaluate our arguments. However, the editor sent the comments to the reviewer and (after a month) passed us the reply: either we did ALL the experiments or the reviewer would not consider the revision. That was that. It was impossible to discuss with the editor, who justified the decision with a number of points about how he was not there to overrule reviewers, even if they were unreasonable. The paper was published somewhere else, as it happens in a journal with a higher IF, and without those experiments. There are variations of this around, and for every reasonable editor-author interaction, there are many more of the kind I have described.

      The lesson from this: editors are overstretched and afraid of making decisions on their own. Like other authors, I recognize that editors are busy and cannot pay attention to all details but, then… we should recognize that this is a problem (and I know, Katherine, that you are doing that here) and try to solve it.

      With regard to the second point, apologies for the mistake. What Katherine says is right and is what I meant, but notice what she says: that EMBO is very good at avoiding a major second round of revision, i.e. they get their first review right, and they do this through a dialogue with the authors; i.e. EMBO J recognizes that authors can contribute to the reviewing process beyond submitting the paper and its revised versions. Here is what they guarantee on the issue of a single round of revision:

      • Papers rarely undergo more than one major round of revision
      • Referees are asked to focus on essential revisions and to consider the feasibility of experiments they suggest
      • Revisions are invited only if they are possible in a realistic time frame
      • Editors ensure that referees do not raise new non-essential points upon revision
      • More than 95% of invited revisions are published at The EMBO Journal

      Notice the last point.

      It is great to see Development keen to get feedback on its system and, given how much one hears about colleagues’ experiences, it is surprising how little people want to contribute to debates like this one. So far. It would be helpful to everybody if more people expressed their views and their experiences on these topics.

      This is important.

  4. Debating anonymous peer review may be useless in the short term, but everybody knows it is total crap, and it will disappear sooner or later.
    If you do not agree, consider Einstein:
    1) How many of Einstein’s 300 papers were peer reviewed? Only one, and it was rejected! Einstein’s indignant reply to the editor:
    http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/
    2) You may want to see this Twitter hashtag thread: #overlyhonestreviews
    It will simply remind you how broken the publishing system is. And everybody (at least those using Twitter) knows it.

    1. A great very recent read on the topic: “When peers are not peers and don’t know it: The Dunning-Kruger effect and self-fulfilling prophecy in peer-review”
      http://onlinelibrary.wiley.com/doi/10.1002/bies.201200182/pdf
      From the article: “the Dunning-Kruger effect and the self-fulfilling prophecy of the echo-chamber create a tacit culture of collective self-deception that can dramatically narrow the diversity of scientific publications. One manifestation of such self-inflicted limitation is the monolithic dominance of standard views while entire domains of alternative ideas are suppressed”.

