the community site for and by developmental biologists

7 thoughts on “What’s the future of peer review?”

  1. A small comment on a big topic, but when reviewing for Development we are given an initial tick box for accept or reject. What’s the reasoning behind this? Do others find it helpful? It’s relatively rare that a paper is an absolute yes or no on initial submission… and I almost never know which one to tick, regardless of the quality of the paper!

    1. As to why the form was set up that way, I don’t really know, since it was done before my time here. Basically I agree with you that it’s not terribly helpful – either for referees or editors, and it’s one of the things we’re hoping to change when we re-vamp the report forms!

  2. It would be good if we engaged in the important discussion that Katherine Brown has opened up here: what is it that we want from the peer review process? What kind of peer review do we want?
    She proposes an emphasis on two questions:

    1. What’s the significance of the paper and why should it be of interest to the journal’s readership?

    2. Do the data adequately support the conclusions drawn, or are there additional experiments necessary to make the paper solid?

    Here are some first thoughts on this matter.

    There is nothing new or surprising in what she proposes, as this (particularly point 2) is exactly what we all want. It is also what we expect as authors. However, if we have to remind ourselves of this, something is going wrong. What this means is that editors need to be reminded of their job and be selective and opinionated with the reviewers, not just with the authors. Sure, it takes time, but we all deserve the effort.

    On the issue of necessary or unnecessary experiments, it would be good if the editors were more involved in the actual evaluation of the reviews. I can hear Katherine telling me that they do, but I (and certainly others) have ample evidence to the contrary. More often than not, regular authors (let us not forget that not all authors are equal in the eyes of some editors) find themselves with a list of experiments to do and editors who are not interested in engaging in a discussion. When one tries to explain to an editor why an experiment is not necessary or relevant, the editor is likely to reply that the reviewer is a trusted expert in the field… as if the author were not, right?

    But there is a way around this, and it should be appended to Katherine’s two points: a single round of review. This already exists at EMBO and, if you want to be progressive, here you have a huge first step towards changing the system: adopt it. A ‘single round of review’ (no sending the manuscript back to the reviewers), with a prominent role for the editor in the decision, should become a trend in the editorial world. The excuse that editors do not know the fields well enough to make such decisions only raises the question of why the EMBO J editors obviously do.

    On the thorny issue of anonymity in the reviewing process, social and political history should make us suspicious of a system that thrives on and relies upon anonymous criticisms (which often, given the value of certain publications, make or break careers). The fact that people want to remain anonymous has little to do with competition and more with the fact that the system is contrived and intellectually pernicious. Of course, this is not going to change quickly, but it would be good if we admitted what is really behind anonymous reviewing at the moment. If we cannot stand by what we say and put our names to it, perhaps there is something not quite right in it. It is this side of the process that allows the long lists of experiments to go unchecked back and forth between reviewers and authors.

    If we are not going to change the anonymity issue, there is a small change that could transform the peer review process: a word limit on reviews and replies. This would lead everybody to focus on the essentials and would help the editors do their job better. A word limit would separate the wheat from the chaff and would certainly focus the mind on the adjective ‘necessary’ in front of experiments. We have word limits in grant applications and reviews; how come we have not put them in place where we most need them? This, I feel, would be a massive step forward. Would anybody dare? What do you think?

    So, to summarize, I would complement the two issues raised by Katherine:

    1. What’s the significance of the paper and why should it be of interest to the journal’s readership?

    2. Do the data adequately support the conclusions drawn, or are there additional experiments necessary to make the paper solid?

    with two additional ones:

    3. We need a “one reviewing round only process” and, as in the pioneering work of EMBO J, the editor makes the decision without sending it back to reviewers.

    4. Write the reviews within a word limit, and reply to them in a similarly limited space.

    As I said in the thread on Jordan Raff’s editorial: we have to engage in this discussion. The alternative is a system increasingly cumbersome in which science comes second to a complicated and opaque decision making process.

  3. A couple of comments to Alfonso’s points…

    The complaint that editors often don’t get involved in the evaluation of reviews is a reasonable one. It’s true that editors (at many journals) often use ‘form letters’ and don’t necessarily spell out what needs to be done in a revision. In general, I would encourage editors to pick out the most critical points that need addressing, and spell them out in their decision letters, but I do recognise that this doesn’t always happen. To me, though, part of the solution to this lies with the author. If you ask an editor whether certain experiments are necessary/sensible for a revision (ideally way BEFORE you actually submit the revision!), then the editor will look at the issue in detail, and in my experience will usually give a much more useful response than Alfonso suggests. A well-argued email addressing concerns with the referees’ reports can be very helpful, and (while I suspect that Development’s editors may not thank me for saying this given the additional workload it could generate!), I’d encourage authors to engage with the editors when they have questions, rather than just grumbling in their labs.

    Secondly, Alfonso has misunderstood EMBO’s policy. It’s one round of REVISION, not one round of REVIEW. Accepting papers without a second round of review does happen at all journals, EMBO J and Development included, and it would be great if this could happen more. But papers at EMBO J often get sent back to referees for re-review, and this is often necessary so that the editor can be sure that new experiments added are sound. What EMBO J is very good at avoiding is a second major round of revision – any final revisions following a second round of review tend to be just of the presentational type. I applaud this, and while our statistics aren’t quite as good here at Development, it’s something we’re improving.

    1. With regard to the first answer by Katherine, I (and surely I am not alone here) have done what she suggests many times. The reply varies, but usually it is of the kind: the reviewer rules. An extreme form of this happened to my group with one editor of Development a few years ago. We got reviewers asking for too many experiments, and I decided to do exactly what Katherine suggests and appeal to the common sense of the editor, who was (and is) perfectly qualified to evaluate our arguments. However, the editor sent our comments to the reviewer and (after a month) passed us the reply: either we did ALL the experiments or the reviewer would not consider the revision. That was that. It was impossible to discuss with the editor, who justified the decision on a number of points about how he was not there to overrule reviewers, even if they were unreasonable. The paper was published somewhere else, as it happens in a journal with a higher IF, and without those experiments. There are variations of this around and, for every reasonable editor-author interaction, there are many more of the kind I have described.

      The lesson from this: editors are overstretched and afraid of making decisions on their own. Like other authors, I recognize that editors are busy and cannot pay attention to all details, but then… we should recognize that this is a problem (and I know, Katherine, that you are doing that here) and try to solve it.

      With regard to the second point, apologies for the mistake. What Katherine says is right and is what I meant, but notice what she says: that EMBO is very good at avoiding a second major round of revision, i.e. they get the first round of review right, and they do this through a dialogue with the authors. In other words, EMBO J recognizes that authors can contribute to the reviewing process beyond submitting the paper and its revised versions. Here is what they guarantee on the issue of a single round of review:

      • Papers rarely undergo more than one major round of revision
      • Referees are asked to focus on essential revisions and to consider the feasibility of experiments they suggest
      • Revisions are invited only if they are possible in a realistic time frame
      • Editors ensure that referees do not raise new non-essential points upon revision
      • More than 95% of invited revisions are published at The EMBO Journal

      Notice the last point.

      It is great to see Development keen to get feedback on its system and, given how much one hears about these experiences from colleagues, it is surprising how little they want to contribute to debates like this one. So far. It would be helpful to everybody if more people expressed their views and their experiences on these topics.

      This is important.

  4. Debating about anonymous peer review may be useless in the short term, but everybody knows it is total crap, and it will disappear sooner or later.
    If you do not agree with Einstein:
    1) How many of Einstein’s 300 papers were peer reviewed? Only one, and it was rejected! Einstein’s indignant reply to the editor:
    2) You may want to see this twitter hashtag thread: #overlyhonestreviews
    It will simply remind you how much the publishing system is broken. And everybody (at least those using twitter) knows it.

    1. A great very recent read on the topic: “When peers are not peers and don’t know it: The Dunning-Kruger effect and self-fulfilling prophecy in peer-review”. From the article: “the Dunning-Kruger effect and the self-fulfilling prophecy of the echo-chamber create a tacit culture of collective self-deception that can dramatically narrow the diversity of scientific publications. One manifestation of such self-inflicted limitation is the monolithic dominance of standard views while entire domains of alternative ideas are suppressed”.

