San Francisco Declaration on Research Assessment
Posted by Katherine Brown, on 17 May 2013
In December last year, a group of editors and publishers, including editors from our sister journal Journal of Cell Science, got together at the annual ASCB meeting to discuss how we could improve the way in which scientific output is evaluated. There is much discontent in the community with the all-pervasive importance of the Impact Factor, and this meeting looked at how we might find a more balanced way to assess individual researchers and their work.
Today, the results of this discussion are being made public, with the release of the “San Francisco Declaration on Research Assessment” website, and accompanying editorials in Science, eLife and other journals. As initial signatories to the Declaration, The Company of Biologists and its journals fully support, and comply with, the proposals for Publishers stated in the Declaration – look out for more information on this in an upcoming editorial in Development.
I’d encourage you to read the Declaration, and – if you agree with its principles – to sign it. In my personal opinion, reflected in what I frequently hear from members of the community, the prevalent use of the Impact Factor as a mechanism to judge a scientist’s worth (and hence future job prospects) isn’t healthy. I also think that developmental biologists may suffer more than some in other fields: the very nature of our work often makes for long-term projects, and so the two-year window of the Impact Factor does not reflect the speed of our field. So I hope that this Declaration will help to change the culture of research assessment, and to ensure that research and the researchers behind it are assessed on their individual merits and not on the number associated with the journal in which they happen to have published.
We’d love to hear your thoughts on this Declaration, so please comment below if you’ve got anything you’d like to discuss with the journal and with the developmental biology community at large.
The declaration is a very important step forward in untangling the situation we have got ourselves into. However, the size and direction of that step will be determined by how, by whom and how firmly the resolutions are implemented. Some habits are difficult to quit, and one can see people not mentioning the impact factor (IF) explicitly but using it implicitly, valuing publications in the same journals as a proxy for ‘quality’ and, basically, perpetuating the situation. As the Leopard says in the famous novel of that name: ‘everything has to change for everything to remain the same’. There have already been some voices of scepticism in this direction. However, let us use that concern to guide us moving forward and take the opportunities that lie within San Francisco’s DORA.
DORA is, above all, a commitment of individuals and institutions to a common-sense principle: that the value of a piece of scientific work cannot be determined by a publisher, that scientists should be the arbiters of their own work, and that judgement should be made in a transparent and considered manner. The history of Science shows many errors of judgement (the list, while interesting, would take too long to discuss here), but we are not talking, for the most part, of world-changing discoveries but of something simpler: the piecemeal and steady increase in knowledge and the use that this has in allowing people to pursue it. And here is where we, as a group, have allowed publishers to use us to create a tangled web of control over our activity that stems from their power and position.
One often hears the question “have you seen the latest paper of so-and-so in Cell/Nature/Science (CNS)?” And when you ask what it is about, after a few mumbles we are told that IT IS IN Cell/Nature/Science; as if this were the seal of authority for the validity of the publication, whatever its content. It is like those experiments in which the colour of the water determines its taste. DORA suggests that this should end and that we should begin to bother to read and consider works on their own merits: water is water, whether pink or blue. Like many of you, I have often heard that when one is faced with a large number of applicants for a position or a fellowship, the ‘publication record’ (and we know what this means) is a quick way (the best?) to sort people out. I am not going to deny that very good science is published in high-IF (HIF) journals, but I also know that this is not the best measure of Science. I have suggested before, and do so here again, that good practice would be to ask applicants to list their three or four best publications and write a paragraph explaining their impact and potential. Not so much the selection as its justification will tell you a great deal about the applicant and put the rest of the application into perspective. It is, I believe, suggestions like this that, in the spirit of DORA, will change the attitude of individuals and allow us to move forward to the values of the past (which rest firmly on the insight and projection of a piece of research) while facing the challenges of the future (increased technical and data-gathering abilities, and the growth of the science-based population).
As for the journals, DORA gives them responsibilities and we shall see if they rise to them. Here I have less faith. Nature magazine has not signed DORA and has explained why (http://bit.ly/17AFlYF). It has also been pointed out that Nature (and others) have been encouraging care and thought with the IF for years (see http://bit.ly/10JFAZY); this is well known and welcome. Personally, I take their refusal to sign as a statement of honesty: they recognize that agreeing to DORA should have some consequences for the scientific publishing world and they are not prepared to meet them (they express it in a different manner, but this is essentially it). After all, Nature is a commercial enterprise and what it has to do is maximize profit and market value, both of which depend on its IF, and DORA, implicitly, bears the potential to undermine these (they have recently claimed that it costs them around £30K to publish a paper: http://bit.ly/11NqvbG). So, why should they adhere to it? They have made it clear that the IF is something created by the community which benefits them/us, so they quite rightly say that it is not of their doing and wonder what there is to gain for them in ignoring it. They point out that many of the statements in DORA have a very broad sweep, and they might be right, but this is good. Let me put forward an example of something implicit in DORA for publication policies.
I have often heard editors say that the reason to keep rejection rates high is to boost the IF of their journals (there is no point in denying this; it is a fact). If the IF is no longer a measure of the merit of a publication, there is no reason to keep rejection rates so high. I do hear people saying that this is not true, that what they want with the rejection rate is quality and that they will keep their rejection rates… If this happens, and in the short term it WILL happen, we shall be subject to IF by stealth. But still, let us continue with the argument. PLoS, as I have discussed before (http://bit.ly/Zi7OKH), is a very good example of the failure of this policy. PLoS Biology was born with many ideals in mind, but one of them was to create an OA journal that would compete with CNS. The OA part has succeeded (and we are all grateful for that); the competition with CNS has not, and this despite very high rejection rates. For example, compare articles published in April 2013 between PLoS Biology and another high-profile PLoS journal: PLoS Biology, 19 publications (IF 11.45); PLoS Genetics, 69 publications (IF 8.69). They can talk to me about quality, but it is not justifiable that two sibling e-journals have such disparate publication rates. The data also show the poor relationship between IF and rejection rate. Why the difference? Because they (and many others) believe that increasing the rejection rate will increase the IF; it does not work like that. But one of the consequences of DORA should be that, since journals no longer have to pursue a HIF value, one would expect a rise in the publication rate of some of these journals. A good reason for them would be that they might want to capture the papers that will make a real impact (which might not be the ones that are presumably doing so at the moment…). Let me repeat it: not doing this will be IF by stealth.
NB: I am not talking about ‘metrics’ here, I am talking about IF and how it is understood, evaluated and perceived.
Along these lines, let me spell out what I see as a most important consequence of this declaration and how it can be harnessed for an all-round win. If we waive the value of the IF in the assessment of careers and science, we get back our rightful position of running the publication of our work and have the journals compete for our toilings, rather than the way it is now (the other way around, actually): we compete among each other for space in the journals. Think.
What is the incentive of sending your next piece of work to a journal with a high IF? Let me be more clear in the context of the interests of the Node. When you have a good piece of work, why would you send it to one of the Cell Press journals, Nature, Science or even PLoS Biology, where you know ahead of time that, if sent to review, it will linger through several rounds of review for anything between 8 months and a year, without a guarantee that it will in the end be published and, in the case of Cell Press, with the threat that it will be rejected if a related piece of work is published in the meantime? In the process, in these journals the professional editors will meddle with YOUR science and will decide WHAT experiments YOU NEED TO DO; this will make you spend time and money improving a paper by 10-20%, all within the cloak of anonymity of a reviewing process that everybody admits is seriously flawed. Why don’t you send your work to Development, Dev Biology or, in terms of generality, EMBO J or maybe even eLife? In these journals, though there are similarities in the general process, you will get a fair hearing, your work will be dealt with by people in your field, and you will have it out soon with a minimum of hassle. If the IF is no longer an issue, you should be confident of your Science and make sure that it is out (after a fair, transparent and helpful peer review) as fast and efficiently as possible, so that it can be judged by your peers. So, DORA offers an opportunity for you, and also for journals like Development to catch up and regain some territory over others which have been sailing on the wave of the HIF. By increasing their acceptance rates (and this does not mean decreasing the quality of the publications), Development, Dev Biol and EMBO J can capture a lot of very good pieces of work. This will entail changes in their editorial policies, but perhaps DORA means change for everybody and we should face this together.
The real question here is how confident journals are in their prospects. I have always wondered why Development is happy being a journal that trails in the wake of, say, Dev Cell, or, as they say, a ‘community journal’. If they want to be the latter, they should make sure that they rally the community with them. I think DORA is presenting them, and others, with a tremendous possibility to lead. I suspect that Nature magazine knows this, and we should respect that they do not want to play ball. In making this statement, they are making their colours clear.
This is the time for us to choose between, for example in developmental biology, Development/Dev Biol and Dev Cell. The CoB should be aware of this and play their cards with imagination and courage.
For us, the question is: shall we seize this opportunity? As Leslie Vosshall said (and I cannot repeat this enough because it is SO right and to the heart of the problem): “scientific publishing is an enterprise handled by scientists for scientists, which can be fixed by scientists” (http://bit.ly/Q52zvY). DORA is an opportunity and the question is: are we bold enough?
Thanks for your comments Alfonso. I agree with a lot of what you say here, although I take a slightly different slant on some of your points.
You say that ‘editors keep the rejection rate high to boost their impact factor’. While this may be true in some cases, I like to believe that – for most journals at least – rejection rates are high to ensure that only the highest quality papers are published in their pages. Of course, ‘quality’ is a subjective thing – encompassing not only the technical quality of a paper, but also the editors’ perception of its ‘interest value’ – which might be conflated with ‘impact’ but should not be conflated with ‘impact factor’. As I’ve argued in comments to previous posts, I think there is an important role for editors in defining the scope and ‘level’ of their journal – there’s so much literature out there that readers benefit from filtering mechanisms, and the journal in which a paper is published is one such filter. At none of the journals I’ve seen first hand (the EMBO and the COB publications – admittedly journals that you seem to consider ‘the good guys’, for which I am grateful!) is there a firm target for acceptance or rejection rates – editors reject those papers they don’t think make the grade, and accept those that do. So I don’t agree with your statement that ‘journals can increase their acceptance rate without decreasing the quality of the publications’ – we already accept all those papers that, at least in our assessment, pass the ‘quality’ bar.
That said, I do hope that the SF Declaration marks a sea change in how the IF is viewed. Personally, I think a lot of it has to start with the funding agencies – as long as your ability to get a job and a grant depends on the name (and IF) of the journal in which you publish, you the authors have to care about it, and therefore we the publishers have to care. Several funding agencies are now switching over to the model you propose – whereby applications include the applicants’ selection of their most important papers – and this is a really positive move. If this is truly used as a way to assess candidates, then IF will become less important, and journals can focus fully on what should be our most important function – how best to serve our community in the way we go about our editorial and publishing processes – instead of having to worry about whether any changes we make will negatively affect our IF, or trying to come up with ways of increasing it.
I don’t think that anyone at COB or within the Development team is really ‘happy being a journal that trails in the wake of Dev Cell’ (and I certainly wouldn’t describe us that way!), but the current importance of the IF means that Dev Cell does have an advantage over us in attracting the best papers. I believe that we have other advantages, and we’re working very hard on many levels to provide authors and readers with the best experience we can, and hence the best reasons to submit. Through these, we hope to ensure that Development remains a top journal (hopefully THE top journal) for the developmental biology community.
Katherine, this is not the first time that you and I have ended up commenting on each other’s thoughts on this forum. It would be good to engage other people in the discussion. I know people have things to say – I hear them all the time – but it seems to me that when it comes to actually putting them down… I shall not elaborate on the points I made since, as you say, we agree on the essentials. Most importantly, this is more about us, the scientists, than about the journals. Although I feel that you have an opportunity to capitalize on the proposed change, and I think that you should do it.
On the rejection rate, I stand by what I said.
We need a new system and DORA is an opportunity. However, if we substitute Numbers for Names, nothing will have changed. I am afraid that this will happen in the short term, because the transition will be slow, but I hope I am wrong and that there is a catalyst. I still remember the origin of PLoS, and all the people who were going to stop reviewing for CNS… we got PLoS, which has been good and important, but people continued to work for those journals… hopefully this will be different.
I agree with both of you, and side with Katherine on the rejection rate side of things. We want to publish high-quality papers of broad interest to the developmental biology community, and therefore by default we have to have a high(er) rejection rate. And indeed it is more about the scientists, and their institutions. But the journals have a role to play in facilitating this change. Let’s see how it goes…
Let us not be naïve on the issue of rejection rates and scientific quality.
There is a 20% rejection rate and there is an 80% rejection rate. The first is clearly likely to lower the quality of the science that a journal publishes; the second one is more interesting. Imagine an aspiring ‘glamour journal’ (we are not allowed to talk about IF, so let us not) with an 80% rejection rate, publishing very good science (it is possible at this percentage) which, for reasons that we need not get into (to become more elitist, to increase its glamour factor, etc.), might want to increase its rejections to 90% (this happens, believe me). Given current trends, it is likely that the 20% of papers which could be published are very good, and I can assure you that which papers fall into the 10% that would now be rejected (under the change of policy) has nothing to do with quality but rather with perceived impact or, simply, arbitrary decisions. So, in this case, keeping the rate at 80% would not lower the quality of what is published and, frankly, would not change the glamour factor that much.
In this context, the comparison between PLoS Genetics and PLoS Biology to which I have often alluded (see above comment) is a good example of this. Are the papers in PLoS Genetics of lesser scientific quality than those in PLoS Biology? I don’t think so. Do you really think that papers in Dev Cell are better than those in Development? I don’t. In fact, what I often find is that “a Dev Cell paper” is a hyped version of a Development paper, and many papers which end up in Development come from journals like Dev Cell where, after (sometimes a few rounds of) review, they are rejected because… reviewer 3, or simply the editor, was not ‘convinced’. And the Development papers tend to be good. All this refers to rejections after review, not editorial rejections, for which I suggest people read a recent article (http://www.sciencedirect.com/science/article/pii/S0169534713001262) and remember that what it says is not restricted to Ecology and Evolution. Editorial rejections (which cannot be made on the basis of the science, because they do not involve peer review, though in some journals they do involve an interaction with the editorial board) are, in general, the most common ones.
So, in the age of e-publishing, quality should indeed be the criterion, but it seems to me that it is not possible to justify publication rates below 10%, which is what glamour journals often do. Personally, I am fine with this, as long as it is made clear that the decision is a matter of opinion and is not justified on scientific grounds. I have never understood why, on the occasions when there are two glowing reviews and one negative one, the negative wins.
DORA opens the door to many new possibilities but for them to be realized we are going to have to leave behind many hang-ups.
Interesting comments.
I absolutely agree with the sentiments of DORA.
However, perhaps this whole debate will become redundant as personal citation metrics become increasingly available? There are already some out there, but I’m surprised no one has considered the following.
For each paper published, divide the number of times it is cited by the impact factor of the journal in which it was published. You can then quote your highest-scoring paper(s), and an average score across all your work.
The benefit is that this reflects the impact of your work (rather than the journal you publish in). It deals with some of the intrinsic advantages (i.e. unrelated to the quality of the work) of publishing in the fashion journals. This, to an extent, should also remove some of the disadvantages to those working in ‘unfashionable’ fields.
The disadvantages are that it takes time to accrue citations, so this will probably not be useful for judging a candidate applying for their first group-leader job, for instance. It may also be seen as replacing one set of useless numbers with a whole new set. I certainly hope the quality of someone’s contribution to science would be judged on more than any simple scoring system. However, it would seem more logical, if we want a quick impression of someone’s impact in the field, that we do that by actually judging the impact of their work, rather than the journals they publish in.
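The arithmetic of the proposed metric is simple enough to sketch in a few lines of code. This is a minimal, hypothetical illustration: the citation counts and impact factors below are made up, and the function name is my own invention, not an established metric.

```python
# Sketch of the proposed metric: score each paper by dividing its
# citation count by the impact factor of the publishing journal.
# All numbers here are invented for illustration only.

def citation_to_if_ratio(citations, journal_if):
    """Score one paper: citations earned divided by the journal's IF."""
    return citations / journal_if

# Hypothetical publication record: (citations, journal impact factor)
papers = [
    (120, 31.0),  # well-cited paper in a high-IF journal
    (45, 6.2),    # solidly cited paper in a mid-IF journal
    (8, 4.0),     # recent paper, few citations yet
]

scores = [citation_to_if_ratio(c, jif) for c, jif in papers]
best = max(scores)                    # the highest-scoring paper
average = sum(scores) / len(scores)   # average across all work

print(f"best paper score: {best:.2f}")
print(f"average score: {average:.2f}")
```

Note how the mid-IF paper outscores the high-IF one here: 45 citations against an IF of 6.2 beats 120 citations against an IF of 31, which is exactly the adjustment for journal "fashion" the comment describes.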
Maybe this idea has already been considered? Maybe someone’s already doing it? Thoughts?
(I’m sure someone will be able to give an example of a very important paper that wasn’t cited very much… there are exceptions to every rule!)