How obsession can fuel science blogging: The story of Retraction Watch
Posted by Ivan Oransky, on 23 May 2012
It was a summer afternoon in 2010 when Adam Marcus and I had the phone conversation that led to the birth of Retraction Watch.
We had each been covering medicine and science for more than a decade, and we had come to realize that we shared an unusual obsession: Scientific retractions. We had both experienced what happens when, as a reporter, you peel back the curtains on a mysterious retraction notice. Sometimes, there’s a story so big, major newspapers have to pick up on your coverage, as The New York Times and others did when Adam broke the story of Scott Reuben, the anesthesiology researcher who was forced to retract 22 papers – and go to jail – thanks to fraud.
We also both felt strongly that most journals did a pretty terrible job of publicizing their mistakes. Those realities, taken together with the fun I had been having with my blog Embargo Watch, which I’d founded about six months earlier, prompted me to suggest that we start a blog to monitor retractions as a window into the scientific process.
Adam was enthusiastic, so we launched on August 3, 2010. We figured we’d post a few times per week, whenever we saw an interesting retraction notice and could dig into it. There were fewer than 100 retractions per year, after all.
We – and others who thought this would be an interesting but limited project – were wrong.
That’s because in the first weeks after we launched, two stories broke that thrust retractions into the public eye. One was that of Harvard psychologist Marc Hauser. The other was a seemingly unimportant retraction – of a paper claiming that Jesus cured a woman with the flu – that nonetheless landed us on Colin McEnroe’s show on WNPR in Connecticut.
Unbeknownst to us, we had struck a journalistic gold mine. Our traffic grew quickly. Within a few months, we couldn’t even keep up with all of the retractions we were seeing, thanks to searches and eager tipsters. It turned out that we had launched just in time to report on what would be a record year in retractions in 2011, with some 400.
Retractions are clearly on the increase. And as the Wall Street Journal and Nature have reported, using Thomson Reuters data, they’re outpacing the rate of growth in publications. There are 44% more papers published every year than a decade ago, but retractions per year have grown at least tenfold.
Why the rise? It’s always dangerous to generalize from what are still very small numbers among the more than one million papers published every year – especially when nearly a quarter of the retractions in 2011 belonged to one person, the German anesthesiologist Joachim Boldt. But a few trends have manifested themselves. Some of the increase is due to more visibility for papers thanks to online publishing, and to the advent of plagiarism detection software. But journal editors Ferric Fang and Arturo Casadevall have made convincing arguments that the harsh competitive environment in which scientists work today has tempted more researchers to cut corners and commit fraud. As much as that makes some scientists uncomfortable, Fang and Casadevall have received substantial support for their theory.
Retractions may strike some scientists as mere speed bumps in a career, but the effects of flawed work on a body of literature are sometimes more indelible than we’d like to think. A recent paper showed that retracted papers are usually cited only a third as often after they’re withdrawn – but there’s other evidence that scientists still tend to cite retracted work in support of their ideas.
Fortunately, some scientists, including Fang and Casadevall, are growing concerned about these trends. These concerns come at a time when drug companies and others are finding that many results aren’t reproducible.
The fact that many notices are opaque – one journal publishes only “This paper has been withdrawn by the authors” or “This paper has been withdrawn by the publisher” – contributes to the problem, either hiding fraud or giving readers the impression that fraud occurred when in fact there was honest error. All of that makes it extremely difficult to determine the real rate of misconduct.
In short, the much-vaunted self-correcting nature of science has some issues.
There are, however, some solutions, based on what others have proposed, and what we’ve seen in our work on Retraction Watch. Editors can:
— Use systems to detect image manipulation and plagiarism
— Require authors to disclose prior retractions and investigations
— Trust anonymous whistleblowers more
— Demand more of institutions, by forcing them to disclose the results of misconduct investigations
— Move more quickly to correct and retract
— Make retraction notices clearer
— Make such notices freely available
We’re grateful that so many readers are paying attention to what we’re doing – and helping us do it better. Some institutions are concerned enough about what’s going on in scientific publishing that they ask us to speak as part of research ethics curricula. We’re always happy to do that, and we’re thrilled to be able to continue our work at Retraction Watch, with the support of an army of scientists around the world.