The importance of indifference in scientific research
Posted by Journal of Cell Science, on 3 August 2015
This essay by Martin Schwartz was originally published in the Journal of Cell Science.
Current discussions of scientific ethics have focused for the most part on regulations governing research and publication. I suggest that the internal process by which we separate self-interest from the scientific process is a crucial and neglected part of training. Consideration of these issues might help us train better scientists instead of just scientists who adhere to the rules.
This is a follow-up to the essay ‘The importance of stupidity in scientific research’ by Martin A. Schwartz (J. Cell Sci. 121, 1771).
There has been a good deal of public discussion recently about scientific ethics and data reproducibility. As is usually the case in these matters, much attention has been focused on devising regulations or procedures for preventing fraud and ensuring that experimental results can be reproduced. But I believe that something central is missing from this discussion: the internal process by which we as scientists get the right answers and communicate them to our colleagues. One might get the impression from the public pronouncements and the proliferation of mandatory courses in scientific ethics that ethical behavior is primarily a matter of adhering to a set of standards. ‘Thou shalt not Photoshop thy figures.’ I want to suggest that regulations are a response to a breakdown in something deeper and, further, that they are a poor substitute. Let me explain.
Scientific research is a juggernaut that seems to roll powerfully and inevitably forward, and yet the process is oddly fragile. Once we have a reasonably accurate picture of how things work, testing and validation can proceed in a well-organized, efficient manner. But in the early stages, when we really don’t understand the system and the possibilities seem endless, it is easy to go astray. Is that 1.5-fold change meaningful? Maybe if conditions are optimized it will become a 5-fold change. Or maybe it’s just a blip and doesn’t mean a thing. The question often becomes how much effort we should expend on working out the bugs and obtaining a robust result. I have been fooled by a nice result that happened twice but then stopped; only after spending a long time trying to repeat it, and then exploring the system using other approaches, did I realize that it was not correct. I have also gotten results that looked nice once or twice and then stopped working, but after extensive optimization turned out to be correct and important. I have had a weak, hard-to-reproduce result that I abandoned, only to see it published a few years later in a very nice paper from another lab. In short, there are false positives and false negatives. Nor does statistical analysis solve the essential problem. No result is statistically significant at the start; the statistical analysis comes later, after the system has been optimized and the experiment repeated a number of times. In the early stages of a project, when the picture is just emerging, it’s hard to know.
At this early stage in a project, intuition is a major factor. When you really know your system, sometimes a preliminary result just feels right. Or a hypothesis makes so much sense that you are willing to follow a feeble lead. But a major confounding variable in this process is the human tendency to want our hypotheses to be correct. If my hypothesis is correct, it means I’m smart, I’m close to writing the paper, and then I have a good shot at landing the job or getting tenure. Our desire to be correct makes it harder to actually be correct.
Every project contains myriad decisions about how to proceed, and they are often very delicate. When I see a small effect that is exactly what I hoped for, but the second and third experiments show nothing, do I try again with a different calcium concentration or give it up? When the data go in the right direction but have a feature that doesn’t quite fit, do I ignore the small discrepancy or explore further to see where it leads? Every project has discrepancies; you can’t follow every one, or you will never publish. But some of them are crucial: if you ignore them, you will miss something big. The way we make these choices accounts for a significant part of what distinguishes the good scientists from the great ones.
There is a state of mind that facilitates clear thinking; in the title, I jokingly called it indifference. To be more accurate, I might have called it ‘passionate indifference’. Buddhists call it non-attachment. We all have hopes, desires and ambitions. Non-attachment means acknowledging them, accepting them, and then not inserting them into a process that at some level has nothing to do with you. Yes, this is a peculiar, even paradoxical idea. Your own discoveries have nothing to do with you? How can that be?
Science occupies a kind of middle ground between two opposite forms of exploration. The arts explore, in a free-form manner, every aspect of individual, subjective human experience. We might, as an audience, share in it, but we each do so in our own individual, subjective way, and to the extent that it touches on our own lives. At the opposite pole, mathematics elucidates a kind of universal language that is true for all time and in all places, independent of its creators. Science lies in between. Scientists aim to discover universal laws, yet do so through subjective experiences that we call experiments. The paradox originates in the way science stands with one foot in each of two different worlds, the subjective and the objective. (It is also, incidentally, the source of the paradoxes in quantum mechanics and relativity.) We might think of an experiment as a conversation with nature, in which we ask a question, listen for an answer, and then interpret the answer. This process is personal in that the questions come from us. But by listening for an answer that comes from nature, there is also a way in which it connects us to something vastly larger than we are; something that might even be universal.
Non-attachment means appreciating the bigger-than-we-are aspect. The reward of doing so is that we have a better chance of getting it right. In which case, we help build something that will long outlive us and that has the potential to grow in ways that we cannot currently even imagine. Making non-attachment a central part of science education would beat the hell out of ethics classes and regulations about the use of Photoshop in preparing figures.
A very interesting piece indeed. Would a logical extension be that the scientific career structure should actively promote passionate indifference?
Really liked this piece! I’m wondering, however, why problems with scientific ethics and data reproducibility (appear to) have become more pronounced over the last couple of years. It’s likely that the human tendency to want to be right that Martin writes about hasn’t really changed much. What’s more likely to have changed is the culture of our enterprise, which makes it more or less easy to adhere to a passionate indifference.
Just think of the review that goes “If the author’s model is right, then experiment X should give result Y”. Getting the paper accepted (and getting the next grant, or job) now depends on a particular result; this will make it very hard for a scientist (or, in fact, any human being) to retain any passionate indifference. Thus the way forward will be to foster a culture that promotes the long-term values of passionate indifference and depends much less on getting the “right” result than is currently the case.