Here’s an interesting paper published in the open-access journal PLoS ONE, discussing the growing pressures on scientific objectivity:
The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments.
The author (Daniele Fanelli from the University of Edinburgh, Scotland) introduces my new word of the week: HARKing (Hypothesizing After the Results are Known). Anyhow, Fanelli analysed 1,316 scientific papers from the United States to determine the percentage of ‘positive’ results (i.e. supporting the tested hypothesis) versus ‘negative’ (null) results. Interestingly, the percentage of positive results varied considerably between states (from 25% to 100%):
Seemingly, >90% is a pretty impressive ‘positive’ rate (NC sits somewhere towards the upper end – good effort JB!). Interestingly though, papers were:
…more likely to support a tested hypothesis if their corresponding authors were working in states that produced more academic papers per capita.
So where did all the non-results go? This doesn’t necessarily imply that the positive results were fabricated, but the lack of reported negative results (stuff that simply didn’t work, or wasn’t deemed worth writing about) is surprising:
What happened to the missing negative results? As explained in the Introduction, presumably they either went completely unpublished or were somehow turned into positive through selective reporting, post-hoc re-interpretation, and alteration of methods, analyses and data.
So what does this all mean? Fanelli concludes that:
…these results support the hypothesis that competitive academic environments increase not only scientists’ productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.