Nearly every scientist has felt the frustration of pouring effort, money and (sometimes) tears into a project only to get null results. The elusive p < 0.05 often decides whether results get published at all, the oft-mentioned file drawer problem. Others have explained better than I can why this is a problem. Instead, I'd like to contribute some tips I've picked up on how to publish null results.

  • Combine null results with a significant result. If the story of the paper allows it, consider adding the null results to another related analysis and publishing one bigger paper. I've had some success with this myself: I included three non-significant biomarker analyses alongside one that was significant.
  • Consider different journals. Some journals are biased against even sending null results out for review, so consider other options such as open access journals or lower-tier journals. The science still gets out there, and a publication is better than no publication. Some venues specifically focus on null results, such as PLOS ONE's Missing Pieces collection and the Journal of Articles in Support of the Null Hypothesis. While I personally haven't tried any venues focused on null results, I have had some success looking beyond the top-tier journals in my field and even considering journals in other fields.
  • Don't give up easily, but know when to fold. Part of publishing null results is getting used to rejection, particularly to papers being rejected without review. If you know your methods are solid and your science strong, don't give up. Human beings are prone to bias, and this likely includes a bias against publishing null results (particularly when they counter a favored hypothesis or theory). Be mindful, though, of how much time you spend reformatting for different journals relative to your other responsibilities. There is no shame in giving up once you've made a good effort.
  • When designing your study or analysis, try to set it up so that even null results are interesting. This, obviously, won't help after you've analyzed your data, but you can still consider why the null results would be useful.
  • Add power analyses. I've tried this myself and it does seem to help. Showing that your study was powered to detect a typical effect size for your field helps establish that the null result isn't simply down to a sample that was too small or too noisy (see the sketch after this list).
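For readers who want a concrete starting point, here is a minimal sketch of what such a power or sensitivity calculation might look like, using Python's statsmodels package. The tool choice is mine, and the effect size of d = 0.5 and the group size of 64 are hypothetical placeholders; substitute numbers typical for your own field.

```python
# Minimal sketch of a power / sensitivity calculation for a two-sample t-test.
# Cohen's d = 0.5 and n = 64 per group are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power achieved for a two-sample t-test with 64 participants per group,
# alpha = 0.05, and a "typical" medium effect size of d = 0.5.
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=64,
                                      alpha=0.05, ratio=1.0,
                                      alternative='two-sided')
print(f"Power to detect d = 0.5: {achieved_power:.2f}")

# Alternatively, report the smallest effect the study could reliably detect
# (a sensitivity analysis): solve for effect_size given the achieved n.
detectable_d = analysis.solve_power(nobs1=64, alpha=0.05, power=0.80,
                                    ratio=1.0, alternative='two-sided')
print(f"Smallest effect detectable with 80% power: d = {detectable_d:.2f}")
```

Reporting either number alongside a null result makes it much harder for a reviewer to dismiss the finding as "just underpowered."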

These strategies don't address the root cause of the file drawer problem, namely a publication system that rewards flashy, highly citable papers, which null results usually aren't. I don't have any good suggestions for solving the systemic part of the problem beyond what others have already said: more journals that specifically publish null results, existing journals committing to publish more of them, or abandoning p-values altogether. Hopefully these tips will help other new scientists until the field addresses the problem.
