Always percolating below the surface of biomedical science is the ever-important quest to distinguish ourselves and our research through the art of publishing. Inevitably our efforts lead us to the same gateway: passing muster with the dreaded Reviewer, who lurks in the shadow world of anonymity. Fans of Lord of the Rings might compare navigating peer review to Frodo and company attempting the Redhorn Pass that ostensibly provided courageous Middle-earthers a way over the treacherous peaks of Caradhras – with the caveat that peer review is much harder. After all, traversing the sheer slopes of a frozen mountain is good honest danger, the kind that advertises itself well ahead of time and is therefore unambiguous. Peer review is anything but. The process is fraught with ambiguity. You never really know who will review your paper. Even if you did, you cannot gauge their frame of mind when they accept the assignment or finally sit down to review what took you weeks and months to prepare.

So, we try to stack the deck in our favor as best we can. We demonstrate fervor for things like rigor, reproducibility, readability, and grasping the literature of the day. We practice staying on message (difficult for those of us who are perpetually over-caffeinated) and writing clearly and succinctly (difficult for everybody, with or without caffeine). Well, so far so good. A seasoned reviewer is used to forgiving honest mistakes – things like forgetting a citation, misusing commas, or sticking a p-value in the middle of a sentence. No harm, no foul, and we move on together.

There are other tendencies or practices, however, that are a little less forgivable and apt to cause trouble for intrepid investigators trying to make it to print. When I edit a paper for a colleague or mentee, what gets caught in my gizzard every time are the unnecessary, unforced errors – the kind that people walk into willingly. These include what I like to call "chest thumping": a form of self-assertion that makes a paper less about the science and more about the scientists. Sometimes, in the attempt to distinguish our work or emphasize our innovative ideas, we step into certain human traps, the kind that ornery reviewers are keen to monitor. Here are some examples:

“In our paper, we show for the first time that A leads to B”

"No one has investigated whether A leads to B, and we do so here for the first time"

“Our approach is the first to use A to show B”

… and so on. Here’s the problem. Claiming to be either first or better in science is always tainted with at least a hint of subjectivity, especially when you’re trying to climb the proverbial ladder. Also, to state the ridiculously obvious, you don’t know what you don’t know. Don’t kid yourself – everything is open to interpretation. Chances are whatever you did, someone did a version of it before you. Murphy’s law mandates that this someone (or perhaps their mentor or protégé) will be your reviewer. At best, you might have gotten the facts wrong. At worst, you got the facts wrong and insulted the reviewer. The point is, neither is necessary – this is what we mean by “unforced errors”.  If indeed the approach and findings are novel and important, the experts whose respect and attention you want will recognize your achievement. No need to tout yourself and risk alienating an otherwise benign reviewer. Finally, if in fact you have done something better or improved upon a method, people will adopt your approach and cite your work, including those whose methods you improved.  This is a far bigger compliment than you could ever give yourself.  By the way, working in the perfunctory and now cliché “to our knowledge” disclaimer doesn’t really help. Everyone assumes you have the latest, greatest knowledge before you write the paper. No need to draw attention to the possibility that you might not.

Why am I making such a fuss? Regardless of who did what, rigorous, novel, and innovative results speak for themselves and stand the test of time – and will define the cutting edge until a more sensitive approach comes along. That’s part of the game we play. By staying in the background and letting your results stand on their own, you reduce the chances of inciting the reviewer’s personal ire and giving them the opportunity to bring you down a couple of notches. Consistent with the theme of these musings, consider avoiding statements like:

"Dr. X showed A while Dr. Y showed B."

“Dr. X and Dr. Y showed A, Dr. Z showed B, and Drs. X, Y, and Z showed C.”

You see the trend? The writing becomes burdensome very quickly when you try to layer in who did what. What stands out is the litany of names and dates, not the important facts. Though this kind of citation is well-intentioned, once again you are bound to leave someone out, thus bruising egos and hurting your chances for a quick acceptance. I urge my trainees and colleagues simply to state the facts with appropriate citations:

"A leads to B (X et al., 2010; Y et al., 2015), while B leads to C (Z et al., 2020)."

When you arrive at the discussion of your paper, your contribution will flow that much more easily from prior work:

“A leads to B (X et al., 2010), while our results indicate C and D.”

In this context, there is absolutely nothing wrong with using “our results” because you are simply drawing attention to the current presentation – not tooting your own horn.

Look, writing papers is hard work, just like the science that goes into them. As new information piles up daily around us, ironically what we can be certain of becomes harder to discern. By keeping your referencing impersonal, you minimize the risk of losing your message in needless offense while emphasizing what is most important – namely, that A leads to B.
