I’m the Problem: My Generation’s Addiction to Bibliometrics
Publication-based measures of scientific impact provide little of value to the research community. Despite assertions that bibliometrics can improve the evaluation of scientists and their institutions, we lack a qualitative or quantitative argument that any substantive problem was solved following their introduction. I am unconvinced that hiring, tenure, or promotion decisions became more accurate after journal-level (e.g., journal impact factor) or individual-level (e.g., h-index) bibliometrics came into broad use.
Even if there were a bullet-proof calculation, publication-based measures of scientific impact draw on an inferior data set (i.e., the bibliography of publications). Instead, grant awards from NSF, NIH, ERC, MRC, and many other peer-review foundations serve as our clearest signal of a laboratory’s influence. The bibliography of a funded research proposal comes closest to listing the most influential papers in a scientific field. If we could collect those curated bibliographies from funded research proposals (e.g., filtered of self-citations), then we would have a better data set from which to calculate influence. An improvement of this sort would be interesting, yet it would nonetheless have shortcomings (e.g., large federal grant proposals are notoriously conservative and may avoid the controversial papers that will ultimately shape our future). Therefore, identifying and filtering an appropriate data set remains a practical limitation of applied bibliometrics.
Even if we had access to a proper data set, all publication-based measures of scientific impact have weaknesses that undermine their usefulness. These shortcomings have been discussed in scholarly evaluations of bibliometric approaches, often by proponents who offer improvements (e.g., Kreiman and Maunsell, 2009; Hutchins et al., 2015). To make matters worse, obtaining an accurate value depends on properly vetting the publication database used by a particular system. For example, the Thomson Reuters Web of Science calculation of the h-index depends on a user’s ethical selection of all articles that are attributable to one of potentially many authors with the same last name, and a determination of that author’s relative contribution to each paper. An unlikely event. Therefore, bibliometrics remain of greatest value to those outside of scientific research (e.g., historians, statisticians, librarians), and their value to research scientists is, at best, unsubstantiated.
Even if publication-based measures of scientific impact were helpful to someone, they selectively penalize junior investigators. The scientists placed at greatest risk by the current misuse of bibliometrics are those who are searching for a faculty position, competing for their first independent research award, or about to enter the tenure and promotion process. Investigators of my generation who uphold this culture were evaluated by a procedure that did not include bibliometrics. My personal experience is that nothing was broken 30 years ago, when I entered the pool of candidates, and our adoption of bibliometrics has served no purpose other than to degrade the professional landscape for our students.
There is a general willingness to repair our culture. The Declaration on Research Assessment (DORA; http://www.ascb.org/dora/) provides the rationale for acting, and >12,000 signatories endorse curtailing the use of bibliometrics in science. A few brave women have taken positive action toward rallying a younger generation (see https://www.youtube.com/watch?v=SCV6Rk8g1Kc) or unilaterally changing the faculty recruitment process (see http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2013_09_03/caredit.a1300186). However, specific proposals to defeat the enemy (e.g., people like me) are missing. I have one: let’s divide up the problem and lobby our local interest groups (journal clubs, departments, societies) to vote on procedural changes.
I work in the hearing sciences, and I am a member of a society (the Association for Research in Otolaryngology, ARO) that best represents my research community. I am acquainted with many of ARO’s members, and I have served on some of its committees and its council. Therefore, I decided to lobby the current ARO leadership to hold a referendum on diminishing the influence of bibliometrics in our shared field. I was able to have the idea placed on the agenda for the next formal meeting of our organization, and I am currently seeking an informal venue to discuss this openly with members at our annual meeting in February 2016.
I don’t know where the path will lead, but I envision a resolution that reads something like this:
We oppose the use of bibliometrics in the fields of basic and clinical Otolaryngology research. Procedurally, we will decline to use bibliometrics for the purpose of making decisions about interviewing, hiring, promoting, tenuring, or funding our scientific colleagues. When confronted with the use of bibliometrics, we encourage ARO members to inform their colleagues that:
“I will not be using bibliometrics to reach a decision about [job interviews, hiring, tenure, promotion, seminar invitations, and so forth]. I encourage you to join me.”
We also encourage ARO members to repair deficiencies in our culture that have accrued with journal brand-naming. Statements such as the following can be used to alert colleagues to a brighter path:
“In the future, I will not be referring to journal names when presenting my work in public. It is irrelevant to the scientific process, and it sends an inappropriate message to our students.”
To my senior colleagues: please join me in reforming us. Our students will benefit and no one will lose.
To my junior colleagues: when confronted with bibliometric comparisons or brand-naming, I suggest that you politely challenge the practitioner. A few short years from now, you will either join me as part of the problem or become part of the solution.
Dan Sanes is a Professor at NYU in the Center for Neural Science, a good egg and a Reviewing Editor at The Journal of Neuroscience. The Edge for Scholars posted this on his behalf.