The use and misuse of bibliometric indices in evaluating scholarly performance
DOI: https://doi.org/10.7557/5.3670

Abstract
See a video of the presentation.
Quantifying the relative performance of individual scholars, groups of scholars, departments, institutions, provinces/states/regions and countries has become an integral part of decision-making over research policy, funding allocations, awarding of grants, faculty hiring, and claims for promotion and tenure. Bibliometric indices (based mainly upon citation counts), such as the h-index and the journal impact factor (JIF), are heavily relied upon in such assessments. There is a growing consensus, and a deep concern, that these indices, more and more often used as a replacement for the informed judgment of peers, are misunderstood and are therefore often misinterpreted and misused. Although much has been written about the JIF, some combination of its biases and limitations will be true of any citation-based metric. While it is not my contention that bibliometric indices have no value, they should not be applied as performance metrics without a thorough and insightful understanding of their (few?) strengths and (many?) weaknesses. I will present a range of analyses in support of this conclusion. Alternative approaches, tools and metrics that will hopefully lead to a more balanced role for these instruments will also be presented.
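For readers unfamiliar with the two indices named above, the short Python sketch below spells out their standard definitions: the h-index of a publication record and the two-year JIF of a journal. The function names and sample figures are illustrative assumptions only and are not material from the presentation.

def h_index(citations):
    # h-index: the largest h such that at least h papers have h or more citations each
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def two_year_jif(citations_in_year, citable_items_prev_two_years):
    # Two-year JIF for year Y: citations received in Y to items published in Y-1 and Y-2,
    # divided by the number of citable items published in Y-1 and Y-2
    return citations_in_year / citable_items_prev_two_years

# Illustrative figures only
print(h_index([10, 8, 5, 4, 3]))   # 4: four papers each have at least 4 citations
print(two_year_jif(200, 80))       # 2.5

Note how both calculations reduce a skewed citation distribution to a single number; that compression is one source of the biases and limitations discussed above.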