Excellence in science can’t be reduced to just numbers - Hindustan Times

Excellence in science can’t be reduced to just numbers

Nov 06, 2022 02:40 PM IST

In 2014, India’s department of science and technology supported this declaration: Quantitative assessment of research outputs cannot solely measure scientific creativity.

Last month, Stanford University released a list of the world’s top 2% of most-cited scientists. While media coverage in India focused mainly on the number of Indian scientists (52) who made it to the list, the idea of ranking research output remains a controversial one in the scientific community as it often degenerates into a debate over the quality of research and creativity.

The Nobel Prize in the sciences and the Fields Medal in mathematics are widely perceived as representing the highest standards of research excellence. The Stanford-led group examined the citations of 47 Nobel Laureates (2011-15). Of them, 15 would get the top rank if the criterion is the total number of citations. (AP)

To understand these rankings, one must understand what they measure and if they correlate with what is perceived as scientific excellence. The rank assigned to scientists is based on how many other publications cite their work. More citations imply that a particular work was noticed by peers and is a measure of its relevance and possible importance.

American information scientist Eugene Garfield pioneered the system using citations as the basis for ranking scientists when many new scholarly journals began publication after World War II. When journals turned digital in the 1990s, citation data became easily accessible and assigning citations-based ranks gained popularity. One such metric, suggested in 2005 by physicist Jorge Hirsch, is the h-index (Hirsch index), which indicates a scientist’s productivity and citation impact. The Stanford rankings are based on a composite index that considers six different indicators, including the h-index.
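The h-index mentioned above has a simple definition: a scientist has index h if h of their papers have each been cited at least h times. As a hedged illustration (this sketch follows Hirsch's standard definition only; the six-indicator Stanford composite is not reproduced here, and the sample citation counts are invented for the example):

```python
def h_index(citations):
    """Return the Hirsch index for a list of per-paper citation counts.

    A scientist has index h if h of their papers each have
    at least h citations. Example citation counts below are
    hypothetical, for illustration only.
    """
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:   # this paper still has >= rank citations
            h = rank
        else:
            break           # further papers are cited fewer times than their rank
    return h

# A researcher with papers cited 25, 8, 5, 3 and 3 times has h = 3:
# three papers with at least three citations each.
print(h_index([25, 8, 5, 3, 3]))
```

Note how the index rewards sustained impact over a single highly cited paper: one paper with 1,000 citations alone still yields h = 1, which is part of why no single metric captures what the composite rankings attempt to measure.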

But are citations a good indicator of the quality of research? The evidence available is not compelling. The Nobel Prize in the sciences and the Fields Medal in mathematics are widely perceived as representing the highest standards of research excellence. The Stanford-led group examined the citations of 47 Nobel Laureates (2011-2015). Of them, only 15 would get the top rank if the criterion is the total number of citations; 18, if the h-index is used; and 37, if the composite index is used.

Since citation volumes and practices vary widely across fields, any mechanism that relies on citations alone can be misleading. For instance, none of the 2022 Nobel laureates or Fields Medal winners secured a rank within the top 1,000. One Fields Medallist entered the list at 1,023, while many other winners did not figure on the list at all. Moreover, the top 500 ranks were primarily occupied by biomedical scientists, an understandable skew given the complete reliance on citation metrics. If this well-meaning attempt at quantifying research is fraught with such pitfalls, then we must be careful not to interpret the ranks as necessarily implying scientific excellence.

This exercise has also unwittingly brought to light other sordid issues, such as scientists artificially inflating citations or riding on excessive self-citations to game the system. The Stanford data tried to mitigate this problem by giving another ranking chart, disregarding self-citations. These fixes help somewhat in acting against unethical practices but cannot curb the inappropriate use of citation-based metrics.

Since 2005, many universities and funding agencies worldwide have been accused of using the h-index and journal impact factors to evaluate applicants for academic positions or research grants, instead of relying on critical evaluation by experts. In 2012, a group of journal editors and publishers initiated the San Francisco Declaration on Research Assessment, calling upon the community "to eliminate the use of journal-based metrics... in funding, appointment, and promotion considerations" and emphasising the "need to assess research on its own merits rather than on the basis of the journal in which the research is published". In 2014, India's department of science and technology supported this declaration: Quantitative assessment of research outputs cannot solely measure scientific creativity. It is, at best, one of the many facets that contribute to a researcher's profile. If statistical indicators were the only criteria for excellence, then Sachin Tendulkar, with a Test batting average of 53.78 and ranked 23rd in the list of highest career batting averages in Test matches, would not be celebrated as one of the greatest cricketers ever.

In science, as in sports, excellence cannot be reduced to just numbers. The Stanford group’s ranking list will be meaningful if it is read, keeping in mind the warning given by its authors: “All citation metrics have limitations and their use should be tempered and judicious”.

MS Santhanam is professor of physics, Indian Institute of Science Education and Research, Pune

The views expressed are personal
