How do you measure how good you are as a scientist? How would you compare the impact of two scientists in a field? What if you had to decide which one would get a grant? One method is the h-index, which we will discuss in more detail below. First, we’ll touch on why this is not a simple task.
Measuring scientific performance is both more complicated and more important than it might seem at first. Various methods for measurement and comparison have been proposed, but none of them is perfect.
At first, you might think that the method for measuring scientific performance doesn’t concern you—because all you care about is doing the best research that you can. However, you should care because these metrics are increasingly used by funding bodies and employers to allocate grants and jobs. So, your perceived scientific performance score could seriously affect your career.
Metrics for Measuring Scientific Performance
What are the metrics involved in measuring scientific performance? The methods that might first spring to mind are:
- Recommendations from peers. At first glance, this seems like a good idea in principle. However, it is subject to human nature, so perceived performance will inevitably be affected by personal relationships. Also, if a lesser-known scientist publishes a ground-breaking paper, then they would likely get less recognition than if the same paper was published by a more eminent colleague.
- The number of articles published. A long publication list looks good on your CV, but the number of articles published gives no indication of their impact on the field. Having a few publications that have been well received by colleagues in the field (i.e., cited often) is better than having a long list of publications that are rarely cited, or not cited at all.
- The average number of citations per article published. So, if it’s citations we’re interested in, then surely the average number of citations per article is a better number to look at. Well, not really. The average could be skewed greatly by one highly cited article, so it does not allow a good comparison of overall performance.
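The skew is easy to see with a toy comparison (the citation counts below are invented for illustration):

```python
from statistics import mean

skewed = [100, 0, 0, 0, 0]      # one blockbuster paper, four uncited ones
steady = [20, 20, 20, 20, 20]   # five consistently cited papers

# Both records average 20 citations per paper,
# yet they describe very different bodies of work.
print(mean(skewed), mean(steady))  # → 20 20
```

The mean cannot distinguish one runaway hit from sustained impact, which is exactly the gap the h-index tries to close.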
In 2005, Jorge E. Hirsch of UCSD published a paper in PNAS in which he put forward the h-index as a metric for measuring and comparing the overall scientific productivity of individual scientists. (1)
The h-index was quickly adopted as the metric of choice by many committees and funding bodies.
Conceptually, the h-index is pretty simple. Rank your (or someone else's) papers from most to least cited and plot the number of citations against the paper's rank; the h-index is the point at which the 45-degree line (citations = papers) intersects the curve. That is, h equals the number of papers that have each received at least h citations. For example, has your most-cited publication been cited at least once? If yes, move on to the next question: have your two most-cited publications each been cited at least twice? If yes, then your h-index is at least 2. Keep going until you get to a "no."
So, if you have an h-index of 20, then that means you have 20 papers with at least 20 citations. It also means that you are doing pretty well with your science!
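The stepwise check described above can be sketched in a few lines of Python (a minimal illustration; `h_index` is a name chosen here, not a standard library function):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    h is the largest number such that h papers have at least h citations each.
    """
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # The rank-th best paper must have at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3:
# three papers have at least 3 citations each, but the fourth-best
# paper has fewer than 4.
print(h_index([10, 5, 3, 1, 0]))  # → 3
```

In practice you would not compute this by hand; the databases discussed below do it for you, but the loop makes the definition concrete.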
The advantage of the h-index is that it combines productivity (i.e. number of papers produced) and impact (number of citations) in a single number. So, both productivity and impact are required for a high h-index; neither a few highly cited papers nor a long list of papers with only a handful of (or no!) citations will yield a high h-index.
What is a Good h-Index?
Hirsch reckons that after 20 years of research, an h-index of 20 is good, 40 is outstanding, and 60 is truly exceptional.
In his paper, Hirsch shows that successful scientists do, indeed, have high h-indices: 84% of Nobel prize winners in physics, for example, had an h-index of at least 30.
Limitations of the h-Index
Although having a single number that measures scientific performance is attractive, the h-index is only a rough indicator of scientific performance and should only be considered as such. Hirsch himself writes:
“Obviously a single number can never give more than a rough approximation to an individual’s multifaceted profile, and many other factors should be considered in combination in evaluating an individual. This and the fact that there can always be exceptions to rules should be kept in mind especially in life-changing decisions such as the granting or denying of tenure.”
Limitations of the h-index include the following:
- It does not take into account the number of authors on a paper. A scientist who is the sole author of a paper with 100 citations should be given more credit than one who is on a similarly cited paper with 10 co-authors.
- It penalizes early-career scientists. Outstanding scientists with only a small number of publications cannot have a high h-index, even if all of those publications are ground-breaking and highly cited. For example, if Albert Einstein had died in early 1906, "his h-index would be stuck at 4 or 5, despite his being widely acknowledged as one of the most important physicists, even considering only his publications to that date."
- Review articles have a greater impact on the h-index than original papers since they are generally cited more often.
- Use of the h-index has now broadened beyond science. However, citation practices differ so much between fields and disciplines that direct comparison is meaningless, and a universally 'good' h-index is impossible to define.
Calculating the h-Index
There are several online resources and h-index calculators for obtaining a scientist's h-index. The most established are ISI Web of Knowledge and Scopus, both of which require a subscription (probably via your institution), but there are free options too, one of which is Publish or Perish.
If you check your own (or someone else's) h-index with each of these services, you might get a different value. This is because each uses a different database to count the total publications and citations: ISI and Scopus use their own databases, while Publish or Perish uses Google Scholar. Each database has different coverage, so each will come up with a different h-index value. For example, ISI has good coverage of journal publications but poor coverage of conferences, while Scopus covers conferences better but has poor coverage of journals published before 1992. (2)
The h-Index Summed Up
The h-index provides a useful metric for scientific performance, but only when viewed in the context of other factors. When making decisions that are important to you (funding, a job, finding a PI), be sure to read through publication lists, talk to other scientists, students, and peers, and take career stage into account. Keep in mind that the h-index is only one consideration among many; you should definitely know your h-index, but it doesn't define you (or anyone else) as a scientist.
- Hirsch JE. An index to quantify an individual's scientific research output. PNAS 2005;102(46):16569–72.
- Meho LI, Yang K. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. JASIST 2007;58(13):2105–25.
Originally published April 2, 2009. Reviewed and updated February 2021.