
The H-Index Was Never Meant to Define a Researcher's Worth. We Used It to Destroy Careers Anyway.

March 27, 2026 | By Dr. Marcus Eldridge, Senior Research Advisor | 35 Years in Academic Publishing | 12 min read

Jorge Hirsch invented the h-index in 2005 as a rough personal tool to compare physicists within a narrow disciplinary context. He explicitly warned against using it across fields, across career stages, or as a standalone measure of scientific contribution. Two decades later, universities are using it to deny tenure, funding bodies are using it to allocate grants, and hiring committees are using it to eliminate candidates before reading a single word of their actual research. I have watched this number hollow out the academic profession from the inside. In this article I explain how it happened, what it costs, and why I believe the h-index is now one of the most destructive forces operating quietly inside research institutions worldwide.

The Number That Was Never Supposed to Be a Verdict

In 2005, Jorge Hirsch, a physicist at the University of California San Diego, published a short paper proposing a new bibliometric measure. He called it the h-index. A researcher has an h-index of n if n is the largest number such that n of their papers have each been cited at least n times. A researcher with an h-index of 20 has 20 papers with at least 20 citations each. It is mathematically simple and easy to communicate.

Hirsch was solving a specific and narrow problem: he wanted a way to compare the output of physicists at similar career stages within the same subfield. He was explicit in the original paper that the index had significant limitations. He warned that it should not be compared across disciplines. He acknowledged that it disadvantaged early-career researchers and those who work in fields where citation volumes are structurally lower. He described it as one data point among many, not a comprehensive measure of scientific value.

Within five years of that paper's publication, universities around the world were requiring h-index disclosures on job applications. Within ten years, funding agencies were using h-index thresholds as eligibility criteria for grants. Within fifteen years, I was sitting in academic hiring committee meetings and watching candidates eliminated from consideration in under two minutes because someone looked up their Google Scholar profile on a phone and announced a number. The man who invented the tool explicitly said not to use it this way. We used it this way anyway.
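To make the definition concrete, here is a minimal sketch in Python of how an h-index can be computed from a list of per-paper citation counts. The function name and the sample numbers are illustrative assumptions of mine, not anything taken from Hirsch's paper:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # h can grow only while the paper at this rank
        # still has at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Ten papers; only the top eight have as many citations as their rank.
print(h_index([100, 80, 60, 45, 33, 21, 20, 20, 5, 3]))  # -> 8
```

Notice how blunt the computation is: it never asks who cited a paper, why, or what the paper contributed. Everything that follows in this article stems from that bluntness.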

Why Simple Numbers Are Irresistible to Institutions

Before I explain what the h-index does to research careers and to science, I want to explain why institutions adopted it so enthusiastically, because understanding the appeal is the only way to understand why dismantling it is so difficult.

Academic hiring, tenure review, and grant allocation are genuinely hard problems. They require evaluating the quality of work that is often highly technical, sometimes deeply niche, and almost always time-consuming to read properly. A hiring committee reviewing forty applications for one faculty position cannot realistically read forty full publication portfolios. A funding body processing thousands of grant applications cannot commission expert evaluations for every submission. There is real pressure, real time constraint, and real institutional risk in every decision.

Into this environment, the h-index arrived like a gift. A single number that appears to summarize a career of scientific contribution. A number that allows comparison across candidates. A number that removes ambiguity and makes decisions feel defensible. The problem is that what the number actually measures is almost entirely different from what institutions believe they are measuring. And the gap between those two things is where research careers go to die.

What the H-Index Actually Measures

The h-index measures citation accumulation within a specific time frame in a specific disciplinary ecosystem. That is all it measures. And each of those factors introduces distortions that make cross-researcher comparison not just imprecise but often meaningless.

Citation accumulation is time dependent in ways that make career-stage comparisons systematically unfair. A researcher who is 55 years old with 25 years of publication history will almost always have a higher h-index than a researcher who is 35 years old with 8 years of publication history, regardless of the relative quality or importance of their work. The older researcher has had more time for citations to accumulate. This is arithmetic, not achievement. A hiring committee treating both researchers' h-index scores as comparable measures of quality is comparing numbers that were never designed to be comparable.

Disciplinary citation norms vary dramatically, and the h-index absorbs those differences without adjusting for them. In molecular biology, a well-placed paper can accumulate hundreds or thousands of citations within five years because the field is large, the citation culture is dense, and the literature builds on itself rapidly. In mathematics, philosophy, or classical history, citation volumes are structurally lower because the fields are smaller, the publication pace is slower, and the citation conventions are different. A molecular biologist with an h-index of 30 and a philosopher with an h-index of 12 cannot be meaningfully compared using these numbers. The philosopher's h-index may represent a far more significant scientific contribution within their field's context. But the number is smaller, and in a world that has decided numbers are verdicts, the philosopher loses the comparison before anyone reads a word.

Self-citation, collaborative inflation, and gaming are real and documented phenomena that the h-index is structurally incapable of filtering. A researcher who cites their own papers extensively, who is part of a large collaborative network where co-authors routinely cite each other, or who publishes in journals with artificially inflated citation counts can accumulate a high h-index that reflects network position rather than intellectual contribution. The h-index cannot distinguish between these cases. It does not try to.
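To see that the time dependence really is arithmetic, here is a toy comparison that reuses the h_index sketch from earlier. The citation counts are invented purely for illustration:

```python
# Toy data, assuming the h_index() sketch defined above.
# The same early papers observed at two career stages; citations
# only accumulate over time, and no new quality signal is added.
early_career = [12, 9, 7, 4, 2]             # 8 years of accumulation
late_career  = [60, 45, 35, 20, 18, 15,     # the same papers at 25 years,
                11, 10, 9, 6, 4, 3]         # plus routine later output

print(h_index(early_career))  # -> 4
print(h_index(late_career))   # -> 9
```

Nothing about the underlying quality of the work changed between the two lines; only the clock did.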

The Specific Career Damage I Have Witnessed

I want to be concrete here, because I think the abstract critique of bibliometrics is too easy to dismiss. I have watched specific, identifiable patterns of career damage play out across three and a half decades of working inside academic institutions.

I have watched researchers who do deep, slow, methodologically careful work that produces fewer but more significant papers be systematically disadvantaged relative to researchers who publish frequently in lower-impact journals and accumulate citations through volume rather than significance. The h-index rewards frequency and accumulation. It does not reward depth, originality, or the kind of careful foundational work that underpins entire research programs but does not generate rapid citation counts.

I have watched female researchers, who take career interruptions for caregiving at significantly higher rates than male researchers, face h-index comparisons that embed those interruptions as permanent numerical penalties. A researcher who took two years away from publication has two fewer years of citation accumulation for the rest of their career. The h-index records that gap as if it were a measure of scientific contribution. It is not. It is a measure of uninterrupted career continuity, which is not equally available to all researchers.

I have watched researchers who make fundamental theoretical contributions, the kind of work that reshapes how an entire field thinks about a problem, receive h-index scores that do not reflect the significance of that contribution, because theoretical work is cited differently than empirical work. A single paper that changes the conceptual framework of a field may be cited intensively within a narrow community of specialists and less extensively in the broader literature. The h-index treats that paper identically to a minor empirical study with a similar citation count.

And I have watched institutions defend all of these outcomes by saying they are using objective criteria. The h-index is not objective. It is numerical. Those are not the same thing.

The Deeper Problem: What We Lost When We Stopped Reading

Here is what concerns me most about the dominance of the h-index and bibliometric measures generally, and it goes beyond the unfairness to individual researchers. When institutions stopped reading research and started counting citations, they lost the ability to recognize the most important work being done in their fields.

I have been in enough hiring meetings and tenure review discussions to know that the pattern is real and consistent. When a committee evaluates a candidate with a strong h-index, the discussion centers on confirming what the number suggests. When a committee evaluates a candidate with a lower h-index, the discussion centers on finding reasons to justify the number rather than interrogating it. The number sets the frame. Everything else becomes evidence for or against it.

This means that the researchers who most challenge existing frameworks, who work at disciplinary boundaries where citation networks are thinner, who ask questions that the field is not yet equipped to evaluate, are systematically disadvantaged in exactly the institutional moments that determine whether their careers survive. The h-index does not just measure the past. It actively shapes which futures get funded.

What Hirsch Himself Said That Nobody Listened To

I want to return to Jorge Hirsch for a moment, because I think the original paper deserves more attention than it typically receives in these discussions. Hirsch was careful. He proposed the index as a tool for comparing physicists with similar career lengths in similar subfields. He explicitly noted that the index should be used alongside other information about a researcher's career, not as a standalone measure. He acknowledged limitations around field size and citation norms. He was a physicist proposing a rough heuristic for a specific context.

In subsequent years, Hirsch himself expressed concern about how the index was being applied. He watched institutions use a tool he designed for narrow internal comparison as a universal measure of scientific worth, and he was not comfortable with it.

When the inventor of a measurement tool warns publicly that the tool is being misused, and institutions continue using it the same way, we are no longer dealing with a measurement problem. We are dealing with an institutional incentive problem. The h-index is convenient. It is defensible in committee rooms. It provides cover for decisions that would otherwise require genuine engagement with the difficulty of evaluating scientific quality. Institutions adopted it not because it is accurate. They adopted it because it is easy.

What Should Replace It and What Will Not

I am often asked, when I raise these concerns, what I propose as an alternative. It is a fair question, and I want to answer it honestly rather than optimistically. There is no single metric that can do what institutions want the h-index to do, because what institutions want is a simple number that makes difficult evaluative decisions easy. That number does not exist. Scientific quality is multidimensional, context-dependent, and partially subjective. Any honest evaluation system has to acknowledge that.

What I have seen work in institutions serious about research quality is a combination of approaches. Narrative self-assessment, where researchers describe their five most significant contributions and explain the significance in their own words, provides information that no metric can capture. Field-normalized citation analysis, where citation counts are compared against the average for the field and career stage rather than in absolute terms, removes the worst distortions from disciplinary and temporal differences; a sketch of the idea follows below. Genuine external expert review, where specialists in the researcher's subfield evaluate the actual content of the work, remains the most reliable tool we have for assessing scientific contribution.

None of these is as fast or as simple as looking up a number. All of them require institutions to invest time and expertise in evaluation. That investment is exactly what most institutions are trying to avoid when they reach for the h-index. This is why the h-index will not disappear quickly. Not because it is good. Because replacing it with something better requires more work, and the people who would have to do that work currently benefit from the existing system.
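Here is a minimal sketch of what field normalization means in practice. The baseline figures below are invented placeholders; real exercises draw expected citation rates for a field and career stage from bibliometric databases such as Scopus or Web of Science:

```python
# Hypothetical baselines: expected citations per paper for a field
# at a given career age. Invented numbers, purely for illustration.
FIELD_BASELINE = {
    ("molecular_biology", 10): 45.0,
    ("philosophy", 10): 6.0,
}

def field_normalized_impact(citations, field, career_years):
    """Mean citations per paper, divided by the field/career baseline.
    A value of 1.0 means 'typical for this field at this stage'."""
    baseline = FIELD_BASELINE[(field, career_years)]
    return (sum(citations) / len(citations)) / baseline

# Smaller raw counts, larger field-relative contribution.
print(field_normalized_impact([90, 50, 40, 30], "molecular_biology", 10))  # ~1.17
print(field_normalized_impact([15, 10, 8, 7], "philosophy", 10))           # ~1.67
```

On these invented numbers, the philosopher whose raw counts look unimpressive sits further above their field's norm than the biologist. That reversal is exactly the information a raw h-index comparison throws away.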

A Final Thought on What Science Is Actually For

I have spent 35 years working in academic publishing because I believe that the careful, honest accumulation of knowledge is one of the most important things human beings do. I believe that research, done well, makes the world more intelligible and more manageable. I believe the people who dedicate their lives to that work deserve systems that evaluate them with something approaching the care they bring to their scholarship.

The h-index is not that system. It is a rough heuristic applied with an institutional certainty it was never designed to support, producing outcomes that systematically filter out some of the most valuable scientific contributions being made today. The researchers who survive and thrive in this environment are often not the researchers doing the most important work. They are the researchers who best understand how to produce work that the current measurement system rewards. That is not a meritocracy. That is a system selecting for adaptability to its own metrics.

Science deserves better than that. The researchers doing the work deserve better than that. And institutions that genuinely care about scientific quality will eventually have to choose between the convenience of the h-index and the mission they claim to serve. They cannot have both.

Dr. Marcus Eldridge has served as a manuscript reviewer for over 40 indexed journals, worked as associate editor for three Q1 publications, and has supported more than 2,000 researchers through the academic publication process over a 35-year career. He currently serves as Senior Research Advisor at Eldenhall Research.
