Eldenhall Research


The Peer Review System Is Quietly Failing Science and Most Academics Are Too Afraid to Say So

March 27, 2026 | By Dr. Marcus Eldridge, Senior Research Advisor | 35 Years in Academic Publishing | 11 min read

After 35 years inside the academic publishing ecosystem as a researcher, reviewer, editor, and advisor to journals across three continents, I can say what most of my peers will not: peer review as we practice it today is structurally broken. It does not consistently identify the best science. It consistently identifies the most familiar science. In this article, I break down exactly how this happened, what it costs the global research community, and what I believe must change, even if the publishing establishment refuses to hear it.

The Conversation Happening in Private That Nobody Publishes

Every year, I attend at least four major academic publishing conferences. And every year, the same thing happens. In the conference halls, during keynotes and panel sessions, everyone speaks carefully. The language is diplomatic. Peer review is described as "evolving," "under pressure," or "in need of refinement." People nod. Papers are presented. Lunches are eaten.

Then, in the evening, over dinner, the truth comes out. Senior editors from Q1 journals tell me they regularly receive reviews from so-called experts who clearly did not read past page four. Researchers tell me they have had groundbreaking work rejected because a reviewer, anonymous and unaccountable, had a personal preference for a competing theoretical framework. Journal editors admit they struggle to find reviewers at all, so they accept whoever says yes.

I have been in this system for 35 years. I have reviewed manuscripts across disciplines including business, medicine, engineering, environmental science, and the social sciences. I have sat on editorial boards. I have trained junior reviewers. I have watched excellent science disappear and mediocre science get celebrated. And I am no longer willing to call this "peer review under pressure." This is peer review in crisis. And it is time the academic community said so publicly.

How We Got Here: A Brief and Honest History

Peer review was never designed to be what it has become. The formalized peer review process, as most journals practice it today, was largely institutionalized in the mid-twentieth century. Its original purpose was relatively modest: to give editors a second pair of expert eyes before committing to publication. It was advisory. It was a check, not a gatekeeping mechanism for all of human knowledge.

Somewhere between then and now, peer review became the single most powerful filter in the global production of scientific knowledge. A paper's fate, and increasingly a researcher's career, funding, tenure, and reputation, rests on the judgments of typically two to three anonymous individuals who are not paid, not monitored, and not accountable for the quality of their assessments.

Let that sit for a moment. The most consequential quality control mechanism in science operates almost entirely on unpaid voluntary labor, with no performance tracking, no standardization, and no systematic consequences for poor reviewing.

I am not saying peer reviewers are malicious. Most are dedicated professionals trying to do a good job under real time constraints. I am saying the system was never designed to carry the weight we have placed on it, and it is visibly cracking under that weight.

What the Data Actually Shows and Why It Should Alarm You

This is not a matter of opinion alone. The evidence has been accumulating for years.

In 2013, Science published the results of a landmark sting operation: a fabricated paper, deliberately seeded with obvious methodological errors, was submitted to 304 open-access journals that claimed to conduct peer review. Over 60 percent of the journals that rendered a decision accepted it. These were not all obscure predatory publishers; several were indexed journals with identifiable editorial boards.

Meta-analyses of inter-reviewer agreement have found agreement rates hovering around 17 to 20 percent for many disciplines. To put that plainly: if you submit your paper to a journal and two reviewers assess it independently, there is roughly a one in five chance they will agree on whether it deserves publication. The rest is noise, personality, and chance.

The reproducibility crisis, which rocked psychology beginning in 2011 and has since spread to medicine, economics, and neuroscience, is directly linked to this failure. Peer review did not catch the problems in those papers. In many cases, peer review actively endorsed them.

And yet, when I raise these statistics at conferences, the response is almost always the same: "Yes, but what is the alternative?" That question, I have come to believe, is the mechanism by which broken systems preserve themselves.

The Three Specific Failures Nobody Wants to Name

Let me be precise, because vague criticism helps no one.

Failure One: Anonymity Without Accountability

The double-blind review model was designed to reduce bias. In theory, it is a good idea. In practice, it creates a system where a reviewer can write a dismissive, intellectually dishonest, or factually wrong assessment with zero professional consequence. I have seen reviewer comments that were demonstrably incorrect: comments that misunderstood basic statistical concepts, misrepresented the author's argument, or simply expressed ideological disagreement dressed as methodological critique.

Authors have no recourse. They cannot challenge the reviewer. They cannot ask for evidence. They must revise and resubmit or walk away. This is not peer review. This is anonymous authority without accountability. And we do not accept that standard in any other professional domain.

Failure Two: The Familiarity Bias

Here is something I have observed consistently across three decades: peer reviewers are not evaluating whether a piece of science is true or important. They are evaluating whether it matches their mental model of what good science looks like. This means that paradigm-challenging research, the work that most deserves publication, faces the steepest climb. The research that fits neatly into existing frameworks, uses familiar methods, and cites the right authors gets smoother passage.

Thomas Kuhn described this in 1962 in The Structure of Scientific Revolutions. He explained how scientific communities resist paradigm shifts not because the new evidence is wrong, but because it is unfamiliar. Peer review, as currently structured, is one of the primary mechanisms through which that resistance operates. The science that most needs to be heard is the science the current system is most likely to suppress.
Failure Three: The Reviewer Shortage and Its Consequences

The volume of manuscript submissions to indexed journals has grown dramatically over the past two decades. The pool of qualified, willing reviewers has not grown at the same rate. Many senior academics, the people most qualified to review, are drowning in editorial requests and quietly declining them.

What fills the gap? Junior researchers who lack the perspective to evaluate certain work. Reviewers who accepted an invitation because they felt obligated, not because they had time. In some cases, editors who cannot find qualified reviewers end up stretching the definition of "expert."

I have spoken privately with editors at Q1 journals who told me they sometimes accept reviews they know are inadequate because the alternative is a six-month delay. They are not bad editors. They are managing an impossible logistics problem with no systemic support. The result is a peer review system that is simultaneously overloaded and understaffed, and the manuscripts caught in the middle are the casualties.

What This Means for Researchers in Practice

If you are an early-career researcher, a researcher from a non-English-speaking country, or someone working on genuinely novel interdisciplinary questions, this system does not serve you equally. You are being judged by an invisible panel whose qualifications you cannot verify, whose reasoning you cannot fully challenge, and whose biases, conscious or otherwise, may have nothing to do with the quality of your work.

I want to be very direct about this: a rejection letter from a peer-reviewed journal is not a verdict on your science. It is one signal, from a flawed system, that your work did not pass through a particular editorial filter on a particular day. That does not mean all rejections are wrong. Many rejections are entirely appropriate. But rejection alone tells you almost nothing definitive, and you should not treat it as such. What it does give you is information: about journal fit, about framing, about the gap between your argument and your audience. Use that information. Use it precisely.

What I Believe Must Change

I am not calling for the abolition of peer review. I am calling for an honest confrontation with what peer review actually is versus what we pretend it is. Here is what I believe the publishing community must do, and what some are beginning to do, though not fast enough.

Open peer review must become the default, not the exception. When reviewer identities are known, or at least made available post-publication, the quality of reviews improves measurably. Accountability changes behavior. Journals like eLife and several BMJ-group publications have demonstrated that this works.

Reviewers must be formally recognized and evaluated. Academic institutions should credit peer reviewing in promotion and tenure decisions. Journals should track reviewer quality over time and provide structured feedback to reviewers, just as they require feedback on manuscripts.

The two-reviewer minimum is not sufficient for complex interdisciplinary work. Some manuscripts require three or four disciplinary perspectives to be evaluated fairly. Journal economics make this difficult, but scientific integrity must take priority.

Authors must have the right to respond to factual errors in reviews. Not to relitigate subjective judgments, but to correct verifiable misstatements before a final decision is made. This is basic due process that the current system denies.

Registered reports should become mainstream. Under the registered report model, journals evaluate and accept the research question and methodology before the study is conducted. This eliminates publication bias toward positive results, one of the most corrosive distortions in the current system.

None of this is radical. All of it has been piloted successfully somewhere. The obstacle is not innovation. The obstacle is the institutional inertia of an establishment that benefits from the current system exactly as it is.

A Final Word for the Research Community

I have spent 35 years working inside academic publishing because I believe in what research is supposed to do. I believe in the accumulation of reliable knowledge. I believe in the extraordinary effort that researchers put into their work: the years of reading, designing, collecting, analyzing, and writing that go into a single manuscript. That effort deserves a better system than the one we currently operate.

Peer review, properly designed and honestly managed, is still the best quality control mechanism we have. But "better than nothing" is not a standard worthy of science.

The researchers who understand this system, who know what peer review can and cannot tell them, who study the journals they target, who understand what editors are looking for before they submit, those researchers publish more, publish better, and build stronger careers. Not because they cheat the system, but because they understand it clearly.

Understanding a broken system is not the same as accepting it. I understand it. I still do not accept it. And neither should you.

Dr. Marcus Eldridge has served as a manuscript reviewer for over 40 indexed journals, worked as associate editor for three Q1 publications, and has supported more than 2,000 researchers through the academic publication process over a 35-year career. He currently serves as Senior Research Advisor at Eldenhall Research.

© 2026 Eldenhall Research LLC.