VOL. I, NO. 1 • THURSDAY, FEBRUARY 12, 2026 • PRICE: ONE MOMENT OF ATTENTION

THE REVIEW

“What we know is only as good as the machinery that tells us we know it”


Science Built a Trust Machine. It’s Coughing Up Smoke.

A special edition on the creaking, wheezing, occasionally hilarious infrastructure that decides what counts as knowledge—and the people trying to fix it, exploit it, or burn it down

The journal Environmental Microbiology publishes, at the end of each year, a collection of the most memorable things its anonymous peer reviewers have written in confidence. One reviewer begged: “Is there a chance you could send me any good papers, at least once in a while?” Another simply wrote: “Reject—more holes than my grandad’s string vest.”

These are the people we trust to decide what is true. Unpaid volunteers, working nights and weekends, wading through manuscripts they didn’t ask for, powered by coffee and professional obligation. It’s the scientific equivalent of running air traffic control with a volunteer fire department. For three centuries, more or less, it worked.

In 2025, that system ran into a wall. Fraud factories now produce fake research papers faster than anyone can catch them. Artificial intelligence wrote one in five peer reviews at a major conference—and nobody noticed until a forensic audit. The one federal agency charged with policing scientific misconduct managed two findings in an entire year. A known-fraudulent paper about the world’s most popular weed killer sat in the scientific record for a quarter century after the evidence of its fabrication became public, because literally no one’s in-box was set up to receive a complaint. And the language of legitimate scientific reform was borrowed, with a straight face, to justify the largest cuts to federal research funding in modern history.

These are not separate crises. They are the same story: the infrastructure that turns claims into “knowledge” was built for a smaller, slower world, and it is now failing at scale. This edition of The Review traces how we got here—with, where possible, a sense of humor about the absurdity of it all. Because if you can’t laugh at a system where robots review papers written by other robots while human scientists serve as the biological substrate for a closed circuit of synthetic evaluation, what can you laugh at?

Dear reader, pour yourself something strong. We have some machinery to inspect.

❧ ❧ ❧

Your Grandparents Had No Peer Review, and They Were Fine

The surprisingly modern invention that scientists assume has existed since the Enlightenment is actually younger than the polio vaccine

Ask a scientist when peer review began and you will get a vague hand-wave toward Isaac Newton’s era—the comfortable assumption that anonymous expert evaluation is as old as science itself. Historians who actually check the record find something funnier: for roughly 300 years after the first scientific journal launched in 1665, the system we now treat as sacred simply didn’t exist.

Henry Oldenburg, who founded the Philosophical Transactions of the Royal Society, selected papers himself, occasionally asking friends what they thought. This was not a scandal. It was the system. It worked fine for a scientific community that was, in historian Melinda Baldwin’s phrase, “a small, select group” communicating through short articles, personal letters, and the occasional public demonstration.

The shift came not from an epistemological breakthrough but from a bureaucratic headache. After World War II, the U.S. government started spending enormous sums on science. New agencies—the NSF, the expanded NIH—needed a standardized way to justify sending taxpayer money to researchers. Someone other than the researcher’s friends needed to sign off on the work. External anonymous review was formalized in the 1950s through the 1970s: a managerial fix for a scaling problem, dressed up over time as an ancient tradition.

“Indeed, as all papers sent to Nature are checked by members of the board, peer review is unnecessary.”

Not everyone was thrilled. Nobel Prize winner Max Perutz summoned an editor from Nature to his office at Cambridge in 1974 to demand an explanation for why the journal had started peer-reviewing his lab’s papers. His view was blunt: editorial judgment had served molecular biology brilliantly; this new bureaucratic process was a downgrade. He wasn’t wrong, exactly. He just couldn’t see what was coming.

What was coming was three million papers a year, industrial-scale fraud, and AI-generated submissions. The system Perutz objected to was designed for a world of perhaps 10,000 active researchers. It now serves millions. As a Scientific American analysis put it in 2024, peer review as we know it is “a surprisingly recent invention.” Treating it as untouchable prevents exactly the kind of modular redesign it urgently needs.

Meanwhile, the Impact Factor—the single number that governs more academic careers than any other metric on earth—started life as a Cold War-era mapping tool. Eugene Garfield built the Science Citation Index in the early 1960s with NIH and NSF funding, not to measure quality but to track how ideas spread. It measured social velocity: which papers were influencing which other papers, regardless of whether those papers were right. By the 1990s, this connectivity map had been repurposed, without validation, as a proxy for scientific prestige.

Garfield himself warned against using it that way. Nobody listened.

The result is a system where a paper cited 500 times looks like a triumph even if 75 of those citations are saying it’s wrong. Citation sentiment analysis—platforms like scite.ai have classified over 25 million citation statements—shows that 10 to 15 percent of all citations are negative or critical. The system counts them all the same. It’s like measuring a restaurant’s popularity by counting both Yelp reviews and health code violations.

For Further Reading: Perspectives

🟢 Pro: “Peer Review Is a Surprisingly Recent Invention—But It Still Works”

A Scientific American analysis argues that despite its recent origins, peer review has empirically underpinned the most productive era of science in history, and dismantling it without a tested replacement risks chaos. Source: [scientificamerican.com (2024)](https://www.scientificamerican.com/blog/information-culture/the-birth-of-modern-peer-review/)

🔴 Con: “Will NIH’s New Director Reform His Agency—or Destroy It?”

Science (AAAS) profiles Bhattacharya’s argument that peer review is ‘leaky’ and that the entire system rewards prestige over rigor, with Ioannidis pleading: ‘We need some shaking up and we need that to happen rigorously.’ Source: [science.org (2025)](https://www.science.org/content/article/will-nih-s-new-director-reform-his-agency-or-destroy-it)

❧ ❧ ❧

Fraud Is Now an Industry, and Business Is Booming

Paper mills churn out fake science faster than anyone can catch it. Then the robots showed up.

Scientific fraud used to be a cottage industry—one ambitious postdoc, one faked image, one career-ending retraction at a time. Those days are over. “Paper mills,” commercial operations that manufacture fraudulent manuscripts and sell authorship slots, now produce fake research at a pace that makes the legitimate scientific enterprise look like it’s standing still.

How fast? A study published in the Proceedings of the National Academy of Sciences last August found that the number of fraudulent papers in the literature is doubling every 1.5 years. Legitimate science doubles every 15 years. If you plot those two curves on the same graph, the picture is terrifying—or, depending on your sense of humor, the setup to a very dark joke about exponential functions.
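For readers who prefer their dread quantified, here is a back-of-the-envelope sketch of those two curves. It assumes clean exponential growth, which real publication data only approximates, but the orders of magnitude are the point:

```python
# Back-of-the-envelope comparison of two exponential growth curves.
# Doubling times taken from the PNAS figures cited above: fraudulent
# papers double every 1.5 years, legitimate papers every 15 years.

def growth_factor(years: float, doubling_time: float) -> float:
    """How many times larger a quantity becomes after `years`,
    given its doubling time: 2 ** (years / doubling_time)."""
    return 2 ** (years / doubling_time)

horizon = 15  # years: one doubling period for legitimate science

fraud = growth_factor(horizon, 1.5)   # fraudulent output
legit = growth_factor(horizon, 15)    # legitimate output

print(f"After {horizon} years: fraud x{fraud:.0f}, legitimate x{legit:.0f}")
# After 15 years: fraud x1024, legitimate x2
```

In the time it takes legitimate science to double once, fraudulent output goes through ten doublings, a roughly thousandfold increase. That is the shape of the dark joke about exponential functions.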

Cancer researcher Jennifer Byrne screened roughly 12,000 cancer papers. Six percent showed characteristics of paper mill production—fabricated data, manipulated images, assembly-line text. Extrapolate that across the millions of papers published annually in biomedicine and you arrive at a number that should keep journal editors up at night.

The economics are depressingly rational. Career advancement depends on publication volume. Article processing charges run as high as $10,000 per paper. Paper mills charge a few hundred to a few thousand dollars for a co-authorship slot. The clients pay because conducting original research is slower, more expensive, and less certain. The mills profit because they can produce manuscripts at scale. The publishers profit because they collect fees regardless. It’s a supply chain, and everyone’s incentives are aligned—except, of course, those of the people who need the science to be real.

Taylor & Francis reported that 2,000 of its 4,000 ethics cases in 2023 involved paper mills. In 2019, the number was zero. Wiley flags one in seven submissions. This is not gradual drift. This is an industry that materialized in four years.

“This paper is the very expression of what happens when one tries to chop up one piece of work into as many publications as possible.”

Then came the robots. If paper mills are fraud at industrial scale, generative AI is fraud at the speed of light. A Nature investigation reported that up to 20 percent of submissions to top computer science conferences showed signs of AI generation. ArXiv, the dominant preprint server, saw rejection rates triple—from 4 percent to 10–12 percent—almost entirely due to AI-generated “slop.”

The most devastating data point came from a forensic audit of the International Conference on Learning Representations 2026: of 75,800 peer reviews, 21 percent were written entirely by AI. Over half showed some degree of machine involvement. The AI-written reviews were longer but emptier—verbosity without depth. They were also nicer, because large language models are, in the audit’s memorable phrase, “people pleasers.” They cited papers that don’t exist. They suggested methodological improvements regardless of relevance: “needs more ablations” became the chatbot equivalent of “thoughts and prayers.”

As one professor reflected after the audit, the community was forced to “confront the absurdity of a system where we use AI to write papers for other AIs to review.”

ArXiv’s response was structural: in January 2026, it expanded its endorsement mandate, requiring newcomers to the platform to obtain human sponsorship before posting. It’s a social gate—a bouncer at the door of the preprint server. Whether it holds is an open question. The line is long, and the robots are patient.

For Further Reading: Perspectives

🟢 Pro: “AI Tools Tackle Paper Mill Fraud Overwhelming Peer Review”

Chemistry World details how AI-powered screening tools are catching fraudulent submissions faster than human editors ever could, with PLOS increasing desk rejections from 13% to 40% using automated integrity dashboards. Source: [chemistryworld.com (Oct. 2025)](https://www.chemistryworld.com/features/ai-tools-tackle-paper-mill-fraud-overwhelming-peer-review/4022253.article)

🔴 Con: “Fraudulent Papers Are Doubling Every 1.5 Years”

Richardson et al.’s landmark PNAS analysis documents the exponential growth rate of paper mill output, arguing that detection infrastructure is losing the arms race despite record retraction rates. Source: [pnas.org (Aug. 2025)](https://www.pnas.org/doi/abs/10.1073/pnas.2420092122)

❧ ❧ ❧

America’s Science Watchdog Made Two Calls Last Year

The Office of Research Integrity is supposed to police fraud in federally funded research. In 2025, it found misconduct exactly twice. Volunteers are doing the rest.

The Office of Research Integrity exists for one purpose: investigate scientific misconduct in publicly funded research and, when the evidence warrants, bar fraudsters from receiving federal grants. It is the only U.S. federal body with this authority. It oversees research funded through the NIH—the largest biomedical research funder on earth.

In all of 2025, ORI made findings of misconduct in two cases. Two. The historical average is about ten per year. Retraction Watch, which broke the story in December, noted it was the lowest count since 2006.

The pipeline data from ORI’s own annual report tells the fuller story: 713 allegations entered the system. 119 cases were closed. Two findings emerged. That is a ratio of roughly 350-to-1, allegations to outcomes. In any other regulatory context—banking, aviation, food safety—a throughput like that would provoke a congressional hearing. In science, it provoked a blog post.

The structural problem is older than the current collapse. ORI’s investigative process depends on cooperation from the universities where accused researchers work. Those universities are asked to investigate their own faculty—faculty who often bring in millions of dollars in overhead on their grants. A principal investigator with a large NIH portfolio is not merely a researcher; they are a revenue source. Asking an institution to investigate its own cash cow is, as one observer put it, “like asking a casino to police its own whales.”

“Like asking a casino to police its own whales.”

So who’s actually doing the work? A decentralized network of unpaid volunteers. Elisabeth Bik scans images full-time, funded by donations, identifying thousands of papers with duplicated or fabricated figures. Jennifer Byrne screens cancer papers for mill characteristics on top of her research duties. PubPeer hosts anonymous commentary flagging problematic papers. Retraction Watch tracks it all.

None of them are paid by the institutions that benefit from their labor. The publishers who profit from the journals they police don’t fund them. The universities whose reputations they protect don’t support them. The federal government, which spent billions funding the research they audit, allocated a budget to ORI that produced two findings in a year.

ORI issued its first finding of 2026 on Feb. 6, confirming the office hasn’t been shuttered entirely. But the question hangs: how long can a system function when its formal oversight mechanism has effectively stopped and its replacement runs on goodwill?

For Further Reading: Perspectives

🟢 Pro: “Director Shares Vision for NIH”

The NIH Record presents Bhattacharya’s argument that the real crisis is not fraud but unreproducible science, and that ORI’s low output may reflect a shift toward prevention and systemic reform rather than individual prosecution. Source: [nihrecord.nih.gov (Jan. 2026)](https://nihrecord.nih.gov/2026/01/30/director-shares-vision-nih)

🔴 Con: “ORI Made Just 2 Misconduct Findings in 2025”

Retraction Watch documents the collapse in federal oversight, noting that the office’s staff page was removed from its website and leadership experienced significant turnover during a period of record fraud detection by external actors. Source: [retractionwatch.com (Dec. 2025)](https://retractionwatch.com/2025/12/18/ori-has-released-just-two-misconduct-findings-this-year/)

❧ ❧ ❧

It Took an Astrophysicist to Kill a Ghost

A fraudulent paper about the world’s most popular weed killer survived in the scientific record for 25 years. Two outsiders finally pulled the plug—by simply asking.

In the year 2000, three scientists published a review concluding that the herbicide Roundup—the most widely used weed killer in human history—posed no health risk to humans at expected exposure levels. The paper climbed into the top 0.1 percent of all glyphosate-related citations. It was referenced by the EPA, the European Food Safety Authority, Health Canada, and Wikipedia. For a quarter of a century, it functioned as a load-bearing wall in the global regulatory architecture for glyphosate safety.

In 2017, litigation discovery in the Roundup cancer lawsuits cracked open Monsanto’s internal communications. The resulting “Monsanto Papers” contained what may be the most damning single email in the history of corporate ghostwriting. A Monsanto scientist, William Heydens, wrote about a new paper being planned: “Recall that is how we handled Williams Kroes and Munro 2000.” The process he described: Monsanto scientists would write the paper, then pay external experts to “edit & sign their names.”

This wasn’t rumor. It was documentary evidence produced under legal discovery. The paper was not an independent review. It was a corporate communication wearing a lab coat.

Here is where the story should end: evidence goes public, journal investigates, paper gets retracted. Here is where the story actually went: for eight years after the ghostwriting evidence became public knowledge—reported in major media, discussed at conferences, cited in subsequent publications—nothing happened within the scientific publishing system. No retraction. No expression of concern. No editorial investigation. The paper continued to be cited. Regulators continued to reference it.

The correction finally came because two people from completely outside the field—Alexander Kaurov, an astrophysicist at Victoria University of Wellington, and Naomi Oreskes, a historian of science at Harvard—published an impact analysis in Environmental Science & Policy tracing the paper’s influence. Then they did something remarkably simple: in July 2025, they sent a formal retraction request to the journal.

On Nov. 28, 2025, the paper was retracted. Editor-in-chief Martin van den Berg told Retraction Watch that the request was “actually the first time a complaint came to my desk directly.” Van den Berg had held the editorship since 2019. The evidence had been public since 2017. No one—not a reader, not a reviewer, not an institutional integrity officer—had filed a formal complaint in all that time.

The system did not fail because it evaluated the evidence and found it insufficient. It failed because the evidence never reached the system. There was no in-box for it to arrive in.

“I am sure there are a lot of such ghost-written and undeclared conflict papers in the literature, but they are very difficult to unearth unless one goes really deep in litigation cases.”

Roundup’s maker, Bayer (which acquired Monsanto in 2018), maintains that the company’s involvement “did not rise to the level of authorship.” The EPA says it “never relied on this specific article.” Both claims may have technical merit. Neither addresses the structural failure: a known fraud, in plain sight, for a quarter century, with no mechanism for correction until two outsiders invented one.

For Further Reading: Perspectives

🟢 Pro: “Activists Attempt to Create a Scandal Over Retracted 25-Year-Old Narrative Summary on Glyphosate”

The Genetic Literacy Project argues that industry involvement in drafting review papers is a common and accepted practice, that the retraction is ‘largely overinterpreted’ by anti-pesticide campaigners, and that the underlying science on glyphosate safety remains robust. Source: [geneticliteracyproject.org (Jan. 2026)](https://geneticliteracyproject.org/2026/01/29/viewpoint-activists-to-create-a-scandal-over-retracted-25-year-old-narrative-summary-on-glyphosate-safety-falls-flat/)

🔴 Con: “The Impact of a Ghostwritten Paper on the Fate of Glyphosate”

Kaurov and Oreskes in Undark trace how a single ghostwritten paper shaped two decades of regulatory decisions, Wikipedia entries, and AI training data, arguing that corporate ghost-writing is ‘a form of scientific fraud’ that corrupts the entire evidentiary chain. Source: [undark.org (Aug. 2025)](https://undark.org/2025/08/15/opinion-ghostwritten-paper-glyphosate/)

❧ ❧ ❧

The Cure That Ate the Patient

The replication crisis is real. So is the $9.5 billion in cancelled research grants justified by talking about it.

For eighty years, the United States operated under a simple bargain: the government funds basic research, scientists do science, and eventually useful things emerge. The architect was Vannevar Bush, whose 1945 report Science, the Endless Frontier proposed the deal. The bargain produced the transistor, the polio vaccine, the internet, and CRISPR. It worked.

Beginning in the early 2010s, reform-minded scientists documented a serious problem: when independent teams tried to reproduce published results, they failed at alarming rates. The Reproducibility Project found that only 36 percent of psychology studies could be replicated. A Bayer internal review found roughly 75 percent of preclinical drug studies failed replication. The reformers—prominently Brian Nosek, John Ioannidis, and their colleagues—diagnosed structural incentive problems and prescribed structural solutions: pre-registration, open data, funded replication, incentive reform. Every one of those prescriptions costs money.

Enter Jay Bhattacharya, appointed NIH Director in April 2025. A Stanford health economist best known for the Great Barrington Declaration, Bhattacharya brought the replication crisis to center stage. At a January 2026 fireside chat at Duke, he asked: “If half or more of the published literature is not true, how do we, as scientists, take the next step?”

It’s a real question. The Duke Chronicle reported that he discussed it at length while omitting any discussion of the funding cuts his agency was simultaneously implementing. That omission is the hinge.

Since January 2025, the NIH has terminated approximately 2,100 research grants totaling $9.5 billion. The proposed FY2026 budget would cut NIH’s funding by 40 percent. A Senate report documented the wreckage: cancer research lost 38 percent of terminated funding. The 26-year-old Pediatric Brain Tumor Consortium—the primary vehicle for testing new therapies for children with fatal brain cancers—was shut down. A 35-year longitudinal Alzheimer’s study at Johns Hopkins was terminated; longitudinal studies cannot be “paused.” Thirty-five years of tracking data were effectively incinerated.

The agency deployed keyword filters to screen grants. Terms flagged for termination: “health equity,” “structural racism,” “climate change,” “gender identity,” “trans.” A study on transcription factors could be flagged alongside DEI-related research. The system made no distinction between scientific and political usage of a word.

In November 2025, the NIH eliminated its payline system—the threshold that allowed researchers to predict their odds of funding. Without paylines, every decision becomes discretionary. Power shifts from scientific review panels to administrative leadership. This is not a side effect. It is the architecture.

The Paragon Health Institute, a think tank with ties to the first Trump administration, proposed that NIH spend roughly $48 million on replication studies, set against the $4.9 billion in cancelled grants. The ratio of proposed investment in replication to actual disinvestment in research: about 1-to-100.

On June 9, 2025, nearly 500 NIH employees issued the Bethesda Declaration—deliberately modeled on Bhattacharya’s own Great Barrington Declaration—demanding restoration of terminated grants, protection of human subjects in mid-trial, and reinstatement of independent peer review. A companion letter gathered 32,746 signatures, including 69 Nobel laureates. In November, Jenna Norton, identified as a lead author, was placed on administrative leave.

Congress, for its part, has largely rejected the proposed cuts. A bipartisan spending bill released in January 2026 gave NIH a $415 million increase over the previous year. The legislative process is ongoing.

The reformers who identified the replication crisis argued for more investment in rigor. The political actors who adopted their language are using it to justify less investment in everything. The diagnosis has been severed from the prescription. The disease name is being used to justify withdrawing treatment.

“None of us were saying, ‘Oh, just literally blow up the whole system.’”

For Further Reading: Perspectives

🟢 Pro: “Jay Bhattacharya Announces the ‘Second Scientific Revolution’ for Public Health”

NutraIngredients reports from the MAHA Institute’s ‘Reclaiming Science’ event, where Bhattacharya frames the NIH restructuring as a paradigm shift toward reproducibility, arguing that ‘the constructive way forward is to change the major basis of truth in science to replication.’ Source: [nutraingredients.com (Feb. 2026)](https://www.nutraingredients.com/Article/2026/02/02/jay-bhattacharya-announces-the-second-scientific-revolution-for-public-health/)

🔴 Con: “What the Impacts of NIH Cuts Tell Us About the Future”

STAT News First Opinion analyzes how the administration’s approach goes far beyond legitimate reform, arguing that ‘characterizing the threat requires moving beyond top-line budget numbers’ to examine whether ‘evidence or ideology now governs science and medical research in America.’ Source: [statnews.com (Dec. 2025)](https://www.statnews.com/2025/12/18/nih-cuts-impacts-future-analysis/)

❧ ❧ ❧

EDITORIAL

The Plumbing Problem

There is a joke—possibly apocryphal, definitely appropriate—about a homeowner who calls a plumber to fix a leaky pipe. The plumber walks in, looks at the pipe, taps it once with a wrench, and the leak stops. The bill arrives: $500. When the homeowner protests, the plumber itemizes it: “Tapping the pipe: $5. Knowing where to tap: $495.”

The story of scientific publishing in 2026 is a plumbing problem. Not a values problem, not an intelligence problem, not even primarily a corruption problem—although corruption is certainly present. The core issue is plumbing: the specific pipes, joints, valves, and pressure gauges that move claims from “someone thinks this” to “this is knowledge.” Those pipes were installed in the 1950s and 1960s. They have not been replaced.

Peer review bundles at least four distinct functions—filtering nonsense, validating claims, signaling quality, and gatekeeping careers—into a single system that handles all of them badly at current scale. The Impact Factor measures social velocity and calls it truth. The Office of Research Integrity depends on the institutions it oversees to cooperate with investigations. The editorial system has no intake mechanism for evidence that arrives through litigation instead of formal academic complaint. And the fraud-detection system is, functionally, a volunteer fire department with no budget, no legal authority, and no protection from lawsuits.

Each of these failures is fixable. Peer review’s functions can be unbundled and redesigned independently. Citation metrics can incorporate sentiment analysis so that critical citations stop counting as endorsements. Replication can be funded as a dedicated function. Oversight can be professionalized. Editorial systems can be built to receive evidence from any source.

But—and this is the uncomfortable part—the current failures are also exploitable. The legitimate diagnosis of the replication crisis has been adopted by political actors whose interest is not in building better plumbing but in shutting off the water. When you cut $4.9 billion in research grants while proposing $48 million for quality control, you are not fixing pipes. You are turning the water off and calling it a plumbing reform.

This editorial does not take a position on the specific budget level the NIH should have, or the precise structure of a reformed peer review system, or whether glyphosate is safe. It takes the position that knowing where to tap requires first understanding the pipes. The essays in this edition are an attempt to understand the pipes.

The trust factory is not sacred. It is not ancient. It is not inevitable. It was built by specific people, in specific decades, for specific purposes. It can be rebuilt. The question is whether the rebuilding will be done by people who understand the plumbing, or by people who have discovered that broken pipes are politically useful.

We trust you, dear reader, to bring your own wrench.

For Further Reading: Perspectives

🟢 Pro: “Amid White House Claims of a Research ‘Replication Crisis,’ Scientists Push Back—and Propose Fixes”

Chemical & Engineering News explores how reformers like Nosek and Goldman cautiously welcome replication funding while fearing the political context, with Goldman noting: ‘If this is legitimately a request for reform, rather than a justification for saying science is bad, then there are a lot of people willing to work with the administration.’ Source: [cen.acs.org (July 2025)](https://cen.acs.org/research-integrity/reproducibility/Amid-White-House-claims-research/103/web/2025/06)

🔴 Con: “Protect NIH for the Health and Welfare of the Nation”

Emeritus faculty at the University of Utah Eccles School of Medicine argue in the Deseret News that the administration’s cuts and politically appointed review layers have dismantled a ‘great institution’ under the guise of reform, and call for the removal of both Kennedy and Bhattacharya. Source: [deseret.com (Jan. 2026)](https://www.deseret.com/opinion/2026/01/15/nih-funding-research-cuts-rfk-jr-jay-bhattacharya/)


Production Note: This edition of The Review was generated with AI assistance on Feb. 10, 2026, using research compiled from multiple independent reports and cross-referenced against publicly available sources. All factual claims are sourced to the underlying research report (“The Trust Factory,” Feb. 2026) and independently verified through web search where possible. Where claims could not be verified, they are flagged. The humor is real; the facts are, to the best of our ability, also real. Your skepticism remains appropriate and encouraged.

Coming Next: If the plumbing is broken, what would new plumbing look like? A future edition examines the people building alternatives—open science platforms, registered reports, post-publication review, and whether any of them can scale before the old system collapses entirely. Also: more funny reviewer quotes. We have a file.

The Review is not available to the general public. Distribution is limited to readers vouched for by existing subscribers. There are no invitations. If you know, you know.

© 2026 The Review. All rights reserved. • Editor: Daniel Markham • Generated: Thursday, Feb. 12, 2026