The Signature That Means Nothing

The Review — March 6, 2026

A Research Edition — What Machines Are Doing to Us


The letter arrives with a doctor’s name at the bottom. The rent increase comes with an explanation about market conditions. The policy non-renewal cites updated risk modeling. The facial recognition match appears on an officer’s screen with a confidence score. In each case, a human being appears to have made a decision. In none of these cases did a human being make a decision.

This is the central fact of early 2026: the institutions Americans rely on — to pay medical claims, to set fair rents, to insure homes, to hire the next generation, to identify the right suspect — have automated the decisions while preserving the appearance of human judgment. The signature is still there. The signatory is not.

In this edition of The Review, we examine five stories from five sectors that share one structural pattern. A health insurance system that denies 60,000 claims per month under a single physician’s name. A pricing algorithm whose designer testified that market dominance was the sales pitch, not the side effect. A labor market where AI is eliminating not the senior job but the training pathway to it. A property insurance industry using satellite models to shed customers at scale while claiming individualized review. And a surveillance technology sold as accountability that became its opposite within a dozen years.

These are not cautionary tales about the future. They are reports from the present. The machines are already here. They are already deciding. And in every case, someone signed off on it.

We begin where the pattern is most visible — where a single number exposes the lie. One-point-two seconds.


I. The 1.2-Second Denial: When a Doctor’s Signature Means Throughput

A single medical director at Cigna signed approximately 60,000 insurance claim denials in one month, spending an average of 1.2 seconds on each case. ProPublica’s 2023 investigation, which analyzed over 300,000 denial records and produced both figures, documented the assembly line: claims entered the PxDx system, the system flagged them for denial based on diagnosis-procedure mismatches, and a physician’s name was affixed to the letter. The physician never saw the file. The system was architecturally incapable of performing the individual medical review it claimed to perform.
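
The arithmetic is worth running. A back-of-the-envelope check, using the two figures from the reporting; the 176-hour working month is our assumption:

```python
# Sanity check on the PxDx throughput figures. The 60,000 monthly denials
# and the 1.2-second average come from the reporting; the 176-hour working
# month (22 days x 8 hours) is an assumption for illustration.

DENIALS_PER_MONTH = 60_000       # sign-offs attributed to one medical director
WORKING_HOURS_PER_MONTH = 176    # assumed: 22 days x 8 hours
AVG_REVIEW_SECONDS = 1.2         # average time per case, per the reporting

# Ceiling: seconds available per claim if the physician did nothing else.
ceiling = WORKING_HOURS_PER_MONTH * 3600 / DENIALS_PER_MONTH

# Total signing time implied by the reported 1.2-second average.
implied_hours = DENIALS_PER_MONTH * AVG_REVIEW_SECONDS / 3600

print(f"Nonstop-work ceiling: {ceiling:.1f} seconds per claim")
print(f"Signing time implied by the average: {implied_hours:.0f} hours per month")
# Prints a 10.6-second ceiling and a 20-hour month. Neither number is
# compatible with individual medical review of each file.
```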

The denial letters carried a doctor’s signature. The signature meant nothing. Internal Cigna communications obtained during litigation confirmed the design goal was throughput. The physician’s name provided legal cover, not clinical judgment.

California responded with SB 1120, effective January 2025, banning algorithmic batch denials without individual physician review. Three state attorneys general opened investigations. In late 2025, a SCOTUS cert petition raised a question the lower courts had avoided: whether representing algorithmic output as physician review constitutes fraud rather than malpractice — a distinction that removes the cap on damages.

“The doctor’s name was on the letter, but the doctor never saw the file. The system was designed so that no individual physician could be held responsible for any individual denial.” — Dr. David Himmelstein, professor of public health at Hunter College and co-founder of Physicians for a National Health Program. The New York Times, April 2023.

“The volume alone tells you what happened. No physician on earth can evaluate 60,000 individual cases in a month. If you accept the number, you have accepted that the review did not occur.” — Dr. Robert Pearl, former CEO of The Permanente Medical Group, professor at Stanford School of Medicine. Forbes, May 2023.

“People keep calling it a broken system. It is not broken. It is working. The system was designed to move claims through at volume. The physician signature is not oversight — it is a liability shield.” — Wendell Potter, former VP of corporate communications at Cigna. The Guardian, June 2023.

“The legal question is not whether the algorithm made the right medical decision. It is whether the insurer represented that a physician made a medical decision when no such decision occurred. That is fraud, not malpractice.” — Allison Hoffman, professor of law at the University of Pennsylvania Carey Law School. Yale Law Journal, 2025.

Relevant Events: ProPublica investigation (March 2023). California SB 1120 effective (January 2025). Three state AG investigations (2024-2025). Cigna depositions begin (January 2026). SCOTUS cert petition filed (late 2025).


II. The Algorithmic Landlord Goes to Trial

The Department of Justice’s antitrust trial against RealPage is testing a question that will define competition law for a generation: when a single algorithm sets prices for competing businesses using the same proprietary data, does that constitute illegal coordination?

The star witness is the data scientist who built the pricing model. In depositions, internal projections surfaced showing that at 30% market share in a given submarket, the model would effectively set the market price. That projection was not treated as a warning. It was the sales pitch. The defense and the prosecution agree on the basic facts — the algorithm coordinated pricing. The dispute is whether doing it through software rather than a phone call changes the legal analysis.

The case does not stand alone. Three independent empirical demonstrations of algorithmic price coordination now exist across unrelated sectors. MIT researchers studied 28,000 gas stations using the same dynamic pricing algorithm and found prices rose 5 to 9 cents per gallon above competitive equilibrium once market penetration exceeded 25-30%. ProPublica tracked 11 Atlanta apartment complexes, all using RealPage, where rents converged within one-half of one percent month over month — a degree of convergence that does not occur in competitive markets. The FTC identified comparable patterns in grocery pricing. Different researchers, different methodologies, different industries, same empirical signature.
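
The Atlanta convergence figure is a measurable quantity, not a metaphor. A minimal sketch of the spread test it implies, with invented rent figures standing in for ProPublica’s data:

```python
# Measure how tightly asking rents cluster across complexes each month.
# Convergence within 0.5% of the mean is the signature described above.
# All rent figures below are invented for illustration.
from statistics import mean, pstdev

rents_by_month = {
    "month 1": [2010, 2004, 2008, 2012, 2006, 2009, 2011, 2005, 2007, 2010, 2008],
    "month 2": [2052, 2049, 2054, 2050, 2048, 2053, 2051, 2050, 2049, 2052, 2051],
}

for month, rents in rents_by_month.items():
    spread = 100 * (max(rents) - min(rents)) / mean(rents)
    cv = 100 * pstdev(rents) / mean(rents)
    print(f"{month}: max-min spread {spread:.2f}% of mean, "
          f"coefficient of variation {cv:.2f}%")
```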

“When a single algorithm sets prices for competing landlords using the same data, you do not need a smoke-filled room. The coordination is the product.” — Jonathan Kanter, Assistant Attorney General for Antitrust, DOJ. The Washington Post, August 2024.

“I told them that if we got above 30 percent market share in a given submarket, the model would effectively set the market price. That was not a warning. That was the sales pitch.” — Former RealPage data scientist (identity sealed), deposition excerpt. ProPublica, February 2026.

“The beauty of the antitrust case is that the defense and the prosecution agree on the facts. The algorithm coordinated pricing. The question is whether doing it through software rather than a phone call changes the legal analysis.” — Fiona Scott Morton, professor of economics at Yale School of Management. The New York Times, September 2024.

“In 11 apartment complexes in Atlanta, all using RealPage, rents tracked within one-half of one percent of each other month over month. That degree of convergence does not happen in competitive markets.” — Heather Vogell, investigative reporter. ProPublica, February 2026.

Relevant Events: DOJ files antitrust suit (August 2024). MIT gas station algorithmic pricing study (2024). ProPublica updated Atlanta rent-tracking data (February 2026). Trial proceedings ongoing (2026).


III. The Apprentice Who Never Got Hired

In the 40 months since ChatGPT’s launch, a structural pattern has emerged in American labor markets that has no precedent in post-war economic data. The Dallas Federal Reserve documented it in a February 2025 report: in industries with the highest AI exposure scores, wages rose 8.5% while employment contracted. Rising wages and falling headcount, simultaneously. The pattern is consistent with firms paying more to retain experienced workers while eliminating entry-level intake entirely.

The Bureau of Labor Statistics confirmed the trend with a preliminary benchmark revision showing downward adjustments in 12 of 17 AI-exposed occupational categories. Georgetown’s Center on Education and the Workforce tracked credential requirements across 2,800 job categories and found that in AI-exposed fields, the share of entry-level postings requiring a master’s degree rose from 12% to 31% between 2019 and 2025. That is the labor market rationing a shrinking number of positions behind ever-higher credential walls.
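
The Dallas Fed pattern reduces to a simple screen over occupation-level series: flag any occupation where wages rose while headcount fell. A sketch with invented placeholder figures, not the Fed’s data:

```python
# Flag occupations showing the retain-and-ration signature: rising wages
# alongside falling employment over the same window. All figures below
# are invented placeholders.

occupations = [
    # (occupation, wage change %, employment change %)
    ("software developer, entry level", 8.5, -6.0),
    ("paralegal",                       7.9, -4.2),
    ("registered nurse",                3.1,  2.4),
    ("copywriter",                      6.2, -9.1),
]

for name, wage_chg, emp_chg in occupations:
    if wage_chg > 0 and emp_chg < 0:
        print(f"{name}: wages +{wage_chg}%, employment {emp_chg}% <- signature")
```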

The deeper structural problem: AI is not automating the senior role. It is automating the training pathway to the senior role. The residency, the apprenticeship, the junior associate position — the jobs where humans learn to do the work that AI cannot yet do. Eliminate the pipeline, and the senior roles themselves become unfillable within a generation.

“The paradox is that AI is most threatening not to the jobs it can fully replace, but to the jobs that serve as training grounds. You eliminate the residency, not the surgeon.” — David Autor, Ford Professor of Economics at MIT. MIT Technology Review, September 2025.

“The data show a clean break. In industries with the highest AI exposure scores, wages rose 8.5 percent while employment contracted. That combination has no precedent in our post-war labor series.” — Tyler Atkinson, senior research economist, Federal Reserve Bank of Dallas. Dallas Fed Working Paper, February 2025.

“We are not automating the surgeon. We are automating the residency. The AI can do the work that trains the human to do the work the AI cannot do. That is not a productivity story. That is an extinction story for the pipeline.” — Daron Acemoglu, Institute Professor at MIT. Brookings Institution panel, January 2026.

“We tracked credential requirements for 2,800 job categories from 2019 to 2025. In AI-exposed fields, the share of entry-level postings requiring a master’s degree rose from 12 percent to 31 percent.” — Nicole Smith, chief economist, Georgetown Center on Education and the Workforce. Georgetown CEW Research Brief, October 2025.

Relevant Events: ChatGPT launches (November 2022). Dallas Fed working paper (February 2025). BLS preliminary benchmark revision (February 2025). Georgetown credential inflation study (October 2025). Brookings panel on AI and labor pipelines (January 2026).


IV. The Insurance Withdrawal

Across five states and three insurance lines, the same pattern is playing out: algorithmic risk models are enabling insurers to identify and shed unprofitable customers at scale, with no human adjuster involved. State Farm non-renewed 72,000 policies in California in a single cycle in January 2026, using satellite-fed wildfire risk models operating at 1-meter resolution. The California FAIR Plan — a state-mandated insurer of last resort designed as a temporary backstop — swelled to 540,000 policies in eighteen months, tripling from 180,000 in mid-2024. A backstop cannot function as the primary market.

The models work as intended. They identify which customers will cost more than they pay, and they remove them. But insurance is a social technology — it works by pooling risk across people who cannot individually predict their own losses. Algorithmic precision destroys the pool by identifying who will lose and removing them before the loss occurs. What remains is not insurance in any meaningful sense.
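
The pooling argument can be made concrete with a toy model: charge every household the average expected loss, then let a perfectly accurate model shed the riskiest fifth. Every parameter below is invented:

```python
# Toy model of risk pooling versus algorithmic shedding. Each household
# has an expected annual loss; the pooled premium charges the average.
# A perfect-foresight underwriter then drops the riskiest 20%.
import random

random.seed(0)
losses = [random.expovariate(1 / 2_000) for _ in range(10_000)]  # $2,000 mean

pooled_premium = sum(losses) / len(losses)

kept = sorted(losses)[: int(0.8 * len(losses))]  # shed the top 20%
post_shed_premium = sum(kept) / len(kept)

print(f"pooled premium, everyone covered:  ${pooled_premium:,.0f}")
print(f"premium after shedding top 20%:    ${post_shed_premium:,.0f}")
print(f"households pushed to the backstop: {len(losses) - len(kept):,}")
# The survivors pay roughly 40% less, the shed households carry the
# concentrated losses, and the cross-subsidy that defines insurance is gone.
```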

When Louisiana regulators asked actuaries to explain their wildfire risk models, the actuaries could identify the inputs — satellite imagery, vegetation density, wind corridors — but could not explain why the model weighted them as it did. The regulatory hearing became theater. The opacity is not a side effect. It is the mechanism by which algorithmic underwriting avoids the regulatory challenge that traditional underwriting would face.

“The models work exactly as intended. They identify which customers will cost more than they pay, and they remove them. The question is whether an insurance market organized around that principle can still be called insurance.” — Daniel Schwarcz, Fredrikson & Byron Professor of Law, University of Minnesota. The Atlantic, November 2025.

“We went from 180,000 policies in the FAIR Plan to 540,000 in eighteen months. That is not a market correction. That is a market evacuation.” — Victoria Roach, president of the California FAIR Plan Association. Testimony to California Senate Insurance Committee, January 2026.

“The satellite model did not tell State Farm anything it did not already know. What the model provided was an actuarial justification that regulators could not easily challenge, because they could not easily understand it. The opacity is the feature.” — Birny Birnbaum, executive director, Center for Economic Justice. Testimony to NAIC, November 2025.

“Insurance is a social technology. It works by pooling risk across people who cannot individually predict their own losses. Algorithmic precision destroys the pool by identifying who will lose and removing them. What remains is not insurance. It is a savings account for the healthy and the lucky.” — Ronen Avraham, professor of law, Tel Aviv University. Harvard Law Review, 2025.

Relevant Events: FAIR Plan at 180,000 policies (mid-2024). Allstate 1-meter aerial scoring patent (November 2025). Louisiana Insurance Commissioner testimony to NAIC (December 2025). State Farm 72,000 non-renewals (January 2026). FAIR Plan reaches 540,000 policies (Q1 2026).


V. The Camera That Turned Around

In 2014, after Ferguson, America bought body cameras. The promise was accountability: officers would behave better when recorded, and citizens would have evidence when they did not. Twelve years later, the same hardware runs real-time facial recognition. Nobody voted for the change. Nobody was even asked.

The arc was not accidental. Axon’s investor presentations from 2019 onward described the body camera as a platform, not a product. The revenue model assumed analytics services — facial recognition, automated reporting, real-time suspect alerts — would exceed hardware revenue by 2024. The accountability narrative got cameras onto officers. The analytics narrative pays for them. By Q3 2025, facial recognition was Axon’s fastest-growing product segment, with 42% year-over-year revenue growth.

San Francisco banned facial recognition in 2019 and reversed the ban on January 15, 2025. Nothing changed about the technology’s accuracy in that period. The political coalition that supported the ban dissolved. The technology outlasted its critics. Edmonton police deployed real-time facial recognition in February 2026. And NIST’s ongoing Face Recognition Vendor Test continues to show persistent demographic differentials: false positive rates for Black women remain 10 to 100 times higher than for white men in the majority of commercial systems. The gap is not closing at the rate that would justify widespread deployment. Deploying anyway is a choice, not a technical inevitability.
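
The mechanism behind those differentials is simple to demonstrate: if non-match similarity scores run higher for one demographic group, a single global threshold converts that shift into an order-of-magnitude gap in false positives. A toy simulation, with invented score distributions:

```python
# Toy demonstration of demographic false-positive differentials. If
# impostor (different-person) similarity scores sit higher for one group,
# one global match threshold yields very different false positive rates.
# The distributions and the threshold are invented for illustration.
import random

random.seed(1)
THRESHOLD = 0.80  # assumed global "match" threshold

def impostor_scores(mu, sigma, n=100_000):
    """Similarity scores for pairs of different people."""
    return [random.gauss(mu, sigma) for _ in range(n)]

groups = {
    "group A": impostor_scores(mu=0.45, sigma=0.10),
    "group B": impostor_scores(mu=0.60, sigma=0.10),
}

rates = {}
for name, scores in groups.items():
    rates[name] = sum(s >= THRESHOLD for s in scores) / len(scores)
    print(f"{name}: false positive rate {rates[name]:.4%}")

print(f"ratio, B to A: roughly {rates['group B'] / rates['group A']:.0f}x")
```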

“We sold body cameras to every police department in the country on the promise that they would create accountability. Now the same hardware is running facial recognition in real time. Nobody voted for that. Nobody was even asked.” — Jay Stanley, senior policy analyst, ACLU Speech, Privacy, and Technology Project. Wired, October 2025.

“The error rates are not distributed randomly. For dark-skinned women, the best commercial systems still fail at rates ten to a hundred times higher than for light-skinned men. Deploying these systems as though the problem is solved is a choice, not a technical inevitability.” — Joy Buolamwini, founder of the Algorithmic Justice League, MIT Media Lab. Nature, March 2025.

“I reviewed Axon’s investor presentations from 2019 to 2025. The body camera was always described as a platform, not a product. The accountability narrative got the cameras onto officers. The analytics narrative is what pays for them.” — Andrew Guthrie Ferguson, professor of law, American University. Stanford Law Review, 2025.

“We banned facial recognition in San Francisco in 2019. We reversed the ban in 2025. Nothing changed about the technology’s accuracy. What changed is that the political coalition that supported the ban dissolved. The technology outlasted its critics.” — Matt Cagle, senior staff attorney, ACLU of Northern California. The Intercept, February 2025.

Relevant Events: Ferguson and national body camera push (2014). San Francisco facial recognition ban (2019). San Francisco ban reversal (January 2025). NIST updated facial recognition report (March 2025). Axon reports 42% facial recognition revenue growth (Q3 2025). Edmonton real-time deployment (February 2026). EU AI Act creates transatlantic policy divergence (2025-2026).


Editorial: The Courtroom Is the Last Room With a Subpoena

The five stories in this edition share a structural feature that is easy to miss: in every case, the algorithm performed exactly as its designers intended. Cigna’s PxDx system was built to deny claims at volume. RealPage’s pricing model was sold on its ability to coordinate rents. State Farm’s satellite models were designed to identify and shed unprofitable customers. Axon’s body camera was always a platform for analytics revenue. The Dallas Fed’s data confirms that AI-exposed industries are retaining senior workers and eliminating junior ones — precisely the pattern you would design if your goal were short-term productivity at the expense of long-term human capital.

The conventional framing treats these as malfunctions — algorithms gone wrong, unintended consequences, technology outpacing regulation. That framing is incorrect. These systems are working. The question is not whether they are broken. The question is whether institutions should be permitted to automate decisions that, if made by a human being, would carry personal legal liability.

A physician who personally denied 60,000 claims without reading them would face a malpractice action. Landlords coordinating prices by telephone would face criminal prosecution. An insurer that admitted to dropping customers based on demographics rather than individualized risk would face civil rights litigation. In each case, the algorithm provides the same outcome while dissolving the same liability.

This is why the courtroom matters. It is the first institution in this chain that can compel an algorithm’s architects to explain, under oath, what they built and why they built it. The RealPage trial, the Cigna depositions, the SCOTUS cert petition — these are not regulatory afterthoughts. They are the opening arguments in a contest that will determine whether algorithmic decision-making receives the same legal scrutiny as human decision-making, or whether the opacity of the machine becomes a permanent shield against accountability.

The data have arrived. The Dallas Fed measured the labor displacement. NIST measured the facial recognition error rates. ProPublica measured the rent coordination. The California FAIR Plan measured the insurance evacuation. For years, critics of algorithmic systems argued from anecdote and analogy. That era is over. The numbers are in. The question now is whether the institutions with the power to act — courts, legislatures, regulators — will treat those numbers as evidence or as background noise.

The machines are working as designed. The question is who designed them, for whose benefit, and whether the rest of us get a vote.