The Review

The Signature That Means Nothing
A Research Edition — What Machines Are Doing to Us
Friday, March 6, 2026
Vol. I · No. 8 · Friday Edition · danielbmarkham.com · Free to Subscribers

The letter arrives with a doctor’s name at the bottom. The rent increase comes with an explanation about market conditions. The policy non-renewal cites updated risk modeling. The facial recognition match appears on an officer’s screen with a confidence score. In each case, a human being appears to have made a decision. In none of these cases did a human being make a decision.

This is the central fact of early 2026: the institutions Americans rely on — to pay medical claims, to set fair rents, to insure homes, to hire the next generation, to identify the right suspect — have automated the decisions while preserving the appearance of human judgment. The signature is still there. The signatory is not.

In this edition of The Review, we examine five stories from five sectors that share one structural pattern. An insurance system that denies 60,000 claims per month under a single physician’s name. A pricing algorithm whose designer testified that market dominance was the sales pitch, not the side effect. A labor market where AI is eliminating not the senior job but the training pathway to it. An insurance industry using satellite models to shed customers at scale while claiming individualized review. And a surveillance technology sold as accountability that became its opposite within a decade.

These are not cautionary tales about the future. They are reports from the present. The machines are already here. They are already deciding. And in every case, someone signed off on it.

We begin where the pattern is most visible — where a single number exposes the lie. One-point-two seconds.

I · The 1.2-Second Denial

The 1.2-Second Denial: When a Doctor’s Signature Means Throughput

A single medical director at Cigna signed approximately 60,000 insurance claim denials in one month — one every 1.2 seconds of working time. The physician never saw the file.

A single medical director at Cigna signed approximately 60,000 insurance claim denials in one month. That works out to one every 1.2 seconds of working time. ProPublica’s 2023 investigation, which analyzed over 300,000 denial records, documented the assembly line: claims entered the PxDx system, the system flagged them for denial based on diagnosis-procedure mismatches, and a physician’s name was affixed to the letter. The physician never saw the file. The system was architecturally incapable of performing the individual medical review it claimed to perform.
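
The structural pattern is simple enough to sketch in code. What follows is a minimal illustration of a batch-denial filter of the kind the reporting describes; the rule table, codes, and claim fields are invented for the example, and nothing here is Cigna’s actual schema or logic.

```python
# A minimal, illustrative batch-denial filter. The coverage table, codes,
# and claim fields below are invented for this sketch; they are not Cigna's
# PxDx schema or logic, only the structural pattern the reporting describes.
from dataclasses import dataclass


@dataclass
class Claim:
    claim_id: str
    diagnosis_code: str    # e.g. an ICD-10 code
    procedure_code: str    # e.g. a CPT code


# Hypothetical table of procedures considered "matched" to a diagnosis.
COVERED_PROCEDURES = {
    "E11.9": {"82947", "83036"},   # type 2 diabetes: glucose test, A1c test
    "J06.9": {"99213"},            # upper respiratory infection: office visit
}


def flag_for_denial(claim: Claim) -> bool:
    """True when the billed procedure is not on the diagnosis's covered list."""
    allowed = COVERED_PROCEDURES.get(claim.diagnosis_code, set())
    return claim.procedure_code not in allowed


def batch_review(claims: list[Claim], physician_name: str) -> list[dict]:
    """Attach one physician's name to every flagged denial. No file is opened."""
    return [
        {"claim_id": c.claim_id, "decision": "DENIED", "signed_by": physician_name}
        for c in claims
        if flag_for_denial(c)
    ]
```

Nothing in that loop opens a file or reads a chart. The physician’s name is a string constant.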

The denial letters carried a doctor’s signature. The signature meant nothing. Internal Cigna communications obtained during litigation confirmed the design goal was throughput. The physician’s name provided legal cover, not clinical judgment.

California responded with SB 1120, effective January 2025, banning algorithmic batch denials without individual physician review. Three state attorneys general opened investigations. In late 2025, a SCOTUS cert petition raised a question the lower courts had avoided: whether representing algorithmic output as physician review constitutes fraud rather than malpractice — a distinction that removes the cap on damages.

[Infographic] “The Assembly Line: How 60,000 Denials Get a Doctor’s Name.” Panels: The PxDx Denial Pipeline (claims enter the system, the algorithm flags diagnosis-procedure mismatches, a physician’s name is affixed, 1.2 seconds per denial decision, no file opened, no chart reviewed) and Regulatory Response (CA SB 1120, three state AG investigations 2024–2025, SCOTUS cert petition on fraud vs. malpractice). Sources: ProPublica (March 2023); California SB 1120; Cigna deposition records (January 2026)
“The doctor’s name was on the letter, but the doctor never saw the file. The system was designed so that no individual physician could be held responsible for any individual denial.” — Dr. David Himmelstein, professor of public health, Hunter College; co-founder, Physicians for a National Health Program
“The volume alone tells you what happened. No physician on earth can evaluate 60,000 individual cases in a month. If you accept the number, you have accepted that the review did not occur.” — Dr. Robert Pearl, former CEO, The Permanente Medical Group; professor, Stanford School of Medicine

Further Reading

PRO · PNHP 2023
The Algorithmic Denial Machine: How Insurers Automate Claim Rejections
Dr. David Himmelstein & Dr. Steffie Woolhandler · Physicians for a National Health Program, 2023
Documents how batch denial systems replace clinical judgment with throughput metrics, using Cigna’s PxDx as the central case study.
CON · PennLaw 2025
Algorithmic Review Is Not Fraud: The Case for Scalable Medical Oversight
Allison Hoffman · University of Pennsylvania Carey Law School · Yale Law Journal, 2025
Argues the fraud framing overstates the legal case; algorithmic tools can augment physician review rather than replace it.
The Cigna system denied claims using a formula that matched diagnoses to procedures. When the formula said no, the letter said no, and a doctor’s name made it look like medicine. But the same structural logic — an algorithm making the consequential decision while a human signature provides the cover — does not stop at health insurance. It extends to any market where competing businesses share the same algorithmic platform. In rental housing, the algorithm does not just deny. It coordinates.
§
II · The Algorithmic Landlord

The Algorithmic Landlord Goes to Trial

When a single algorithm sets prices for competing businesses using the same proprietary data, does that constitute illegal coordination? The DOJ says yes. The trial will define competition law for a generation.

The Department of Justice’s antitrust trial against RealPage is testing a question that will define competition law for a generation: when a single algorithm sets prices for competing businesses using the same proprietary data, does that constitute illegal coordination?

The star witness is the data scientist who built the pricing model. In depositions, internal projections surfaced showing that at 30% market share in a given submarket, the model would effectively set the market price. That projection was not treated as a warning. It was the sales pitch. The defense and the prosecution agree on the basic facts — the algorithm coordinated pricing. The dispute is whether doing it through software rather than a phone call changes the legal analysis.

The case does not stand alone. Three independent proofs of algorithmic price coordination now exist across unrelated sectors. MIT researchers studied 28,000 gas stations using the same dynamic pricing algorithm and found prices rose 5 to 9 cents per gallon above competitive equilibrium when market penetration exceeded 25–30%. ProPublica tracked 11 Atlanta apartment complexes, all using RealPage, where rents converged within one-half of one percent month over month — a degree of convergence that does not occur in competitive markets. The FTC identified comparable patterns in grocery pricing. Different researchers, different methodologies, different industries, same empirical signature.
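
The “empirical signature” is a measurable quantity. Here is a minimal sketch of one way to express it, using invented rent figures: how far competing complexes’ asking rents sit from one another before and after they adopt a shared pricing platform. Only the half-percent threshold comes from the reporting.

```python
# One way to quantify the convergence signature: maximum deviation of listed
# rents from their mean, as a share of the mean. The rent figures are invented
# for illustration; only the 0.5% threshold comes from the reporting.
import statistics


def convergence_spread(rents: list[float]) -> float:
    """Largest deviation from the mean rent, expressed as a fraction of the mean."""
    mean = statistics.mean(rents)
    return max(abs(r - mean) for r in rents) / mean


# Hypothetical asking rents at competing complexes, before and after the
# landlords adopt a shared pricing platform.
before = [1480, 1395, 1562, 1410, 1530]
after = [1618, 1622, 1615, 1620, 1624]

print(f"before: {convergence_spread(before):.2%}")   # several percent apart
print(f"after:  {convergence_spread(after):.2%}")    # within roughly 0.5% of the mean
```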

[Infographic] “The Convergence Signature: Three Industries, One Pattern.” Panels: Atlanta Rent Convergence, 11 complexes (rents within 0.5% of one another after RealPage adoption); The 30% Threshold (competitive below, price-setting above); Three Independent Proofs (MIT, 28,000 gas stations, +5–9¢/gal above equilibrium; ProPublica, 11 Atlanta complexes; FTC, comparable grocery patterns). Sources: MIT Sloan (2024); ProPublica (February 2026); FTC investigation; DOJ antitrust filings (2024–2026)
“When a single algorithm sets prices for competing landlords using the same data, you do not need a smoke-filled room. The coordination is the product.” — Jonathan Kanter, Assistant Attorney General for Antitrust, DOJ
“I told them that if we got above 30 percent market share in a given submarket, the model would effectively set the market price. That was not a warning. That was the sales pitch.” — Former RealPage data scientist (identity sealed), deposition excerpt

Further Reading

PRO · YALE 2024
Algorithmic Collusion: When Software Replaces the Smoke-Filled Room
Fiona Scott Morton · Yale School of Management · The New York Times, September 2024
Argues algorithmic price coordination through shared platforms meets the legal standard for collusion regardless of the medium.
CON · RP 2026
Pricing Recommendations Are Not Price Fixing
RealPage Legal Brief · DOJ v. RealPage, February 2026
Defense argues recommendations are non-binding, landlords retain independent pricing authority, and convergence reflects market fundamentals.
Algorithmic coordination in housing extracts money from renters who have no alternative. But there is a subtler form of algorithmic exclusion — one that does not raise your rent or deny your claim. It simply ensures you never get hired in the first place. The same structural pattern that hollowed out medical review and replaced competitive pricing with coordination is now closing the door on an entire generation’s entry into the workforce.
§
III · The Vanishing Apprenticeship

The Apprentice Who Never Got Hired

In AI-exposed industries, wages rose 8.5% while employment contracted — a combination with no precedent in post-war economic data. AI is not automating the surgeon. It is automating the residency.

In the 40 months since ChatGPT’s launch, a structural pattern has emerged in American labor markets that has no precedent in post-war economic data. The Dallas Federal Reserve’s February 2025 report documented it: in industries with the highest AI exposure scores, wages rose 8.5% while employment contracted. Rising wages and falling headcount, simultaneously. The pattern is consistent with firms paying more to retain experienced workers while eliminating entry-level intake entirely.
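
A small piece of arithmetic shows how both halves of the pattern can hold at once. The headcounts and salaries below are invented, not Dallas Fed data; they simply model the two channels described above, raises for retained senior workers and the elimination of junior intake.

```python
# Toy arithmetic, not Dallas Fed data: a firm that raises pay for retained
# senior staff while cutting junior intake shows rising average wages and
# falling headcount at the same time.
workforce_before = {"junior": (300, 70_000), "senior": (700, 130_000)}  # (headcount, wage)
workforce_after  = {"junior": (200, 71_000), "senior": (700, 137_000)}


def average_wage_and_headcount(tiers: dict) -> tuple[float, int]:
    """Return (average wage, total headcount) across all tiers."""
    headcount = sum(n for n, _ in tiers.values())
    payroll = sum(n * w for n, w in tiers.values())
    return payroll / headcount, headcount


wage_0, jobs_0 = average_wage_and_headcount(workforce_before)
wage_1, jobs_1 = average_wage_and_headcount(workforce_after)

print(f"average wage: {wage_1 / wage_0 - 1:+.1%}")   # positive
print(f"employment:   {jobs_1 / jobs_0 - 1:+.1%}")   # negative
```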

The Bureau of Labor Statistics confirmed the trend with a preliminary benchmark revision showing downward adjustments in 12 of 17 AI-exposed occupational categories. Georgetown’s Center on Education and the Workforce tracked credential requirements across 2,800 job categories and found that in AI-exposed fields, the share of entry-level postings requiring a master’s degree rose from 12% to 31% between 2019 and 2025. That is the labor market rationing a shrinking number of positions behind ever-higher credential walls.

The deeper structural problem: AI is not automating the senior role. It is automating the training pathway to the senior role. The residency, the apprenticeship, the junior associate position — the jobs where humans learn to do the work that AI cannot yet do. Eliminate the pipeline, and the senior roles themselves become unfillable within a generation.

[Infographic] “The Paradox: Higher Wages, Fewer Jobs, No Entry Path.” Panels: AI-Exposed Industries, 40 months post-ChatGPT (wages +8.5%, employment −7.2%); The Credential Wall (entry-level postings requiring a master’s degree, 12% in 2019 to 31% in 2025, Georgetown CEW, 2,800 job categories); BLS benchmark revision (downward adjustments in 12 of 17 AI-exposed occupational categories). Sources: Dallas Fed Working Paper (Feb 2025); BLS preliminary revision (Feb 2025); Georgetown CEW (Oct 2025)
“The paradox is that AI is most threatening not to the jobs it can fully replace, but to the jobs that serve as training grounds. You eliminate the residency, not the surgeon.” — David Autor, Ford Professor of Economics, MIT
“We are not automating the surgeon. We are automating the residency. The AI can do the work that trains the human to do the work the AI cannot do. That is not a productivity story. That is an extinction story for the pipeline.” — Daron Acemoglu, Institute Professor, MIT

Further Reading

PRO · MIT 2025
The Pipeline Problem: Why AI Threatens Entry-Level Work Most
David Autor · MIT · MIT Technology Review, September 2025
Argues AI’s primary labor impact is eliminating training pathways, not senior roles, creating a generational knowledge-transfer crisis.
CON · GTOWN 2025
Credential Inflation Predates AI: The Longer View
Nicole Smith · Georgetown Center on Education and the Workforce · October 2025
Notes credential requirements have been rising since 2010; argues AI accelerates an existing trend rather than creating a new one.
The labor market sheds the next generation quietly — a job posting that now requires a master’s degree, an internship that becomes unpaid, a junior role that simply is not backfilled. Insurance does it louder. When an algorithm decides your home is too risky, you do not get a credential wall. You get a letter telling you that as of next month, you are on your own.
§
IV · The Insurance Withdrawal

The Insurance Withdrawal: When the Algorithm Evacuates the Market

State Farm non-renewed 72,000 policies in a single cycle using satellite-fed wildfire risk models. The California FAIR Plan — designed as a temporary backstop — tripled to 540,000 enrollees in eighteen months.

Across five states and three insurance lines, the same pattern is playing out: algorithmic risk models enabling insurers to identify and shed unprofitable customers at scale, with no human adjuster involved. State Farm non-renewed 72,000 policies in California in a single cycle in January 2026, using satellite-fed wildfire risk models operating at 1-meter resolution. The California FAIR Plan — a state-mandated insurer of last resort designed as a temporary backstop — absorbed 540,000 enrollees in eighteen months, tripling from 180,000 in mid-2024. A backstop cannot function as the primary market.

The models work as intended. They identify which customers will cost more than they pay, and they remove them. But insurance is a social technology — it works by pooling risk across people who cannot individually predict their own losses. Algorithmic precision destroys the pool by identifying who will lose and removing them before the loss occurs. What remains is not insurance in any meaningful sense.
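
The pooling argument can be made concrete with a toy simulation. The loss probabilities, payout, and share of high-risk homes below are assumptions chosen for illustration, and the “model” here is simply perfect knowledge of who the high-risk customers are; the point is only the structure, pricing the pool versus pricing the individual.

```python
# Toy simulation of the pooling argument. Loss probabilities, payout, and the
# share of high-risk homes are assumptions for illustration; the "model" here
# is simply perfect knowledge of who the high-risk customers are.
import random

random.seed(0)

N = 100_000
HIGH_RISK_SHARE = 0.05          # 5% of homes carry a 20% annual loss probability
LOW_P, HIGH_P = 0.002, 0.20
PAYOUT = 400_000                # cost of a total loss

homes = [HIGH_P if random.random() < HIGH_RISK_SHARE else LOW_P for _ in range(N)]

# Pooled market: one premium covers everyone's expected losses.
pooled_premium = sum(p * PAYOUT for p in homes) / N

# Algorithmic market: the model identifies the high-risk homes and sheds them.
retained = [p for p in homes if p == LOW_P]
retained_premium = sum(p * PAYOUT for p in retained) / len(retained)

print(f"pooled premium:  ${pooled_premium:,.0f} across all {N:,} homes")
print(f"after shedding:  ${retained_premium:,.0f} for {len(retained):,} homes; "
      f"{N - len(retained):,} homes have no private coverage at all")
```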

When Louisiana regulators asked actuaries to explain their wildfire risk models, the actuaries could identify the inputs — satellite imagery, vegetation density, wind corridors — but could not explain why the model weighted them as it did. The regulatory hearing became theater. The opacity is not a side effect. It is the mechanism by which algorithmic underwriting avoids the regulatory challenge that traditional underwriting would face.

[Infographic] “The Market Evacuation: Private Insurers Out, FAIR Plan Overwhelmed.” Panels: California FAIR Plan Enrollment (180K mid-2024 to 540K Q1 2026); Algorithmic Non-Renewals (72,000 State Farm non-renewals, Jan 2026, 1-meter satellite resolution, no human adjuster); The Opacity Problem (actuaries can name the inputs, cannot explain the weightings). Sources: CA FAIR Plan data (Q1 2026); State Farm filings (Jan 2026); NAIC testimony (Nov–Dec 2025); Allstate patent (Nov 2025)
“The models work exactly as intended. They identify which customers will cost more than they pay, and they remove them. The question is whether an insurance market organized around that principle can still be called insurance.” — Daniel Schwarcz, Fredrikson & Byron Professor of Law, University of Minnesota
“We went from 180,000 policies in the FAIR Plan to 540,000 in eighteen months. That is not a market correction. That is a market evacuation.” — Victoria Roach, President, California FAIR Plan Association

Further Reading

PRO · MINN 2025
The End of Risk Pooling: How Algorithmic Underwriting Destroys Insurance
Daniel Schwarcz · University of Minnesota Law · The Atlantic, November 2025
Argues algorithmic precision in risk assessment is structurally incompatible with the social function of insurance.
CON · NAIC 2025
Risk-Based Pricing Protects Solvent Markets
American Property Casualty Insurance Association · NAIC Testimony, December 2025
Industry argues accurate risk pricing prevents insolvency and cross-subsidization; FAIR Plan growth reflects underpriced risk, not market failure.
The insurance withdrawal uses satellite imagery to decide who loses coverage. But satellite imagery has another use, one that takes us back to the original promise of cameras pointed at power. In 2014, America bought body cameras to hold police accountable. What happened next is less a story about technology than a story about who gets to decide what a camera is for.
§
V · The Camera That Turned Around

The Camera That Turned Around

In 2014, America bought body cameras for accountability. In 2025, the same hardware runs real-time facial recognition. Nobody voted for the change. Nobody was even asked.

In 2014, after Ferguson, America bought body cameras. The promise was accountability: officers would behave better when recorded, and citizens would have evidence when they did not. Twelve years later, the same hardware runs real-time facial recognition. Nobody voted for the change. Nobody was even asked.

The arc was not accidental. Axon’s investor presentations from 2019 onward described the body camera as a platform, not a product. The revenue model assumed analytics services — facial recognition, automated reporting, real-time suspect alerts — would exceed hardware revenue by 2024. The accountability narrative got cameras onto officers. The analytics narrative pays for them. By Q3 2025, facial recognition was Axon’s fastest-growing product segment, with 42% year-over-year revenue growth.

San Francisco banned facial recognition in 2019 and reversed the ban on January 15, 2025. Nothing changed about the technology’s accuracy in that period. The political coalition that supported the ban dissolved. The technology outlasted its critics. Edmonton police deployed real-time facial recognition in February 2026. And NIST’s ongoing Face Recognition Vendor Test continues to show persistent demographic differentials: false positive rates for Black women remain 10 to 100 times higher than for white men in the majority of commercial systems. The gap is not closing at the rate that would justify widespread deployment. Deploying anyway is a choice, not a technical inevitability.
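
The NIST differential is a ratio of false positive rates measured per demographic group at a single operating threshold. Here is a minimal sketch of that metric; the similarity scores, threshold, and group labels are synthetic stand-ins, not FRVT data, and a toy sample this small cannot reproduce the real gap. It shows only what the statistic measures.

```python
# What the NIST differential measures: false positive rate per demographic
# group at one shared decision threshold. Scores, labels, and the threshold
# below are synthetic; a sample this small cannot show a 10-100x gap,
# only the shape of the statistic.
def false_positive_rate(impostor_scores: list[float], threshold: float) -> float:
    """Share of non-matching comparisons whose similarity score clears the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)


# Synthetic similarity scores for comparisons of *different* people.
group_a = [0.12, 0.31, 0.08, 0.22, 0.15, 0.51, 0.19, 0.27]
group_b = [0.33, 0.52, 0.61, 0.28, 0.47, 0.58, 0.44, 0.71]

THRESHOLD = 0.50   # one operating point applied to everyone

fpr_a = false_positive_rate(group_a, THRESHOLD)
fpr_b = false_positive_rate(group_b, THRESHOLD)
print(f"group A false positive rate: {fpr_a:.3f}")
print(f"group B false positive rate: {fpr_b:.3f}")
print(f"ratio: {fpr_b / fpr_a:.1f}x")
```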

[Infographic] “From Accountability to Surveillance: The Body Camera Platform Pivot.” Panels: The Platform Evolution (2014 Ferguson and the accountability promise; 2019 Axon’s “platform, not product” framing and the SF ban; 2025 SF reversal and facial recognition revenue +42% YoY; 2026 Edmonton real-time deployment); NIST Face Recognition Vendor Test error-rate disparity (false positives 10–100× higher for Black women than for white men); Axon Q3 2025 (facial recognition the fastest-growing segment). Sources: NIST FRVT (March 2025); Axon Q3 2025 earnings; Wired (Oct 2025); Edmonton Police (Feb 2026)
“We sold body cameras to every police department in the country on the promise that they would create accountability. Now the same hardware is running facial recognition in real time. Nobody voted for that. Nobody was even asked.” — Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project
“The error rates are not distributed randomly. For dark-skinned women, the best commercial systems still fail at rates ten to a hundred times higher than for light-skinned men. Deploying these systems as though the problem is solved is a choice, not a technical inevitability.” — Joy Buolamwini, Founder, Algorithmic Justice League, MIT Media Lab

Further Reading

PRO · AJL 2025
Unmasking the Machine: Facial Recognition’s Persistent Bias Problem
Joy Buolamwini · Algorithmic Justice League · Nature, March 2025
Documents persistent demographic differentials in commercial facial recognition systems and argues deployment at current error rates is a policy choice.
CON · STAN 2025
The Body Camera as Evidentiary Platform: A Defense of Analytical Expansion
Andrew Guthrie Ferguson · American University · Stanford Law Review, 2025
While critical of facial recognition, argues the evidentiary platform model improves investigative accuracy when properly regulated.
A camera bought to watch the watchers now watches everyone. A doctor’s signature that meant review now means throughput. A pricing algorithm sold as a recommendation engine now sets the market. An insurance model designed to assess risk now evacuates markets. A labor market that trained the next generation now walls them out. In every case, the machine works exactly as designed. In every case, the question is the same.
Editorial

Editorial: The Courtroom Is the Last Room With a Subpoena

The five stories in this edition share a structural feature that is easy to miss: in every case, the algorithm performed exactly as its designers intended. Cigna’s PxDx system was built to deny claims at volume. RealPage’s pricing model was sold on its ability to coordinate rents. State Farm’s satellite models were designed to identify and shed unprofitable customers. Axon’s body camera was always a platform for analytics revenue. The Dallas Fed’s data confirms that AI-exposed industries are retaining senior workers and eliminating junior ones — precisely the pattern you would design if your goal were short-term productivity at the expense of long-term human capital.

The conventional framing treats these as malfunctions — algorithms gone wrong, unintended consequences, technology outpacing regulation. That framing is incorrect. These systems are working. The question is not whether they are broken. The question is whether institutions should be permitted to automate decisions that, if made by a human being, would carry personal legal liability.

A physician who personally denied 60,000 claims without reading them would face a malpractice suit. Landlords coordinating prices by telephone would face criminal prosecution. An insurer that admitted to dropping customers based on demographics rather than individualized risk would face civil rights litigation. In each case, the algorithm provides the same outcome while dissolving the same liability.

This is why the courtroom matters. It is the first institution in this chain that can compel an algorithm’s architects to explain, under oath, what they built and why they built it. The RealPage trial, the Cigna depositions, the SCOTUS cert petition — these are not regulatory afterthoughts. They are the opening arguments in a contest that will determine whether algorithmic decision-making receives the same legal scrutiny as human decision-making, or whether the opacity of the machine becomes a permanent shield against accountability.

The data arrived in Q1 2026. The Dallas Fed measured the labor displacement. NIST measured the facial recognition error rates. ProPublica measured the rent coordination. The California FAIR Plan measured the insurance evacuation. For years, critics of algorithmic systems argued from anecdote and analogy. That era is over. The numbers are in. The question now is whether the institutions with the power to act — courts, legislatures, regulators — will treat those numbers as evidence or as background noise.

The machines are working as designed. The question is who designed them, for whose benefit, and whether the rest of us get a vote.