VOL. I, NO. 12 • TUESDAY, FEBRUARY 4, 2026 • PRICE: ONE AUDIT OF YOUR SOUL

THE REVIEW

“Counting what matters—before we forget why it mattered”


❧ ❧ ❧

The Audit Nobody Ordered

This winter, institutions everywhere discovered the price of a soul—and decided it wasn’t worth paying

Dear reader: Welcome to an accounting of the accountants.

Between Thanksgiving 2025 and Groundhog Day 2026, universities, hospitals, courts, and museums arrived—separately, without apparent coordination—at the same conclusion: human judgment is expensive, inconsistent, and difficult to scale. Philosophy departments suspended doctoral admissions. A leading hospital replaced its ethics committee with “Ethics Ambassadors.” A startup promised AI arbitrators with “zero hallucinations.” Federal regulators required physicians to document, to the minute, how long they discussed death with patients.

This edition examines what might be called the Metricization of Moral Obligation: the quiet conversion of wisdom into line items, judgment into algorithms, and discretion into documented deliverables. Five stories, one theme. We do not claim the sky is falling—only that someone is measuring it.

Inside: the strange timing of Georgetown’s doctoral suspension; the disappearance of the deliberative ethics committee; AI judges who cannot feel mercy; the impossible insurance mathematics of returning stolen art; and expert predictions about machine superintelligence ranging from “three months” to “never.”

If you disagree, please document your objection in triplicate, with timestamps.


❧ ❧ ❧

The Christmas Week Massacre

Georgetown told applicants “never mind” one week after they applied

The Department of Philosophy at Georgetown University informed prospective doctoral students on Dec. 22 that their applications had been filed, reviewed, and—oops—rejected by temporal paradox. The program would not accept anyone for Fall 2026.

The timing mattered: Dec. 22 falls exactly one week after the Dec. 15 application deadline. Students had invested months preparing writing samples, securing recommendations, and paying fees for a program that had already closed. The philosophers learned about the suspension around the same time the applicants did.

Georgetown cited a “perfect storm”: a projected revenue decline of $1.5 million over five years. One email, sent during Christmas week, accomplished what would take years of faculty negotiation.

“I sincerely apologize for the late timing of this decision, which I understand is extremely frustrating for many reasons.” — John Greco, Director of Graduate Admissions

Georgetown is not alone. Chicago suspended 19 programs. Cornell, Brown, and Boston University paused anthropology and humanities. Harvard is reducing its philosophy intake. Rutgers: zero admissions. The Journal of Philosophy—operating since 1904—suspended new submissions until August.

Critics call this the “missing generation” problem: a void in the mentorship chain that has defined academic philosophy since Plato walked with Aristotle. Without students to train, the ecosystem that produces philosophers develops a gap. The field that asks “What is worth pursuing?” has received an answer from the spreadsheet.


FOR FURTHER READING: PERSPECTIVES

🟢 PRO

“Humanities Cuts Leave Us Defenceless in the Age of AI,” Times Higher Education (January 2026) — Agnieszka Piotrowska. Precisely when society most needs qualitative reasoning about AI, we are dismantling the departments equipped to provide it.

🔴 CON

“Does Cutting Philosophy Help A University’s Budget?” Daily Nous (April 2024) — David C.K. Curry. Universities that cut philosophy did not see enrollment gains. At Emporia State, enrollment fell 12.5% while other Kansas institutions grew 2%.


❧ ❧ ❧

The Committee Will See You Never

Cleveland Clinic traded deliberation for delivery—and called it progress

For decades, the hospital ethics committee served as the institution’s conscience: a diverse panel of physicians, nurses, chaplains, social workers, and community members who gathered to wrestle with questions no algorithm could answer. Should we continue treatment when the family disagrees? When does autonomy cross into harm?

At Cleveland Clinic, that model is now obsolete.

Reports confirm that Cleveland Clinic has dissolved its traditional ethics committees, replacing them with “Ethics Ambassadors”—professional consultants embedded in clinical units who deliver “just-in-time ethics.” The standing committee, which once convened at inconvenient hours to bring multiple perspectives to moral dilemmas, has been streamlined into a service.

The case for dissolution was operational. Committees met at 7 a.m. or noon—times that excluded bedside nurses and community members. Attendance collapsed to 23.8 percent in one case. The “output” of deliberation often stayed trapped in meeting rooms. Members struggled to define their purpose.

“While members were ‘interested,’ they often lacked the time or expertise to deliver education outside the committee context.” — Staff ethicist describing committee dysfunction

The new model professionalizes the function: credentialed ethicists deliver consistent, legally sound guidance faster than any committee could convene. Critics see a different story. A committee, by virtue of its diversity, is harder to pressure than a single employee. The shift from “deliberative democracy” to “service delivery” aligns ethics more closely with hospital administration.

Meanwhile, federal regulators have their own ideas. CMS now requires physicians to document start and end times for Advance Care Planning conversations—discussions about how patients wish to die. The billing codes (CPT 99497 and 99498) require a minimum of 16 minutes. Fourteen minutes of discussing a dying patient’s fears? No bill. Sixteen minutes? Billable.

An OIG audit found $42 million in “improper payments”—not because conversations didn’t happen, but because they weren’t timed correctly. One bioethicist proposed a “null code” for time spent simply being present. No such code exists. Silence, grief, and listening generate no revenue.


FOR FURTHER READING: PERSPECTIVES

🟢 PRO

“Reimagining Thriving Ethics Programs Without Ethics Committees,” The American Journal of Bioethics (March 2025) — Mabel, Crites, Cunningham, Potter. Professional clinical ethicists can provide more consistent, timely, and expert support than volunteer committees with attendance problems.

🔴 CON

“(Ir)Relevance of Ethics Committees,” The American Journal of Bioethics (March 2025). Committees provide community deliberation and institutional independence that professional staff cannot replicate.


❧ ❧ ❧

The Robot Will Hear Your Case Now

Arbitrus.ai promises justice in 72 hours—and zero hallucinations

The founders of Arbitrus.ai have a pitch: Why pay $10,000 for human arbitration when a fast, allegedly unbiased AI delivers binding decisions in 72 hours?

The startup, launched in February 2026, claims its AI judge achieved “zero hallucinations” across 100 test scenarios. The system handles the entire arbitration process—filing, briefing, discovery, decisions—on its platform. Initial targets: business-to-business vendor disputes and employment contracts. Coming soon: landlord-tenant.

The marketing builds on acknowledged failures of human justice. Critics point to “procedural rot”—the decay of due process into bureaucratic delay. Human arbitrators want to be hired again, creating incentives to “split the baby” rather than rule decisively. Courts are slow, expensive, inaccessible.

Because arbitration is consensual—parties agree in advance to accept decisions—a contract specifying AI arbitration is theoretically enforceable. The social contract becomes literally a contract. The judge becomes code.

“Why use the public court system or expensive AAA arbitration when you can do it faster, cheaper, and better with Arbitrus?” — Arbitrus.ai website

The American Arbitration Association has already entered the game. In November 2025, AAA launched an AI arbitrator for construction defect cases, promising 30–50 percent cost reductions. Expansion to insurance and healthcare disputes is planned for 2026.

Legal experts raise concerns. Enforceability under the New York Convention is uncertain. Awards can be refused if they violate “public policy”—and automated decisions may fail due process standards. Then there’s the black box problem: a human judge writes an opinion that can be appealed on legal error. An AI renders a “decision” based on probabilistic pattern matching. What exactly would one appeal?

The deeper question: Is justice simply correct output derived from inputs? “Equity”—bending strict rules when application would yield unjust results—assumes a reasoning entity capable of recognizing the exceptional case. An AI trained on precedent optimizes for consistency. Mercy, by definition, is inconsistent.

The founders envision an “Arbitration State”—a private legal system that subsumes public litigation. In this vision, justice scales.


FOR FURTHER READING: PERSPECTIVES

🟢 PRO

“AI-Powered Arbitration: Is Arbitrus.ai the Future?” TechLaw Crossroads (February 2025). AI is well-suited for limited, defined contractual disputes where speed and cost matter more than exhaustive legal analysis.

🔴 CON

“Ethical Constraints When Using AI in Arbitration,” FedArb (November 2025). AI-generated work must be verified by humans. The removal of nuance and control creates liability risks.


❧ ❧ ❧

The Art That Can’t Go Home

Museums face a cruel choice: keep looted treasures or insure them to oblivion

Western museums are caught in a pincer movement of their own making: morally obligated to return centuries of stolen heritage, financially unable to insure the journey home.

Standard fine art insurance carries “War Risk” exclusions—clauses that once seemed arcane but now loom over every collection. As geopolitical instability rises, insurers have reclassified destinations. The irony is precise: many objects were looted from regions now classified as uninsurable. To return a Benin Bronze to Nigeria requires navigating an insurance market that views the artifact’s rightful home as a risk category, not a nation with a legitimate claim.

Dealers report slowing sales and inventory pile-ups. Museums face “aggregation issues”—too much value concentrated in single warehouses, exceeding coverage limits. A single fire or missile strike could eliminate billions in cultural heritage. The practical effect: museums are increasingly reluctant to lend internationally. Traveling exhibitions have become logistical nightmares.

“Museums cannot change past wrongdoings, but they can change how we interact with cultural heritage today.” — Center for Art Law

Meanwhile, pressure to repatriate intensifies. The Virginia Museum of Fine Arts returned works only after “irrefutable evidence” emerged from government investigation—despite knowing of provenance issues earlier. Internal communications revealed the approach: delay until denial is impossible, then spin compliance as moral leadership. The Met cooperated with Thailand to return a statue of Shiva after years of disputes.

In May 2025, British Museum director Nicholas Cullinan ruled out permanent restitution, preferring loan arrangements. The UK government appointed Tiffany Jenkins—author of books against repatriation—as a British Museum trustee.

Advocacy group Routes to Return argues restitution should be treated as a human rights issue, proposing the UK could become “a world leader in repatriation.” No such policies have emerged.

One proposal floated in policy circles: “Sovereign Heritage Indemnity” treaties—governments providing state-backed guarantees for objects in transit, bypassing commercial insurance via diplomatic status. No such treaties exist. The artifacts wait in climate-controlled rooms, too risky to move, too fraught to keep.


FOR FURTHER READING: PERSPECTIVES

🟢 PRO

“Looted Cultural Objects,” Columbia Law Review (December 2024). While U.S. law doesn’t require return of colonially acquired objects, restorative justice frameworks support an evolving restitution norm.

🔴 CON

“Are Museums Going Backwards on Repatriation?” Museums Association (July 2025). Growing political reticence toward repatriation, with some arguing “legal acquisition within 100 years” should be a cut-off point.


❧ ❧ ❧

When Do the Machines Take Over?

Depends who’s selling

If you want a definitive timeline on artificial general intelligence, you’ve come to the wrong decade.

In January 2026, Ben Goertzel—the AI researcher known as “the father of AGI”—predicted that machines matching or exceeding human cognition across all domains are “three to eight years away.” That puts arrival somewhere between 2029 and 2034.

Elon Musk, whose predictions slip by a year annually, now forecasts AGI by 2026; in 2024, his forecast was 2025. He also signed an open letter calling for an AI development pause, then launched xAI, reportedly raising $20–30 billion annually.

At Davos, DeepMind founder Demis Hassabis offered a cautious view: roughly 50 percent probability of AGI by 2030. Progress in coding and mathematics is rapid, he noted, but scientific discovery and creative reasoning remain harder. Autonomous self-improvement is “less certain.”

Stanford’s Human-Centered AI faculty were blunter. Co-director James Landay’s prediction: “There will be no AGI this year.”

“We won’t get to AGI in 2026 (or 7). Astonishing how much the vibe has shifted in just a few months.” — Gary Marcus, AI researcher

Gary Marcus made his own forecast: AGI won’t arrive in 2026 or 2027. “Faith in scaling as a route to AGI has dissipated,” he observed. Even AI luminaries like Ilya Sutskever are expressing concern. Marcus describes 2025 as “the year of peak bubble.”

The uncertainty matters because belief shapes behavior. If decision-makers think AGI is imminent, investing in human-centric institutions—philosophy departments, human judges, deliberative committees—looks like a bad bet. Better to automate now and prepare for obsolescence.

Whether or not the predictions prove accurate, the mere expectation of AGI justifies the present dismantling of human infrastructure. “Why train philosophers who’ll be obsolete in a decade?”

States have attempted guardrails. California requires watermarking for AI content. Texas mandates risk frameworks. Illinois prohibits AI discrimination in employment. Colorado creates developer liability. The federal response: executive orders challenging state regulations.

The race continues. The finish line remains invisible.


FOR FURTHER READING: PERSPECTIVES

🟢 PRO

“Nine Predictions for AI in 2026,” IT Brief / AI Multiple (January 2026) — Ben Goertzel. AGI is 3–8 years away, based on accelerating progress in reasoning and self-improvement.

🔴 CON

“Six (or Seven) Predictions for AI 2026,” Gary Marcus, Substack (December 2025). AGI won’t arrive in 2026 or ’27. Faith in scaling has dissipated. 2025 will be remembered as “the year of peak bubble.”


❧ ❧ ❧

EDITORIAL

The Things That Cannot Be Counted

On the folly of auditing the unmeasurable

There is a scene in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy in which an advanced civilization builds a computer to answer “the Ultimate Question of Life, the Universe, and Everything.” After seven and a half million years, the machine delivers its answer: 42.

The joke is that the answer is meaningless without understanding what was asked. The machine was brilliant at calculation but had no capacity for meaning.

This winter, institution after institution arrived at similar conclusions. They ran the numbers. The humanities departments cost money without producing patents. The ethics committees were inefficient. The human arbitrators were slow. The physicians spent too much time on conversations that couldn’t be billed. The philosophy programs produced no measurable return.

The answer, in each case, was a version of 42. Cut. Dissolve. Automate. Document to the minute.

But the question was never being asked.

A philosophy department is not a cost center to be optimized. It is the place where we ask what things are worth. An ethics committee is not a service line with attendance metrics. It is the place where a community argues about how to be good. A conversation with a dying patient is not a billing code. It is medicine in its most fundamental form.

“The question is whether any institution will remain brave enough to argue that efficiency is not the highest good.”

These things cannot be counted because their value lies precisely in their resistance to quantification. Wisdom is not scalable. Moral deliberation is inherently inefficient. The physician who sits in silence with a grieving family produces no documentable output—and that may be the most valuable thing anyone does that day.

The irony of “metricizing moral obligation” is that it defeats itself. Convert wisdom to a number and you no longer have wisdom—you have a number. Replace deliberation with delivery and you no longer have ethics—you have compliance. Automate judgment and you no longer have justice—you have prediction.

We do not claim that Georgetown’s budget pressures are fake, that hospitals should cling to dysfunctional committees, or that courts should reject all innovation. Resources are finite. Trade-offs are real. Institutions that cannot adapt will fail.

But adaptation is not surrender. The question is not whether to change but what to preserve. Every institution exists for some purpose beyond survival. When Georgetown suspends philosophy admissions, it saves money—but does it remain a university? When a hospital replaces its ethics committee with consultants, it gains efficiency—but does it retain a moral community?

The coming decade will answer these questions. AGI may arrive in 2026, or 2029, or 2040, or never. What arrives in the meantime is a civilization that has forgotten how to value what it cannot count.

The audit is complete. The findings are: 42.

Now we must decide if that means anything at all.

— The Editors


FOR FURTHER READING: PERSPECTIVES

🟢 PRO

“2025 International Arbitration Survey,” White & Case (2025). 90% of practitioners expect to use AI for research, analytics, and document review. 54% cite time savings as the biggest driver.

🔴 CON

“How 2026 Could Decide the Future of AI,” Council on Foreign Relations (January 2026). “The edge cases of 2025 won’t remain edge cases for long.” Decisions made this year determine where power concentrates in the AI era.


❧ ❧ ❧


Production Note: This edition of The Review was produced in collaboration between a human editor and an AI assistant. Facts are drawn from verified sources in the public record; the synthesis, framing, and editorial perspective represent collaborative judgment. Your skepticism remains appropriate and encouraged.

Coming Next Week: The Sleep Deficit—how cities traded rest for productivity, and what the numbers say about what was lost. Also: the return of the written letter.


The Review is not available to the general public. Distribution is by referral only. If you know, you know. There are no invites.


© 2026 The Review. All rights reserved.

Editor: Daniel Markham

Tuesday, February 4, 2026