VOL. I, NO. 12 • WEDNESDAY, FEBRUARY 4, 2026 • PRICE: ONE MOMENT OF ATTENTION
THE REVIEW
“Counting what matters—before we forget why it mattered”
❧ ❧ ❧
THE AUDIT NO ONE CALLED FOR
This Winter, Western Institutions Discovered How Much a Soul Costs—And Decided It Wasn’t Worth It
Dear reader: Welcome to an accounting of the accountants.
Between Thanksgiving 2025 and Groundhog Day 2026, something peculiar happened in conference rooms and budget committees across the Western world. Without coordinating—or at least without being caught coordinating—universities, hospitals, courts, and museums all arrived at the same conclusion: human judgment is expensive, inconsistent, and difficult to scale.
Philosophy departments suspended doctoral admissions. A major hospital replaced its ethics committee with “Ethics Ambassadors.” A startup promised robot arbitrators with “zero hallucinations.” The federal government required physicians to document, down to the minute, exactly how long they spent discussing death with patients. And if all that weren’t enough, the AI prophets announced that artificial general intelligence will arrive by 2026 or 2029 or maybe just after lunch.
This newspaper examines what might be called the Metricization of Moral Obligation—the quiet conversion of wisdom into line items, judgment into algorithms, and human discretion into documented deliverables. We do not claim the sky is falling. We merely note that someone is measuring it.
Inside, you will find: the strange timing of Georgetown’s doctoral suspension; the disappearance of the deliberative ethics committee; the rise of AI judges who cannot feel mercy but can feel confident; the impossible insurance mathematics of returning stolen art; and expert predictions about machine superintelligence ranging from “three months” to “never.” Draw your own conclusions.
And if you disagree with any of it, please document your objection in triplicate, with timestamps.
❧ ❧ ❧
PHILOSOPHERS FIRED AFTER DEADLINE
Georgetown Tells Applicants “Never Mind” One Week After They Applied
[Suggested Image: An empty philosophy seminar room with a chalkboard still showing notes from a lecture on Aristotle’s ethics]
The Department of Philosophy at Georgetown University informed prospective doctoral students on Dec. 22 that their applications had been filed, reviewed, and—oops—rejected by temporal paradox. The program would not be accepting anyone for Fall 2026.
The timing was notable: Dec. 22 falls exactly one week after the Dec. 15 application deadline.
John Greco, director of graduate admissions, expressed “deep regret” in his email to applicants who had invested months preparing writing samples, securing recommendation letters, and paying application fees for a program that had, without their knowledge, already closed. The decision came from the College of Arts & Sciences, not the department itself. The philosophers learned they would not be training philosophers around the same time the applicants did.
Georgetown cited a “perfect storm” of financial pressures: a projected revenue decline of $112 million for Fiscal Year 2026; a 23 percent drop in graduate applications; a 20 percent plunge in international enrollment. Interim University President Robert M. Groves blamed federal funding cuts, shifting immigration policies, and the end of certain graduate loan programs.
The financial logic was straightforward. By canceling the incoming class, Georgetown eliminated four or five “funded slots”—tuition waivers plus modest stipends—saving approximately $1.5 million over the next five years. A single administrative decision, issued during Christmas week when faculty were scattered, achieved in one afternoon what would take years of painful negotiations at the department level.
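A back-of-envelope check, using our own assumptions rather than any figure Georgetown has published: if each funded slot carries roughly $60,000 a year in waived tuition and stipend, then 5 slots × $60,000 × 5 years = $1.5 million, squarely the savings cited.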
Georgetown is not alone. The University of Chicago suspended Fall 2026 admissions for 19 programs, including classics, linguistics, and public policy. Cornell, Brown, and Boston University have paused admissions in anthropology and other humanities programs. Harvard is reducing its incoming philosophy class. Rutgers is accepting no applicants at all.
“I sincerely apologize for the late timing of this decision, which I understand is extremely frustrating for many reasons.” — John Greco, Director of Graduate Admissions
Critics call this the “missing generation” problem: a gap in the chain of mentorship that has defined academic philosophy since Plato walked with Aristotle. Without incoming students to train as teaching assistants, the burden shifts to adjuncts or overtaxed faculty. The ecosystem that produces philosophers develops a void.
Meanwhile, at Portland State University, administrators attempted to lay off three non-tenure-track philosophy faculty members citing “financial exigency.” An arbitrator ruled the university had “made its layoff decisions before it had sufficient evidence to support them” and ordered reinstatement with back pay.
On Jan. 16, The Journal of Philosophy, operating since 1904, suspended new submissions until August—a seven-month “dead zone” during which new ideas cannot enter the formal record of one of the discipline’s flagship publications. The journal’s 56-page print limit, a relic of the analog era, cannot accommodate the torrent of papers demanded by a “publish or perish” job market.
The field that asks “What is worth pursuing?” has received an answer from the spreadsheet.
For Further Reading: Perspectives
PRO (defending humanities): “Humanities Cuts Leave Us Defenceless in the Age of AI” — Times Higher Education. The piece argues that precisely when society most needs qualitative reasoning about artificial intelligence, we are dismantling the departments equipped to provide it. Philosophy students are having their doctoral programs closed mid-application. Source: timeshighereducation.com (January 2026)
CON (skeptical of ROI arguments): “Does Cutting Philosophy Help A University’s Budget?” — Daily Nous. Analysis showing that universities which cut philosophy programs did not subsequently see enrollment improvements. At Emporia State, enrollment fell 12.5 percent after humanities cuts while other Kansas institutions grew 2 percent. The cuts may be ideologically motivated pretexts. Source: dailynous.com (April 2024)
❧ ❧ ❧
THE ETHICS COMMITTEE WILL SEE YOU NEVER
Cleveland Clinic Replaces Deliberation With Delivery
[Suggested Image: A hospital conference room table with empty chairs around it, a single clipboard remaining]
For decades, the hospital ethics committee served as the institution’s conscience—a diverse panel of physicians, nurses, chaplains, social workers, and community members who gathered to wrestle with questions no algorithm could answer. Should we continue treatment when the family disagrees? When does autonomy cross into harm? What does this patient actually want?
At Cleveland Clinic, that model is now obsolete.
Reports in January 2026 confirm that Cleveland Clinic has dissolved its traditional ethics committees at its Ohio hospitals, replacing them with a new “Ethics Ambassador” model. The standing committee—which once convened at inconvenient hours to bring multiple perspectives to moral dilemmas—has been streamlined into a professional consulting service.
The case for dissolution was operational. Committees typically met at 7 a.m. or noon, effectively excluding clinical nurses and community members with rigid schedules. The “output” of deliberation—education, policy guidance—often stayed trapped in the meeting room. Attendance rates collapsed; in one reported case, to 23.8 percent. Committee members struggled to define their purpose: Was it education? Case review? Policy enforcement?
The new model professionalizes the function. “Ethics Ambassadors” are embedded in clinical units to provide “just-in-time” ethics, connecting floor staff to a corps of credentialed clinical ethicists who can deliver consistent, legally sound guidance faster than any committee could convene.
Critics see a different story. A committee, by virtue of its size and diversity, is harder to pressure than a single employee. The shift from “deliberative democracy” to “service delivery” aligns ethics more closely with hospital administration. The Ambassador represents the Ethics Department to the floor—but they also represent the hospital’s risk management interests in the consultation.
There is also the loss of what might be called moral community. The committee was a miniature polis where an institution’s values were debated by people who did not share a boss. The professional model treats ethics as a technical problem to be solved by an expert, rather than a dilemma to be grappled with by a community.
“While members were ‘interested,’ they often lacked the time or expertise to deliver education outside the committee context.” — Staff ethicist describing committee dysfunction
Meanwhile, federal regulators have their own ideas about how to govern the bedside. The Centers for Medicare & Medicaid Services now requires physicians to document start and end times for Advance Care Planning conversations—discussions about how patients wish to die. The billing codes (CPT 99497 and 99498) require a minimum of 16 minutes to bill, creating a perverse incentive to watch the clock rather than the patient.
An audit by the Office of Inspector General found that providers received an estimated $42 million in “improper payments” for advance care planning—not because the conversations did not happen, but because they were not timed and documented according to federal standards. Sixty-seven percent of reviewed services were documented incorrectly.
If a physician spends 14 minutes discussing a dying patient’s fears, they cannot bill the code. If they stretch it to 16 minutes, they can. Silence, grief, and listening generate no revenue.
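The incentive is legible enough to write down. What follows is a minimal sketch in Python of the threshold rule as described above; the 16-minute floor for CPT 99497 reflects the reporting here, while the handling of the add-on code 99498 is an illustrative assumption on our part, not billing guidance.

```python
# A minimal sketch of the ACP billing threshold described above.
# The 16-minute floor for CPT 99497 follows the reporting here;
# the add-on logic for CPT 99498 is an illustrative assumption.

def billable_acp_codes(minutes: int) -> list[str]:
    """Return the ACP codes a documented conversation could support."""
    codes = []
    if minutes >= 16:            # first 30-minute unit (CPT 99497)
        codes.append("99497")
        extra = minutes - 30     # time beyond the first unit
        while extra >= 16:       # assumed rule for each add-on unit (CPT 99498)
            codes.append("99498")
            extra -= 30
    return codes

print(billable_acp_codes(14))  # [] : fourteen minutes of listening bills nothing
print(billable_acp_codes(16))  # ['99497']
```

The function, of course, has no parameter for silence.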
One bioethicist proposed a “null code” for time spent simply being present—moments that produce no documentable content but serve a therapeutic purpose. No such code exists.
For Further Reading: Perspectives
PRO (supporting professionalized ethics): “Reimagining Thriving Ethics Programs Without Ethics Committees” — The American Journal of Bioethics. Research arguing that professional clinical ethicist staffing can provide more consistent, timely, and expert ethics support than volunteer committees with attendance problems and unclear mandates. Source: tandfonline.com (March 2025)
CON (defending committee deliberation): “(Ir)Relevance of Ethics Committees: The Continued Value of Hospital Ethics Committees in Programs with Professional Ethicist Staffing” — The American Journal of Bioethics. A counter-argument that committees provide community deliberation and institutional independence that cannot be replicated by professional staff alone. Source: tandfonline.com (March 2025)
❧ ❧ ❧
THE ROBOT JUDGE WILL HEAR YOUR CASE NOW
Arbitrus.ai Promises Justice in 72 Hours, No Hallucinations
[Suggested Infographic: A balance scale weighing a human arbitrator’s cost and timeline on one side against “AI Arbitrator: $10,000, 72 hours” on the other]
The founders of Arbitrus.ai have a pitch: why pay a human arbitrator’s fees and wait months for a ruling when $10,000 buys a fast, allegedly unbiased AI that delivers binding decisions in 72 hours?
The legal startup, which launched in February 2026, claims its AI judge achieved “zero hallucinations” in testing 100 hypothetical scenarios. The system handles the entire arbitration process—filing, notification, briefing, discovery, and decisions—on its platform. Initial use cases target business-to-business vendor disputes and employment contracts, with planned expansion to landlord-tenant disputes.
The marketing is built on the acknowledged failures of human justice. Critics of traditional arbitration point to “procedural rot”—the decay of due process into bureaucratic delay and predictable bias. Human arbitrators want to be hired again, creating incentives to “split the baby” rather than rule decisively. Courts are slow, expensive, and increasingly inaccessible.
Arbitrus.ai promises a contractual escape hatch. Because arbitration is consensual—parties agree in advance to accept its decisions—a contract specifying AI arbitration is theoretically as enforceable as one specifying human arbitration.
Kimo Gandall, a Harvard Law graduate and one of the founders, argues that contractual disputes are inherently limited and predictable, making them ideal for AI. “The contract defines the parameters of the dispute,” the company explains. “The parties can anticipate disputes and set the framework for resolution.”
The American Arbitration Association, for its part, has already entered the game. In November 2025, the AAA launched an AI-powered arbitrator for document-only construction defect cases, promising 30-50 percent cost reductions and 25-35 percent time savings. The organization plans to expand to insurance and healthcare disputes in 2026.
“Why use the public court system or expensive AAA arbitration to settle your disputes, when you can do it faster, cheaper, and better with Arbitrus?” — Arbitrus.ai website
Legal experts raise significant concerns. The enforceability of “smart awards” under the New York Convention is uncertain. Awards can be refused if they violate “public policy,” and critics argue that automated decisions may fail due process standards. Then there is the black box problem: a human judge writes an opinion that can be appealed based on legal error. An AI renders a “decision” based on probabilistic pattern matching. What exactly would one appeal?
The deeper question is whether justice is simply the correct output derived from inputs. The concept of “equity”—bending strict rules when their application would yield an unjust result—assumes a reasoning entity capable of recognizing the exceptional case. An AI trained on precedent will optimize for consistency. Mercy, by definition, is inconsistent.
The founders envision their platform evolving into what they call an “Arbitration State”—a private legal system that subsumes large swathes of public litigation. In this vision, the social contract is literally a contract, and the judge is code.
For Further Reading: Perspectives
PRO (supporting AI arbitration): “AI-Powered Arbitration: Is Arbitrus.ai the Future of Dispute Resolution?” — TechLaw Crossroads. Analysis arguing that AI is well-suited for limited, defined contractual disputes where speed and cost matter more than exhaustive legal analysis. Source: techlawcrossroads.com (February 2025)
CON (cautioning on AI risks): “Ethical Constraints When Using Artificial Intelligence in Arbitration” — FedArb. Legal analysis warning that AI-generated work product must be verified by humans, that supervising attorneys remain responsible for AI errors, and that the removal of nuance and control creates liability risks. Source: fedarb.com (November 2025)
❧ ❧ ❧
THE ART YOU CAN’T SHIP HOME
Museums Face an Impossible Choice: Keep Looted Treasures or Insure Them to Oblivion
[Suggested Image: A museum crate stenciled “FRAGILE” sitting alone in an empty loading dock, with a map overlay showing conflict zones]
Western museums are caught in a pincer movement of their own making: morally obligated to return centuries of stolen heritage, financially unable to insure the journey home.
Standard fine art insurance policies carry “War Risk” exclusions—clauses that once seemed arcane but now loom over every major collection. As geopolitical instability rises, insurers have redefined “war” to encompass an expanding list of destinations. The result is a paralyzed cultural sector holding assets it cannot ethically keep but cannot safely move.
The irony is precise: many objects were looted from regions now classified as uninsurable. To return a Benin Bronze to Nigeria, for example, requires navigating an insurance market that views the artifact’s rightful home as a risk category, not a nation with a legitimate claim.
Dealers report slowing sales and inventory pile-ups. Museums face “aggregation issues” where too much value is concentrated in a single warehouse, exceeding coverage limits. A single fire, flood, or—the fear barely spoken—missile strike could eliminate billions in cultural heritage.
The practical effect: museums are increasingly reluctant to lend works internationally. Traveling exhibitions, once a primary mechanism for cultural diplomacy and public education, have become logistical nightmares.
Meanwhile, the pressure to repatriate intensifies. The Virginia Museum of Fine Arts returned works only after “irrefutable evidence” emerged from a government investigation—despite having known of provenance issues earlier. Internal communications revealed the institution’s approach: delay until denial is impossible, then spin compliance as moral leadership. The Metropolitan Museum of Art cooperated with Thailand to return “Golden Boy,” a statue of the Hindu deity Shiva, following years of provenance disputes.
“Museums cannot change past wrongdoings, but they can change how we interact with cultural heritage today.” — Center for Art Law
In May 2025, British Museum director Nicholas Cullinan ruled out any permanent restitution under his watch, preferring loan arrangements over outright returns. The UK government’s appointment of commentator Tiffany Jenkins—author of several books against restitution—as a British Museum trustee was interpreted by some as a sign of the political wind.
A policy briefing by advocacy group Routes to Return argues that restitution should be treated as a human rights issue, citing the UK’s commitment to the UN Declaration on the Rights of Indigenous Peoples. The briefing proposes that the UK could become a “world leader in repatriation.” No such policies have emerged.
One proposal floated in cultural policy circles: “Sovereign Heritage Indemnity” treaties whereby governments agree to provide state-backed guarantees for objects in transit for repatriation, bypassing the commercial insurance market entirely via diplomatic status.
No such treaties currently exist. The artifacts wait in climate-controlled rooms, too risky to move, too fraught to keep.
For Further Reading: Perspectives
PRO (supporting repatriation): “Looted Cultural Objects” — Columbia Law Review. Legal analysis arguing that while U.S. law does not require the return of colonially-acquired objects, restorative and reparative justice frameworks support an evolving norm of restitution. The article examines NAGPRA as a domestic model. Source: columbialawreview.org (December 2024)
CON (cautioning against repatriation): “Are Museums Going Backwards on Repatriation?” — Museums Association. Analysis noting growing reticence toward repatriation in political circles, with some arguing that claims lose legal weight over time and that “legal acquisition within 100 years” should be a cut-off point for restitution requests. Source: museumsassociation.org (July 2025)
❧ ❧ ❧
WHEN WILL THE MACHINES TAKE OVER? DEPENDS WHO’S SELLING
AGI Predictions Range from “Three Months” to “Maybe Never”
[Suggested Infographic: A timeline showing AGI predictions from various figures: Musk (2026), Goertzel (2029-2034), Stanford HAI (“Not this year”), Marcus (“Won’t happen in 2026 or 2027”)]
If you are looking for a definitive timeline on when artificial general intelligence will arrive, you have come to the wrong decade.
In January 2026, Ben Goertzel—the AI researcher known as “the father of AGI”—predicted that machines matching or exceeding human cognitive ability across all domains are “three to eight years away.” That puts arrival somewhere between 2029 and 2034.
Elon Musk, whose ambitious predictions have a track record of slipping by one year annually, now forecasts AGI by 2026; in 2024 he predicted it would arrive in 2025. Before that, he warned that AI posed an existential threat and signed an open letter calling for a pause in development—shortly before launching his own AI company, xAI, which is reportedly raising $20-30 billion annually.
At the World Economic Forum in January 2026, Demis Hassabis, co-founder of DeepMind, offered a more cautious view: roughly a 50 percent chance of achieving AGI by the end of the decade (2030). Hassabis acknowledged rapid progress in coding and mathematics but emphasized that scientific discovery and creative reasoning remain harder problems. Autonomous self-improvement, he noted, is “less certain.”
Stanford’s Human-Centered Artificial Intelligence faculty were blunter. James Landay, co-director of HAI, offered his biggest prediction for the year: “There will be no AGI this year.”
“We won’t get to AGI in 2026 (or 7). Astonishing how much the vibe has shifted in just a few months.” — Gary Marcus, AI researcher
Gary Marcus, the AI researcher known for skepticism toward grand claims, stands by that forecast, noting that “faith in scaling as a route to AGI has dissipated” and that the “vibe has shifted” in the industry, with even AI luminaries like Ilya Sutskever expressing concern.
The uncertainty matters because belief shapes behavior. If decision-makers think AGI is imminent, investing in human-centric institutions—philosophy departments, human judges, deliberative ethics committees—looks like a bad bet. Better to automate now and prepare for a world where human judgment is an anachronism.
Whether or not the predictions prove accurate, the expectation of AGI justifies the present dismantling of human infrastructure: why train a generation of philosophers who will be obsolete in a decade?
Meanwhile, several states have attempted to erect guardrails. As of January 2026, California requires watermarking and provenance labeling for AI-generated content. Texas mandates risk frameworks and internal testing for enterprise AI. Illinois prohibits AI discrimination in employment and housing. Colorado creates developer liability for discriminatory outcomes.
The federal response: executive orders challenging state regulations and asserting federal preemption.
The race continues. The finish line remains invisible.
For Further Reading: Perspectives
PRO (expecting near-term AGI): “Nine Predictions for AI in 2026, from the Father of AGI” — IT Brief. Ben Goertzel’s January 2026 forecast arguing that AGI is three to eight years away, based on accelerating progress in reasoning, programming, and self-improvement. Source: research.aimultiple.com (January 2026)
CON (skeptical of AGI timelines): “Six (or Seven) Predictions for AI 2026 from a Generative AI Realist” — Gary Marcus, Substack. Analysis arguing that AGI will not arrive in 2026 or 2027, that faith in scaling has dissipated, and that 2025 will be remembered as “the year of peak bubble.” Source: garymarcus.substack.com (December 2025)
❧ ❧ ❧
EDITORIAL
THE THINGS THAT CANNOT BE COUNTED
On the Folly of Auditing the Unmeasurable
There is a scene in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy where an advanced civilization builds a computer called Deep Thought to answer “the Ultimate Question of Life, the Universe, and Everything.” After seven and a half million years of calculation, the machine delivers its answer: 42.
The joke, of course, is that the answer is meaningless without understanding what was being asked. The machine was brilliant at calculation but had no capacity for meaning.
This winter, we have watched institution after institution arrive at similar conclusions. They ran the numbers. The humanities departments cost money without producing patents. The ethics committees were inefficient. The human arbitrators were slow. The physicians spent too much time on conversations that could not be billed. The philosophy doctoral programs produced no measurable return.
The answer, in each case, was a version of 42. Cut. Dissolve. Automate. Document to the minute.
But the question was never being asked.
A philosophy department is not a cost center to be optimized. It is the place where we ask what things are worth. An ethics committee is not a service line with attendance metrics. It is the place where a community argues about how to be good. A conversation with a dying patient is not a billing code. It is medicine in its most fundamental form—one person helping another face the end.
These things cannot be counted because their value lies precisely in their resistance to quantification. Wisdom is not scalable. Moral deliberation is inherently inefficient. The physician who sits in silence with a grieving family produces no documentable output—and that may be the most valuable thing anyone does that day.
The great irony of the “metricization of moral obligation” is that it defeats itself. When you convert wisdom to a number, you no longer have wisdom—you have a number. When you replace deliberation with delivery, you no longer have ethics—you have compliance. When you automate judgment, you no longer have justice—you have prediction.
“The question is whether any institution will remain brave enough to argue that efficiency is not the highest good.”
We do not claim that the budget pressures facing Georgetown are fake, that hospitals should cling to dysfunctional committees, or that courts should reject all technological innovation. Resources are finite. Trade-offs are real. Institutions that cannot adapt will fail.
But adaptation is not the same as surrender. The question is not whether to change but what to preserve. Every institution exists for some purpose beyond its own survival. When Georgetown suspends doctoral admissions in philosophy, it saves money—but does it remain a university in any meaningful sense? When a hospital replaces its ethics committee with consultants, it gains efficiency—but does it retain a moral community?
The coming decade will answer these questions. AGI may arrive in 2026, or 2029, or 2040, or never. What arrives in the meantime is a civilization that has forgotten how to value what it cannot count.
The audit is complete. The findings are: 42.
Now we must decide if that means anything at all.
— The Editors
For Further Reading: Perspectives
PRO (defending metricization): “The Path Forward: Realities and Opportunities in Arbitration” — 2025 International Arbitration Survey, White & Case. The largest survey of arbitration practitioners finds that 90 percent expect to use AI for research, data analytics, and document review. Fifty-four percent say saving time is the biggest driver. Source: whitecase.com (2025)
CON (warning of institutional collapse): “How 2026 Could Decide the Future of Artificial Intelligence” — Council on Foreign Relations. Analysis arguing that the “edge cases of 2025 won’t remain edge cases for long” and that decisions made this year will determine where “responsibility, power, and opportunity ultimately concentrate in the AI era.” Source: cfr.org (January 2026)
❧ ❧ ❧
Production Note: This edition of The Review was produced in collaboration between a human editor and an AI assistant. The facts presented are drawn from verified sources in the public record; the synthesis, framing, and editorial perspective represent the collaborative judgment of both parties. Your skepticism remains appropriate and encouraged. If you find errors, please let us know.
Coming Next Week: The Sleep Deficit—how cities traded rest for productivity, and what the numbers say about what was lost. Also: the return of the written letter.
© 2026 The Review. All rights reserved.
Editor: Contact via comment | Submissions: Welcome
This newspaper was generated on Sunday, February 1, 2026, using research dated through January 2026.