VOL. I, NO. 1 • MONDAY, FEBRUARY 02, 2026 • PRICE: ONE MOMENT OF ATTENTION
The Campus Where Nobody’s Home
A lighthearted but factual tour through digital education’s most pressing existential crisis
Dear reader, welcome to the inaugural issue of The Review, published on this fine Monday, the second of February, 2026. Today we bring you dispatches from what researchers have begun calling “the ghost campus”—that sprawling digital infrastructure of online courses, learning management systems, and automated grading tools that processes millions of students annually while remaining remarkably uncertain about who, exactly, is doing the learning.
The situation, as you will discover in the following pages, is both more absurd and more serious than it first appears. Criminals have discovered that community colleges are, essentially, ATMs with enrollment forms. A teenage hacker walked into sixty-two million children’s records, which had been guarded with roughly the same rigor most of us apply to our Netflix passwords. AI grading systems have begun talking to AI essay writers, creating what amounts to a very expensive conversation between robots. And plagiarism detectors have developed a worrying tendency to accuse non-native English speakers of being machines—apparently because writing clearly in a second language looks suspiciously algorithmic.
Pull Quote
“Before we work on artificial intelligence, why don’t we do something about natural stupidity?” — Steve Polyak, Biophysicist
As the investor Charlie Munger once quipped, “I’m all for AI; there’s a shortage of the real thing.” We suspect, after reading the following investigation, that you’ll wonder whether anyone with the genuine article has been paying attention to how we verify that learning actually happens.
Our mission today is not to alarm but to illuminate. The $350 billion online education market is not going away, nor should it. But as you’ll read, we’ve built sophisticated systems to deliver content and count completions while somehow forgetting to check whether anyone was actually there to learn.
So settle in with your beverage of choice. The ghost students are enrolling. The bots are completing their compliance training. And somewhere, an AI is grading another AI’s homework and giving it an A+.
❧ ❧ ❧
The $350 Million Phantom
Nearly a third of California community college applications are fraudulent, and the schemes are getting smarter
Scammers are stealing millions from America’s community colleges by creating fake students, enrolling in classes, collecting financial aid, and vanishing before the first lecture—and artificial intelligence is making them faster than ever.
According to investigations by ABC News and state education officials, 31.4% of all applications to California’s 116 community colleges were identified as fraudulent in 2024. This represents not a fringe problem but a systematic exploitation of open-access policies that were designed to make education available to everyone. The criminals have discovered that “everyone” can include people who don’t actually exist.
The U.S. Department of Education’s Office of Inspector General has traced anywhere from $3.3 million to $13 million in fraudulent aid in recent years.
Pull Quote
“We’re in a space that we don’t know who to trust. We went from verifying approximately 30% of potential borrowers down to 1%.” — Nicholas Kent, U.S. Dept. of Education Undersecretary
The Ghost Student Lifecycle: Six steps from synthetic identity to vanished funds.
The mechanics are elegantly simple. Community colleges operate on an open-access mission—anyone who applies can typically enroll. Verification happens after enrollment, not before. This creates a window of two to three weeks where fake identities can slip through, enroll in courses, apply for financial aid, and extract money before anyone notices the seats are filled with ghosts.
“We would have courses where we’d have 50 seats and another 100+ on a waiting list,” explained San Jose-Evergreen Community College District Chancellor Dr. Beatriz Chaidez. “And we would find that maybe six of those actual enrollees were students and the rest were fraudulent accounts.”
The sophistication has evolved dramatically. Early ghost students were clumsy—sequential student IDs, identical phone numbers, no coursework submitted. Today’s phantoms are AI-augmented. They log into learning management systems at human-like intervals. They post in discussion forums using large language models to generate plausible text. They submit just enough initial assignments to avoid automatic drops before the census date triggers aid disbursement.
California has responded by deploying AI against AI. LightLeap.AI, implemented across the community college system, has flagged approximately 79,000 fraudulent applications by analyzing behavioral signatures—impossible travel patterns, batch behavior, semantic anomalies in text. The Coast Community College District reported that detected fraudsters fell from 10-15 per faculty member to at most 2-3, an estimated 96% reduction.
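For the technically curious, the behavioral signatures described above are checkable properties of ordinary log data. The vendor’s actual methods are proprietary; the speed cutoff, time window, thresholds, and field names in this sketch are illustrative assumptions, not LightLeap’s rules.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LoginEvent:
    student_id: str
    timestamp: datetime
    lat: float
    lon: float

def km_between(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle (haversine) distance between two login locations, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(events: list[LoginEvent], max_kmh: float = 800.0) -> bool:
    """Flag an account whose consecutive logins imply faster-than-airliner travel."""
    events = sorted(events, key=lambda e: e.timestamp)
    for prev, curr in zip(events, events[1:]):
        hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600
        if hours > 0 and km_between(prev, curr) / hours > max_kmh:
            return True
    return False

def batch_behavior(applications: list[dict], window_seconds: int = 60, threshold: int = 20) -> bool:
    """Flag bursts of applications submitted from a single IP within a short window."""
    by_ip: dict[str, list[datetime]] = {}
    for app in applications:
        by_ip.setdefault(app["ip"], []).append(app["submitted_at"])
    for times in by_ip.values():
        times.sort()
        for i, start in enumerate(times):
            burst = [t for t in times[i:] if (t - start).total_seconds() <= window_seconds]
            if len(burst) >= threshold:
                return True
    return False
```

Real systems combine dozens of such signals with manual review; the point is only that “impossible travel” and “batch behavior” are arithmetic on timestamps and locations, not magic.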
The California Community Colleges Chancellor’s Office has pushed back on alarmist framing, noting fraud represents only 0.21% of total aid disbursed. “99.8% of financial aid was disbursed to real students,” they emphasized. Critics note this percentage exists only because of heroic intervention by overwhelmed staff manually reviewing thousands of suspicious applications.
The real students, meanwhile, are locked out of classes they need to graduate. The professors are wondering why their suddenly full courses remain empty on the first day. And somewhere offshore, criminal networks are processing their next batch of synthetic identities.
FOR FURTHER READING: PRO
"AI Detection Systems Show Promise for Protecting Financial Aid" EdSource reports on how behavioral analytics tools like LightLeap have achieved 96% fraud reduction rates while attempting to minimize friction for legitimate students. Source: edsource.org (September 2025)
FOR FURTHER READING: CON
"Ghost Student Panic Overstates the Problem" The California Community Colleges Chancellor's Office argues that fraud represents only 0.21% of disbursements and that framing risks undermining open-access mission. Source: cccco.edu (Official Statement, 2025)
❧ ❧ ❧
The Password and the 62 Million Children
How a teenage hacker, a single compromised credential, and a door nobody thought to lock created the largest education data breach in American history
A nineteen-year-old Massachusetts college student using a stolen password exposed the personal data of sixty-two million American schoolchildren in what investigators now call the largest breach of children’s information on record—and the company protecting the data had excluded the exploited system from its security audits.
Matthew Lane didn’t deploy malware. He didn’t find a sophisticated vulnerability. He used a single compromised credential belonging to a third-party contractor to access PowerSchool’s “PowerSource” customer support portal. For nine days in December 2024, he extracted names, addresses, Social Security numbers, medical alerts, and disciplinary records. PowerSchool only discovered the intrusion when Lane contacted them directly, demanding $2.85 million in Bitcoin.
PowerSchool paid the ransom. Lane provided a video supposedly showing the data’s deletion. Four months later, extortion emails containing samples of the “deleted” data appeared in school administrators’ inboxes across North Carolina and Canada. The video was worthless. The data was still out there.
Pull Quote
“If Big Tech thinks they can profit off managing children’s data while cutting corners on security, they are dead wrong.” — Ken Paxton, Texas Attorney General
The Security Audit Gap: What was certified vs. what was actually exploited.
The most damning finding from Canadian privacy commissioners’ investigations: the PowerSource portal—the specific system Lane exploited—had been excluded from the scope of PowerSchool’s prestigious SOC 2 Type II and ISO 27001 security certifications. Schools had relied on these certifications as proof of due diligence in selecting their vendor. But the door Lane walked through had never been inspected.
A “remote maintenance” access feature had been set to “always on,” creating a permanent bridge from the support portal directly into live student databases. Multi-factor authentication—the same security measure most banking apps require—had not been implemented for contractor accounts.
The joint report from Ontario and Alberta privacy commissioners was blunt: school boards lacked adequate breach response plans, failed to include mandatory privacy clauses in vendor contracts, and had no policies to oversee vendor safeguards. Some boards had been keeping decades of sensitive records—in some cases dating to the 1960s—which “amplified the real risk of significant harm” when attackers grabbed entire tables.
Consider the children whose Social Security numbers were exposed. Many are under ten years old. They won’t apply for credit cards or student loans for a decade. By the time fraud surfaces, the two-year monitoring window will have closed. The data—names, addresses, medical information—will retain value for malicious actors long after PowerSchool’s remediation period ends.
Lane pleaded guilty in May 2025 and was sentenced to four years, half what prosecutors sought. He was ordered to pay restitution, and the nearly $3 million extracted as ransom remains unaccounted for.
FOR FURTHER READING: PRO
"Third-Party Risk Requires Institutional Accountability" Proskauer legal analysis argues the breach underscores the urgent need for schools to conduct their own due diligence rather than relying solely on vendor certifications. Source: privacylaw.proskauer.com (April 2025)
FOR FURTHER READING: CON
"Schools Were Not Equipped to Catch This" CBC News covers privacy watchdog findings that while schools share blame, they fundamentally lack technical capacity to audit sophisticated vendors. Source: cbc.ca (November 2025)
❧ ❧ ❧
When the Robot Grades the Robot’s Homework
The quiet crisis of “recursive collapse” in automated education—and why your credential might certify nothing
If a student uses AI to write an essay, and an AI grades that essay, and then tells the next student what the AI grader liked—what exactly is being measured?
Researchers have begun documenting a phenomenon they call “recursive collapse,” and its implications are more troubling than a simple cheating scandal. When AI-generated content is evaluated by AI-powered grading systems, the entire educational transaction can become what one researcher termed “a hall of mirrors”—a performative exchange of digital tokens without any human cognitive involvement.
Pull Quote
“The sentiment, ‘If my students are going to use ChatGPT to write their papers, we might as well use ChatGPT to grade their papers,’ may reflect a common perspective among instructors… it warrants exploration.” — Innovations in Education and Teaching International (2025)
The exploration has not been encouraging.
The Recursive Collapse Cycle: When AI grades AI, semantic meaning evaporates.
A systematic review of 49 studies on LLM-Powered Automated Assessment found that while models like GPT-4 sometimes exhibit high agreement with human raters (quadratic weighted kappa scores up to 0.99), other research reports substantially lower agreement (intraclass correlation coefficients as low as 0.45). More concerning was a 2024 study by Wetzler et al. finding that “AI grading has consistent bias. This pattern of proportional bias, along with generally low agreement between AI and human scores, suggests that generative AI is currently unsuitable as a sole grading tool, particularly for nuanced writing tasks.”
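For readers who want the mechanics behind that statistic: quadratic weighted kappa measures agreement between two raters on an ordinal scale, penalizing each disagreement by the square of its distance, with 1.0 meaning perfect agreement and 0 meaning chance-level agreement. A minimal sketch, using invented scores rather than data from the cited review:

```python
import numpy as np

def quadratic_weighted_kappa(human: list[int], ai: list[int], num_grades: int) -> float:
    """1.0 = perfect agreement with human raters, 0.0 = no better than chance."""
    human, ai = np.asarray(human), np.asarray(ai)
    # Observed joint distribution of (human grade, AI grade).
    observed = np.zeros((num_grades, num_grades))
    for h, a in zip(human, ai):
        observed[h, a] += 1
    observed /= observed.sum()
    # Expected joint distribution if the two raters were independent.
    expected = np.outer(np.bincount(human, minlength=num_grades),
                        np.bincount(ai, minlength=num_grades)).astype(float)
    expected /= expected.sum()
    # Quadratic penalty: disagreeing by two grade points costs four times as much as by one.
    grid = np.arange(num_grades)
    weights = (grid[:, None] - grid[None, :]) ** 2 / (num_grades - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical grades on a 0-4 rubric: the AI tracks the humans but compresses the extremes.
human_scores = [0, 1, 2, 2, 3, 4, 4, 1, 3, 2]
ai_scores    = [1, 1, 2, 2, 3, 3, 3, 2, 3, 2]
print(quadratic_weighted_kappa(human_scores, ai_scores, num_grades=5))
```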
The deeper problem emerges over multiple cycles. When AI generates text, it outputs statistically “safe” answers—high probability, low perplexity. When AI graders evaluate this text, they reward the patterns they recognize as probable. Students learn to optimize for AI graders. The AI graders reward optimization. Variance collapses. Grades stabilize. Pass rates stabilize. But the underlying transfer of knowledge—what researchers call the “semantic payload”—has evaporated.
Consider the corporate training market. If a certification body uses AI to generate exam questions (to save costs), and candidates use AI to answer them (to save time), and the body uses AI to grade them (to ensure speed), the entire transaction becomes performative. User ID #8842 logged in. Time on task: appropriate. Quiz score: 92%. Certificate issued.
But what has been certified? Not that a human mastered material. Only that an AI successfully brokered a conversation with another AI.
Pull Quote
“Technology alone is not enough. It’s technology married with liberal arts, married with the humanities, that yields us the results that make our hearts sing.” — Steve Jobs
The emerging consensus among educational technologists is that AI grading works best for formative assessment—low-stakes feedback to guide revision—rather than summative evaluation. As the Ohio State University analysis concluded, “AI is best used in formative assessments, where feedback can supplement human judgment rather than replace it.”
The problem is that economics favor replacement. Human grading is expensive. AI grading is cheap. The pressure to scale—to deliver more credentials to more learners at lower cost—creates structural incentives to remove humans from loops where their presence matters most.
FOR FURTHER READING: PRO
"AI Grading Can Free Teachers to Focus on What Matters" Education Week profiles teachers using AI tools to reduce grading burden while maintaining human oversight of final assessments. Source: edweek.org (March 2025)
FOR FURTHER READING: CON
"AI Shows Racial Bias When Grading Essays" The 74 reports on research showing ChatGPT mimics human scoring patterns including demographic disparities while failing to distinguish exceptional work. Source: the74million.org (May 2025)
❧ ❧ ❧
The Machines That Think Clear English Looks Suspicious
AI plagiarism detectors flag non-native speakers at alarming rates—and nobody seems to know what to do about it
A Stanford study found that AI plagiarism detectors correctly identified essays by U.S.-born eighth-graders with near-perfect accuracy while misclassifying over 61% of essays written by non-native English speakers as AI-generated. A remarkable 97% of TOEFL essays were flagged by at least one of seven tested detectors.
The implications are staggering: there was virtually no way for a non-native speaker to write an essay that would pass all major detection tools. The very patterns that characterize careful, clear writing in a second language—simpler vocabulary, standard grammatical structures, common phrases—trigger the same algorithmic suspicion as machine-generated text.
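A back-of-the-envelope calculation shows why “flagged by at least one of seven detectors” climbs toward certainty. The per-detector rate below is an illustrative assumption rather than a figure from the Stanford study, and real detectors are partly correlated rather than independent; the point is simply that false-positive rates compound.

```python
# If each of 7 detectors independently misfired on, say, 40% of non-native essays,
# only (1 - 0.40) ** 7, about 2.8%, would escape all of them.
per_detector_false_positive = 0.40   # illustrative assumption, not the study's figure
detectors = 7
flagged_by_at_least_one = 1 - (1 - per_detector_false_positive) ** detectors
print(f"{flagged_by_at_least_one:.1%}")   # roughly 97%
```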
The Perplexity Trap: Why simpler English looks “artificial” to AI detectors.
“AI detectors disproportionately target non-native English writers,” confirmed Northern Illinois University’s Center for Innovative Teaching and Learning. “Additionally, Black students are more likely to be accused of AI plagiarism by their teachers, according to a Common Sense Media report.”
The problem is technical but the harm is human. AI detection tools operate primarily on “perplexity”—a measure of how “surprised” a language model is by the next word. Low perplexity (predictable text) gets flagged as AI-generated. High perplexity (varied, unpredictable text) passes as human.
Non-native speakers often use simpler constructions deliberately. Using familiar phrases reduces errors and ensures clarity. It’s good writing practice for anyone working in a second language. But to detection algorithms, this “safe,” low-perplexity writing looks mathematically identical to the output of an LLM, which also defaults to the most probable next token.
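The perplexity score itself is not exotic. A minimal sketch of the calculation, using made-up per-token probabilities rather than the output of any particular detector:

```python
import math

def perplexity(token_log_probs: list[float]) -> float:
    """Perplexity is the exponential of the average negative log-probability per token.
    Low values mean the model found every next word predictable."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Hypothetical probabilities a language model might assign to each successive word.
# "Safe" phrasing built from common words and stock constructions:
careful_second_language_essay = [math.log(p) for p in (0.30, 0.25, 0.40, 0.35, 0.28, 0.33)]
# More idiosyncratic phrasing with rarer word choices:
idiosyncratic_essay = [math.log(p) for p in (0.05, 0.02, 0.12, 0.04, 0.09, 0.03)]

print(perplexity(careful_second_language_essay))   # low perplexity: "looks machine-generated"
print(perplexity(idiosyncratic_essay))             # high perplexity: "looks human"
```

The careful, predictable essay earns the lower score, which is exactly the property detectors treat as a machine fingerprint.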
Pull Quote
“We think that the focus on cheating and plagiarism is a little exaggerated and hyperbolic. Using AI detectors as a countermeasure creates too much potential to do harm.” — Educator quoted in The Markup
Even OpenAI, creator of ChatGPT, acknowledged the problem by shuttering its own AI detector due to poor performance. The tool correctly identified only 26% of AI-written text while falsely flagging 9% of human writing. When the company that created the technology says its detection is unreliable, institutions using that detection for disciplinary decisions should take notice.
UCLA and many other UC campuses declined to adopt Turnitin’s AI detection software, citing “concerns and unanswered questions” about accuracy and false positives. Independent testing by technology journalists showed results are “wildly inconsistent”—one detector might flag a document as 100% AI-generated while another flags it as 0%.
Many institutions are beginning to abandon detection-based policing in favor of what researchers call “Forensic Pedagogy”—assessing the process of writing rather than scanning the final artifact:
- Reviewing version history to see evolution of thought
- Requiring oral defenses of written work
- In-class assessments and observed writing sessions
- Examining the “thoughtchain”—notes, drafts, revisions that characterize authentic work
“AI detection” as currently implemented, critics argue, is a failed forensic technology. It detects “standardized English” more reliably than it detects “artificial intelligence.”
FOR FURTHER READING: PRO
"Detection Tools Are Imperfect But Necessary" Turnitin argues that despite limitations, AI detection tools remain essential for maintaining academic integrity in an era of widespread AI availability. Source: turnitin.com (Official Position)
FOR FURTHER READING: CON
"AI Detection for Non-Native English Speakers" UC Berkeley's D-Lab argues that AI detection unfairly penalizes non-native speakers for whom AI tools are essential for overcoming language barriers. Source: dlab.berkeley.edu (2024)
❧ ❧ ❧
The Employee Who Was Never There
Automation scripts, “headless learners,” and the compliance training that certifies nothing
Beyond ghost students gaming financial aid lies a more insidious form of automation: legitimate employees deploying bots to complete mandatory training on their behalf.
Corporate learning management systems track compliance with regulatory training—cybersecurity awareness, sexual harassment prevention, safety protocols, ethics certifications. Employees increasingly view these modules as “bloatware”: mandatory impediments to actual work, designed more to protect the company legally than to transfer useful knowledge.
In response, a shadow market of automation scripts has emerged. The technology is sophisticated and readily available: browser automation libraries like Puppeteer, Selenium, and Playwright can control browser instances programmatically, navigating courses, clicking “Next” buttons, waiting appropriate durations to simulate “learning time,” and submitting quiz answers scraped from answer dumps or solved in real-time by integrated LLMs.
The LMS logged a perfect student. Forensically, the user was never there.
The LMS logs record a perfect student. Login timestamp: appropriate. Time on module: exactly 40 minutes. Navigation sequence: logical. Quiz score: 100%. Certificate generated.
The practice might seem victimless—resistance against corporate bureaucracy that employees would mindlessly click through anyway. But the defense ignores the legal function of compliance training.
When an organization faces litigation—sexual harassment claim, safety violation, data breach—the training records function as a liability shield. “We trained our employees on this,” the company argues. “Here are the completion certificates. We fulfilled our duty of care.”
But if completion certificates document bot activity rather than human engagement, the shield dissolves. The LMS record shifts from regulatory safety net to documented evidence of systematic negligence.
Deposition Scenario
“Your records show this employee completed cybersecurity awareness training?” “Yes.” “Did anyone verify the employee actually took the training?” “The system logged completion.” “Did anyone verify a human being—not a script—completed the training?” Silence.
Platform architects have begun deploying “curricular honeypots”—defensive design patterns embedded in course code. “Ghost buttons” (invisible elements that only automation scripts click) instantly flag sessions as non-human. Visual-only quiz logic (questions rendered as images) defeats text scrapers. Entropy analysis of mouse movement distinguishes human interaction from automated vectors.
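As a sketch of how the mouse-entropy check might work; the bin count and threshold are illustrative assumptions, and a real platform would weigh many signals together:

```python
import math
from collections import Counter

def direction_entropy(points: list[tuple[float, float]], bins: int = 16) -> float:
    """Shannon entropy (in bits) of movement directions along a pointer path.
    Scripted pointers tend to jump or move in straight lines, so their direction
    histogram is concentrated and the entropy comes out low."""
    angles = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        if (x1, y1) != (x2, y2):
            angles.append(math.atan2(y2 - y1, x2 - x1))
    if not angles:
        return 0.0  # no pointer movement at all is itself a strong bot signal
    counts = Counter(int((a + math.pi) / (2 * math.pi) * bins) % bins for a in angles)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_automated(points: list[tuple[float, float]],
                    ghost_button_clicked: bool,
                    threshold_bits: float = 1.5) -> bool:
    """Combine the honeypot signal with the entropy heuristic (threshold is an assumption)."""
    return ghost_button_clicked or direction_entropy(points) < threshold_bits
```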
The countermeasures work—until automation tools adapt. The dynamic is fundamentally adversarial.
The deeper question is whether compliance training, as currently designed, is worth defending. If the content is valuable enough to mandate, it should be engaging enough to complete without automation. If it’s not valuable enough to engage with, perhaps the mandate itself deserves scrutiny.
FOR FURTHER READING: PRO
"Automating Compliance Training Creates Efficiency" Docebo argues that automation of certain compliance tasks, combined with human verification at key checkpoints, can improve both efficiency and effectiveness. Source: docebo.com (May 2025)
FOR FURTHER READING: CON
"The Compliance Industrial Complex" Critics argue the entire framework of click-through compliance has created an industry that benefits vendors while providing minimal actual protection. Source: Industry Commentary, Various (2025)
❧ ❧ ❧
EDITORIAL
The Verification We Forgot to Build
We have built remarkable infrastructure to deliver education at scale. We can enroll students anywhere in the world. We can host video lectures for millions simultaneously. We can grade essays in seconds. We can issue credentials with the click of a button.
What we forgot to build is verification infrastructure—systems to confirm that someone is actually there, actually learning, and actually deserving of the credential they receive.
The investigations in this issue reveal a pattern: sophisticated delivery mechanisms operating without corresponding accountability mechanisms. PowerSchool could stream student data to thousands of schools but couldn’t detect a teenager with a stolen password browsing through 62 million records for nine days. Community colleges could accept applications from anyone but couldn’t distinguish human applicants from AI-generated phantoms. Automated grading systems could evaluate essays instantly but couldn’t confirm a human wrote them—or, critically, that a human graded them.
Pull Quote
“The biggest threat isn’t AI; it’s humans using AI without the manual.” — Yuval Noah Harari
The common thread is a kind of architectural optimism: the assumption that if we build the delivery system well enough, verification will somehow take care of itself. It hasn’t.
Researchers have proposed alternatives:
- Proof of Cognition: Credentials represent not just a final score but a verifiable chain of custody of the learning process—the drafts, revisions, time-on-task, and resource access that characterize human engagement (a minimal sketch of one possible record format follows this list).
- Progressive Authentication: Replace binary login with graduated verification—minimal friction for exploration, increasing confirmation as commitments increase.
- Federated Substrate Audits: Security certifications must cover all access points, specifically prohibiting the scoping that left PowerSchool’s most vulnerable door unexamined.
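What might a “Proof of Cognition” record actually look like? One possibility, offered purely as a thought experiment, is a tamper-evident log of process artifacts in which each entry is hashed against the one before it, so rewriting or deleting history breaks the chain. Every detail below, from the field names to the choice of SHA-256, is a hypothetical sketch rather than any published standard.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProcessEvent:
    """One piece of evidence about how the work was produced."""
    kind: str      # e.g. "draft_saved", "source_opened", "oral_defense_scheduled"
    detail: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ProofOfCognitionLog:
    """Hash-chained log: altering any earlier entry invalidates every later hash."""
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: ProcessEvent) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps({"kind": event.kind, "detail": event.detail,
                              "timestamp": event.timestamp, "prev": prev_hash},
                             sort_keys=True)
        self.entries.append({"payload": payload,
                             "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def verify(self) -> bool:
        prev_hash = "genesis"
        for entry in self.entries:
            if json.loads(entry["payload"])["prev"] != prev_hash:
                return False
            if hashlib.sha256(entry["payload"].encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = ProofOfCognitionLog()
log.append(ProcessEvent("draft_saved", "outline, 212 words"))
log.append(ProcessEvent("draft_saved", "full draft, 1,480 words"))
log.append(ProcessEvent("revision", "rewrote introduction after instructor feedback"))
print(log.verify())  # True while the chain is intact
```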
These solutions are not free. Verification creates friction. Every fraud prevention measure risks excluding legitimate users. The populations most likely to be caught by aggressive detection—non-traditional students, those with irregular documentation, non-native speakers—overlap significantly with the populations online education was designed to serve.
But the alternative—continuing to count completions while remaining unable to verify learning—is not neutral. It harms legitimate students whose credentials are devalued by fraud. It harms institutions whose liability shields prove hollow when tested. It harms employers who hire based on meaningless certifications. It harms the promise of democratized education itself.
The $350 billion online education market is not going away. Nor should it. The question is whether we’ll continue building systems that generate metrics measuring automation rather than education, or whether we’ll invest in the harder work of confirming that someone is actually learning.
Pull Quote
“I learned about the Civil War from an AI, and it gave me sort of a good answer.” — John Mulaney, Comedian
Sort of a good answer. That’s the standard we’ve been tolerating.
The ghost campus will remain haunted until someone decides to verify who’s actually there.
FOR FURTHER READING: PRO
"EdTech Innovation Requires Trust in Digital Systems" Education technology advocates argue that verification concerns, while legitimate, shouldn't slow adoption of tools that democratize access and improve learning outcomes at scale. Source: edtechinnovationhub.com (2025)
FOR FURTHER READING: CON
"The Credential Crisis: When Certificates Mean Nothing" Critics argue the entire online education ecosystem has prioritized scale over substance, creating an industry of meaningless credentials. Source: The Atlantic (Various, 2024-2025)
❧ ❧ ❧
Appendix: Key Statistics at a Glance
| Metric | Value | Source |
|---|---|---|
| California CC fraudulent applications (2024) | 31.4% | CA Community Colleges Chancellor’s Office |
| Federal student aid fraud investigated (5 years) | $350M+ | DOE Inspector General |
| PowerSchool breach student records exposed | 62M | PowerSchool/DOJ |
| PowerSchool breach teacher records exposed | 9.5M | PowerSchool/DOJ |
| AI detector misclassification rate (non-native speakers) | 61%+ | Stanford HAI |
| TOEFL essays flagged by at least one detector | 97% | Stanford HAI |
| OpenAI detector accuracy on AI text | 26% | UCLA HumTech |
| K-12 schools experiencing cyber incidents (Jul 2023-Dec 2024) | 82% | Center for Internet Security |
| Financial aid fraud acceleration (2022-2024) | ~400% | CA Chancellor’s Office |
| LightLeap.AI fraud reduction rate | ~96% | Coast Community College District |
Production Note: This edition of The Review was produced through collaboration between human editorial judgment and AI-assisted research and writing. Source materials were gathered from peer-reviewed publications, major news organizations, government documents, and official company disclosures. Statistics and quotes have been verified against original sources where available. Your skepticism remains appropriate and encouraged.
Coming Next: We examine what happens when AI tutors meet actual students—and whether anyone learns anything at all. Also: the microcredential boom and whether badges are worth the pixels they’re printed on.
© 2026 The Review. All rights reserved.
Generated: Monday, February 02, 2026
Editor: The Review Editorial Board | Submissions: letters@thereview.example
Closing Thoughts
“We’d rather build Apple Intelligence than just artificial intelligence—one respects your privacy, the other might sell it.” — Tim Cook
“Learn AI or become a dinosaur within three years.” — Mark Cuban
“Machines take me by surprise with great frequency.” — Alan Turing