2026-02-16 - Final Edit
Context
I have been playing around with various LLMs to tease out whether I have enough material for some long-form non-fiction writing. After asking for a sample newspaper, I believe there is something here.
Goal
I like the version of the work included in the Input section best. It looks like it might actually work as a newspaper, but it needs to be tightened up a bit and made more engaging for a lay audience.
(I may have been too lazy to paste it into the Input section; if so, let’s continue working with what we have so far.)
Make no mistake: this is a newspaper for highly educated people, so don’t go screwing around with the details or the article length. My goal is to improve the look of the paper, not the content. Make sure the headlines hook people and the first paragraph gives an overview so readers can decide whether to continue. I believe this is the usual editorial work done in newspapers. Since the drop quotes and the infographics are also scanned first, we’re going to need something good there. If any are included, redo them as needed to make them more appealing to a lay reader; if there are none, make some.
We’re going to want to embed, pack up, or otherwise distribute this as a stand-alone static web page, a PDF, and Markdown for Obsidian. Don’t link graphics out to other places. For Obsidian, I’m happy with SVGs that I can drop in my attachment folder.
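For the Obsidian target, locally stored SVGs can be referenced without external links. A sketch (filenames are placeholders; the wiki-embed form assumes Obsidian’s default attachment settings):

```markdown
<!-- Obsidian wiki-embed: pulls front-page-chart.svg from the vault's
     configured attachment folder. Filename is hypothetical. -->
![[front-page-chart.svg]]

<!-- Portable Markdown / static-HTML equivalent, same local file: -->
![Front-page infographic](attachments/front-page-chart.svg)
```

The second form keeps the static web page and PDF builds working from the same local files.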
We’ve got the content. We need to pay attention to the look and feel of the paper. Review the style guide and do your best to follow it.
Finally, I have an idea for one of those pencil sketches the NYT used to run, only as a thumbnail beside each entry in the pros-and-cons sections: show us who wrote this. It makes the paper look more personal and official. You might need to dig around to figure out who that is. Also, this may be a bad idea; I don’t know.
We’ll probably do one more look at the overall feel of the newspaper before we’re officially done with this.
When we do content like this, many LLMs just can’t make it through without hanging and/or losing context. Take some time to orchestrate what you’re doing and use temp workspaces in order to be able to easily pick this back up where you left off.
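One way to make the work resumable is a small checklist file in the temp workspace, updated after every step. A hypothetical template (all names and paths are placeholders):

```markdown
# ORCHESTRATION.md: newspaper final polish
Workspace: tmp/newspaper-build/

- [x] 1. Split draft into one file per article
- [ ] 2. Headlines + lede rewrite (one article per pass)
- [ ] 3. Pull quotes: 1-2 per story, redo weak ones
- [ ] 4. Infographics: draft SVGs into tmp/newspaper-build/svg/
- [ ] 5. Assemble static HTML page (inline the SVGs)
- [ ] 6. Render PDF (verify background is NOT solid black)
- [ ] 7. Export Obsidian Markdown + attachment SVGs
```

Each pass touches one article, checks one box, and exits cleanly, so a hung session can resume from the checklist.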
Background
Every day I use various AI tools and ideas of my own to play around with creating long-form content. Sometimes I run this as a big prompt stream. Sometimes I cut and paste. Sometimes I take things from one tool and use them as input for another.
Success Criteria
Success is in the eye of the human reading this. I’m looking for thematic quality, intellectual rigor, approachability by a lay-audience, and enough new, non-hyped deep material here to make it worth even scanning.
Failure Indicators
Don’t get hung up by running out of context window, and don’t hang while working through this much content. If you’re able, create an orchestration document or a to-do list and update it as you go so that you can pick things back up. Work in very small pieces.
The PDF cannot have a solid black background. You’ve made this mistake several times. Be sure the background is set to none, or whatever the equivalent is.
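If the PDF is rendered from HTML/CSS, one defensive sketch (selector names are assumptions about the page template):

```css
/* Force a printable background regardless of any dark screen theme. */
@media print {
  body, .page {
    background: none;   /* or #fff; never a solid dark fill */
    color: #111;        /* keep body text printable */
  }
}
```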
Please orchestrate this carefully and make the pieces as small as possible. It’s a common problem for you to time out or hang here. You’re making me burn through tokens asking you to complete what you should have completed the first time around.
Daily Newspaper Style Guide
This style guide ensures consistency across all editions of the daily newspaper. It applies to both human editors and large language models (LLMs) during the final polishing stage, after core content (articles, headlines, images, etc.) has been drafted. The goal is to maintain a professional, readable, and uniform appearance, fostering reader trust and brand recognition. Adhere strictly to these rules unless overridden by specific editorial decisions.
1. Overall Structure and Layout
- Edition Header (Masthead): Every edition must start with a centered masthead block including:
- Volume and issue details, day, date, and price in uppercase, small caps or equivalent, on one line (e.g., “VOL. I, NO. 47 • SUNDAY, JANUARY 11, 2026 • PRICE: ONE MOMENT OF ATTENTION”), centered, in 10-12pt font.
- Newspaper name in bold, uppercase, large font (e.g., 48pt), split across two lines if needed (e.g., “THE GLOBAL” on first line, “CONNECTOR” on second), centered.
- Tagline in quotes, italic, below the name (e.g., “Tracing the threads that hold the world together—before they snap”), centered, in 14pt font.
- A horizontal rule (---) below the masthead for separation.
- Example in markdown approximation:
VOL. I, NO. 47 • SUNDAY, JANUARY 11, 2026 • PRICE: ONE MOMENT OF ATTENTION

THE GLOBAL CONNECTOR

*"Tracing the threads that hold the world together—before they snap"*

---
- Background and Visual Style: Aim for a newspaper-like background in digital formats (e.g., light beige or subtle paper texture via CSS if possible; in plain markdown, note as a design instruction for rendering).
- Sections: Organize content into a themed newsletter format rather than rigid categories. Start with an introductory article, followed by 4-6 main stories, and end with an editorial. Each story should stand alone but tie into the edition’s theme.
- Introductory article: Begins immediately after masthead, with a main headline in bold, title case.
- Main stories: Each starts with a bold headline, followed by a subheadline in italic.
- Editorial: Labeled as “EDITORIAL” in uppercase, bold, with its own headline.
- Separate sections with ❧ ❧ ❧ or similar decorative dividers.
- Limit total content to 2000-3000 words for a daily edition.
- Page Breaks/Flow: In digital formats, use markdown or HTML breaks for readability. Aim for a “print-like” flow: no more than 800-1000 words per “page” equivalent. Use drop caps for the first letter of major articles.
- Footer: End every edition with:
- A horizontal rule.
- Production Note: A paragraph explaining the collaboration between human and AI, verification process, and encouragement of skepticism (e.g., “Production Note: This edition… Your skepticism remains appropriate and encouraged.”).
- Coming Next: A teaser for the next edition (e.g., “Coming Next Week: [Theme]—examining [details]. Also: [additional hook].”).
- Copyright notice: ”© 2026 [Newspaper Name]. All rights reserved.”
- Contact info: “Editor: [Name/Email] | Submissions: [Email]“.
- No page count; end with a clean close.
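For the rendered HTML/PDF versions, the masthead and background rules above could be approximated with CSS along these lines (class names are hypothetical):

```css
body {
  background: #f5f0e4;              /* light beige "newsprint" */
  font-family: Georgia, "Times New Roman", serif;
}
.masthead { text-align: center; }
.masthead .issue-line {
  font-variant: small-caps;
  font-size: 11pt;                  /* 10-12pt per the guide */
}
.masthead .paper-name {
  font-weight: bold;
  text-transform: uppercase;
  font-size: 48pt;
}
.masthead .tagline { font-style: italic; font-size: 14pt; }
hr.masthead-rule { border: 0; border-top: 1px solid #333; }
```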
2. Typography and Formatting
- Fonts (for digital/print equivalents):
- Headlines: Serif font (e.g., Times New Roman or Georgia), bold, 18-24pt.
- Subheadlines: Serif, italic, 14-16pt.
- Body Text: Serif, regular, 12pt.
- Captions/Quotes: Sans-serif (e.g., Arial or Helvetica), 10pt, italic.
- Use markdown equivalents: # for main headlines, ## for section headlines, **bold** for emphasis, *italic* for quotes/subtle emphasis.
- Drop Caps: Introduce new articles or major sections with a drop cap for the first letter (e.g., a large, bold initial such as **W**elcome). In markdown, approximate with a bold **W** and continue the paragraph; in rendered formats, use CSS for a 3-4 line height drop.
- Headlines:
- Main article headlines: Capitalize major words (title case), no period at end.
- Keep to 1-2 lines (under 70 characters).
- Example: “Everything Is Connected (By Very Fragile Stuff)”
- Body Text:
- Paragraphs: 3-5 sentences each, separated by a blank line.
- Line length: 60-80 characters for readability.
- Bullet points for lists (e.g., key facts): Use - or * with consistent indentation.
- Tables: Use markdown tables for data. Align columns left for text, right for numbers.
- Pull Quotes (Drop Quotes): Insert 1-2 per story, centered, in a boxed or indented block, larger font (14pt), italic, with quotation marks. Place mid-article for emphasis. Example in markdown:
> "The tech giants in California scream about latency and 'packet loss,' viewing the outage as a software bug. The ship captain knows the truth: the internet is just a wire in the ocean." - Emphasis:
- Bold (**text**) for key terms or names on first mention.
- Italics (*text*) for book titles, foreign words, or emphasis.
- Avoid ALL CAPS except in headers.
- No underlining except for hyperlinks.
- Punctuation and Spacing:
- Use Oxford comma in lists (e.g., “apples, oranges, and bananas”).
- Single space after periods.
- Em-dashes (—) for interruptions, en-dashes (–) for ranges (e.g., 2025–2026).
- Block quotes: Indent with > or use italics in a separate paragraph for quotes longer than 2 lines.
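For rendered formats, the drop-cap and pull-quote rules above can be sketched in CSS (class names are assumptions):

```css
/* Drop cap: large bold first letter spanning roughly 3 lines. */
article.major > p:first-of-type::first-letter {
  float: left;
  font-size: 3.2em;
  line-height: 0.85;
  padding-right: 0.08em;
  font-weight: bold;
}

/* Pull quote: centered, indented block at 14pt italic. */
blockquote.pull-quote {
  margin: 1em 3em;
  text-align: center;
  font-size: 14pt;
  font-style: italic;
  border-top: 1px solid #999;
  border-bottom: 1px solid #999;
  padding: 0.5em 0;
}
```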
3. Language and Tone
- Style Standard: Follow Associated Press (AP) style for grammar, spelling, and abbreviations.
- Numbers: Spell out one through nine; use numerals for 10 and above (except at the start of a sentence).
- Dates: “Jan. 12, 2026” (abbreviate months when with day).
- Titles: “President Joe Biden” on first reference, “Biden” thereafter.
- Avoid jargon; explain acronyms on first use (e.g., “Artificial Intelligence (AI)”).
- Tone: Neutral, factual, and objective for news stories, with a witty, reflective edge. Editorial may be more opinionated but balanced. Overall voice: Professional, concise, engaging—aim for a reading level of 8th-10th grade. Use direct address like “dear reader” in intros.
- Length Guidelines:
- Introductory article: 200-400 words.
- Main stories: 300-500 words each.
- Editorial: 400-600 words.
- Avoid fluff; prioritize who, what, when, where, why, how, with thematic connections.
- Inclusivity: Use gender-neutral language (e.g., “they” instead of “he/she”). Avoid biased terms; represent diverse perspectives fairly.
- For Further Reading: Perspectives: At the end of each story and editorial, include a “FOR FURTHER READING: PERSPECTIVES” section. Use PRO (green box) and CON (red box) for balanced views. Each entry: Bold label (PRO or CON), title in quotes, source with hyperlink. Approximate boxes in markdown with code blocks or tables; in rendered formats, use colored backgrounds (e.g., light green for PRO, light red for CON). Example:
FOR FURTHER READING: PERSPECTIVES

**PRO** "Why Governments Must Control Cable Repair" — Parliament UK Joint Committee Report
Source: [publications.parliament.uk](https://publications.parliament.uk) (September 2025)

**CON** "Sabotage Fears Outpace Evidence" — TeleGeography Analysis
Source: [blog.telegeography.com](https://blog.telegeography.com) (2025)
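In rendered formats, the green/red boxes can be produced with a small CSS fragment like this (class names are assumptions):

```css
.perspective { border-radius: 4px; padding: 0.75em; margin: 0.5em 0; }
.perspective.pro { background: #e6f4e6; border-left: 4px solid #2e7d32; }
.perspective.con { background: #fbe9e9; border-left: 4px solid #c62828; }
.perspective .label { font-weight: bold; text-transform: uppercase; }
```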
4. Images and Media
- Placement: Insert images after the first or second paragraph of relevant articles. Use 1-2 per article max. No images in this example, but if used, tie to stories (e.g., maps for cables, illustrations for AI).
- Formatting:
- Size: Medium (e.g., 400-600px wide) for main images; thumbnails for galleries.
- Alignment: Center with wrapping text if possible.
- In text-based formats, describe images in brackets: [Image: Description of scene, credit: Source].
- Captions: Below images, in italics, 1-2 sentences. Include credit (e.g., “Photo by Jane Doe / Reuters”).
- Alt Text (for digital): Provide descriptive alt text for accessibility (e.g., “A bustling city street during rush hour”).
- Usage Rules: Only relevant, high-quality images. No stock photos unless necessary; prefer originals or credited sources.
5. Editing and Proofing Checklist
Before finalizing:
- Consistency Check: Ensure all sections follow the structure. Cross-reference dates, names, facts, and thematic ties.
- Grammar/Spelling: Run the text through a tool like Grammarly or do a manual review. Use American English (e.g., “color” not “colour”).
- Fact-Checking: Verify claims with sources; add inline citations if needed (e.g., [Source: Reuters]).
- Readability: Read aloud for flow. Break up dense text with subheads, pull quotes, or bullets.
- LLM-Specific Notes: If using an LLM for polishing, prompt with: “Apply the style guide to this draft: [insert content]. Ensure consistency in structure, tone, formatting, including drop caps, pull quotes, and perspectives sections.”
- Variations: Minor deviations allowed for special editions (e.g., holidays), but document changes.
This guide should be reviewed annually or as needed. For questions, contact the editor-in-chief. By following these rules, each edition will maintain a polished, predictable look that readers can rely on.
Input
VOL. I, NO. 1 • MONDAY, FEBRUARY 16, 2026 • PRICE: ONE MOMENT OF ATTENTION
THE REVIEW
“If nobody checked, does it count?”
The Trillion-Dollar Trust Fall
The AI industry built the future. It forgot to check the receipts.
The machines are getting smarter, the chips are getting smaller, and the money is getting bigger. The artificial intelligence industry now commands trillions in market capitalization, consumes more electricity than some countries, and sits at the center of the most consequential geopolitical competition since the space race. There is just one small problem: almost none of it can be independently verified.
This edition of The Review presents six stories, each drawn from a two-month forensic investigation spanning December 2025 through February 2026. They begin at the smallest scale imaginable — individual photons misbehaving inside chip-printing machines — and expand outward through corrupted calculations, smuggled processors, stolen blueprints, vulnerable encryption, and rivers of unmetered water. Read separately, each is a curious tale of an industry cutting corners in its own backyard. Read together, they describe a single, structural deficit: the AI industry has no coherent system for proving that the things it builds and sells are what anyone says they are.
In criminal law, this is called chain of custody — the unbroken paper trail that keeps evidence honest between the crime scene and the courtroom. The AI industry has no equivalent. Quantum computer benchmarks are self-reported. Chip authenticity relies on trust. Export controls are enforced at the honor bar. Trade secrets walk out the front door. Environmental costs are tallied on the back of a napkin. And the encryption protecting all of it may have an expiration date nobody has checked.
Dear reader, we don’t expect you to read all six stories in one sitting, though we won’t stop you. Each stands alone. But if you read even two, you’ll notice the same question echoing from different altitudes: who’s checking? The answer, with unsettling consistency, is: not enough people, not often enough, and not with the right tools.
Welcome to The Review. Your skepticism is the point.
❧ ❧ ❧
When Atoms Play Dice, Nobody Wins the Bet
Quantum computers promise the moon. The chips printing today’s AI can’t promise a clean transistor. At the very bottom of the technology stack, physics is making fools of us all.
You would think that a trillion-dollar industry would start with solid ground beneath its feet. You would be wrong.
At the absolute bottom of the AI technology stack — beneath the chatbots, beneath the software, beneath even the silicon wafers — sit two problems rooted in the behavior of things too small to see. One is quantum computing, the field that has been promising to revolutionize everything since approximately forever. The other is extreme ultraviolet lithography, the process that prints the circuits on today’s most advanced chips. Both share a punchline that the marketing departments would prefer you not hear: at atomic scales, nature operates like a casino, and the house doesn’t keep great records.
The Qubit Numbers Game
Quantum computing has a metrics problem, and it isn’t subtle. The field’s public scoreboard has long been organized around a single number — qubit count — treated like horsepower in a car ad. More qubits, more power. Simple.
Except it isn’t. Google Quantum AI demonstrated in late January 2026 that its 49-qubit processor could achieve logical error rates of 10⁻⁴ per correction cycle, a genuine engineering milestone. The catch? Extracting a single reliable “logical qubit” from the noisy mess of physical qubits can require hundreds or thousands of the raw kind. A 1,000-qubit chip where only 10 logical qubits survive the error-correction gauntlet is not a supercomputer. It is a very expensive abacus.
“Forget the Qubits” — headline from The Quantum Insider, January 2026, arguing for metrics beyond raw qubit count
IBM’s updated quantum roadmap lays out a “clear path to fault-tolerant quantum computing.” QuantWare’s 2026 outlook calls the emerging “KiloQubit Era” a manufacturing crisis. Alice & Bob, a French startup, claims “Elevator Codes” that slash error rates by a factor of 10,000 — if the results are independently replicated. The word “if” is doing a heroic amount of heavy lifting in quantum computing press releases.
The strongest argument for optimism is that the field is pre-commercial, and demanding production-grade benchmarks from research systems is like demanding crash-test ratings from the Wright Flyer. The billions invested by Google, IBM, and Microsoft suggest informed actors believe the timeline is short.
The strongest argument for skepticism is that classical computing keeps eating quantum computing’s lunch. Tensor-network simulation, GPU-accelerated algorithms, and approximate methods have narrowed the “quantum advantage” window for many applications. And critically, the Quantum Economic Development Consortium’s proposed benchmarks remain voluntary and unevenly adopted. The field’s progress narrative is, for the most part, self-graded homework.
The Ghost in the Chip Factory
While quantum computing debates its future, the chips being manufactured today face their own reckoning with atomic-scale randomness. As semiconductor fabrication enters the “Angstrom Era” — features smaller than 2 nanometers — a new class of defect has emerged that cannot be engineered away, because it is built into the physics.
The culprit is photon shot noise: the irreducible randomness of individual photons arriving during the extreme ultraviolet lithography process. At the smallest scales, these random arrivals create phantom defects — broken gates and disconnected wires that appear not because anything went wrong, but because probability said so.
Chris Mack, CTO of Fractilia, framed the tradeoff with an engineer’s precision: wider aperture lenses can increase contrast, but if used to print smaller features instead, “stochastic effects will likely become worse.” Stochastic defects now consume up to half the error budget for placing features on a chip — and a single 1-in-a-trillion defect can scrap an entire AI wafer worth millions.
TSMC has responded by deciding to skip high-NA EUV for its next node, relying instead on multi-patterning — printing the same layer multiple times. This is the semiconductor equivalent of tracing over your handwriting until it’s legible. It works, but each additional pass multiplies opportunities for new errors, extends production time, and threatens to reverse the cost-per-transistor curve that has made the entire digital economy possible.
The industry is evolving lithography from a “printing” process into a predictive-forensics discipline, using AI digital twins to anticipate photon fluctuations — which is to say, using AI to compensate for the physics that makes AI chips unreliable. The recursion would be funny if the stakes weren’t quite so high.
[Image placeholder: Infographic comparing the size of an EUV-printed transistor feature (~1.4nm) to a strand of human DNA (~2.5nm). Caption: “Smaller than biology, governed by probability.”]
For Further Reading: Perspectives
PRO “Real Quantum Progress Is Happening — Just Not Where You Think” — Analytics Insight Argues that enterprise pilots in logistics and pharma are delivering measurable value from current NISQ-era devices. Source: analyticsinsight.net (January 2026)
CON “Will Quantum Computing Ever Live Up to Its Hype?” — Scientific American / Horgan Scott Aaronson’s long-running concern that the field is prone to “irrational exuberance” given that something about “quantum” makes it uniquely susceptible to hype. Source: scientificamerican.com (2024, still widely cited)
PRO “Quantum Error Correction Achieves 99.9% Fidelity Using Surface Codes” — Quantum Zeitgeist Reports on Google’s below-threshold error correction results, the strongest recent evidence of engineering momentum. Source: quantumzeitgeist.com (January 2026)
CON “Quantum Computing and Crypto in 2026: Hype vs Reality” — Cointelegraph Multiple experts argue the “quantum threat” timeline is inflated, with one analyst calling the narrative “90% marketing and 10% imminent threat.” Source: cointelegraph.com (December 2025)
❧ ❧ ❧
Your Computer Is Lying to You (And It Doesn’t Even Know)
Processors return wrong answers without raising an alarm. Counterfeit chips enter supply chains disguised as the real thing. Inside the silent crisis of hardware you can’t trust.
Somewhere in a data center right now, a processor is giving the wrong answer. It will not crash. It will not beep. It will not send an error message. It will simply produce a number that is slightly, invisibly wrong — and the AI model training on that number will absorb the error like a sponge absorbs water, silently, and keep going.
Welcome to the world of Silent Data Errors, or SDEs — the computing industry’s most unsettling open secret. These are not software bugs. They are hardware failures at the transistor level: a bit flips, a floating-point calculation drifts, a result comes back corrupted, and nothing in the system notices because nothing was designed to check.
Jyotika Athavale, director of engineering architecture at AMD, described the mechanism in terms even a newspaper reader can appreciate: an impacted CPU “might miscalculate data silently” and “these corruptions can derail entire datasets without raising a flag.” In clusters running tens of thousands of nodes, the math is unkind. Janusz Rajski of Siemens EDA quantified it: roughly 1 in 1,000 servers may be affected at any given time. In a 16,000-GPU training cluster — a common size for frontier AI models — that is 16 nodes producing subtly wrong answers every day.
“Doing more of what we have been doing in the past will not significantly move the needle. We need more research in this space because this needs a more holistic solution.” — Rama Govindaraju, Engineering Director, Google
The root causes read like a catalog of entropy’s greatest hits: manufacturing tolerances pushed to their physical limits, aging transistors, environmental stress from chips never designed to run at maximum power 24 hours a day. Nitza Basoco of Teradyne put it bluntly: these chips “weren’t meant to be run 24/7 at the maximum voltage, maximum frequency.”
The most insidious failure mode is NaN contagion — when a single Not-a-Number result from one corrupted calculation propagates through matrix multiplications like a rumor through a cafeteria, infecting entire training batches. Meta engineers have documented cases where NaN events erased weeks of training progress before anyone noticed. And as the industry moves to lower-precision data formats like FP8 to save memory, each individual bit carries more significance — meaning each silent flip matters more.
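The contagion is easy to demonstrate. Below is a minimal plain-Python sketch, a stand-in for the GPU matrix multiplies in a real training step:

```python
import math

def matmul(a, b):
    """Plain-Python multiply of square matrices; no guards against NaN."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_nans(m):
    return sum(math.isnan(x) for row in m for x in row)

# A 4x4 batch of activations with a single corrupted entry.
acts = [[1.0] * 4 for _ in range(4)]
acts[0][0] = float("nan")      # one silent corruption

w = [[1.0] * 4 for _ in range(4)]

layer1 = matmul(acts, w)       # the NaN infects an entire output row
layer2 = matmul(w, layer1)     # the next layer spreads it everywhere

print(count_nans(layer1))  # 4 of 16 entries are NaN
print(count_nans(layer2))  # 16 of 16: the whole batch is gone
```

Two multiplies are enough to turn one bad value into a fully poisoned batch, which is why a single silent flip can cost weeks of training.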
When the Lying Is On Purpose
If silent data errors represent hardware lying by accident, counterfeit chips represent hardware lying by design. The global shortage of AI accelerators has spawned a shadow market in recycled, relabeled, and outright fake silicon.
The techniques are Bond-villain creative. Shenzhen police dismantled a ring rebranding discarded chips as H100/B200 equivalents — counterfeiting not just the GPUs but the power supplies and support components as well. Amazon marketplace scammers shipped fanny packs in place of RTX 5090 graphics cards. (You read that correctly. Fanny packs.) The SAE International standard for counterfeit detection had to be updated in 2024–2025 specifically because chiplet-based designs make it possible for the outside of a package to conceal dies from entirely different fabrication runs.
The intersection with AI is direct: a counterfeit or degraded GPU in a training cluster would produce exactly the same class of silent errors described above, but with the added complication that the operator would have no reason to suspect the hardware itself. The major cloud providers buy through verified channels, but startups, universities, and organizations in developing countries frequently rely on secondary markets. The counterfeit risk falls hardest on those least equipped to detect it.
[Image placeholder: Scanning acoustic microscopy image showing “shadow” etchings from removed chip markings. Caption: “What the chip used to say, before someone changed its name.”]
The technical fix exists: Physically Unclonable Functions (PUFs) — structures that exploit manufacturing variability to generate a unique, unclonable ID for each chip, like a silicon fingerprint. Combined with cryptographic attestation at each transfer point, this would create a verifiable chain of custody from fabrication to deployment. NIST is working on the standards. The question, as always, is whether the industry will adopt them before the next embarrassing headline, or after.
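The chain-of-custody idea sketches naturally in code. Here is a toy hash-chained custody log (all names are hypothetical; a real system would use per-device keys and asymmetric signatures rather than one shared HMAC secret):

```python
import hashlib
import hmac
import json

# Toy chain of custody: each transfer record commits to the previous
# record's MAC, so tampering with any entry breaks every later link.
# "puf_id" stands in for a Physically Unclonable Function readout.

def record_transfer(chain, puf_id, holder, secret):
    prev = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps({"puf_id": puf_id, "holder": holder, "prev": prev},
                         sort_keys=True)
    mac = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"payload": payload, "mac": mac})

def verify_chain(chain, secret):
    prev = "genesis"
    for entry in chain:
        if json.loads(entry["payload"])["prev"] != prev:
            return False                       # broken linkage
        expected = hmac.new(secret, entry["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False                       # tampered record
        prev = entry["mac"]
    return True

secret = b"attestation-key"    # in practice: per-device keys in an HSM
chain = []
record_transfer(chain, "puf-3f9a", "fab", secret)
record_transfer(chain, "puf-3f9a", "distributor", secret)
record_transfer(chain, "puf-3f9a", "data-center", secret)

print(verify_chain(chain, secret))   # True: custody intact
chain[1]["payload"] = chain[1]["payload"].replace("distributor", "smuggler")
print(verify_chain(chain, secret))   # False: tampering detected
```

The point of the sketch is the shape of the guarantee: “where did the chips go?” becomes a verification pass over a log, not an FBI investigation.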
For Further Reading: Perspectives
PRO “Addressing Hardware Failures and Silent Data Corruption in AI Chips” — EDN / Aronoff Makes the case that silicon lifecycle management and on-chip telemetry can proactively catch SDEs before they corrupt training runs. Source: edn.com (April 2025)
CON “AI Coding Degrades: Silent Failures Emerge” — IEEE Spectrum / Twiss Argues that a subtler form of silent corruption is already here: AI coding assistants trained on poisoned feedback loops produce code that quietly removes safety checks and fabricates data. Source: spectrum.ieee.org (January 2026)
❧ ❧ ❧
The World’s Most Expensive Game of Keep-Away
The U.S. says China can’t have the best AI chips. China keeps getting them anyway. Inside the smuggling operations, shell companies, and policy whiplash that turned GPU export controls into the geopolitics of Whac-A-Mole.
The most strategically contested physical objects on earth are not nuclear warheads or barrels of oil. They are rectangular slabs of silicon and gold, roughly the size of a deck of cards, that retail for $30,000 apiece. The United States has bet its AI strategy on keeping the best ones out of China’s hands. Based on recent evidence, the bet is not going well.
On December 8, 2025, the Department of Justice unsealed Operation Gatekeeper, a case that reads less like a national security investigation and more like a heist movie written by a committee. Alan Hao Hsu of Missouri City, Texas, reactivated a dormant company, purchased over 7,000 H100 and H200 GPUs worth more than $160 million from legitimate U.S. distributors, shipped them to a warehouse in New Jersey, replaced the Nvidia branding with labels reading “SANDKYAN” — a company that does not exist — reclassified the world’s most powerful processors as “adapter modules” on customs paperwork, and arranged wire transfers routed through Thailand, Singapore, and Malaysia to obscure their Chinese origin.
His co-conspirator, Benlin Yuan, a Canadian citizen, tried to buy back chips he believed had been stolen from the warehouse. They had in fact been seized by the FBI. He was purchasing evidence from a sting. Hsu’s sentencing is scheduled for February 18, 2026.
“It’s happening. It’s a fact.” — Jeffrey Kessler, BIS chief, before Congress, on GPU smuggling to China
Operation Gatekeeper was crude. The Megaspeed International case suggests the operation has evolved. Bloomberg’s investigation revealed that Singapore-based Megaspeed purchased 5.7 million in cash at the end of 2023.
The critical mechanism is the “rental loophole”: current rules often permit renting AI chips to Chinese companies for use in data centers outside China. If Megaspeed is effectively a Chinese entity rather than an independent Singaporean firm, the arrangement transforms from a legitimate cloud service into a jurisdictional end-run around the entire enforcement framework.
The Policy That Can’t Make Up Its Mind
The enforcement picture is made incoherent by the policy sitting on top of it. On the very same December 8 that Operation Gatekeeper was unsealed, President Trump posted that H200 exports to China would now be permitted — with a 25% U.S. cut. On January 15, 2026, the Bureau of Industry and Security formalized the shift, moving from a “presumption of denial” to “case-by-case review.”
The Council on Foreign Relations assessed the new rule as “strategically incoherent,” noting that even capped sales could increase China’s installed AI compute by 250% in a single year. The Lawfare Institute went further, arguing that the revenue-sharing arrangement amounts to an illegal tax on exports, imposed without congressional authorization.
Meanwhile, BIS’s own budget received a 23% increase earmarked for semiconductor enforcement — not the posture of an agency that considers the problem solved.
[Image placeholder: Map showing GPU diversion routes — Texas → New Jersey → Singapore → China, with key waypoints labeled. Caption: “The scenic route for a $30,000 chip.”]
The strongest defense of continued engagement comes from Brookings: starving China’s supply of U.S.-designed chips will push China to develop its own capacity faster. The strongest case for restriction comes from the Institute for Progress, which estimates that without exports and smuggling, the U.S. would hold a 31× advantage in 2026-produced AI compute. With aggressive B30A sales, that advantage flips to China.
The forensic conclusion: the United States has built an export-control regime for AI hardware but has not built the tracking infrastructure to enforce it. A chip-level provenance system — combining hardware identifiers with cryptographic attestation at each transfer point — would convert “where did the chips go?” from an FBI investigation into a database query. The White House recommends “location verification features.” The recommendation remains unimplemented, unfunded, and unspecified.
For Further Reading: Perspectives
PRO “The New AI Chip Export Policy to China: Strategically Incoherent and Unenforceable” — Council on Foreign Relations / Allen Argues that the January 2026 BIS rule undermines the security rationale for controls by allowing massive compute increases for Chinese labs. Source: cfr.org (January 2026)
CON “How Overly Aggressive Bans on AI Chip Exports to China Can Backfire” — Brookings Contends that blocking chip sales accelerates China’s domestic chip industry, ultimately weakening U.S. market dominance. Source: brookings.edu (August 2025)
PRO “Trump’s Illegal AI Chip Export Controls, and Who Can Challenge Them” — Lawfare Legal analysis arguing the revenue-sharing arrangement violates the Export Control Reform Act’s prohibition on licensing fees. Source: lawfaremedia.org (January 2026)
CON “The Limits of Chip Export Controls in Meeting the China Challenge” — CSIS Acknowledges smuggling but argues the structural chokepoint remains effective, noting 22,000+ Chinese semiconductor companies have shut down. Source: csis.org (May 2025)
❧ ❧ ❧
The Heist That Doesn’t Need a Getaway Car
A Google engineer convicted of espionage. A Dutch chipmaker hollowed out from within. An AI model interrogated through its own front door. Five ways the most valuable knowledge in tech is walking out — and nobody built an alarm.
Linwei Ding did not kick down any doors. He did not cut through any fences. He sat at his desk at Google, copied over 2,000 pages of proprietary AI architecture to a personal cloud account using Apple Notes, and quietly founded a competing startup in Beijing while still drawing a Google paycheck. On January 30, 2026, a federal jury in San Francisco convicted him on fourteen counts — seven of economic espionage, seven of trade secret theft — in the first U.S. prosecution of its kind. He faces up to 175 years in prison.
The FBI called it a calculated breach of trust. The prosecutor called him a man who “stole, cheated, and lied.” Google’s VP of regulatory affairs expressed gratitude that justice was served. Nobody explained how an engineer was able to exfiltrate thousands of pages of the company’s most sensitive AI infrastructure documentation over the course of a year before anyone noticed.
The Ding case is dramatic, but it is only one of five knowledge-theft vectors that landed in the same sixty-day window — each exploiting a different door in the same poorly alarmed building.
The Factory That Stayed, While Its Knowledge Left
In the Netherlands, a story was unfolding that makes corporate espionage look quaint. Nexperia, a Nijmegen-based chipmaker owned by China’s Wingtech Technology, became the first company in history whose operations the Dutch government seized by invoking a 73-year-old Cold War statute. The Amsterdam Court of Appeal upheld the action on February 11, 2026, finding that Chinese CEO Zhang Xuezheng had “changed the strategy without internal consultation under the threat of upcoming sanctions.”
The court filings allege a systematic extraction of R&D files, machine settings, and strategic design assets from Nijmegen to Chinese facilities — leaving a factory whose physical shell remained in Europe while its technological substance was transferred east. European managers were reportedly stripped of authority. A plan called “Project Rainbow” allegedly explored selling off European production to avoid U.S. blacklisting, without European directors’ knowledge.
Beijing retaliated within four days by blocking Nexperia chip exports from China, halting Honda production lines.
“This conviction exposes a calculated breach of trust involving some of the most advanced AI technology in the world at a critical moment in AI development.” — Assistant Attorney General John A. Eisenberg, on the Ding verdict
Asking Nicely Is Also Stealing (Maybe)
Thirteen days after the Ding verdict, Google’s own Threat Intelligence Group published a report documenting something arguably more alarming: systematic campaigns of more than 100,000 prompts engineered to reverse-engineer the reasoning architecture of Gemini through its public API. This is the distillation vector — extraction not through the back door but through the front door, one carefully crafted question at a time.
Then there is OpenEvidence v. Pathway Medical, a case that may determine whether the “personality” of an AI — its system prompt, behavioral rules, and domain-specific instructions — qualifies as a trade secret. The defendant allegedly impersonated a medical professional, subjected the platform to jailbreaking queries (including the historically significant “Haha pwned!!” injection string), and extracted the model’s foundational instructions.
And finally, the infrastructure vector: zero-day attacks on Ivanti endpoint management systems in early 2026 targeted the Dutch Data Protection Authority, the Finnish state ICT provider, and the European Commission — the systems used to manage mobile security for thousands of government employees overseeing semiconductor policy.
Five vectors. Five different mechanisms. One gap: the industry built the most valuable intellectual artifacts in the history of software and protected them with tools designed for a previous era.
[Image placeholder: Illustration of five doors in a wall, each ajar — labeled “Insider,” “Governance,” “API,” “Prompt Injection,” and “Cyber.” Caption: “Choose your door. They’re all open.”]
For Further Reading: Perspectives
PRO “Former Google Engineer Found Guilty of Espionage and Theft of AI Tech” — CNBC / Palmer. Straightforward reporting on the Ding conviction, noting the FBI’s framing as a watershed moment for protecting U.S. technological leadership. Source: cnbc.com (January 2026)
CON “Inside the Google AI Espionage Case: How Trade Secret Theft Exposes Silicon Valley’s Vulnerability” — WebProNews. Argues the conviction, while warranted, reveals that existing security measures at tech giants are inadequate to the scale of the threat. Source: webpronews.com (February 2026)
❧ ❧ ❧
Somebody Is Recording This Conversation
Adversaries are harvesting encrypted data today to decrypt it with quantum computers tomorrow. The locks on every system described in this newspaper may have an expiration date. The migration has barely begun.
Here is a thought experiment for your Monday morning: every encrypted email you sent last year, every secure financial transaction, every classified government communication — imagine someone copied all of it and put it in a warehouse. Not to read it now. To read it later, once they have the right key. The key doesn’t exist yet. But they are very patient people, and the key is coming.
This is not a thought experiment. It is a strategy called “harvest now, decrypt later,” and intelligence agencies have been practicing it for years. The cost of harvesting is essentially a storage cost — hard drives are cheap. The payoff arrives when a quantum computer capable of running Shor’s algorithm at scale can crack the encryption that protects those files. The effective deadline for defending against this is not the day such a computer is built. It is today, for any data whose sensitivity outlasts the construction timeline.
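The deadline arithmetic in that paragraph has a standard formalization, often attributed to cryptographer Michele Mosca: if the years your data must stay secret plus the years your migration takes exceed the years until a capable quantum computer arrives, the harvest window already has you. A minimal sketch, with illustrative numbers that are assumptions rather than forecasts:

```python
# Mosca's inequality: data encrypted today is already at risk if
#   shelf_life + migration_time > years_until_quantum_attack
# The numbers below are illustrative assumptions, not predictions.

def harvest_risk(shelf_life_yrs: int, migration_yrs: int, quantum_eta_yrs: int) -> bool:
    """True if 'harvest now, decrypt later' threatens this data."""
    return shelf_life_yrs + migration_yrs > quantum_eta_yrs

# A record that must stay confidential for 25 years, in an organization
# that needs 8 years to migrate, is exposed even if a cryptographically
# relevant quantum computer is a full 20 years away.
print(harvest_risk(shelf_life_yrs=25, migration_yrs=8, quantum_eta_yrs=20))  # True
```

The point of the formula is that the skeptic’s “decades away” estimate is only one of three variables, and the other two are already running.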
The New Locks
In March 2025, NIST selected HQC (Hamming Quasi-Cyclic) as the fifth standardized post-quantum algorithm — a backup to ML-KEM, the primary replacement for current encryption. The fact that NIST built a backup is itself a statement: they are hedging against the possibility that a mathematical breakthrough, not even a quantum computer, could compromise the primary standard.
Dustin Moody, the NIST mathematician heading the Post-Quantum Cryptography project, was direct: “It’s essential to have a fallback in case ML-KEM proves to be vulnerable.”
The full family of standards now includes algorithms based on different mathematical foundations — structured lattices, error-correcting codes — specifically so that no single cryptanalytic breakthrough can collapse the entire defense. The engineering is sound. The adoption is not.
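In practice, the defense-in-depth idea is often deployed as a hybrid key exchange: derive one session key from two independently produced shared secrets, so an attacker must break both the classical and the post-quantum exchange. The sketch below is illustrative only — the shared-secret values are random stand-ins for the outputs of real key-exchange libraries, and the derivation is a simplified HKDF-style construction (RFC 5869 pattern), not a production protocol:

```python
import hashlib
import hmac
import os

def hybrid_secret(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """Derive one session key from two independent shared secrets, so
    breaking either exchange alone yields nothing. Simplified
    HKDF-style extract-then-expand over SHA-256."""
    salt = b"\x00" * 32
    prk = hmac.new(salt, classical_ss + pq_ss, hashlib.sha256).digest()  # extract
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()     # expand

# Stand-ins for an ECDH shared secret and an ML-KEM encapsulation
# result; in a real system both come from key-exchange libraries.
classical = os.urandom(32)
post_quantum = os.urandom(32)
key = hybrid_secret(classical, post_quantum, b"hybrid-demo")
print(len(key))  # 32
```

The design choice mirrors NIST’s reasoning about HQC: two different mathematical foundations, combined so that a single cryptanalytic breakthrough cannot collapse the whole defense.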
The Gap Between Having New Locks and Changing Them
A late-2025 Dutch government audit found that 71% of government agencies were unprepared for quantum-enabled attacks — in one of Europe’s most technologically advanced countries. Cloudflare’s survey documented browser-level progress in post-quantum key exchange, but the long tail of enterprise systems, embedded devices, and legacy infrastructure has barely started.
The European Commission has signaled mandates for critical infrastructure migration by 2030. The Department of Defense issued a November 2025 memorandum ordering expedited migration. Europol published a prioritization framework for banks in January 2026. On the ground, the Solana blockchain successfully tested post-quantum signatures, proving the migration can be done without replacing everything at once.
“Organizations should continue to migrate their encryption systems to the standards we finalized in 2024.” — Dustin Moody, NIST
The skeptic’s position is that fault-tolerant quantum computers are decades away, and diverting resources from pressing threats like ransomware is a misallocation. The counterpoint is mathematical: the harvest window is open now, the migration takes years, and the Dutch audit shows the gap between knowing and doing is enormous. As one Palo Alto Networks executive wrote in the Harvard Business Review: “The quantum era hasn’t arrived with a bang but with the quiet retroactive decryption of today’s secrets.”
For every provenance system described in this newspaper — chip authentication, supply chain tracking, content credentials, model watermarking — the post-quantum transition is existential. A digital signature that can be forged retroactively makes every chain of custody it protects a fiction. The migration must be embedded now, not bolted on after deployment.
[Image placeholder: Timeline infographic — “The Harvest Window.” Shows data encrypted today on the left, quantum decryption capability estimated on the right, with a red zone labeled “Everything in between is vulnerable.” Caption: “The clock started before the alarm was set.”]
For Further Reading: Perspectives
PRO “Why Your Post-Quantum Cryptography Strategy Must Start Now” — Harvard Business Review / Oswal (Palo Alto Networks). Argues the migration is a fundamental business risk, not a technical curiosity, and that organizations with long-lived sensitive data must act immediately. Source: hbr.org (January 2026)
CON “Quantum Computing and Crypto in 2026: Hype vs Reality” — BitcoinEthereumNews / multiple experts. Analysts argue the cryptographic threat is “90% marketing” and that practical quantum attacks on encryption remain at minimum a decade away. Source: bitcoinethereumnews.com (December 2025)
PRO “Quantum-Safe Migration: An Opportunity to Modernize Cryptography” — World Economic Forum. Frames the migration not as a cost but as a strategic opportunity to overhaul outdated cryptographic infrastructure, advocating defense-in-depth. Source: weforum.org (January 2026)
CON “The Limits of Chip Export Controls in Meeting the China Challenge” — CSIS. While focused on export controls, includes analysis suggesting the quantum timeline is more distant than often portrayed, arguing against premature resource diversion. Source: csis.org (May 2025)
❧ ❧ ❧
The Tab Nobody Wants to Pick Up
AI data centers drink like a small city, burn electricity like a small country, and report their environmental impact on the honor system. The receipts don’t add up — because nobody’s collecting them.
A single AI data center can drink five million gallons of water a day. That is the daily consumption of a town of 50,000 people, diverted to keep servers cool enough to function. Multiply that by the thousands of data centers now operating or under construction worldwide, and you begin to understand why the residents of Bessemer, Alabama, Northern Virginia, and rural Wisconsin have started asking pointed questions at town hall meetings.
The AI industry’s environmental footprint has become impossible to ignore and nearly impossible to verify — a combination that should make everyone uncomfortable. Research published in Patterns by Alex de Vries-Gao estimates that AI systems alone may be responsible for 32.6 to 79.7 million tonnes of CO₂ annually — for scale, New York City emits about 52 million tonnes. AI’s water consumption may reach 312.5 to 764.6 billion litres per year, a volume comparable to all bottled water consumed globally.
The “may” in those sentences is not hedging. It is an honest reflection of the fact that nobody really knows, because the data is self-reported, aggregated, delayed, and incomplete.
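The scale comparisons above survive a back-of-envelope check. A short calculation, using only the figures already quoted in this story:

```python
# Sanity checks on the figures quoted above.

# "Five million gallons a day ... a town of 50,000 people"
GALLONS_PER_DAY_DATACENTER = 5_000_000
TOWN_POPULATION = 50_000
per_person = GALLONS_PER_DAY_DATACENTER / TOWN_POPULATION
print(per_person)  # 100.0 gallons per person per day, roughly typical U.S. use

# de Vries-Gao's AI carbon estimate brackets New York City's annual total
ai_co2_range_mt = (32.6, 79.7)  # million tonnes CO2 per year
nyc_co2_mt = 52
print(ai_co2_range_mt[0] < nyc_co2_mt < ai_co2_range_mt[1])  # True
```

The arithmetic holds; what is missing is not the math but the metering behind the inputs.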
The Accounting Gaps
Google admitted in its Gemini model report that it does not report indirect water use from electricity generation because it doesn’t control the power plants. Critics note this is like a company claiming zero transportation emissions because it doesn’t own the trucks. Microsoft, despite pledging to become “water positive” by 2030, now expects its water use to increase substantially in the AI era. The New York Times reported that the company’s sustainability goals and its AI ambitions are on a collision course.
“How Much Water Do AI Data Centers Really Use?” — headline, Undark Magazine investigation
In Northern Virginia — home to over 300 data centers and roughly two-thirds of the world’s internet traffic — water consumption rose 63% between 2019 and 2023. Stanford HAI’s transparency report documented a decline in voluntary environmental disclosure across major AI companies through 2025. The trend line is moving in the wrong direction.
Goldman Sachs Research projects that through 2030, roughly 60% of increased electricity demand for AI will be met by fossil fuels. Companies that previously committed to closing coal-fired power plants are now extending their lives. In Santa Clara, California, data centers account for 60% of the city’s entire electricity consumption.
The Political Friction
The concentration of data centers in already-strained communities is generating heat of a different kind. Wisconsin’s state legislature advanced a bill regulating data center siting after Microsoft’s $3.3 billion project raised concerns about local utility capacity. Microsoft responded to community backlash in one jurisdiction by promising to cover full power costs and forgo local tax breaks — an acknowledgment that the externalities have become a political liability.
The strongest environmental defense of AI is that the technology itself may enable solutions: optimizing energy grids, accelerating materials science, improving agricultural efficiency. The efficiency gains could offset the infrastructure costs. This argument has a structural problem: it asks the public to accept unverifiable resource consumption today in exchange for speculative environmental benefits tomorrow. This is a promissory note written on a napkin.
The solution is not to halt the buildout but to instrument it. Real-time, facility-level, independently verified resource monitoring — treating every kilowatt-hour and every gallon with the same evidentiary rigor that a chip provenance system would apply to every processor. The same chain-of-custody failure documented in five preceding stories applies here: sustainability claims that cannot be audited are not claims. They are marketing.
[Image placeholder: Bar chart comparing daily water consumption — average U.S. household vs. mid-size data center vs. large AI data center vs. a town of 50,000. Caption: “Thirst rankings: your house, a building full of computers, and a small city. One of these is not like the others.”]
For Further Reading: Perspectives
PRO “AI, Data Centers, and Water” — Brookings. Detailed policy analysis documenting the tension between data center water consumption and community needs, with recommendations for infrastructure planning. Source: brookings.edu (November 2025)
CON “The Hidden Impacts of AI Data Centres on Water, Climate and Future Power Costs” — Daily Maverick. South Africa–based investigation arguing that the environmental burden falls disproportionately on the Global South, challenging the “AI will fix it” narrative. Source: dailymaverick.co.za (February 2026)
PRO “‘Roadmap’ Shows the Environmental Impact of AI Data Center Boom” — Cornell Chronicle. Cornell research modeling a 73% carbon reduction and 86% water reduction through strategic siting, grid decarbonization, and efficient operations. Source: news.cornell.edu (November 2025)
CON “The Carbon and Water Footprints of Data Centers” — Patterns / de Vries-Gao. Peer-reviewed estimate placing AI’s carbon footprint equivalent to New York City’s and its water footprint comparable to global bottled water consumption. Source: cell.com/patterns (December 2025)
❧ ❧ ❧
EDITORIAL
The Most Important Thing the AI Industry Can Build Next Is a Receipt
The six stories in this edition of The Review were reported separately. They concern quantum physics, chip manufacturing, criminal smuggling, corporate espionage, cryptographic standards, and water consumption. They involve different people, different countries, different technical disciplines, and different time horizons. They should have nothing in common.
They have everything in common.
Every story describes the same structural failure: artifacts of extraordinary value — chips, computations, models, supply chains, environmental claims — whose provenance cannot be independently verified. The quantum benchmarks are self-graded. The silicon computes incorrectly without telling anyone. The export-controlled chips get relabeled in a New Jersey warehouse. The trade secrets walk out through five different doors. The encryption may have an expiration date. The water bills are on the honor system.
This is not a collection of isolated problems. It is one problem viewed from six altitudes. The AI industry has prioritized speed-to-scale over verification at every level of the stack, and the result is an ecosystem where the distance between what is claimed and what can be proven is growing wider every quarter.
The closest analogue to what is needed already exists. The C2PA standard — adopted by Google, Sony, and the Library of Congress — embeds cryptographic provenance into digital media so that a photograph can prove where it came from and what happened to it along the way. The architectural logic is sound: tamper-evident, cryptographically signed, machine-readable metadata that travels with the artifact from creation through every transfer.
Now extend that logic. To silicon: PUF-based chip identities. To computation: signed inference chains. To supply chains: cryptographic attestation at each transfer point. To model outputs: provenance watermarks. To environmental reporting: real-time, metered, independently verified resource data.
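The attestation-at-each-transfer idea is mechanically simple. The toy sketch below shows the core of it — each custody record commits to the hash of the one before it, so deleting or altering any link is detectable. This is an illustration of the hash-chain principle only, not the C2PA format; a real system would also sign each record with the custodian’s private key:

```python
import hashlib
import json

def attest(prev_hash: str, event: dict) -> dict:
    """Append-only chain-of-custody record: each attestation commits
    to the previous one, making the chain tamper-evident."""
    body = {"prev": prev_hash, "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    """Recompute every link; any alteration anywhere breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"prev": rec["prev"], "event": rec["event"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != recomputed:
            return False
        prev = rec["hash"]
    return True

# Hypothetical custody events for a single processor.
genesis = attest("0" * 64, {"actor": "fab", "action": "manufactured", "chip": "X-100"})
shipped = attest(genesis["hash"], {"actor": "distributor", "action": "shipped"})
print(verify([genesis, shipped]))  # True
```

A relabeled chip, a swapped R&D file, or an edited water meter reading would each show up the same way: a link whose hash no longer matches what the rest of the chain committed to.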
None of this requires new legislation or international treaties. It requires companies to treat provenance the way pharmaceutical companies treat batch traceability — not as a forensic afterthought, but as an intrinsic property of the product.
The AI industry has spent the last three years building the most capital-intensive technological infrastructure in history. The most important thing it can build next is not a bigger model or a faster chip. It is a receipt.
“The question that runs through every chapter is forensic in origin: where did it come from, what happened to it along the way, and can anyone prove it?”
For Further Reading: Perspectives
PRO “Data Drain: The Land and Water Impacts of the AI Boom” — Lincoln Institute of Land Policy. Comprehensive policy piece arguing that data centers should face the same environmental impact requirements as any other large-scale industrial facility. Source: lincolninst.edu (October 2025)
CON “Post-Quantum Cryptography in 2026: 5 Predictions” — QuantumXC. Industry perspective arguing the migration is already underway and that pragmatic hybrid approaches make the transition manageable rather than existential. Source: quantumxc.com (February 2026)
Production Note: This edition of The Review was produced through a collaboration between a human editor and an AI assistant (Anthropic’s Claude). The underlying research was compiled during December 15, 2025 – February 13, 2026, from publicly available sources including court filings, government reports, peer-reviewed publications, and investigative journalism. All claims are attributed to their original sources. The opinions in the editorial are those of the editorial board. Your skepticism remains appropriate and encouraged.
Coming Next: The Verification Issue — examining who watches the watchers in AI auditing, the emerging discipline of algorithmic accountability, and whether third-party AI audits are rigorous science or expensive theater. Also: the C2PA standard turns real, and what happens when your photograph has to prove it wasn’t generated.
© 2026 The Review. All rights reserved.
Editor: editorial@the-review.example | Submissions: submit@the-review.example
This edition was generated on Saturday, Feb. 15, 2026.