Inference Is Adderall for Computers

And the government will eventually need to regulate it the same way.


Adderall doesn't give you new neurons. It makes your existing neural hardware operate closer to its theoretical ceiling. Your brain has the capacity — the drug unlocks the throughput.

AI inference does the same thing for silicon. A GPU sitting idle is just sand with electricity. Inference is what makes it think. The parallel isn't poetic. It's structural: a supply chain that manufactures cognitive enhancement, a dependency curve that rewires the user, and a regulatory endgame that nobody in tech wants to talk about yet.

The Supply Chain

Pharmaceutical supply chains are vertically regulated. Every layer has oversight, licensing, and controlled distribution. Map it onto inference and the roles are already filled.

Precursor manufacturers → Hyperscalers & Nvidia. TSMC fabs chips the way chemical plants synthesize amphetamine salts. Nvidia designs the molecular structure. Export controls on advanced chips already function like precursor chemical regulations: the Commerce Department's chip export rules are DEA scheduling for silicon, while the CHIPS Act subsidizes licensed domestic production.

Drug manufacturers → AI labs. Anthropic, OpenAI, and Google take raw compute and synthesize it into something with a specific cognitive profile. They determine dosage (context windows), formulation (RLHF, safety tuning), and what conditions it's prescribed for (usage policies). Responsible scaling policies are the clinical trials. Model cards are safety dossiers.

Pharmacies → The app layer. Every product integrating an LLM API is last-mile distribution. Terms of service are the pharmacist checking your ID. Rate limits are the prescription refill cap.
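The refill-cap analogy maps onto a real mechanism. Here's a minimal token-bucket sketch of how an API layer enforces per-user "dosage" — a generic illustration, not any provider's actual implementation, and all names are hypothetical:

```python
import time

class TokenBucket:
    """Per-user rate limiter: the API-layer version of a refill cap.
    Hypothetical sketch; real providers layer this with quotas and tiers."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity              # max requests holdable at once
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec  # sustained rate
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never past capacity.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last) * self.refill_per_sec,
        )
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # cap exhausted; wait for the refill

bucket = TokenBucket(capacity=5, refill_per_sec=0.5)  # 5-burst, ~30/min sustained
results = [bucket.allow() for _ in range(7)]
print(results)  # first 5 allowed, then denied until tokens refill
```

The design choice that makes the metaphor apt: the cap isn't a hard ban, it's a throughput ceiling. You always get more eventually — just not all at once.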

That's the mapping. From here on, the essay is about inference — but keep the supply chain in mind, because the regulatory argument only makes sense if you see the full vertical.

Dependency

Nobody plans to become dependent on a cognitive enhancer. You take it for a crunch, a deadline, a season of overload. Then you build a new performance baseline around it, and one day you realize the "enhanced" version is just how you work now.

Companies that have integrated AI into code review, content generation, customer support, and data analysis aren't going back. The pre-AI performance level isn't acceptable anymore. It's not enhancement — it's the new minimum.

And unlike an individual, a company can't taper off. An organization that rebuilt its entire workflow around 100,000 API calls per day can't downgrade to manual processes. The failure mode isn't discomfort. It's operational collapse.

This is the political problem. The window for clean regulation is before the dependency sets in — before inference becomes load-bearing infrastructure that can't be touched without bringing something down. That window is closing fast.

The 900 IQ Teenager Problem

The access question is where things get serious.

We restrict powerful cognitive enhancers not because cognition is dangerous, but because enhancement without maturity, accountability, and friction destabilizes institutions faster than they can adapt.

A 17-year-old with unrestricted access to frontier inference can generate social engineering at scale, synthesize dangerous technical knowledge across domains, produce misinformation indistinguishable from expertise, and automate exploitation of systems they barely understand.

The risk isn't linear with capability. It's exponential. And the "dosage" a single user can access is functionally unlimited if the supply chain isn't controlled.

This is why point regulation fails. Chip export controls without API usage policies are like regulating precursors but letting anyone synthesize whatever they want. App-layer content policies without model-level safety tuning are security theater. Every layer of the stack is a potential leak point, and capability will always find the path of least resistance.

Open Weights and Local Inference

Local inference — running open-weight models on consumer hardware — creates a genuine regulatory tension.

The argument for sovereign compute is real. No serious society should want all of its cognitive infrastructure routed through a handful of API gateways. But the same logic that says "I should be able to run my own models" is structurally identical to "I should be able to manufacture my own compounds." Both claims have legitimate versions. Both also have versions that collapse the distance between self-provision and uncontrolled capability diffusion.

The saving grace is physics. Frontier capability still requires compute that doesn't fit on a desktop. You can run a 70B-parameter model at home. You can't casually reproduce what's in the data center two years from now.
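The physics claim checks out with back-of-envelope arithmetic. A sketch of the memory math — the 70B figure comes from the text above; the bytes-per-parameter values are the standard ones for FP16 weights and 4-bit quantization:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory to hold model weights alone (ignores KV cache,
    activations, and runtime overhead, which add meaningfully on top)."""
    return params_billions * 1e9 * bytes_per_param / 1e9  # decimal GB

# A 70B model at FP16 (2 bytes/param) vs. 4-bit quantized (0.5 bytes/param).
fp16 = weight_memory_gb(70, 2.0)  # 140 GB: multi-GPU server territory
int4 = weight_memory_gb(70, 0.5)  # 35 GB: reachable on high-end consumer gear
print(fp16, int4)
```

Quantization is why the home lab survives at all: it pulls a large open-weight model under the memory ceiling of a single workstation, while frontier-scale training and serving stays orders of magnitude beyond it.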

So the likely equilibrium is tiered. Tolerance at the edge, where thermodynamics already limits capability. Control at the frontier, where the hard stuff lives. Home labs survive. The ceiling stays behind licensing and monitoring.

The Misdiagnosis

Politicians can already feel the pressure. They just grabbed the wrong lever.

In late 2025, Bernie Sanders became the first sitting senator to call for a national moratorium on data center construction. Denver enacted one by February 2026. From the other end of the spectrum, Ron DeSantis pushed back on data center expansion in Florida. When the furthest-left senator and a right-wing governor converge on the same target, they've found something real.

But look at what they're actually proposing. Moratoriums. Zoning restrictions. Construction pauses. These are infrastructure regulations — they treat data centers like power plants that need to be managed for fair access and stable pricing.

The diagnosis is right. The prescription is wrong.

The data center isn't the thing that needs governing. What happens inside it is. A moratorium on buildings doesn't regulate what flows through the API. That's like responding to a drug crisis by capping the number of warehouses.

The regulatory architecture inference actually needs isn't a utility framework. It's a scheduling framework — tiered controls on who can produce, distribute, and consume different levels of cognitive enhancement. Not a ban. Licensed production, controlled distribution, and dosage limits proportional to capability.
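What a scheduling framework could look like mechanically, sketched as data rather than policy text. Every tier name, example, and requirement below is hypothetical — invented to illustrate the shape of "tiered controls on production, distribution, and consumption," not proposed as actual policy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Schedule:
    """One tier in a hypothetical capability-scheduling framework."""
    name: str
    example: str   # what might fall in this tier (illustrative only)
    producer: str  # requirement to train/serve at this level
    consumer: str  # requirement to access inference at this level

# Invented tiers, loosely echoing DEA-style scheduling.
SCHEDULES = [
    Schedule("Tier I", "frontier models",
             "federal license + audits", "verified institutional access"),
    Schedule("Tier II", "near-frontier APIs",
             "registered provider", "KYC + usage monitoring"),
    Schedule("Tier III", "mid-tier hosted models",
             "standard ToS compliance", "account + rate limits"),
    Schedule("Tier IV", "small open-weight local models",
             "none", "none (edge tolerance)"),
]

def required_access(tier_name: str) -> str:
    """Look up the consumer-side requirement for a given tier."""
    for s in SCHEDULES:
        if s.name == tier_name:
            return s.consumer
    raise KeyError(tier_name)

print(required_access("Tier II"))  # -> KYC + usage monitoring
```

Note what the structure encodes: control tightens toward the frontier and vanishes at the edge, which is exactly the tiered equilibrium the open-weights section predicted.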

The fact that politicians are feeling for this at all validates the thesis. But the lever they need isn't zoning. It's classification.

And beyond the scheduling question, there's a deeper economic one: what happens when inference tokens become so cheap that digital cognition starts competing with physical infrastructure for the same energy, water, and grid capacity? That problem needs its own framework — something closer to monetary policy than drug scheduling. But that's a separate argument.

The Endgame

Inference makes dumb silicon smart. It's already load-bearing at the organizational level. The supply chain has the same vertical structure as pharmaceuticals. And the capability frontier is advancing faster than any regulatory framework can track.

The question isn't whether governance arrives. It's whether it arrives deliberately — through a scheduling framework that controls production, distribution, and access across the full stack — or reactively, after the first major collision between unregulated cognitive enhancement and institutional reality.

Every meaningful resource goes through the same arc: discovery, wild adoption, crisis, regulation. Inference won't be the exception.

It'll just happen faster. Because that's what Adderall does.