I have been in technology since 1987. I have watched the PC revolution, the internet boom, the mobile explosion, and the cloud migration. I have evaluated emerging technologies inside the NSA, the Department of Energy, and critical infrastructure environments where the word "disruption" is not a compliment.

Every single one of those eras had the same signature: move fast, monetize the moment, and let the next generation clean up the mess. AI is following that pattern — except the mess this time is physical, institutional, and constitutional. And it is accumulating faster than anyone is willing to admit.

I am not here to argue against AI. I work with AI every day. I believe in what it can do. What I don't believe in is the fiction that scaling faster automatically solves the problems that scaling creates.

It doesn't. It never has.

The acceleration tax

Every industry has a version of what I call the Integration Tax — the hidden cost of deploying a technology before you have tested it against reality. In cybersecurity, it shows up as months of parser development and manual configuration before a tool produces a single useful insight. In AI infrastructure, the same principle applies at a civilizational scale: that is the acceleration tax.

The current model is straightforward: identify demand, raise capital, build fast, worry about consequences later. That worked when the primary failure mode was a bad product launch or a negative earnings call. It does not work when the failure modes include power grid strain, water rights litigation, semiconductor supply chain fracture, and communities bearing costs they never agreed to.

The industry has given this problem a name: externalities. That word is doing a lot of work to make the transfer of costs seem acceptable. It isn't.

The question is not whether AI creates value. It does. The question is who pays for the infrastructure required to deliver that value — and whether they had any say in the matter.

Atoms do not move at the speed of bits

I have spent years in environments where physical constraints are not abstract. A 30-year-old programmable logic controller (PLC) does not care how sophisticated your AI model is. It operates on a fixed protocol, in a fixed environment, with tolerances that were engineered before most of today's developers were born. You cannot software-update it into compliance.
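To make "fixed protocol" concrete, here is a minimal sketch, in Python, of what consuming a frame from a device like that looks like. The layout, field widths, and scaling factor below are hypothetical, invented for illustration rather than taken from any real vendor's protocol. The point is that every offset was frozen when the hardware shipped, and nothing you do on the consuming side changes that.

```python
import struct

# Hypothetical fixed-format status frame from a legacy controller.
# The layout is illustrative, not any real vendor's protocol:
# 2-byte device ID, 2-byte register address, 4-byte unsigned value
# (fixed-point, scaled by 100 in firmware), 1-byte status flags.
FRAME_FORMAT = ">HHIB"                      # big-endian, fixed widths, no versioning
FRAME_SIZE = struct.calcsize(FRAME_FORMAT)  # always 9 bytes

def parse_status_frame(raw: bytes) -> dict:
    """Decode one frame. Anything outside the frozen layout is an error."""
    if len(raw) != FRAME_SIZE:
        # The device will never send a longer, richer frame; its format
        # was fixed when the hardware shipped decades ago.
        raise ValueError(f"expected {FRAME_SIZE} bytes, got {len(raw)}")
    device_id, register, scaled, flags = struct.unpack(FRAME_FORMAT, raw)
    return {
        "device_id": device_id,
        "register": register,
        "value": scaled / 100.0,   # scaling baked into firmware, not negotiable
        "fault": bool(flags & 0x01),
    }

# A frame exactly as the device would emit it, decoded on the modern side.
frame = struct.pack(FRAME_FORMAT, 7, 40001, 2314, 0x00)
print(parse_status_frame(frame))
# {'device_id': 7, 'register': 40001, 'value': 23.14, 'fault': False}
```

Every AI system that wants data from that device inherits this format, byte for byte, for as long as the hardware stays in service.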

The same logic applies to semiconductor fabrication. A leading-edge fab takes three to five years and tens of billions of dollars to build. The companies committing those resources today are making bets on a demand curve that may look very different by the time the facility opens. History has a term for that outcome: stranded assets.
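The shape of that bet is easy to see with a toy net-present-value calculation. Every input below (capex, build time, lifetime, discount rate, cash flows) is a hypothetical assumption chosen for illustration, not a forecast or a real project number. What matters is the structure: the entire outcome turns on a demand figure that nobody will observe until years after the money is committed.

```python
# Toy stranded-asset arithmetic. All figures are hypothetical assumptions
# for illustration; none are forecasts or real project numbers.
CAPEX = 30.0          # assumed fab cost, $ billions, spent up front
BUILD_YEARS = 4       # assumed years of construction before first revenue
LIFE_YEARS = 10       # assumed revenue-generating lifetime after opening
RATE = 0.08           # assumed discount rate (cost of capital)

def fab_npv(annual_cash_flow: float) -> float:
    """Net present value at groundbreaking: capex now, cash flows
    arriving only after the build completes, discounted to year zero."""
    npv = -CAPEX
    for year in range(BUILD_YEARS + 1, BUILD_YEARS + LIFE_YEARS + 1):
        npv += annual_cash_flow / (1 + RATE) ** year
    return npv

# The same facility under the demand curve projected at groundbreaking
# versus a demand curve that flattens by the time the doors open.
for label, cash_flow in [("projected demand", 7.0), ("flattened demand", 3.5)]:
    print(f"{label}: NPV = {fab_npv(cash_flow):+.1f}B")
# projected demand: NPV = +4.5B
# flattened demand: NPV = -12.7B
```

Under the optimistic curve the project clears. Under the flattened one it is a multibillion-dollar hole. Same facility, same engineering, different year.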

The global supply chain for AI infrastructure is more fragile than the marketing suggests. Neon for lithography lasers. Copper for data center cabling. Rare earth materials concentrated in politically volatile regions. Shipping routes that geopolitical events can reroute overnight. The bits move at the speed of light. The atoms are still subject to the laws of physics, geography, and human conflict.

This is not pessimism. It is engineering literacy — the kind that gets ignored when capital is abundant and urgency is the dominant narrative.

The socialization of costs

Here is the dynamic I watch most carefully. It is the same one I observed in the fossil fuel era, and in the rollout of every extractive technology before it.

The gains are private. The costs are public.

Utility bills are rising in communities adjacent to data centers. Grid upgrades required to support AI workloads are being funded, in part, through rate increases on residential customers who had no seat at the table when those decisions were made. Water rights are being contested in regions already under stress. Land is being acquired at scale, and the legal frameworks governing that acquisition have not caught up to the pace of demand.
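The mechanics of that transfer are simple enough to sketch. All figures below are hypothetical assumptions, not data from any actual utility or rate case. What matters is the shape of the allocation: costs triggered by one class of customer flow through the rate base to everyone.

```python
# Illustrative rate-base arithmetic. Every figure is a hypothetical
# assumption; none are taken from a real utility or rate case.
upgrade_cost = 500_000_000   # assumed grid upgrade driven by new data center load ($)
recovery_years = 20          # assumed period over which the cost is recovered
residential_share = 0.40     # assumed fraction allocated to the residential class
households = 800_000         # assumed households in the service territory

annual_residential = upgrade_cost * residential_share / recovery_years
per_household_month = annual_residential / households / 12

print(f"~${per_household_month:.2f} per household per month, for {recovery_years} years")
# ~$1.04 per household per month, for 20 years
```

A single project looks small on one bill. The problem is that projects stack, the recovery runs for decades, and nobody in the denominator was asked.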

This is not a side effect. It is the predictable consequence of an industry that has optimized for speed of deployment over social license. I have seen this pattern before. The backlash, when it comes, does not arrive gradually. It arrives in a wave — regulatory, legal, and political — and by then the window to shape it has closed.

The organizations building AI infrastructure that take community impact seriously are not being altruistic. They are managing risk. There is a difference, and it matters for how you structure the conversation with boards and leadership.

The knowledge commons is burning

There is a quieter cost that does not show up in utility bills or supply chain analyses, but it concerns me as much as any of the physical constraints.

I grew up professionally in an environment where the exchange of knowledge — across agencies, across disciplines, across institutional boundaries — was the mechanism by which hard problems got solved. The intersection of an NSA analyst's observation and a DOE engineer's intuition produced insights that neither would have reached alone. That is how breakthroughs actually happen.

The competitive dynamics of the current AI race are systematically dismantling the conditions that make those collisions possible. Academics are hiding their work to prevent it from being consumed without attribution. Institutions are building walls around research that would previously have been shared. The open-source culture that built the internet is being replaced by proprietary silos controlled by organizations whose primary obligation is to their shareholders, not to human progress.

When knowledge stops moving, we lose the sparks. That loss does not register on a quarterly earnings call. But it compounds over time, and the deficit eventually shows up in the places we least expect it — in the solutions we can't find, the breakthroughs that don't happen, the problems that remain unsolved because the right two people never compared notes.

What responsible scaling actually looks like

I am not arguing for a pause. That ship has sailed, and frankly it was never realistic. What I am arguing for is a shift in how the industry allocates attention and capital alongside the scaling.

Three principles define what that looks like in practice:

First, treat physical constraints as design inputs rather than obstacles. Power, water, land, and supply chains set the real schedule, and a deployment plan that ignores them is not a plan.

Second, earn the social license before the backlash prices it for you. Community impact belongs in the cost model at the design stage, not in the legal budget afterward.

Third, protect the knowledge commons. The organizations that keep knowledge moving across institutional boundaries will out-innovate the ones that wall it off.

The real winners of this era will not be the ones who built the biggest models first. They will be the ones who built the infrastructure that made those models sustainable — physically, institutionally, and socially.

The bottom line

I have spent my career in environments where the cost of getting it wrong is measured not in stock price but in consequences. Power grids that go dark. Water systems that fail. Intelligence gaps that adversaries exploit. In those environments, the question is never "can we move faster?" The question is always "what breaks if we do?"

The AI industry has not asked that question seriously enough. It has assumed that the answer is "nothing we can't fix later" — the same assumption that drove every extractive technology cycle before this one, and produced the same pattern of privatized gains and socialized costs every time.

I am not betting against AI. I am betting that the organizations that succeed long-term will be the ones that stop treating sustainability as a constraint and start treating it as a design requirement.

The foundation matters. It always has.