Insights
Analysis and perspective from the front line of AI infrastructure deployment.
The Case for Distributed AI Infrastructure
The global AI build-out has a centralisation problem. The prevailing model — concentrated hyperscale campuses connected to metropolitan power grids — is running into constraints that money alone cannot resolve. Grid capacity is finite. Planning approvals are slow. And the assumption that AI compute must be centralised is increasingly at odds with both the physics of data and the geopolitics of sovereignty.
The physics are straightforward. The further data must travel from its point of origin to the point of computation, the greater the latency, the higher the bandwidth cost, and the larger the attack surface for interception or compromise. For AI workloads that depend on real-time inference — autonomous systems, industrial automation, defence command and control — centralisation is not just inefficient. It is architecturally wrong.
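To make the distance penalty concrete, the sketch below works through the minimum round-trip latency imposed by fibre propagation alone, assuming a propagation speed of roughly 200,000 km/s and three illustrative distances; the figures are assumptions chosen for the arithmetic, not measurements of any particular deployment.

```python
# Back-of-the-envelope fibre latency: round-trip time grows linearly with the
# distance between where data originates and where it is computed on.
# Assumes ~200,000 km/s propagation in optical fibre (roughly c / 1.5) and
# ignores routing, queuing, and serialisation delays, which only add to the total.

SPEED_IN_FIBRE_KM_PER_MS = 200.0  # ~200 km of fibre traversed per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Minimum physical round-trip latency over fibre for a given one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

# Illustrative distances only.
for label, km in [("on-site edge module", 1),
                  ("regional data centre", 500),
                  ("distant hyperscale campus", 3000)]:
    print(f"{label:>26}: {round_trip_ms(km):6.2f} ms minimum round trip")
```

Propagation delay is only the floor; real paths add routing and queuing on top of it, which is precisely why real-time inference favours compute placed next to the data source.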
The geopolitics are equally clear. Sovereign nations are asserting control over the physical location of their data and the infrastructure that processes it. The era of sovereign datasets being processed in foreign data centres is ending. In-country AI capability is now a policy priority for dozens of nations — and in-country AI capability requires in-country infrastructure.
Distributed AI infrastructure resolves both constraints simultaneously. Factory-manufactured, modular AI compute deployed to where the data lives and where the energy is abundant. Not a single campus serving an entire region, but a distributed fleet of AI Factory modules — each self-contained, each locally powered, each independently secure — deployed to the locations that the workloads and the sovereignty requirements demand.
The economics support the architecture. Mass manufacturing drives unit costs down. Co-location with renewable energy drives energy costs to near zero. Modular deployment eliminates the multi-year construction timelines and capital-intensive site development of centralised campuses. And the fungibility of modular assets — the ability to redeploy infrastructure as demand shifts — creates an investment model that fixed-asset infrastructure cannot match.
The question is no longer whether AI infrastructure will distribute. The question is which organisations have the engineering depth, the manufacturing capability, the logistics infrastructure, and the operational track record to deliver distributed AI infrastructure at global scale, to the security standards demanded by sovereign nations and defence organisations. That question has a twenty-year-old answer.
Why Renewable Co-location Changes the Economics of AI
The AI industry has an energy problem that is widely acknowledged and poorly understood. The standard narrative focuses on the sheer volume of energy consumed by AI compute. The more consequential problem is that the AI industry is competing for the most expensive energy on earth — grid-delivered, peak-demand, metropolitan-priced electricity — when the cheapest energy on earth is being wasted in plain sight.
Curtailed renewable energy — power generated by wind farms, solar installations, and hydro facilities that the grid cannot absorb — is one of the largest untapped energy resources in the world. In Australia alone, curtailed solar generation reached record levels in 2024. In parts of Northern Europe, wind farms are routinely paid to switch off. In South America, hydro facilities generate surplus power that has no commercial outlet. This energy is not expensive. Its marginal cost approaches zero.
The constraint is not the energy. It is the assumption that compute must be located where the grid delivers power. This assumption, inherited from the first generation of data centre development, no longer holds. Factory-manufactured, self-contained AI compute modules can be deployed directly to the site of energy generation — co-located with the renewable asset, connected behind the meter, independent of grid transmission constraints.
The economics of this model are transformative. The energy developer gains a guaranteed off-taker for generation that would otherwise be curtailed — improving project returns, increasing capacity factor, and securing revenue that the grid connection cannot provide. The AI compute operator gains energy at near-zero marginal cost, no grid dependency, and deployment timelines measured in days rather than the years required for new grid-connected generation.
The levelised cost of compute under this model is structurally lower than any grid-connected alternative — because the single largest operating cost of any data centre, energy, has been reduced to near zero. When this energy cost advantage is combined with the capital efficiency of factory-manufactured modular infrastructure, the resulting cost trajectory is one that fixed hyperscale construction cannot approach at any scale.
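As a rough illustration of that cost structure, the sketch below compares the annual energy spend of a single compute module under an assumed metropolitan grid tariff and an assumed behind-the-meter price for otherwise-curtailed renewable generation; every figure is an illustrative assumption, not a quoted price or a measured load.

```python
# Illustrative comparison of annual energy spend for one AI compute module
# under grid-delivered versus behind-the-meter curtailed renewable supply.
# All figures are assumptions made for the sake of the arithmetic.

MODULE_POWER_MW = 1.0            # assumed average IT + cooling draw
HOURS_PER_YEAR = 8760
GRID_PRICE_PER_KWH = 0.15        # assumed metropolitan grid tariff, USD
CURTAILED_PRICE_PER_KWH = 0.01   # assumed price for otherwise-curtailed energy, USD

annual_kwh = MODULE_POWER_MW * 1000 * HOURS_PER_YEAR

grid_cost = annual_kwh * GRID_PRICE_PER_KWH
colocated_cost = annual_kwh * CURTAILED_PRICE_PER_KWH

print(f"Annual energy, grid-connected:        ${grid_cost:,.0f}")
print(f"Annual energy, renewable co-located:  ${colocated_cost:,.0f}")
print(f"Reduction: {100 * (1 - colocated_cost / grid_cost):.0f}%")
```

Under these assumptions the energy line item falls by more than ninety per cent; the point of the sketch is not the specific numbers but that the largest operating cost scales directly with the price paid per kilowatt-hour.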
This is not a future possibility. It is a current operational reality for companies that have spent the time required to engineer the integration of renewable energy systems, battery storage, power conditioning, and high-density AI compute into a single, self-contained, deployable module. The convergence of AI compute demand and curtailed renewable energy supply is the most significant structural opportunity in the infrastructure industry. The question is who has the engineering to capture it.
The Security Imperative for Sovereign AI Deployment
The global rush to deploy sovereign AI capability has produced an awkward omission in the public discourse: almost nobody is talking about the physical security of the infrastructure housing sovereign datasets. The conversation has been dominated by data residency regulations, model alignment, and the geopolitics of chip supply chains. All important. None sufficient.
A sovereign dataset processed on insecure infrastructure is not sovereign. It is exposed. The physical security of the computation environment — the data centre, the module, the rack, the enclosure — is the foundation upon which every other layer of security depends. Software encryption, network isolation, and access control policies are necessary but not sufficient. They all assume that the physical environment has not been compromised.
For classified environments — the domain of national intelligence, defence operations, and sensitive government systems — this is not a theoretical concern. Physical security standards exist for precisely this reason. Within the Five Eyes community, the United States' Intelligence Community Directive 705 (ICD 705) establishes the physical and technical security requirements for facilities housing classified information at the highest levels, and Australia's Zone 5 accreditation, under its Protective Security Policy Framework, represents the most rigorous tier of physical security accreditation.
These standards are not optional features to be added to a data centre product. They are qualification gates. Either the infrastructure meets ICD 705 and Zone 5 requirements, or it is excluded from consideration for classified workloads. There is no middle ground, no ‘close enough,’ and no waiver process for convenience.
For nations building sovereign AI capability — particularly nations within the Five Eyes and broader allied community — the pool of modular AI infrastructure that meets these standards is vanishingly small. The vast majority of modular data centre products on the market hold commercial information security certifications such as ISO 27001 and SOC 2. These are legitimate standards for commercial environments, but they govern information security management and controls, not the physical hardening of the facility itself, and they are not equivalent to physical security accreditation for classified environments.
The security imperative for sovereign AI deployment is not a feature comparison. It is a binary question: does the infrastructure meet the physical security standards required for the sovereign datasets it will house? For allied nations, for intelligence communities, and for defence organisations deploying AI at the classified edge, that question must be answered before any other question matters.
