TechReaderDaily.com

Factory-Made Data Centres Are Now Delivered by Truck

Modular and edge deployments have crossed from experiment to industrial programme as AI infrastructure spending heads toward $1.37 trillion this year, refactoring the supply chain around speed, not scale.

Image: AWS modular data centre design showing prefabricated electrical and cooling infrastructure for AI workloads. (datacenterfrontier.com)

In late March, on an industrial estate in Chantilly, Virginia, a transaction closed that most of the AI world overlooked. Compu Dynamics Modular, a specialist in high-performance data centre infrastructure, acquired a majority stake in R&D Specialties, a firm that builds the electrical distribution skids and cooling modules that make a prefabricated data centre possible. The deal was not large by the standards of an industry that raised $110 million for a single edge computing firm in the same quarter. It did not make the evening news. But it said something precise about where the AI buildout is heading: out of the construction site and into the factory.

The numbers behind that trajectory are bracing. Gartner estimated in January that worldwide spending on AI infrastructure alone will reach $1.37 trillion in 2026, accounting for 54 percent of total AI spending and rising 43 percent over 2025. CRN, in its 2026 AI 100 list published in April, named 25 infrastructure and edge computing companies driving that spend, from component makers to rack-scale integrators. The list captures a market in which the question is no longer whether to build but how fast the concrete can cure. The answer, increasingly, is that the concrete is optional.

To understand why modular is moving from niche to default, it helps to understand the difference between contracted load and connected load. Contracted load is the power a utility promises to deliver. Connected load is the total draw of the equipment actually installed and energised on site. A traditional build locks both at the moment of site selection, years before the first server rack powers on. A modular build decouples them. The power infrastructure arrives in discrete, factory-tested increments. Each module brings its own electrical topology, its own cooling circuit, its own connection to the busbar. The site becomes a collection of independent power consumers rather than a single monolithic load. For a utility planner staring at a four-year interconnection queue, that distinction is everything.
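The decoupling can be sketched numerically. The figures below are purely illustrative, not drawn from the article: a monolithic build must contract for its full target load on day one, while a modular build contracts in factory-sized increments as modules land.

```python
# Illustrative sketch: contracted vs. connected load over a phased buildout.
# All numbers are hypothetical, chosen only to show the shape of the argument.

MODULE_MW = 5            # capacity each prefabricated module adds
TARGET_MW = 40           # eventual site capacity
MODULES_PER_YEAR = 2     # delivery cadence from the factory

def monolithic(years):
    """Traditional build: full load contracted at year 0, drawn only at the end."""
    contracted = [TARGET_MW] * (years + 1)
    connected = [0] * years + [TARGET_MW]   # nothing draws until the hall opens
    return contracted, connected

def modular(years):
    """Modular build: contract and connect in increments as modules arrive."""
    contracted, connected = [], []
    for year in range(years + 1):
        installed = min(year * MODULES_PER_YEAR * MODULE_MW, TARGET_MW)
        contracted.append(installed)   # each module negotiates a smaller tie-in
        connected.append(installed)
    return contracted, connected

mono_contracted, _ = monolithic(4)
mod_contracted, _ = modular(4)
print("monolithic contracted:", mono_contracted)  # 40 MW reserved for four idle years
print("modular contracted:  ", mod_contracted)    # grows with actual demand
```

The gap between the two lists is the stranded capacity a utility planner must hold in reserve for a monolithic build: the reason the modular site clears the queue faster.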

Amazon internalised this logic with unusual candour this spring. In April, Business Insider revealed Project Houdini, an internal AWS programme that breaks the main server room into prefabricated sections assembled on site. The project, named for the escape artist who made barriers disappear, expects to save weeks of construction time and tens of thousands of labour hours per facility. It is not a pilot programme. It is the new template for AWS's North American buildout, and it signals that the largest cloud operator on the planet has concluded that stick-built data centres are a schedule risk it can no longer afford.

AWS is not alone in reaching for factory-built infrastructure.

French AI infrastructure firm Antimatter, which emerged from stealth in the first quarter, has secured over 1GW of power capacity across sites in the United States, Europe, and the Gulf states. Its plan, disclosed in filings and investor briefings, calls for 1,000 modular micro data centres deployed by 2030, each positioned adjacent to an underutilised power source: a substation with spare capacity, a stranded gas plant, a solar farm with a curtailed interconnection. The strategy is an explicit end-run around the interconnection queue. In parts of northern Virginia, the wait for a grid connection now stretches beyond four years. A modular unit that can be dropped next to an existing transformer does not wait in that queue. It negotiates a separate, smaller, faster interconnection, often at distribution voltage rather than transmission voltage, and the clock resets.

The grid is the story. As POWER Magazine reported in early May, hyperscale data centres are now outpacing grid infrastructure across every major US market. A pv magazine analysis from March noted that it takes 12 to 24 months on average to construct a data centre shell, while securing a grid connection can take two to three times as long. On-site battery storage, gas turbines, and direct interconnection to renewable generators are all being deployed to close the gap. The modular approach adds a further variable: the ability to bring capacity online in phases, matching the pace of grid upgrades rather than waiting for them to complete.

What the substation says about the schedule

In Texas, the proposition is being tested in real time. Duos Edge AI, a subsidiary of Duos Technologies Group, has begun deploying truck-deliverable edge data centres in Victoria and Corpus Christi, with open houses scheduled for this month to show local officials and potential customers what a 500kW modular unit actually looks like. The company reported 270 percent revenue growth in 2025 and raised $110 million to fund deployment. Its units are compact enough to travel on a standard flatbed trailer, pre-populated with servers, cooling, and power distribution, and can be operational within weeks of delivery. For a mid-sized city with a manufacturing base that needs low-latency inference, Duos is offering what the hyperscale campus cannot: proximity measured in metres, not miles.

What runs inside those units is itself being reinvented. In March, Pasadena-based PrismML emerged from stealth to announce what it calls the world's first commercially viable 1-bit large language model, a model designed explicitly for edge deployment. A 1-bit model quantises neural network weights to a single binary digit, slashing memory requirements and power draw by an order of magnitude compared to conventional 8-bit or 16-bit models. PrismML's launch is not an isolated curiosity. It is part of a broader shift in which model architecture is being shaped by the physical constraints of the edge: limited power, limited cooling, limited footprint. The server rack at the edge cannot dissipate 40kW. It is designed to dissipate 10kW, or five. The model must fit the rack, not the other way round.
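The memory arithmetic behind that claim is easy to check. A short sketch, using a hypothetical 7-billion-parameter model (PrismML has not published parameter counts), compares the weight footprint at the precisions the article mentions:

```python
# Weight-memory footprint at different quantisation levels.
# The 7B parameter count is a hypothetical example, not a PrismML figure.

def weights_gib(params: int, bits_per_weight: float) -> float:
    """Memory needed to store the weights alone, in GiB."""
    return params * bits_per_weight / 8 / 2**30

PARAMS = 7_000_000_000
for bits in (16, 8, 1):
    print(f"{bits:>2}-bit: {weights_gib(PARAMS, bits):6.2f} GiB")
```

Going from 16-bit to 1-bit weights is a 16x reduction, roughly 13 GiB down to under 1 GiB for this model size: the "order of magnitude" that lets inference fit in a power- and cooling-constrained edge rack.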

The financing question trails every one of these deployments. When a modular data centre arrives at a substation, someone must pay for the transformer upgrade, the switchgear, the protection relays, the feeder lines. In a traditional campus build, the hyperscaler typically funds the dedicated substation as part of the project cost and deeds it back to the utility. The modular model scrambles that calculus. A 5MW edge site cannot absorb the cost of a new 138kV substation the way a 500MW campus can. The result is a patchwork of cost-sharing arrangements, some negotiated project by project with municipal utilities, some folded into rate-base recovery at the state public utility commission. The commissioners in Texas, Virginia, and Ohio are all, right now, adjudicating cases that will set the template.

Cooling is the second variable, and it is not trivial. A prefabricated module arriving on a truck has fixed dimensions that rule out the airflow headroom of a raised-floor data hall. Direct-to-chip liquid cooling, of the sort sold by CoolIT Systems and Submer, becomes not a luxury but a requirement once rack density exceeds 30kW. Compu Dynamics Modular's acquisition of R&D Specialties brought in-house the fabrication of cooling distribution units, or CDUs, that circulate dielectric fluid or treated water through cold plates mounted directly on GPUs. The integration of power and cooling into a single factory-assembled skid is what makes a 100kW edge module thermally viable in a car park in August. Without it, the module is a shipping container with servers. With it, the module is a data centre.
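The thermal constraint can be made concrete with the standard sensible-heat relation, q = ṁ·cp·ΔT. The sketch below estimates the water flow a CDU must circulate to carry away a module's heat load; the figures are illustrative and not taken from any vendor's datasheet.

```python
# Coolant flow needed to remove a given heat load: q = m_dot * cp * dT.
# Figures are illustrative; not drawn from any vendor's CDU specifications.

CP_WATER = 4186.0        # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0       # density of water, kg/m^3

def flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Litres per minute of water needed to absorb heat_w watts
    at a delta_t_k supply/return temperature rise."""
    mass_flow = heat_w / (CP_WATER * delta_t_k)     # kg/s
    return mass_flow / RHO_WATER * 1000 * 60        # kg/s -> L/min

# A 100 kW edge module with a 10 K supply/return temperature difference:
print(f"{flow_lpm(100_000, 10):.0f} L/min")  # roughly 143 L/min
```

Roughly 143 litres per minute through cold plates and manifolds, continuously, in a factory-sealed skid: that is the plumbing problem the CDU acquisition brings in-house.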

The supply chain for all of this remains fragile. Ashley Belanger, reporting for Ars Technica in April, found that nearly 50 percent of data centre projects in the United States are experiencing delays, with China holding a critical position in the supply of power infrastructure components: transformers, switchgear, and high-voltage circuit breakers. Tariffs imposed over the past year have added months to procurement timelines and millions to project budgets. A modular data centre that can be assembled in a factory in Virginia, using components stockpiled before the tariff regime took effect, has a competitive advantage that has nothing to do with technology and everything to do with trade policy.

What is the building actually for, and is it built for anything else? It is a question worth asking of every modular deployment. A traditional data centre is a single-purpose structure. It will never be a warehouse, a hospital, or a school. A modular unit, by contrast, can be decommissioned, trucked away, and replaced with a newer model, or simply removed when the power contract expires. Antimatter's entire business model depends on this reversibility. Its micro data centres are sited on leased land with fixed-term power agreements. When the agreement ends, the unit departs. The land reverts to its previous use. Nothing is demolished. Nothing is abandoned. For county planners who have watched hyperscale campuses consume thousands of acres with no exit strategy, that is not a small argument.

In late April, the planning commission in a rural Ohio county held a hearing on a proposed modular edge site. The application, submitted by a developer working on behalf of an unnamed cloud tenant, requested a conditional-use permit for ten prefabricated data modules on a two-acre parcel adjacent to a 138kV substation. The hearing lasted 90 minutes. The questions from commissioners were not about noise or water use, as they would have been for a conventional data centre. They were about decommissioning: what happens to the site in year ten, year fifteen, year twenty. The developer had an answer, in writing, with a bond. The permit was approved unanimously. It was, in its quiet way, as significant a vote as any taken this year in the data centre industry.

The beneficiaries of this shift are not evenly distributed. The hyperscalers benefit most, of course: AWS, Microsoft, Google, and the large colocation providers can accelerate their build schedules and reduce their exposure to construction-labour shortages in markets such as Phoenix and northern Virginia. But the secondary beneficiaries may matter more in the long run. Municipal utilities gain ratepayers who can be connected without the multi-year planning cycles of a transmission-level interconnection. Regional manufacturers gain access to low-latency AI inference that does not depend on a fibre run to a distant availability zone. And the firms that build the modules themselves, the CDMs and Duoses of the world, are building balance sheets that look less like construction contractors and more like industrial manufacturers.

The next milestone is already on the calendar. On 14 May, Duos Edge AI will open the doors of its Victoria, Texas, edge data centre to the public. It is a 500kW unit, roughly the size of a shipping container, sitting on a concrete pad next to a municipal substation. By the end of the year, the company expects to have a dozen such units operational across the state. The factory-made data centre is not a prototype. It is not a whitepaper. It is sitting in a car park in Victoria, drawing power, serving inference requests, and it has a building permit with an expiration date. That date, more than any earnings call or analyst note, is worth watching.
