
Modular Data Centres Shift from Sideshow to Hyperscale Pillar

As hyperscalers face permitting logjams and grid constraints, factory-built modular infrastructure is becoming the default strategy rather than a temporary fix.

Rendering of an American Tower aggregation edge data centre planned for Raleigh, North Carolina, showing a compact prefabricated facility adjacent to telecom infrastructure. americantower.com

CHANTILLY, Virginia. On a concrete pad behind a low-slung industrial unit twelve miles from Dulles Airport, a fully commissioned data hall sits inside a steel box the length of two articulated lorries. It was assembled not on site but in a factory two hundred metres away, then wheeled into position and connected to power, fibre, and chilled water in eleven days. The unit belongs to Compu Dynamics Modular, or CDM, which on the last day of March 2026 took a majority stake in R&D Specialties, a Costa Mesa firm that builds electrical switchgear and power distribution systems for prefabricated data centres. The acquisition, announced via a GlobeNewswire release picked up by Business Insider, gives CDM control over one of the quieter bottlenecks in the North American data centre supply chain: the power skids that turn a steel container into a working compute module.

The deal would have rated a paragraph in the trade press three years ago. In the spring of 2026 it lands differently. Nearly half of all United States data centres scheduled for completion this year now face delays or outright cancellation, according to a report published in early April by the consulting arm of Wood Mackenzie, driven by a collision of permitting bottlenecks, power grid shortfalls, and the spiralling cost of on-site skilled labour. The same report noted that the contracted pipeline of US data centre capacity had swollen to over 40 gigawatts, more than double the figure from 2024, while the number of projects reaching financial close had actually fallen quarter on quarter. In that environment, anything that compresses the timeline between investment decision and revenue-generating kit has ceased to be a curiosity. It has become the main event.

Modular data centres are not new. The industry has been talking about containerised compute since Sun Microsystems shipped its Project Blackbox in 2006, a twenty-foot shipping container packed with servers that generated more headlines than purchase orders. What has changed is the scale and the buyer. Hyperscalers who once treated modular as a stopgap for remote locations or disaster recovery are now ordering factory-built halls at the multi-megawatt level, driven in equal measure by speed and the near-total exhaustion of construction labour in prime data centre markets.

The transformer queue is the story underneath the story. In Loudoun County, where Chantilly sits within the wider Ashburn data centre cluster, the local utility Dominion Energy has been adding transmission capacity at an unprecedented clip, but the lead time for a 100 MVA substation transformer, the kind that serves a mid-sized hyperscale campus, has stretched past 160 weeks. That is more than three years. A modular deployment cannot magic a transformer into existence, but it can decouple the building timeline from the power timeline: the factory builds the hall whilst the utility processes the interconnect application, and the two strands converge at commissioning rather than at groundbreaking. The effect is to take a sequential process and make it parallel. The industry calls this a schedule overlap. The reality is simpler: it buys time.
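
How much time is a matter of simple arithmetic. A back-of-the-envelope sketch of the overlap, using the lead times above and an assumed traditional construction schedule (the 96-week figure is an assumption for the example, not a CDM or Dominion Energy number):

```python
# Illustrative schedule arithmetic for the "overlap" described above.
# Durations in weeks are assumptions for the example, except the
# 160-week transformer lead time and CDM's stated 24-week target.

interconnect_weeks = 160    # utility studies plus transformer delivery
construction_weeks = 96     # assumed traditional on-site build
factory_build_weeks = 24    # factory-built module, per CDM's stated target

# Stick-built: the power and construction streams largely run end to end,
# since commissioning cannot finish until the site is energised.
sequential = interconnect_weeks + construction_weeks

# Modular: the factory builds while the utility processes the application,
# so the total is governed by whichever stream is longer.
overlapped = max(interconnect_weeks, factory_build_weeks)

print(f"sequential:  {sequential} weeks (~{sequential / 52:.1f} years)")
print(f"overlapped:  {overlapped} weeks (~{overlapped / 52:.1f} years)")
print(f"time bought: {sequential - overlapped} weeks")
```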

The CDM acquisition illuminates a second dimension of the modular supply chain. R&D Specialties manufactures custom power distribution units, remote power panels, and integrated switchgear skids, the electrical backbone inside every factory-built data hall. By bringing that capability in-house, CDM can now deliver a module that arrives on site with its power infrastructure already tested at full load in the factory. For a hyperscaler, this eliminates weeks of on-site electrical commissioning and reduces the defect rate that typically plagues the first thirty days of a traditional build. Stephen Altman, CDM's chief executive, said in the acquisition announcement that the combined entity would "deliver fully integrated modular data centre solutions from design through factory acceptance testing in under twenty-four weeks." In a market where twenty-four months is the new normal for traditional construction, that number is doing as much competitive work as any technical specification.

"We are moving from a world where modular was a tactical option to one where it is becoming the default delivery mechanism for an entire class of compute infrastructure, particularly at the edge and in power-constrained metros," said a director of site selection at one of the three largest cloud providers, speaking on background.

The edge is where the modular argument acquires a different geometry altogether. A hyperscale campus in Ashburn might consume 300 megawatts. An edge deployment in a regional city, a telecom tower site, or an industrial park might draw 500 kilowatts to 2 megawatts, but it needs to be deployed in dozens or hundreds of locations simultaneously, often in places where the nearest general contractor with data centre experience is a four-hour drive away. The CRN AI 100 list for 2026, published on 24 April, identified twenty-five infrastructure and edge computing companies driving AI innovation, from established names such as HPE and Lenovo to newer entrants like Vast Data and Cohesity. Joseph F. Kovar, writing for CRN, anchored the list in a Gartner estimate that worldwide AI infrastructure spending will reach $1.37 trillion in 2026, accounting for over 54 percent of total AI spending. A substantial fraction of that infrastructure, the report makes clear, will not sit in hyperscale campuses at all.

PrismML, a Pasadena-based startup that emerged from stealth on the same day CDM announced its acquisition, crystallised the edge proposition with a different kind of announcement. The company launched what it calls the world's first commercially viable 1-bit large language model, branded Bonsai 8B, designed to run inference locally on phones, laptops, and edge devices without a cloud round-trip. Caltech researcher Babak Hassibi, who led the compression work, published findings alongside the launch showing that the 1-bit model preserved enough fidelity to handle complex natural-language queries whilst drawing a fraction of the power of a conventional 8-bit or 16-bit model.
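
The memory arithmetic behind a 1-bit model is simple even where the training is not. The sketch below shows the textbook binarisation scheme from the binary-networks literature, signs plus a per-tensor scale; PrismML has not published Bonsai 8B's actual method here, so this illustrates the general technique, not the company's implementation:

```python
import numpy as np

# Minimal 1-bit weight quantisation: keep only the sign of each weight
# plus one floating-point scale per tensor. Textbook scheme, shown for
# illustration; not a description of PrismML's actual method.

def binarize(w: np.ndarray):
    scale = np.abs(w).mean()       # one scale factor for the whole tensor
    return np.sign(w), scale       # signs cost 1 bit each to store

w = np.random.randn(1024, 1024).astype(np.float32)
signs, scale = binarize(w)
w_hat = signs * scale              # approximate reconstruction
err = np.abs(w - w_hat).mean() / np.abs(w).mean()
print(f"mean relative reconstruction error: {err:.2f}")

# The footprint argument: an 8B-parameter model at 16 bits needs ~16 GB
# of weights; at 1 bit it needs ~1 GB, which is what makes phone-side
# inference plausible at all.
for bits in (16, 8, 1):
    print(f"{bits:>2}-bit weights, 8e9 params: {8e9 * bits / 8 / 1e9:.0f} GB")
```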

The phrase "without a cloud round-trip" distils a geographic truth that the data centre industry has been slow to absorb. For a decade and a half, the default assumption was that compute should be centralised in massive campuses near cheap power and fat fibre, and that the network would handle latency. The rise of inference-heavy AI workloads, which need to process data close to where it is generated, has partially inverted that logic. A factory-floor vision system in a German automotive plant cannot tolerate the 40-millisecond round-trip to Frankfurt, let alone the 80 milliseconds to Amsterdam. A retail analytics system processing in-store camera feeds in São Paulo has data sovereignty constraints that make a hyperscale region in Virginia legally irrelevant. Those workloads want compute in the carpark, not the cloud.
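
The arithmetic behind "compute in the carpark" is a latency budget. A minimal sketch, using the round-trip figures above and an assumed per-frame inference time:

```python
# Illustrative latency budget for a 30 fps factory-floor vision loop.
# The round-trip times are the figures quoted above; the 15 ms
# inference time is an assumption for the example.

frame_interval_ms = 1000 / 30    # ~33.3 ms between frames at 30 fps
inference_ms = 15                # assumed time on the accelerator itself

for site, rtt_ms in [("on-site edge module", 2),
                     ("Frankfurt region", 40),
                     ("Amsterdam region", 80)]:
    total_ms = rtt_ms + inference_ms
    verdict = "fits" if total_ms <= frame_interval_ms else "blows"
    print(f"{site:20s} {rtt_ms:3d} ms RTT + {inference_ms} ms inference "
          f"= {total_ms:3d} ms -> {verdict} the {frame_interval_ms:.0f} ms budget")
```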

The companies filling that gap are an eclectic mix. American Tower, better known as a wireless infrastructure landlord, broke ground on its first aggregation edge data centre in Raleigh, North Carolina, in 2025, a compact prefabricated facility designed to sit at the base of telecom towers and serve as a local compute node for mobile network operators and content delivery networks. Zella DC, an Australian firm that has spent fifteen years refining micro-data-centre designs for harsh environments, now ships units rated for operation at 50 degrees Celsius ambient temperature without mechanical cooling, a specification that opens up deployment in markets where the cost of building a traditional data centre would be prohibitive on thermal grounds alone.

Cooling, in fact, is where modular and edge buildouts intersect most uncomfortably with the physics of high-density AI. A single rack of Nvidia GB200 NVL72 systems can dissipate over 80 kilowatts. That is beyond the capacity of traditional air cooling in any form factor, modular or not. The hyperscale modular vendors have responded by integrating direct-to-chip liquid cooling into their factory-built modules. CoolIT Systems, a Calgary-based firm that appeared on CRN's infrastructure list, now ships cold plates and coolant distribution units that are factory-integrated into the rack-level power and cooling skids inside CDM modules and competing designs from Vertiv and Schneider Electric. The factory setting, counterintuitively, enables a higher standard of liquid-cooling installation than a construction site: the module is built on a level floor, in controlled humidity, by technicians who do nothing but assemble cooling loops all day. One CoolIT engineer described the defect rate difference as "an order of magnitude."
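
The plumbing numbers explain why the factory setting matters. A rough sketch of the coolant flow an 80-kilowatt rack requires, assuming a water-like coolant and a typical 10-degree loop temperature rise (both assumptions for the example, not CoolIT specifications):

```python
# Rough coolant-flow arithmetic for a high-density rack, from
# Q = m_dot * c_p * delta_T. The 80 kW figure is from the article;
# the water-like coolant and 10 K loop rise are assumptions.

rack_heat_w = 80_000    # heat to remove, watts
c_p = 4186              # specific heat of water, J/(kg*K)
delta_t = 10            # coolant temperature rise across the loop, K
rho = 1000              # coolant density, kg/m^3

m_dot = rack_heat_w / (c_p * delta_t)       # required mass flow, kg/s
l_per_min = m_dot / rho * 1000 * 60         # volumetric flow, litres/min

print(f"mass flow: {m_dot:.2f} kg/s (~{l_per_min:.0f} L/min per rack)")
# ~1.9 kg/s, roughly 115 L/min of leak-free plumbing per rack: far
# easier to assemble and pressure-test on a level factory floor than
# on a live construction site.
```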

The economics of modular are not universally favourable. A factory-built data hall typically carries a 10 to 15 percent premium over a traditional stick-built facility on a per-megawatt basis, once the concrete pad, site utilities, and interconnection are accounted for. That premium narrows or disappears when the value of accelerated time-to-market is factored in, but the calculation is workload-dependent. For a hyperscaler deploying a standardised cluster that generates predictable revenue from day one, shaving eighteen months off the schedule is worth a substantial premium. For an enterprise building a one-off facility with uncertain utilisation, the premium is harder to justify. The site selection consultant I spoke to put it bluntly: "If you know exactly what you are building and you need it yesterday, modular pays. If you are still figuring out your workload mix, you are probably better off leasing colocation space in someone else's traditional build and buying yourself time."
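
The trade-off the consultant describes reduces to a comparison between the premium and the revenue pulled forward. A sketch with purely illustrative figures throughout; none of these are vendor or analyst numbers:

```python
# Illustrative time-to-market arithmetic. All figures are assumptions
# for the example: a 100 MW deployment, a 12% premium from the 10-15%
# range above, and $1M per MW-year of contracted revenue.

capacity_mw = 100
stick_cost_per_mw = 10e6            # assumed stick-built cost, $/MW
premium = 0.12                      # modular premium, fraction
revenue_per_mw_year = 1e6           # assumed contracted revenue, $/MW-year
months_saved = 18                   # schedule compression from the article

extra_capex = capacity_mw * stick_cost_per_mw * premium
extra_revenue = capacity_mw * revenue_per_mw_year * (months_saved / 12)

print(f"modular premium:         ${extra_capex / 1e6:.0f}M")
print(f"revenue pulled forward:  ${extra_revenue / 1e6:.0f}M")
# For a fully contracted hyperscale cluster, the pulled-forward revenue
# swamps the premium; for uncertain utilisation, it may not.
```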

That calculus shifts again at the edge, where the alternative to a modular deployment is often not a traditional data centre at all. It is a converted telecom hut, a server closet in a factory office, or an improvised rack in a warehouse with a portable air conditioner aimed at it. The reliability, security, and energy efficiency gap between a purpose-built modular unit and those ad hoc arrangements is large enough that the modular premium effectively disappears. American Tower has made this argument explicitly in its investor materials, positioning its edge data centres as a replacement for the "server-room sprawl" that currently handles distributed compute across the telecom estate.

The grid question, however, does not disappear. A modular data centre, however cleverly packaged, still requires a grid connection, and the grid interconnection queue in the United States is now the single largest impediment to data centre deployment of any kind. Lawrence Berkeley National Laboratory reported in 2025 that the combined queue of generation and storage projects seeking interconnection to the US grid had surpassed 2,600 gigawatts, more than the total installed capacity of the entire US power system. Data centre projects are not the largest component of that queue, but they are among the most impatient, and they are increasingly concentrated in the regions where the queues are longest: Northern Virginia, Phoenix, Dallas-Fort Worth, and the Chicago suburbs. Modular buildouts can compress construction timelines, but they cannot compress interconnection studies, environmental reviews, or the physical delivery of a substation transformer that is on a 160-week backorder.

The workaround that has begun to emerge is the "behind-the-meter" edge deployment, where a modular data centre connects directly to an on-site generation source, a gas turbine, a fuel cell array, or a solar-plus-storage installation, and bypasses the transmission-level interconnection queue entirely. Bloom Energy, which reported strong data centre demand in its third-quarter 2025 earnings, has been shipping solid-oxide fuel cells to modular data centre sites in California and the Midwest. The arrangement is not carbon-free, but it is queue-free, and in a market where time is the scarcest resource, that trade-off is being made repeatedly. It is a trade-off that should concern anyone trying to model grid demand from public data alone, because load that never enters the interconnection queue never appears in the figures.
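
Sizing such a deployment is straightforward arithmetic. A sketch with assumed figures throughout; the block rating, overhead factor, and redundancy model are illustrative, not Bloom Energy product specifications:

```python
import math

# Illustrative behind-the-meter sizing for an edge module. The 300 kW
# block rating, 1.3 overhead factor, and N+1 redundancy are assumptions
# for the example, not Bloom Energy specifications.

it_load_kw = 2_000    # top of the edge range quoted earlier in the article
pue = 1.3             # assumed overhead for cooling and power distribution
block_kw = 300        # assumed fuel-cell block rating

total_kw = it_load_kw * pue
n_blocks = math.ceil(total_kw / block_kw)
with_redundancy = n_blocks + 1               # N+1: one spare block

print(f"site load: {total_kw:.0f} kW -> {n_blocks} blocks, "
      f"{with_redundancy} with N+1 redundancy")
```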

What the CDM deal actually changes

The CDM acquisition of R&D Specialties is not a mega-deal. The terms were not disclosed, and neither firm is a household name outside the data centre supply chain. But it signals a structural shift that matters for anyone tracking the geography of compute. Until recently, the modular data centre market was fragmented, with dozens of small manufacturers each producing a few dozen units per year for niche applications. Consolidation around power integration, the ability to deliver a module with its electrical backbone fully installed and tested, is the first step toward the kind of standardisation that would allow hyperscalers to order modular data halls the way they order servers: from a spec sheet, with a guaranteed delivery date, at scale. The CRN AI 100 list, for all its breadth, captured a market that is still in the early stages of that consolidation. The infrastructure companies on the list, HPE, Lenovo, NetApp, Pure Storage, and the rest, are increasingly designing their hardware with modular deployment environments in mind, but the physical envelope into which that hardware is installed remains a bespoke product in too many cases.

Standardisation, when it comes, will change the geography of data centre construction. Today, the siting decision for a new data centre campus is heavily influenced by the availability of local contractors who have built one before. That constraint favours established clusters and reinforces their dominance. If a data hall can be built in a factory in Virginia or Ohio and shipped anywhere a flatbed truck can reach, the siting decision becomes a function of power availability and fibre topology, not construction labour mobility. That shift would distribute data centre capacity more broadly across the grid, potentially easing some of the congestion in the marquee markets, even as it creates new demand in places that have never hosted a data centre before.

A county planner in Pottawattamie County, Iowa, home to a large hyperscale campus outside Council Bluffs, said that the county has already begun fielding enquiries from modular data centre operators looking to site units on industrial lots that were previously considered too small for traditional data centre construction. The shift from 200 MW campuses to 10 MW distributed deployments would constitute the most significant change in data centre topology since the industry moved from enterprise server rooms to colocation facilities in the early 2000s.

PrismML's 1-bit model is not going to drive that shift by itself. But it points toward a world in which inference workloads, the kind that need to be close to users and devices, can run on hardware that sips power rather than gulps it. That hardware still needs a home. It needs power, cooling, security, and a network connection, the same things a hyperscale campus needs, just in a smaller, more distributed package. The modular edge data centre is that home. It is a building type that did not exist at commercial scale a decade ago, that was treated as a curiosity five years ago, and that is now, in the spring of 2026, absorbing a meaningful share of the $1.37 trillion the industry is spending on AI infrastructure. Whether that share continues to grow depends on two things that have little to do with the cleverness of the engineering: how quickly the grid interconnection queue can be cleared, and whether the standardisation of modular designs proceeds fast enough to satisfy buyers who are, at this point, running out of patience with everything else. The next data point arrives in the second half of 2026, when CDM's first post-acquisition power-integrated modules are scheduled to ship from Chantilly. Watch the delivery dates.
