Why Sustainable Data Centers Need a Reset in the Age of AI

The rapid growth of AI is exposing fundamental limitations in how data centers are designed.

Power availability, water consumption, and infrastructure scale are no longer secondary considerations — they are defining constraints.

This article outlines why traditional approaches to “sustainable data centers” are no longer sufficient and what must change to support the next generation of compute infrastructure.

To understand where the industry is going, it is necessary to understand where it came from. Data centers have existed in various forms since the 1950s and 1960s. The 1970s and 1980s introduced dedicated enterprise data centers as technology adoption expanded. The 1990s marked the rise of colocation, allowing enterprises and smaller businesses to place infrastructure in shared facilities.

The technology running inside data centers has changed dramatically over the past seventy-five years, and power and cooling requirements have changed with it. What did not change was the assumption that supporting industries could scale power and cooling fast enough to meet demand.

The rise of GPU‑driven compute through AI has broken that assumption. In the 1950s, a mainframe computer room averaged 0.05–0.5 MW of IT load. By the 1980s, that figure rose to 0.5–2 MW. In the 1990s, enterprise data centers reached roughly 3 MW, while small colocation facilities averaged 0.5–1 MW. The 2000s brought another jump, with enterprise data centers approaching 10 MW and colocation facilities as high as 15 MW. Even then, these loads remained within the capacity of traditional support industries. Heavy industrial facilities during the same period often reached 40 MW.

By the 2020s, enterprise data centers commonly reached 30 MW, while large colocation campuses approached 150 MW. To put that scale in perspective, 150 MW of continuous power is roughly comparable to the residential electricity demand of a city the size of St. Louis, Missouri. At this point, data centers surpassed the scale of typical industrial loads. Power delivery for data centers moved into a new category entirely.

Data centers do not only consume power; they convert nearly all of it into heat. Because thermal output rises in near-direct proportion to power draw, rising power density directly increases the heat that must be removed. Heat management has therefore become a defining constraint. In the 1950s, a mainframe room could consume up to 10,000 gallons of water per day for cooling. By the 1990s, an enterprise data center could consume as much as 89,000 gallons per day. Water demand has continued to rise with scale.

Although technology has become more power‑ and heat‑efficient and cooling systems have improved, the underlying constraints remain. Many current designs still rely on scaling approaches that demand large amounts of power and water. That model no longer holds. Data centers are now critical infrastructure, and they must be designed accordingly.

Why Sustainability Means Something Different Today

The term “sustainable data center” has become increasingly ambiguous. It has been used to describe incremental efficiency improvements, renewable energy credits, and carbon accounting practices that exist entirely outside a facility’s physical footprint. Many of these approaches emerged when data centers were smaller, power was plentiful, and cooling challenges could be addressed through scale.

For decades, incremental efficiency gains were sufficient. Servers improved, cooling systems evolved, and utilization metrics trended in the right direction. As long as demand grew predictably, the industry could refine existing designs rather than reinvent them.

That assumption no longer holds.

AI did not simply increase demand; it changed the slope of the curve. Power density rose faster than efficiency gains could offset. Cooling shifted from air management to thermal engineering. Infrastructure designed for predictable enterprise workloads is now expected to support dense, continuous compute at industrial scale.

Efficiency alone is no longer sufficient. A system can be efficient while still depending on constrained power or water resources. At scale, those dependencies fail. Sustainability is no longer about doing the same things better; it is about reassessing whether long‑standing assumptions remain valid.

This distinction matters outside technical circles. A data center may meet traditional efficiency metrics while still placing strain on a stressed grid or water‑limited region. As a result, data centers are increasingly discussed alongside factories, power plants, and other forms of critical infrastructure.

Sustainable data center design is no longer a feature or a certification. It is a design discipline grounded in physical resource availability and long‑term reliability.

Why Metrics Lag Reality

The industry has long relied on a small set of metrics to measure progress. Power usage effectiveness (PUE) became the dominant shorthand for efficiency because it was simple to calculate and compare. During a period when reducing overhead power delivered meaningful gains, this proxy was useful.

Today, it is incomplete.

Metrics such as PUE describe how efficiently incoming power is converted into compute. They do not describe where that power comes from, how constrained it is, or what tradeoffs are required to deliver it reliably. A highly efficient facility can still strain a regional grid if its demand exceeds what that grid was designed to support.

The same limitation applies to water. A facility can achieve acceptable efficiency metrics while still consuming large volumes of evaporative water in regions facing water stress. Metrics focused on internal optimization measure performance inside the fence line while ignoring external impact.
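To make this concrete, the sketch below compares two hypothetical facilities with identical PUE (total facility energy divided by IT energy) but very different water profiles, using The Green Grid's water usage effectiveness (WUE) metric — liters of water consumed per kilowatt-hour of IT energy. All figures are illustrative assumptions, not measurements from any real facility.

```python
# Hypothetical comparison: same PUE, very different water footprints.
# PUE = total facility energy / IT energy (The Green Grid)
# WUE = site water use (liters) / IT energy (kWh)

def pue(total_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: overhead ratio of facility to IT energy."""
    return total_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Water usage effectiveness: liters consumed per kWh of IT energy."""
    return water_liters / it_kwh

IT_KWH = 100_000_000  # 100 GWh of annual IT load (illustrative)

# Facility A: evaporative cooling -- efficient on paper, water-intensive.
a_pue = pue(total_kwh=1.2 * IT_KWH, it_kwh=IT_KWH)
a_wue = wue(water_liters=1.8 * IT_KWH, it_kwh=IT_KWH)

# Facility B: dry cooling -- same PUE assumed here, near-zero water use.
b_pue = pue(total_kwh=1.2 * IT_KWH, it_kwh=IT_KWH)
b_wue = wue(water_liters=0.02 * IT_KWH, it_kwh=IT_KWH)

print(f"Facility A: PUE={a_pue:.2f}, WUE={a_wue:.2f} L/kWh")
print(f"Facility B: PUE={b_pue:.2f}, WUE={b_wue:.2f} L/kWh")
```

By the PUE lens alone the two facilities are indistinguishable; only a metric that looks past the fence line reveals that one depends on large volumes of evaporative water while the other does not.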

This gap exists because metrics tend to describe what already exists rather than challenge whether existing approaches still make sense. When data centers resembled large commercial buildings, these metrics aligned reasonably well with reality. As data centers evolved into industrial‑scale infrastructure, the metrics failed to keep pace.

AI accelerated this disconnect. Power density increased faster than metrics adapted. Cooling strategies changed. The consequences of design decisions began to surface at the grid and community level rather than solely inside the facility.

Metrics remain useful, but they are no longer sufficient. Sustainable design now requires evaluating how facilities interact with power systems, water resources, and surrounding communities over time. When scale changes, context changes.

Power as Infrastructure, Not a Utility Input

Power is no longer a static input. Availability is constrained, costs are rising, and long‑term reliability is increasingly uncertain. Traditional data center design treats the grid as a fixed variable. Modern design must treat power as an integral part of the system itself.

Data center designers must evaluate multiple power options rather than defaulting to the grid. The question is not whether grid power is available, but whether it is appropriate for the scale and reliability requirements of the facility.

Water Is the Hidden Constraint

Evaporative cooling remains a major driver of water consumption in legacy designs. Data centers and the power plants that supply them evaporate water daily at industrial scale. The relevant design question is no longer how much water is available, but how little evaporative water a system can functionally require.

Designs based on dry cooling and warm‑water loops enable near‑zero water consumption, greater geographic flexibility, and improved community alignment.

Water use is also increasingly viewed through a social and political lens. Even where water rights are secure, large industrial consumers face scrutiny over whether their use aligns with community priorities. Regulatory approval does not guarantee public acceptance.

As data centers grow in size and visibility, minimizing water use becomes as much a matter of trust as engineering. Reducing evaporative dependency forces a parallel shift in how heat is managed and valued.

Designing for Heat, Not Fighting It

Conventional designs treat heat as waste to be rejected. At modern scale, that approach leaves value on the table. Data centers must increasingly be designed to capture and utilize the heat they generate, reducing cooling energy requirements and enabling compatibility with advanced power systems.

The engineering challenges are well understood. What remains is the willingness to design around them.

Resilience, Security, and National Infrastructure Implications

Uptime expectations have always shaped data center design. Traditionally, grid outages were treated as rare events mitigated by batteries and generators sized for hours or days. As global power demand increases, interruptions may become more frequent or even planned.

In that environment, power rationing becomes a real risk. Facilities designed with integrated power strategies are far better insulated from this uncertainty. Sustainability, in this context, becomes a prerequisite for long‑term reliability.

What This Means for the Next Decade of Data Center Development

The industry has reached an inflection point. Future‑proof designs must integrate power, cooling, and compute from the outset. Minimizing dependency on evaporative water is a foundational principle, not an optimization.

These principles guide Island Roadhouse Data Centers in the design of our colocation facilities. Our objective is to provide long‑term power availability and reliability as part of the core engineering discipline, delivering infrastructure designed for the 2030s and beyond while creating durable value for host communities.

Closing: Sustainability as an Engineering Discipline

Designing sustainable data centers is a complex engineering problem, but not an unsolvable one. The challenge lies in balancing competing constraints that vary by region, grid, and community.

There is no universal solution. Power availability differs. Water availability differs. Community expectations differ. Sustainable design therefore becomes an exercise in tradeoffs rather than optimization against a single metric.

Uncertainty is now inherent. Long‑term power availability is less predictable. Water stress is more visible. Workloads continue to evolve in ways that challenge density assumptions. Designing under these conditions requires systems that remain viable as inputs change.

Sustainability is best understood as an engineering discipline rather than a checklist. It requires confronting scale, dependency, and long‑term impact directly. The outcome of that conversation will shape the infrastructure supporting the digital economy for decades to come.
