From Vision to Voltage: What Data Center Developers Need Utilities to Understand

In my last post, I explored the utility perspective on data center growth—why planning timelines are long, why reliability can’t be rushed, and how speculative load requests can distort grid investment. This month, we’re flipping the lens. What does this surge in demand look like from the developer’s side? What drives their urgency, and how do they navigate the complex ecosystem of siting, permitting, and procurement? By examining both viewpoints, we aim to foster better collaboration, clearer expectations, and smarter infrastructure decisions.

The Developer’s Urgency


Let’s start with the obvious: Data centers are not new. But as more applications move to the cloud, data centers now serve as the cloud itself, anchoring today’s digital infrastructure. They house the computing power behind everything from data processing to storage, and they do it faster, better, and cheaper than most on-premises alternatives. Recently, they have also become the new frontier of artificial intelligence and machine learning. This frontier isn’t just about innovation; it’s about competitive advantage. Major companies worldwide aren’t just building facilities; they’re building the future and, in doing so, trying to differentiate themselves, leverage AI to improve productivity, and deliver better products and services.

And the race is on. Thanks to tech giants like Amazon, Microsoft, Google, Oracle, Apple, Meta, and Nvidia, the U.S. currently leads in cloud and AI. But this is not only a global race; it’s a race against time, and there are no second-place winners. Countries like Saudi Arabia and China are aggressively courting data center investment, offering speed, incentives, and infrastructure. In the time it takes to energize a facility in the U.S., those countries offer developers the chance to launch, scale, and capture market share.

Data Center Classification

At its core, a data center is a facility that houses servers, storage systems, and networking equipment. But not all data centers are created equal:

  • Enterprise Data Centers: Built and operated by a single company for internal use.

  • Colocation Centers: Shared facilities where multiple businesses rent space.

  • Hyperscale Data Centers: Massive, cloud-driven operations supporting global platforms.

  • AI Data Centers[i]: Often called AI factories, these are purpose-built for machine-learning training and inference workloads; they are the new frontier. The recent explosion in energy demand, and the speed at which it is growing, is driven primarily by AI data centers.

  • Crypto mining data centers: These focus primarily on mining cryptocurrency. Since mining profitably has become more difficult, some are being repurposed as AI data centers, since they already have siting, permits, power, cooling, and other infrastructure in place.

AI data centers are unique. During training (learning mode), they consume enormous amounts of compute power, drawing hundreds to thousands of megawatt-hours. Once models are deployed (operating mode), consumption drops, and the flexibility to manage demand increases significantly. This variability is critical, and it’s often misunderstood by utilities accustomed to designing systems for peak consumption.

Data Center Development: Front-End Complexities That Utilities and Other Service Providers Don’t Always See

Before a single server is installed, developers are deep in analysis, navigating a maze of technical, regulatory, and logistical hurdles that extend far beyond electricity.

  • Land scouting: Is the site geologically stable? Is it ecologically acceptable (e.g., wetlands)? Does it require zoning changes? Does it have access to sufficient power and telecom?

  • Water sourcing: Is there sufficient access to water for cooling systems, which may require millions of gallons annually with open-loop cooling? Or will the site use closed-loop cooling, which can cut water use by roughly 80%?

  • Power procurement: Do we understand how much capacity is needed at the full site buildout? How will the site load grow from day 1 operations to full site buildout? How much can the power consumption ramp up or down, or is consumption fixed (step-function) throughout? Does the site need carbon-free power? What level of resiliency is required?

  • Permitting and compliance: What are the emissions limits, noise restrictions, and land use requirements — and how do they vary by jurisdiction?

  • Vendor coordination: How long are the lead times for critical components, from switchgear to servers to networking and telecommunication equipment, and how do they impact the buildout timeline?

  • Resiliency: Do power and telecom meet at least Tier III requirements (redundant distribution paths, so the facility stays operational during planned maintenance)? Can auxiliary generation be built at the site? What fuel is available, such as natural gas or diesel, and what are the limitations (e.g., noise, emissions)?

Electricity is just one piece, but it’s often the bottleneck. Building generation and the required transmission lines to deliver the energy can take several years and cost developers millions. That’s why developers talk to multiple utilities. It’s not about gaming the system; it’s about hedging risk in a high-stakes race. Developers are also exploring alternative power sources, such as fuel cells, battery storage, and small modular reactors (SMRs). Companies like Bloom Energy, NuScale, GE Vernova, Oklo, and others are rushing to supply data center energy demands.

Speed is the Currency

While costs are always a constraint, the biggest constraint for developers is time. Once construction starts, it can be less than 15 months to operation, and speed to market is a competitive advantage. If a hyperscale AI facility takes 36 months to energize in the U.S., but less than 18 months in India or China, that’s not just a delay. It’s a lost opportunity.

Yes, there’s hyperbole about U.S. competitiveness. But the risk is real. If utilities can’t meet energy-requirement timelines, or if local constraints hinder data center development, operators will go where power is available. That’s not a threat; it’s a business reality.

Speed isn’t simply a metric. It moves markets. And in this race, every month counts. This urgency is reshaping corporate priorities. CEOs are stepping aside to focus on AI strategy, delegating day-to-day operations to others. And this future runs on electricity — lots of it, and fast. For developers, the challenge isn’t just securing power, telecom, and water; it’s securing it on their timeline.

Scaling is a Strategic Solution

Here’s the part utilities need to hear: developers don’t always need gigawatts on day one. Many are willing to start with a few hundred megawatts and ramp up over time. That gives utilities time to plan, secure approvals, and build infrastructure. It gives communities time to engage, and developers time to prove demand.

The flexibility is especially important for AI data centers, which operate in distinct phases. During training, GPUs and AI chips consume enormous amounts of compute power, often pushing toward their thermal design limits. But during communication or inference phases, power usage drops dramatically. These swings aren’t technical quirks. They can ripple across the data center and the grid, risking instability or mechanical failure if not properly managed.

This is not the problem of one data center or another; it is industry-wide. To resolve this, developers are advocating for co-design — aligning software, hardware, and infrastructure to ensure AI systems remain scalable and power-aware. Techniques like staggered scheduling, asynchronous training, and overlapping compute and communication can help mitigate power spikes without compromising performance.
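The staggered-scheduling idea can be illustrated with a toy model: identical training jobs that alternate between a high-power compute phase and a low-power communication phase. All the numbers below (rack count, kW figures, cycle lengths) are hypothetical, chosen only to show how offsetting the phases shrinks the aggregate peak-to-trough swing that the grid sees; this is a sketch, not an implementation from the cited paper.

```python
# Hypothetical per-rack power profile: a square wave alternating between
# a compute phase (high draw) and a communication phase (low draw).
HIGH_KW = 1200   # draw during the compute phase (assumed figure)
LOW_KW = 400     # draw during the communication phase (assumed figure)
PERIOD = 10      # time steps per compute+communication cycle
DUTY = 6         # steps of each cycle spent in the compute phase
RACKS = 8
STEPS = 40

def rack_power(t, offset):
    """Square-wave power profile for one rack, shifted by `offset` steps."""
    return HIGH_KW if (t + offset) % PERIOD < DUTY else LOW_KW

def aggregate(offsets):
    """Total power per time step and the peak-to-trough swing."""
    totals = [sum(rack_power(t, o) for o in offsets) for t in range(STEPS)]
    return totals, max(totals) - min(totals)

# Synchronous: every rack enters the compute phase together,
# so the whole facility swings between 8*1200 and 8*400 kW.
_, sync_swing = aggregate([0] * RACKS)

# Staggered: start times are spread evenly across the cycle,
# so at any instant only some racks are in the compute phase.
_, stag_swing = aggregate([i * PERIOD // RACKS for i in range(RACKS)])

print(f"synchronous swing: {sync_swing} kW")
print(f"staggered swing: {stag_swing} kW")
```

With these assumed figures, synchronizing all eight racks produces a 6,400 kW swing, while staggering them cuts the swing by an order of magnitude without reducing total work done per cycle.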

As one recent paper notes, “Power swings visible at the rack, data center, and grid levels risk grid instability and mechanical failure.”[ii] The stakes are high, and solving them requires trust. If utilities treat every request as speculative or transactional, they risk missing real opportunities to grow demand. Developers aren’t asking for shortcuts; they’re asking for clarity, flexibility, and partnership.

From Pressure to Partnership

Utilities are also right to be cautious. Reliability matters. But the relationship with data center developers and operators needs to evolve from transactional to collaborative. Both sides face pressure to deliver, and both have valid concerns. But alignment starts with understanding.

Developers aren’t asking utilities to bend the rules of the grid. They’re asking for a handshake to build it together—one that acknowledges risk, manages supply chains, and adapts quickly. That kind of partnership requires trust, transparency, and shared commitment.

A Call to Action

To move from pressure to partnership, we offer these critical calls to action:

  • AI Framework and System Designers: Explore less synchronous, more power-aware training algorithms that reduce large-scale power swings without compromising convergence. Techniques like asynchronous training, staggered scheduling, and overlapping compute and communication can help stabilize grid impacts.

  • Utility Providers and Grid Operators: Share resonance and ramp specifications openly. Establish standardized communication pathways specifically with data center operators to ensure safe grid operation and avoid unplanned outages or equipment degradation.

  • Industry Collaboration: Support pre-competitive, open forums to establish interoperable standards for telemetry, load signaling, and sub-synchronous oscillation mitigation. One option is the Open Compute Project (OCP).[iii] No single customer, vendor, or hyperscaler can solve this alone. Coordination is essential if the U.S. is to maintain its leadership in AI and cloud infrastructure.

  • Co-coordinate with Regulators: Jointly engage with public sector bodies and regulators to use incentives, such as streamlined permitting in preferred development zones, to guide data center development.

  • Joint Forecasting and Timeline Alignment: Align buildout timelines, share demand forecasts, and co-design scalable solutions. The future of digital infrastructure depends on collaboration, not just electrons.

We’re at the dawn of the AI revolution, still only scratching the surface of the ‘art of the possible’. Consider the rapid evolution of AI chips: Nvidia’s GPU power consumption surged from 250W to 700W per chip in just two years. The Blackwell generation pushes power consumption even further, with the B200 consuming up to 1,200W and the GB200 expected to consume a staggering 2,700W.[iv]

The call to action is happening now. Strategic, methodical planning is essential to ensure we’re ready, not just for what’s next, but for what’s coming fast.  

This article, together with my previous post (Planning for Power: What Data Center Developers Need to Know), explores how utilities and data center developers/operators both face immense pressure to deliver, and both have valid concerns. But alignment starts with understanding. By unpacking the realities on each side, we hope to spark more productive conversations, accelerate responsible buildouts, and ensure that the future of digital infrastructure is powered by trust, not just transmission.


[i] Scott Guthrie (Executive Vice President, Cloud + AI, Microsoft), “Inside the world’s most powerful AI datacenter,” The Official Microsoft Blog, Sep 18, 2025.

[ii] Multiple authors, “Power Stabilization for AI Training Datacenters,” arXiv:2508.14318v2, Aug 2025. https://arxiv.org/abs/2508.14318v2

[iii] https://www.opencompute.org/

[iv] Beth Kindig, “AI Power Consumption: Rapidly Becoming Mission Critical,” Forbes.com, June 20, 2024. https://www.forbes.com/sites/bethkindig/2024/06/20/ai-power-consumption-rapidly-becoming-mission-critical/
