Hyperscaler data centre lease cancellations
Let’s get one thing straight: this isn’t the start of a collapse. It’s not a bubble bursting, and it’s definitely not a retreat. By Sebastien Bonneau, partner, McDermott Will & Schulte.
When a hyperscaler pulled the plug on roughly 200MW of data centre lease agreements across the US earlier this year, it wasn’t a panic move – it was a calculated adjustment. These weren’t random cancellations. They were part of a broader strategic reassessment, involving early-stage agreements that never matured into full leases. This hyperscaler also shifted a significant chunk of its international spend back to the US, signalling a realignment, not a rollback.
Another hyperscaler followed suit shortly after, pausing several international lease commitments. An analyst described it as digesting recent aggressive lease-ups – a phrase that perfectly captures what’s happening. These hyperscalers aren’t pulling back because the market is shrinking. They’re recalibrating because the market is evolving.
The more I think about it, the more convinced I am: the underlying growth of the internet is real. These lease cancellations are surface-level adjustments – symptoms of a healthy, dynamic market. Hyperscalers aren’t scaling back; they’re scaling smart. They’re making decisions based on real-time demand, not long-term speculation. In today’s digital infrastructure sector, there’s no such thing as a 10-year business plan. The pace is too fast, the variables too fluid, and all indicators point to a rising need for compute power.
So no, this isn’t the beginning of a bubble burst. It’s the beginning of a new phase – one where hyperscalers, innovators and governments work together to build the infrastructure of the future. If you’re wondering whether to be concerned, my suggestion is simple: don’t watch what they cancel. Watch what they build next.
The internet is exploding
If these cancellations were happening in a world where internet usage was declining, then yes, we’d have reason to worry. But that’s not the world we live in. Demand for digital services, cloud infrastructure and AI workloads is growing exponentially. Every day, more data are generated, more compute power is needed, and more infrastructure is required to support it. What we’re seeing isn’t a contraction – it’s a strategic pivot by companies that understand the stakes and are playing the long game.
This isn’t about supply chain issues, talent shortages, grid capacity constraints, or geopolitical tensions – though those factors do exist. At its core, this is about hyperscalers refining their infrastructure strategies to better align with evolving business priorities. One of them, for example, is still committed to spending US$80bn on AI infrastructure this fiscal year. Another, despite pausing some leases, continues to invest heavily in its own data centre builds, especially in northern Virginia. These companies aren’t scaling back – they’re scaling smart.
Understanding the data centre landscape
To make sense of these moves, it helps to understand the types of data centres in play. Hyperscale data centres are typically located near major urban hubs and serve as the backbone for the cloud services the hyperscalers themselves offer. Then there are edge data centres – smaller facilities located in city centres that provide ultra-low latency for applications like autonomous vehicles and real-time analytics. Finally, there are AI training and backup data centres, often situated in remote areas with access to cheaper, greener energy. These handle the massive compute workloads required for training large AI models and maintaining data redundancy.
Each type of data centre serves a distinct purpose, and hyperscalers are now choosing different paths based on their strategic goals. Some are doubling down on gigantic hyperscale builds, while others seem to be investing more in edge or AI-specific infrastructure. The key takeaway? These decisions are deliberate, not reactive.
The fundamentals are strong
Let’s talk numbers. Hyperscalers' cloud services account for roughly 60% of the data centre market. The remaining 40% is driven by enterprise customers – and that segment is growing too. The fundamentals are solid: user growth is up, geographic expansion is accelerating, and average spend per user continues to rise. Digital infrastructure is built on three pillars – data centres, connectivity and power. More internet usage means more compute. More compute means more infrastructure. It’s a simple equation.
The US still represents about half of the global data centre market. Current political support is strong. There’s also a strategic dimension to this. National security, intelligence and cybersecurity all depend on robust digital infrastructure. Geopolitical tensions and trade policies are influencing where hyperscalers invest, but again – this is reallocation, not retreat.
Then there’s the Stargate Initiative – a US$500bn collaboration aimed at building next-gen data centres for AI. This isn’t speculative hype. It’s a serious investment in the future of compute.
Innovation is accelerating
Beyond hyperscale, the industry is buzzing with innovation. Cooling technologies are evolving rapidly, with immersion and direct-to-chip cooling reshaping data centre design. The chip market is dynamic – Nvidia’s near-total dominance is being increasingly challenged by competitors and by hyperscalers building their own custom silicon. Quantum computing is on the horizon, and recent announcements suggest quantum workloads will require entirely new infrastructure paradigms.
Some experts predict that wafer-scale semiconductors will radically change the game. Traditional chips are made by slicing a silicon wafer into many small dies, each becoming an individual chip. Wafer-scale integration flips this model: instead of cutting the wafer, the entire wafer is used as a single, massive chip, hosting billions of transistors and thousands of cores.
Wafer-scale chips eliminate inter-chip bottlenecks, enabling massive parallelism for AI and data-intensive tasks. They offer lower latency, higher bandwidth and improved energy efficiency compared to traditional multi-chip systems. Fewer, more powerful units simplify data centre infrastructure, reducing complexity in cooling, networking and orchestration. These advances position wafer-scale semiconductors to reshape data centre design and performance standards. Data centres could shift from GPU-centric clusters to wafer-scale AI accelerators, especially for large language models and deep learning. This could lead to new standards in chip design, data centre layout and cloud service offerings.
But the switch to wafer-scale chips still faces serious challenges: manufacturing complexity (building and testing a full wafer as a single chip is extremely difficult), yield issues (a defect in any part of the wafer can compromise the entire unit), and software adaptation (existing frameworks must evolve to fully leverage wafer-scale architectures).
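To see why yield is such a stumbling block, a rough sketch helps. The snippet below applies a simple Poisson yield model – the probability of a region being defect-free falls off exponentially with its area – using a defect density and die sizes that are purely illustrative assumptions, not figures from any fab or vendor.

```python
import math

# Illustrative Poisson yield model: P(zero defects) = exp(-defect_density * area).
# The defect density and areas below are assumptions for illustration only,
# not figures from any real fab or vendor.
DEFECT_DENSITY = 0.001   # defects per mm^2 (assumed)

def zero_defect_yield(area_mm2: float) -> float:
    """Probability that a region of the given area contains no defects."""
    return math.exp(-DEFECT_DENSITY * area_mm2)

small_die_area = 800.0          # mm^2, roughly a large GPU-class die (assumed)
wafer_area = math.pi * 150**2   # mm^2, usable area of a 300mm wafer (simplified)

print(f"Single large die, defect-free yield: {zero_defect_yield(small_die_area):.1%}")
print(f"Entire wafer, defect-free yield:     {zero_defect_yield(wafer_area):.2%}")

# Expected defects across the whole wafer - the reason wafer-scale parts
# must design in redundancy rather than hope for a perfect wafer.
print(f"Expected defects per wafer: {DEFECT_DENSITY * wafer_area:.0f}")
```

At any realistic defect density a perfectly clean wafer is essentially impossible, which is why wafer-scale designs depend on built-in redundancy – spare cores and the ability to route around bad regions – rather than flawless silicon.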
AI chip valuations
No discussion of AI infrastructure is complete without addressing the elephant in the room: AI chip valuations. Nvidia briefly touched a US$5trn market cap, propelled by insatiable demand for chips powering an AI-driven world at scale. Yet institutions like the IMF and Bank of England are sounding alarms, warning of stretched valuations, circular financing and a mismatch between capital expenditure and monetisable demand. Some insiders argue that the economics of AI data centres are fundamentally flawed – hardware obsolescence cycles are shorter than investment horizons, and the revenue required to break even is staggering.
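To make the break-even point concrete, here is a rough, deliberately simplified sketch. Every figure in it – hardware cost, useful life, operating cost, utilisation and achievable price per GPU-hour – is an assumption chosen for illustration, not data from any operator.

```python
# Rough break-even sketch for an AI data centre investment.
# Every number below is an assumption for illustration, not operator data.

capex_per_gpu = 40_000        # US$, accelerator plus its share of the facility build (assumed)
useful_life_years = 4         # obsolescence cycle assumed shorter than typical lease terms
opex_per_gpu_year = 6_000     # US$, power, cooling, staff share (assumed)

utilisation = 0.60            # fraction of hours actually sold (assumed)
price_per_gpu_hour = 2.50     # US$, achievable market rate (assumed)

hours_per_year = 24 * 365

# Revenue the asset must earn each year just to return its cost over its life.
required_revenue = capex_per_gpu / useful_life_years + opex_per_gpu_year

# Revenue it actually earns at the assumed utilisation and price.
actual_revenue = utilisation * hours_per_year * price_per_gpu_hour

print(f"Required per GPU per year: ${required_revenue:,.0f}")
print(f"Earned per GPU per year:   ${actual_revenue:,.0f}")
print(f"Surplus (shortfall):       ${actual_revenue - required_revenue:,.0f}")
```

Shift any of those assumptions – a longer useful life, higher utilisation, a better price – and the picture changes quickly, which is precisely why the debate over AI data centre economics remains unresolved.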
Michael Burry, famed for The Big Short, recently disclosed large bearish bets against Nvidia and Palantir, followed days later by Peter Thiel’s hedge fund selling its entire US$100m Nvidia stake. SoftBank also liquidated a US$5.8bn position – but its move is telling. Rather than exiting AI entirely, SoftBank reallocated capital to the Stargate project, aimed at building world-scale data centres. This shift underscores the thesis of this article: while chip valuations face scrutiny, growth in AI infrastructure remains undeniable.
Still, the bubble question lingers. Nvidia CEO Jensen Huang insists this isn’t a dot-com-style frenzy, pointing to real demand, transformative technology, and hyperscalers with strong cashflows and balance sheets. The debate hinges on whether today’s spending translates into sustainable productivity gains or fizzles out due to weak adoption. For now, the AI chip boom is real.
On November 19, Nvidia announced stellar Q3 results – US$57bn in revenue, beating Wall Street expectations – reinforcing its dominance in AI compute. On the same day, Brookfield unveiled a US$100bn AI infrastructure programme, with Nvidia as an investor. Huang summed up the moment: “We’re entering a new era for AI. AI is the most powerful technology force of our time. To build AI factories, we need more than compute. We need land, power and capital all aligned from the start.”
Whether this is rational or speculative remains an open question. If a correction hits AI chips, it could slow growth across the semiconductor sector and force hyperscalers to rethink strategy. But it’s unlikely to derail cloud services or enterprise adoption – global demand for digital infrastructure isn’t going anywhere. That means more data centres, and the developers behind them will remain firmly on their feet. For now, the industry looks strong – at least until the next inflection point.