Is Compute the New Oil?
The geopolitics of chips, the physics of power grids, and who really controls the AI revolution.
A decade ago, GPUs were niche hardware for gamers and 3D artists—the kind of thing most people never thought about. Then crypto miners discovered they could use the same chips to print money. Then AI researchers discovered they could use them to train neural networks.
The silicon that powers ChatGPT is the same silicon that runs Call of Duty. That's not a coincidence—it's the whole story.
GPUs were designed to do one thing extremely fast: parallel computation (future letter on this one to come). Games needed it first. Crypto mining needed it next. AI training needs it now. What looked like hobbyist hardware turned out to be the foundation of the next industrial revolution, and control over it has become a matter of national security.
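As a tiny preview of that future letter: the workload GPUs excel at is doing the same simple operation across millions of data points simultaneously. Here's a toy Python sketch of that shape. It runs on a CPU with NumPy, so it's an illustration of the data-parallel pattern, not a GPU benchmark.

```python
import time
import numpy as np

# Toy illustration of data parallelism: the same multiply expressed
# one element at a time vs. as a single whole-array operation.
n = 5_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Serial: one multiply at a time, in pure Python.
t0 = time.perf_counter()
serial = [a[i] * b[i] for i in range(n)]
t_serial = time.perf_counter() - t0

# Data-parallel: the whole array in one vectorized call.
t0 = time.perf_counter()
vectorized = a * b
t_vec = time.perf_counter() - t0

print(f"serial: {t_serial:.2f}s   vectorized: {t_vec:.4f}s")
```

A GPU takes that second pattern to its extreme, spreading the work across tens of thousands of cores at once.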
Somewhere along the way, the graphics card became the most strategically important piece of hardware on Earth, and most people didn’t notice it happening…
LETTER
In January 2025, NVIDIA briefly lost $600 billion in market value in a single day—the largest one-day loss of value by any company in U.S. stock market history. The catalyst wasn’t a product failure or an accounting scandal. It was a Chinese AI startup called DeepSeek, which had built an advanced model using fewer chips than anyone thought possible. For a moment, the market glimpsed an alternate future where compute scarcity might not be permanent.
That future hasn’t arrived. If anything, the chokepoints defining the AI industry have only tightened. Understanding them requires following a supply chain that stretches from Taiwanese fabs to American power grids, through Dutch lithography machines and Korean memory factories. At every link, you’ll find the same story: unprecedented demand crashing into infrastructure that wasn’t built for this moment.
The NVIDIA Reality
The numbers are almost absurd. NVIDIA controls roughly 92% of the discrete GPU market and somewhere between 70% and 95% of AI accelerators, depending on how you count. Its data center business generated over $51 billion in a single quarter in late 2025—twelve dollars for every one dollar its gaming division produces. Morgan Stanley estimates the company will consume 77% of all wafers allocated to AI processors in 2025, up from 51% the year before.
This isn’t just market dominance. It’s infrastructural control. NVIDIA’s CUDA software ecosystem has become the de facto standard for AI development, creating switching costs that keep customers locked in even as competitors try to catch up. AMD’s ROCm alternative exists, but adoption remains limited. Google’s TPUs power impressive internal workloads, yet even Amazon’s own in-house chips (Trainium and Inferentia) registered at less than 3% of NVIDIA GPU usage within Amazon’s own cloud, according to internal AWS data from 2024.
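What that lock-in looks like at the code level, in a minimal sketch (standard PyTorch, nothing exotic): the default path in most AI codebases routes straight through NVIDIA's software stack.

```python
import torch

# The reflexive first line of most AI code: grab a CUDA device if one
# is present. Everything below then runs through NVIDIA's stack
# (CUDA kernels, cuBLAS, cuDNN).
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(64, 4096, device=device)

# On an NVIDIA GPU this matmul dispatches to cuBLAS. Moving it to
# another accelerator means a different backend build, then
# re-validating numerics and performance: that is the switching cost.
y = model(x)
print(y.shape, "on", device)
```

Multiply that dependency across years of kernels, profilers, and tutorials, and "just switch to AMD" stops being a realistic sentence.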
The practical result: when Jensen Huang announces that Blackwell chips are sold out through 2025, or that data center revenue could hit $1 trillion by 2028, the rest of the industry has to organize itself around NVIDIA’s production schedule. The company has effectively become a toll booth on the road to AI.
The Taiwan Problem
NVIDIA doesn’t manufacture its own chips. That happens in Taiwan, where TSMC produces over 90% of the world’s most advanced semiconductors from facilities less than 100 miles from mainland China. This geographic concentration has become one of the defining strategic anxieties of our time.
TSMC’s dominance isn’t an accident. The company invented the “pure-play” foundry model in 1987, allowing chip designers to outsource manufacturing without building multi-billion-dollar fabs themselves. Over almost four decades, it accumulated expertise, equipment, and process knowledge that competitors simply cannot replicate quickly. TSMC’s advanced CoWoS packaging technology, critical for integrating GPUs with high-bandwidth memory, remains a bottleneck even as the company expands capacity.
The geopolitics are uncomfortable. The CHIPS Act committed $52 billion to reshoring semiconductor manufacturing, and TSMC is now building six fabs in Arizona that could produce one-fifth of the world’s most advanced chips by 2030. But Arizona won’t be ready in time to matter if something happens in the Taiwan Strait tomorrow. TSMC’s chairman has warned that an invasion would render the company “not operable”—its facilities depend on connections to the outside world for raw materials, chemicals, and expertise that can’t be stockpiled.
China has responded to this vulnerability with a whole-of-nation effort to achieve semiconductor independence. The results so far have been mixed. SMIC, China’s leading foundry, has produced 7nm chips, but reportedly with poor yields and reliability issues. Huawei, cut off from advanced chips by U.S. export controls, will produce only about 200,000 AI chips in 2025—roughly 1-2% of estimated U.S. production. More than 22,000 Chinese semiconductor companies have shut down in the past five years.
Export controls have also created a massive smuggling economy. Federal prosecutors recently unsealed documents revealing a ring that attempted to export $160 million worth of NVIDIA GPUs between October 2024 and May 2025. Huawei reportedly used shell companies to trick TSMC into manufacturing 2 million chiplets before the scheme was discovered. In Shenzhen, vendors openly sell controlled chips in batches of “hundreds or thousands.” The enforcement challenge is immense: chips are small, valuable, and easily hidden in legitimate equipment.
The Memory Wall
Even if you can get the GPUs, you might not be able to get the memory to make them work. High-bandwidth memory (HBM)—the specialized chips that stack multiple DRAM semiconductors to feed data to AI accelerators—has become perhaps the tightest chokepoint in the entire supply chain.
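A back-of-envelope roofline calculation shows why bandwidth, not raw compute, is so often the binding constraint. The spec numbers below are illustrative assumptions, roughly in the range of a modern HBM-equipped accelerator, not vendor figures.

```python
# Roofline back-of-envelope with illustrative (assumed) spec numbers.
peak_flops = 1.0e15        # ~1 PFLOP/s of dense low-precision compute
hbm_bytes_per_s = 3.35e12  # ~3.35 TB/s of HBM bandwidth

# Arithmetic intensity (FLOPs per byte moved) needed to keep the
# compute units busy rather than stalled on memory:
break_even = peak_flops / hbm_bytes_per_s
print(f"break-even intensity: ~{break_even:.0f} FLOPs/byte")  # ~300

# Token-by-token LLM decoding reads every 16-bit weight (2 bytes)
# roughly once per token and does ~2 FLOPs with it (multiply + add):
decode_intensity = 2 / 2  # ~1 FLOP/byte, far below break-even
print(f"decode intensity: ~{decode_intensity:.0f} FLOP/byte")
```

At one FLOP per byte against a break-even of roughly three hundred, the chip spends most of its time waiting on memory. That's why HBM, not the GPU die itself, sets the pace.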
SK Hynix, Samsung, and Micron control this market. SK Hynix has told analysts that shortages may persist until late 2027, with all memory scheduled for 2026 production already sold out. OpenAI’s Stargate project alone could require up to 900,000 wafers monthly by 2029—roughly double today’s entire global monthly HBM output. DRAM supplier inventories have fallen from 13-17 weeks in late 2024 to just 2-4 weeks by October 2025.
The consequences ripple outward. Samsung has raised some server memory prices by 30% to 60%. Consumer GPU prices are climbing as manufacturers prioritize the more profitable AI market. NVIDIA reportedly plans to cut GeForce RTX 50 series production by 30-40% in the first half of 2026. Micron announced it would discontinue its consumer Crucial brand entirely to focus on enterprise and AI markets. The era of steadily declining price-per-performance in consumer graphics hardware may be over.
The Power Problem
In mid-2024, the prevailing sentiment in the industry was that the GPU shortage was easing, but that “the future bottleneck will be power supply.” That call turned out to be spot on.
U.S. data centers consumed 183 terawatt-hours of electricity in 2024—more than 4% of total national consumption, roughly equivalent to Pakistan’s entire electricity demand. By 2030, that figure is projected to grow by 133% to 426 TWh. A typical AI-focused hyperscale data center now consumes as much electricity annually as 100,000 households. The larger facilities currently under construction will use twenty times that.
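Those headline numbers are easy to sanity-check. A quick back-of-envelope, where the only input not from the figures above is an assumed household consumption of roughly 10,000 kWh per year, a commonly cited U.S. ballpark:

```python
# Sanity-checking the projections above.
us_dc_2024_twh = 183                 # 2024 U.S. data center demand (TWh)
projected_2030 = us_dc_2024_twh * (1 + 133 / 100)
print(f"2030 projection: ~{projected_2030:.0f} TWh")  # ~426 TWh, matches

# One hyperscale facility vs. households, assuming ~10,000 kWh per
# U.S. household per year (an assumption, not a figure from the text):
household_kwh = 10_000
facility_twh = 100_000 * household_kwh / 1e9          # kWh -> TWh
print(f"100,000-household facility: ~{facility_twh:.0f} TWh/year")
```

That's about one terawatt-hour per facility per year; the twenty-times-larger sites under construction would each draw around 20 TWh annually, more than a tenth of what the entire U.S. data center fleet consumed in 2024.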
The geographic concentration is striking. Virginia’s “Data Center Alley” consumed 26% of the state’s total electricity in 2023. North Dakota, Nebraska, Iowa, and Oregon each see data centers consuming 11-15% of their power supply. PJM Interconnection, the largest U.S. grid operator, serving 65 million people across 13 states, projects it will be six gigawatts short of reliability requirements by 2027. As one market observer put it: “It’s at a crisis stage right now. PJM has never been this short.”
This isn’t just an engineering problem. It’s becoming a political crisis. In the PJM region, data centers accounted for an estimated $9.3 billion price increase in the 2025-26 capacity market. Average residential bills in western Maryland have risen $18 per month; in Ohio, $16. Carnegie Mellon researchers estimate data centers could drive an 8% increase in the average U.S. electricity bill by 2030, potentially exceeding 25% in the highest-demand markets. In Ohio, one couple traced their 60% electricity price increase directly to the 130 data centers that have sprouted around Columbus.
State legislatures are taking notice. Lawmakers considered 238 data-center-related bills across the states in 2025, half of them addressing energy concerns. Local opposition is mounting: residents in Saline, Michigan rallied against Stargate’s planned $7 billion data center on farmland, and cities like Austin have begun studying whether their grids can handle projected demand at all.
Who Wins, Who Loses
The constraints are real, but they don’t affect everyone equally. If you’re Google, Amazon, or Microsoft, you’re signing power purchase agreements for geothermal and nuclear, negotiating directly with utilities, and developing custom chips as insurance against NVIDIA’s pricing power. If you’re a startup or a mid-sized enterprise, you’re waiting in queue, paying premium cloud rates, and wondering if your AI ambitions will survive contact with the supply chain.
The hyperscalers’ advantages compound. They can commit to massive, long-term orders that get priority allocation during shortages. They can negotiate power contracts that smaller players can’t access. They can vertically integrate, building their own chips and even exploring their own nuclear reactors. The gap between those who have compute and those who don’t is widening, not shrinking.
This has implications for the shape of the AI industry. We may be moving toward a world where frontier AI development is simply too capital-intensive for anyone but a handful of giants.
The optimistic counterargument—that efficiency improvements like those demonstrated by DeepSeek will democratize access—remains unproven at scale. For now, the big keep getting bigger.
The Road Ahead
None of these bottlenecks will resolve quickly. New fabs take years to build. Power plants take longer. The HBM shortage may persist until 2027 or beyond. Export controls create unpredictable supply disruptions.
What we’re witnessing is something genuinely new: a technology revolution constrained not primarily by ideas or software but by the physical world—by atoms, not bits. The AI industry has discovered that moving electrons through silicon and copper, keeping chips cool, and generating terawatt-hours of electricity are problems that don’t yield to Moore’s Law. They yield to construction crews, mining operations, utility regulators, and geopolitical agreements.
The comparison to oil isn’t perfect, but it illuminates something important. Oil created concentrated strategic resources that reshaped global politics for a century. Compute is doing the same. The countries, companies, and regions that control the chokepoints—the fabs, the memory plants, the power capacity, the raw materials—will shape what AI becomes and who gets to use it. Everyone else will be waiting in line.
God-Willing, see you at the next letter.
GRACE & PEACE