The Political Economy of Foundation Models: Compute, Chips, and Power
Control over AI compute infrastructure is reshaping global power dynamics. With NVIDIA dominating 80-90% of the AI chip market and nations pursuing technological sovereignty, the concentration of GPUs, data centers, and foundation models is creating new barriers for startups and enterprises while fragmenting the global AI ecosystem along geopolitical lines.
11/3/2025 · 3 min read


In the span of just three years, artificial intelligence has evolved from a promising technology into the defining strategic asset of our era. But beneath the headlines about ChatGPT and autonomous vehicles lies a more fundamental power shift: control over the infrastructure that makes AI possible. The political economy of foundation models—built on compute, chips, and data—is reshaping economic hierarchies and redrawing geopolitical boundaries in ways that will define the next decade.
The concentration of power in AI infrastructure is unprecedented. NVIDIA commands approximately 80-90% of the AI accelerator market (PatentPC, Ainvest), a dominance reinforced by its proprietary CUDA software platform, which creates substantial switching costs for developers. This isn't simply market leadership; it's a chokepoint that determines who can build, train, and deploy competitive AI systems. Even training smaller foundation models, on the order of ten billion parameters and considered modest by 2025 standards, remains economically unviable for many applications (Neptune.ai).
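To make that cost claim concrete, a rough back-of-envelope calculation helps. The Python sketch below uses the common 6ND training-FLOPs heuristic; the token count, per-GPU throughput, utilization, and hourly price are illustrative assumptions for this article, not vendor figures.

```python
# Back-of-envelope estimate of the raw compute cost to train a ~10B-parameter model.
# Every default value below is an illustrative assumption, not a quoted figure.

def training_cost_usd(
    params: float = 10e9,             # model size: 10 billion parameters (assumed)
    tokens: float = 200e9,            # training tokens, ~20 tokens per parameter (assumed)
    flops_per_gpu: float = 312e12,    # peak throughput of a high-end datacenter GPU (assumed)
    utilization: float = 0.4,         # realistic fraction of peak actually sustained (assumed)
    price_per_gpu_hour: float = 2.0,  # cloud rental price per GPU-hour in USD (assumed)
) -> float:
    total_flops = 6 * params * tokens                      # 6*N*D training-FLOPs heuristic
    gpu_seconds = total_flops / (flops_per_gpu * utilization)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * price_per_gpu_hour

if __name__ == "__main__":
    print(f"Estimated training cost: ${training_cost_usd():,.0f}")
```

Under these assumptions the single training run alone lands in the tens of thousands of dollars, and that figure excludes experimentation, failed runs, data acquisition, and engineering time, which add substantially to the real bill.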
For hyperscalers like Microsoft, Google, and Amazon, this concentration has prompted a defensive pivot: developing custom AI chips to reduce dependence on external vendors. Yet even as they invest billions in proprietary hardware, they continue purchasing NVIDIA's latest offerings, acknowledging the company's technological lead remains substantial.
The geopolitical dimension has become equally dramatic. The US Department of Commerce has imposed performance limits on AI GPUs, leading NVIDIA to develop "China-compliant" versions with intentionally reduced capabilities (FinancialContent). China has responded with approximately $47.5 billion invested in domestic semiconductor development through initiatives like "Big Fund 3.0," pursuing self-sufficiency particularly in 7-22nm logic chip production (FinancialContent). What was once a globally integrated supply chain driven by efficiency has fractured into competing technological ecosystems.
The implications cascade through every layer of the AI stack. Geopolitical challenges are disrupting traditional channel partnerships as reshoring, friendshoring, and nearshoring gain momentum (Deloitte Insights). South Korea produces nearly 75% of the world's DRAM memory chips, while China controls roughly 90% of the processing of rare earth metals essential for AI hardware (Deloitte). Any disruption in these concentrated supply chains could trigger global consequences.
For startups and mid-sized enterprises, the landscape has become particularly treacherous. Rising component costs, fragmented supply chains, and intensified competition for scarce advanced GPUs create barriers to entry that favor established players with deep pockets. In India, despite $20 billion in AI commitments by 2025, a "compute gap" is holding back early-stage AI startups, with hundreds of innovators struggling to access infrastructure and afford computing resources (Analytics India Magazine).
The response has been the emergence of "sovereign AI": national strategies to develop independent AI capabilities. Canada has launched a $2 billion Sovereign AI Compute Strategy to provide compute access for researchers, startups, and innovators (Nexgencloud). The US CHIPS Act allocates $52.7 billion in subsidies, while the EU Chips Act mobilizes over €43 billion (FinancialContent). These aren't merely industrial policies; they represent a fundamental reorientation in which technological sovereignty rivals traditional security concerns.
Yet sovereignty comes with trade-offs. Building domestic capabilities requires duplicated infrastructure, specialized talent that remains in short supply globally, and years of development time. Constructing new fabrication plants takes 3-5 years and requires intricate coordination across vast supplier ecosystems (FinancialContent). The efficiency gains of global specialization are being sacrificed for strategic resilience.
For enterprises navigating this landscape, the strategic implications are profound. Reliance on a single infrastructure provider creates vulnerability to supply disruptions, pricing power, and geopolitical restrictions. Yet diversification is costly and technically complex. Organizations face three sovereignty postures: cloud-native only (startups all-in on hyperscalers), cloud-first but sovereignty-aware (enterprises wanting agility with future portability), and sovereignty-first (defense and regulated industries requiring full control) (SiliconANGLE).
The concentration extends to foundation models themselves. Training state-of-the-art models requires thousands of advanced GPUs, expertise in distributed systems, and substantial capital—resources available to fewer than a dozen organizations globally. This creates a power asymmetry where most AI applications depend on foundation models controlled by a handful of companies, primarily based in the United States.
The competitive dynamics are evolving rapidly. While NVIDIA's dominance appears secure in training workloads, the inference market, which deploys AI models for real-world tasks, is seeing intensified competition from AMD, specialized startups, and custom chips from cloud providers. The question isn't whether NVIDIA will maintain its near-monopoly indefinitely, but whether the broader concentration of AI infrastructure remains entrenched.
Looking ahead, we're witnessing the formation of distinct technological blocs. Eighteen new fabrication facilities are slated to begin construction in 2025, with the AI hardware market projected to reach approximately $500 billion by 2034 (FinancialContent). The era of seamless global technology integration is giving way to regional ecosystems with different standards, capabilities, and restrictions.
For startups and enterprises seeking independence, the path forward requires strategic foresight: evaluating hybrid architectures that balance cloud convenience with portability, investing in expertise that spans multiple platforms, and designing systems that can adapt to shifting geopolitical realities. The future belongs not to those who achieve complete independence—an increasingly unrealistic goal—but to those who maintain optionality in an increasingly fragmented landscape.
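In engineering terms, "maintaining optionality" often means coding against a thin, provider-agnostic interface so the underlying compute vendor can be swapped later. The Python sketch below is a minimal illustration of that idea; the backend names and classes are hypothetical placeholders, not real SDKs.

```python
# A minimal sketch of portability: application code depends only on a narrow interface,
# so the underlying inference provider can change without rewriting callers.
# All backend names here are hypothetical examples.

from typing import Protocol


class InferenceBackend(Protocol):
    def generate(self, prompt: str) -> str: ...


class HostedAPIBackend:
    """Placeholder for a hyperscaler-hosted model endpoint (hypothetical)."""
    def generate(self, prompt: str) -> str:
        return f"[hosted] response to: {prompt}"


class OnPremBackend:
    """Placeholder for a sovereignty-first, self-hosted deployment (hypothetical)."""
    def generate(self, prompt: str) -> str:
        return f"[on-prem] response to: {prompt}"


def answer(backend: InferenceBackend, prompt: str) -> str:
    # Callers never reference a specific vendor, only the interface.
    return backend.generate(prompt)


if __name__ == "__main__":
    for backend in (HostedAPIBackend(), OnPremBackend()):
        print(answer(backend, "Summarize our export-control exposure."))
```

The design point is that moving a workload from a hosted endpoint to a sovereign, on-premises deployment becomes a configuration change rather than a rewrite, which is what optionality looks like in practice.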
The political economy of foundation models ultimately poses a fundamental question: In an age where AI capabilities determine economic competitiveness and national security, who controls the infrastructure that makes intelligence possible? The answer will shape innovation, economic opportunity, and geopolitical power for decades to come.

