An illustration showing satellites, computer chips and the earth as a representation of artificial intelligence. (U.S. Army illustration)

Artificial intelligence is the 21st century’s most vital technology. The nation that sets the standards reaps both economic rents and strategic leverage. That is why America’s goal cannot be merely to out-innovate China at home; we must ensure the entire U.S. AI stack becomes the global default.

Some argue that the Trump administration’s recent decision to approve sales of NVIDIA’s H20 accelerator to Chinese customers “gives away the store.” Yet that chip was specifically defined, reviewed and approved under U.S. export-control rules designed to protect national security. Those calling compliance “evasion” rewrite history and undermine the credibility of a bipartisan control regime.

A core claim of those who oppose selling the H20 to China is that China's domestic ecosystem can produce only about 200,000 accelerators annually, but open-source data contradict this. Huawei's own shipping records and channel checks suggest the company will likely ship more than 1 million Ascend NPUs in 2025, and Bernstein projects 670,000 next-generation Ascend units (910B and 910C) the same year. If the United States withholds compliant products, domestic substitutes will fill the gap, at the expense of U.S. influence.

Control of the software frameworks, libraries, compilers and cloud services that make AI useful is more durable than watt-for-watt hardware supremacy. Each H20 shipped carries CUDA, TensorRT, NeMo and a decade of scaling-law insight that competitors struggle to match. Every Chinese model trained or served on an American stack raises switching costs and embeds U.S. intellectual property at the center of the global AI supply chain. That network-effect leverage is analogous to the dollar's role in global finance.

Denying access, by contrast, incentivizes China to build parallel ecosystems. History warns us: restrictions on advanced CPUs helped spur China's Loongson project, and cutting Huawei off from Google's Android services accelerated the growth of its HarmonyOS. An enforced vacuum will be filled, just not by us.

Export-control thresholds remain indispensable for cutting off the most advanced compute that could accelerate military applications. But once a chip is ruled non-sensitive, distribution confers oversight benefits: we learn where and how it is deployed, and firmware updates can enforce future safeguards. Selling H20s under an export license is therefore more secure than watching clandestine channels move banned hardware, as illustrated by recent black-market flows of restricted GPUs.

Some voices pleading for tighter bans are, candidly, more interested in protecting domestic margins than in protecting the nation. That is self-defeating. The U.S. wins by scaling production, driving down cost curves, and allowing free-market competition, fueled by American IP, to set the pace of global AI adoption.

The scaling laws of AI indicate that larger models, combined with better data and more compute, unlock nonlinear gains. The fastest path to that virtuous cycle is global demand running on U.S. technology. Accepting lawful, rules-based commerce in chips, such as the H20, positions the American stack — hardware, frameworks and cloud APIs — as the inevitable standard. That secures supply chain visibility, royalty revenue and geopolitical leverage for decades to come.

Winning the AI race requires an export offensive, not a fortress mentality. By diffusing compliant U.S. technology worldwide, we ensure that when the next generation of models arrives, the world will need our chips, our software and our talent. That is the clearest path to both prosperity and security.

Mitesh Agrawal is CEO of Positron AI, an AI hardware startup developing systems for AI inference. He recently transitioned from his role as chief operating officer of Lambda, a cloud infrastructure company with a multibillion-dollar valuation focused on AI and Nvidia GPU-powered services. At Positron, he leads efforts to develop energy-efficient hardware optimized for transformer-model inference, critical for applications like ChatGPT. Previously, at Lambda, he played a pivotal role in shifting the company's focus to cloud infrastructure and data centers, significantly scaling its revenue.
