
Soldiers assigned to the Artificial Intelligence Integration Center and other units conduct drone test flights and software troubleshooting during a training exercise near Hohenfels, Germany, on March 6, 2024. The Army is establishing a new career pathway for officers to specialize in artificial intelligence and machine learning. (Micah Wilson/U.S. Air Force)

About the author: Patrick McSpadden is a retired U.S. Air Force lieutenant colonel and former intelligence officer with more than 21 years of service, including multiple operational deployments. He writes on defense policy, military technology and national security. The views expressed are his own and do not reflect the official position of the Department of Defense or the U.S. government.

The Pentagon’s recent friction with Anthropic over using its artificial intelligence model inside classified systems is not just another contract story. It points to a bigger issue the military has not fully worked through yet.

For years we have heard that AI will define the next era of warfare. Leaders warned that if the United States does not move fast, China will. The assumption was simple. Once the tools were ready, we would plug them into secure networks and move forward.

Now we are learning that plugging them in is the easy part.

The harder question is who controls them once they are inside.

Anthropic’s model entered classified Defense Department systems through an industry partnership. Soon after, disagreements emerged about how it could be used. Corporate safety policies met military mission requirements. Each side saw the issue differently.

That tension was predictable.

Technology companies answer to shareholders and public scrutiny. The U.S. military answers to civilian leadership and operates under lawful authority to defend the country. When AI moves from commercial environments into classified warfighting networks, those realities collide.

This is not about writing emails faster or summarizing reports. Inside classified systems, these tools can sift through intelligence feeds, surface patterns, and suggest courses of action. Used well, they can help commanders decide faster and with more context.

Used poorly, they create confusion about responsibility.

If an AI assessment shapes a targeting discussion or influences an operational plan, who owns that call? If the analysis is wrong, who answers for it? The operator? The commander? The company that built the model?

And if a provider limits certain uses, even when those uses are lawful under U.S. policy, does that narrow military options?

Those are not theoretical questions. They go to the core of military authority.

There is also a practical reality that complicates this. The most advanced AI models are built in the commercial sector. The Defense Department does not own that ecosystem. It relies on partnerships to bring capability into classified environments, from model developers like Anthropic to integration platforms like Palantir that help make these tools usable across defense systems.

That dependence creates leverage on both sides. The Pentagon wants mission freedom. Companies want guardrails and control over how their technology is used. When those priorities clash, friction follows.

Adversaries are not pausing while we debate this in public. They are integrating AI into military systems quickly and with fewer visible constraints.

The United States cannot afford paralysis. But moving fast without clear authority is not leadership. It is risk.

What concerns me most is not the contract dispute. It is what this means for young service members.

It will not be senior officials debating policy who use these systems every day. It will be captains running operations floors, staff sergeants in intelligence units, lieutenants briefing commanders in the middle of the night. They will be the ones looking at AI outputs and deciding what matters.

They need clarity.

Young airmen, soldiers, sailors and Marines are comfortable with technology. They grew up with it. But comfort is not the same as understanding a tool's limits. AI systems sound confident. They produce answers quickly. That can create a false sense of certainty.

Military culture is built on judgment and accountability. AI has neither. It generates outputs from patterns in data; it bears no responsibility for them. If we do not train the force to treat these systems as tools rather than authorities, we risk dulling the judgment we rely on in crisis.

The Pentagon needs to do more than sign contracts. It needs clear rules for how AI fits into classified warfighting systems: who has final say, how decisions are documented, how errors are reviewed, and how operators are trained to question what they see.

Industry relationships must mature as well. Companies working with national security customers need to understand the seriousness of that commitment. Mission-critical partnerships cannot remain conditional once technology is embedded inside classified systems.

If the concern is that contracts are hard to unwind, this may be the moment to reassess alignment. Contractors that cannot operate inside national security realities should not assume permanence. Replacement is not disruption. It is accountability.

Young service members are being told AI will shape the future of war. At the same time, they are watching government and tech companies argue over who controls it. That disconnect creates doubt.

The technology will keep improving. That is certain.

The real question is whether our institutions improve with it.

If we are serious about integrating AI into classified systems, we need to settle authority now. The chain of command cannot become blurry because the software is impressive.

The next generation of operators will inherit these tools. They should not inherit confusion.

When the stakes are high, someone must be clearly in charge.

That answer cannot be left to an algorithm.
