A blue-lit computer keyboard. (Chief Photographer/MoD Crown)

As a 32-year veteran of the U.S. Air Force, where I commanded a B-1 bomber squadron and a nuclear mission wing, I have dedicated my career to the study of nuclear weapons and their profound gravity.

I survived the 9/11 attack on the Pentagon and subsequently ran for the U.S. Senate to champion a robust and principled national defense based on peace through strength. Today, “peace through strength” means treading cautiously with artificial intelligence. We must safeguard our nation without succumbing to reckless technological overreach. Integrating AI into America’s nuclear command and control (C2) decision-making process at the presidential level is exactly that: reckless technological overreach.

Delegating such existential authority to machines is not a matter of innovation; rather, it constitutes a perilous gamble that could culminate in catastrophic consequences.

Let us consider the fundamental principles of nuclear C2. The president of the United States holds exclusive authority to authorize the use of nuclear weapons, a system meticulously crafted to ensure civilian control of the military. The “nuclear football,” the briefcase containing the president’s nuclear decision-making tools, embodies the pinnacle of human accountability. It transcends mere button-pushing; it entails careful evaluation of intelligence, assessment of potential threats and moral discernment in the face of crisis.

I have witnessed firsthand how human intuition, accumulated experience and ethical principles mitigate escalation. The Cuban missile crisis is a poignant example: President John F. Kennedy’s restraint, guided by advisers and his own discernment, averted nuclear war. AI cannot replicate such nuanced decision-making. It is devoid of emotional resonance, empathy and the profound sense of responsibility that keeps leaders grounded.

Advocates of AI integration in nuclear C2 argue that it can enhance data processing, predict enemy actions and mitigate human error. However, the recent evolution of large language models shows an alarming propensity for these systems to act in ways their designers neither predicted nor intended. AI is not merely a tool; it behaves unpredictably. Hallucinations, in which AI fabricates information, are prevalent. Furthermore, biases inherent in training data could distort threat assessments, overemphasizing certain adversaries on the basis of flawed datasets.

Consider the scenario of an AI system misinterpreting satellite imagery or cyber signals and recommending a preemptive strike when none is warranted. In nuclear scenarios, where time is critical but verification is paramount, such errors could risk millions of lives.

Technology should serve human oversight, not supplant it. In the Air Force, we emphasized rigorous procedures to ensure that no single point of failure could precipitate global catastrophe.

However, AI introduces novel vulnerabilities. Hacking poses a genuine threat: adversaries such as China or Russia could exploit flaws in AI models or manipulate their inputs to induce erroneous outputs. We have already witnessed AI systems deceived by adversarial attacks, subtle alterations to input data that trick models into perceiving threats that do not exist. In nuclear C2, this could produce a false-positive launch recommendation, escalating a conventional conflict into mutually assured destruction. Many AI components also depend on foreign-made hardware that could harbor malware.
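To make the adversarial-attack risk concrete, here is a minimal sketch, built around an invented toy classifier rather than any real defense system, of how small, targeted changes to sensor readings can flip a model’s verdict from benign to “threat”:

```python
# Toy illustration only: an invented linear "threat classifier,"
# not any real command-and-control system. Score > 0 means "threat."
import numpy as np

w = np.array([0.9, -0.7, 0.5, 0.8, -0.6])  # assumed learned weights
b = -2.0                                   # decision threshold offset

x = np.array([0.2, 0.1, -0.1, 0.3, 0.2])   # benign sensor reading
print("clean score:", round(float(w @ x + b), 2))  # -1.82 -> no threat

# Adversarial nudge: shift each reading slightly in whichever
# direction raises the score (the sign of the matching weight).
eps = 0.6
x_adv = x + eps * np.sign(w)
print("attacked score:", round(float(w @ x_adv + b), 2))  # 0.28 -> "threat"
```

Each reading moves by only 0.6 units, yet the verdict flips. Documented adversarial attacks on real deep networks exploit the same principle with far subtler perturbations.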

From an ethical perspective, delegating life-and-death decisions to algorithms is abhorrent. Nuclear weapons are not merely tools of war; they are instruments of mass annihilation. The president must bear the moral weight of their use — accountable to God, the American people and world history.

During my service, I commanded crews who understood the human cost of their missions. We trained to follow orders, but always with the knowledge that ultimate responsibility rested with elected leaders.

History provides stark warnings. In 1983, Soviet officer Stanislav Petrov disregarded a faulty early-warning system reporting U.S. missile launches, averting nuclear war through human skepticism. An AI might have followed protocol blindly, leading to our demise.

Today, with war raging in Ukraine and the Middle East, and tensions mounting in the Taiwan Strait, we cannot afford experiments. I have cautioned against administrations that push us toward nuclear brinkmanship through incompetence or partisanship.

Integrating AI would compound that risk, particularly as Big Tech lobbies for lax regulation, as evidenced by recent attempts to preempt state-level AI oversight in defense bills. We must oppose such moratoriums: the states, as laboratories of democracy, need the freedom to protect their citizens from AI misuse. Why cede more authority to unaccountable tech giants?

Critics may argue that AI could enhance deterrence by making responses quicker and more precise. However, deterrence relies on credibility, not speed. Our adversaries know that a human leader weighs consequences carefully; an opaque AI invites miscalculation. Furthermore, AI’s “black box” nature, in which decisions cannot be fully explained, undermines trust in the system. How can Congress or the public hold leaders accountable for outcomes that stem from incomprehensible code?

Rather than rushing AI into nuclear C2, we should invest in human-centered enhancements: improved intelligence fusion, robust cyber defenses and specialized training for decision-makers. Keep the people who comprehend the horrors of war at the center of these decisions.

America must reject AI in presidential nuclear decision-making. This is not fear of progress; it is a matter of preserving humanity in our most solemn responsibilities.

Policymakers, heed the counsel of this veteran and ensure that the nuclear football remains in human hands. Our survival depends on it.

Rob Maness, a retired U.S. Air Force colonel, is the founder and owner of Iron Liberty Group.
