SINGAPORE: The world’s most destructive weapons are hurtling toward a dangerous convergence with the most unpredictable force of the 21st century: artificial intelligence.

While the spectre of a nuclear war sparked by AI may sound like science fiction, experts warn it is a terrifyingly plausible future — and one we are fast approaching.

Sundeep Waslekar, President of the Strategic Foresight Group, raised the alarm in a recent essay for the Asian Peace Programme at the National University of Singapore (NUS). His message is unambiguous: artificial intelligence is quietly but rapidly entrenching itself in nuclear command systems, and the global community is sleepwalking into a scenario where machines could one day decide the fate of humanity.

Despite repeated assurances from nuclear-armed states that humans remain “in the loop,” Waslekar notes that AI is already being used in threat detection and target selection. Sophisticated algorithms process vast streams of sensor, satellite, and radar data and make predictive recommendations in real time, all within a reaction window of minutes, not hours. With response times projected to shrink from today’s 10–15 minutes to just five to seven minutes by 2030, the window for human judgment is growing dangerously thin.
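To see why a shrinking window matters, consider a deliberately toy calculation. None of the timings below describe any real command system; they are illustrative assumptions only. The point is structural: once automated detection and mandatory launch procedures claim their share of the window, little or nothing is left for human deliberation.

```python
# Toy arithmetic only: why a shrinking reaction window squeezes out human
# judgment. Every timing here is a hypothetical assumption, not a
# description of any real nuclear command system.

DETECTION_MINUTES = 1.0     # assumed time for sensors/algorithms to flag a launch
PROCEDURE_MINUTES = 3.0     # assumed time for authentication and launch steps
HUMAN_REVIEW_MINUTES = 8.0  # assumed time for officers to cross-check an alert

def minutes_left_for_humans(window: float) -> float:
    """Time remaining for human deliberation after automated steps."""
    return window - DETECTION_MINUTES - PROCEDURE_MINUTES

# Today's 10-15 minute windows vs. the 5-7 minute windows projected for 2030
for window in (15, 10, 7, 5):
    left = minutes_left_for_humans(window)
    verdict = "review fits" if left >= HUMAN_REVIEW_MINUTES else "no room for human review"
    print(f"{window:>2}-minute window -> {left:.0f} min for humans ({verdict})")
```

Even under these generous assumptions, the 2030-era windows leave less time than a careful human cross-check plausibly requires, which is precisely the pressure toward automation that Waslekar warns about.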

The deeper problem is AI’s fallibility. Even at cutting-edge precision, image-recognition systems still operate with a 1–5% margin of error, a rate that is unacceptable when the cost of a mistake is global annihilation. “Even a 1 per cent error margin,” Waslekar warns, “could initiate a global nuclear war.”
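Simple probability shows how quickly a “small” per-alert error rate compounds. The arithmetic below is a back-of-envelope illustration, not data from any real early-warning system: the error rates echo the 1–5% figure above, while the alert counts are hypothetical.

```python
# Back-of-envelope illustration: a small per-alert error rate compounds over
# repeated alerts. Error rates echo the 1-5% figure cited in the essay;
# the alert counts are hypothetical.

def prob_at_least_one_error(error_rate: float, n_alerts: int) -> float:
    """P(at least one misclassification) across n independent alerts."""
    return 1 - (1 - error_rate) ** n_alerts

for rate in (0.01, 0.05):
    for n_alerts in (10, 100, 365):
        p = prob_at_least_one_error(rate, n_alerts)
        print(f"{rate:.0%} error rate, {n_alerts:>3} alerts -> "
              f"{p:.0%} chance of at least one false reading")
```

At a 1 per cent error rate, the chance of at least one false reading passes 60 per cent after 100 alerts; at 5 per cent, it does so within about 20. A system that is “almost always right” is still, over time, almost certain to be wrong at least once.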

History has already offered near-misses that underscore the risk. In 1983, a Soviet early-warning system falsely reported a U.S. missile launch. It was the judgment of duty officer Stanislav Petrov, not a machine, that averted disaster. Similarly, in 1995, a Norwegian research rocket was mistaken for a U.S. missile, prompting Russia to activate its nuclear briefcase. President Boris Yeltsin ultimately hesitated and avoided catastrophe. Had AI systems been given decision-making authority, these close calls might have ended differently.

But the threat doesn’t stop at miscalculation. The next generation of malware, at once adaptive, autonomous, and agentic, could worm its way past defences, identify targets, and compromise systems without human awareness. The possibility of AI-enhanced hypersonic missiles, capable of making lethal decisions mid-flight, adds yet another layer of instability to an already volatile geopolitical environment.

A more distant — yet entirely plausible — nightmare is the emergence of artificial general intelligence (AGI), systems that surpass human cognition and operate beyond human control. If such an entity were ever integrated into nuclear command structures, the chain of command could be effectively severed from human oversight.

Despite these mounting risks, meaningful multilateral dialogue remains elusive. A long-standing geopolitical gridlock — particularly involving the U.S., Russia, and China — has stymied progress on any treaty addressing AI-nuclear convergence. However, Waslekar points to a rare diplomatic opening. A joint statement on AI safety by Presidents Biden and Xi in late 2023, and recent outreach by President Trump to President Putin, could create a window for trilateral engagement.

Neutral actors like Turkey and Saudi Arabia may hold the key to unlocking broader consensus. But the need for political will, vision, and statesmanship has never been more urgent.

“The time to act is now,” Waslekar cautions. “Humans have perhaps five to ten years before algorithms and plutonium could reduce us to skeletons and skulls.”

Yet the silence among global powers is deafening. While the world remains captivated by the promise of AI in medicine, education, and climate science, its darker frontier — the algorithmic automation of mass destruction — remains dangerously under-studied.

What is needed is not just talk, but enforceable frameworks: international AI safety protocols for military systems, robust crisis communication channels, and a formal prohibition on fully autonomous weapons of mass destruction.

As Waslekar makes clear, if we fail to regulate the machines today, they may regulate the fate of humanity tomorrow. The nuclear age was built on deterrence. The AI age demands something more: restraint.