South Asia’s Nuclear Dilemma in the Age of the Intelligent Bomb
Whatever the intent behind a given technology in nuclear systems, it is the perception of that capability that will drive the responses of other states.
The Soviet Union began conceptualizing “Dead Hand,” an automated system for guaranteed retaliation, formally called Perimeter, back in 1979. Perimeter is a classic example of a human-on-the-loop system, where a human operator can intervene to shut down the system, but the trigger and launch process is otherwise automated.
Perimeter lies dormant in peacetime while continuing to collect and analyze information. As Russia Beyond put it: “If [Perimeter] detects, for example, multiple point sources of powerful ionizing and electromagnetic radiation, it compares them with data on seismic disturbances in the same locations, and makes a decision whether or not there was a massive nuclear strike.” As the war in Ukraine persists, there are apprehensions that Dead Hand may once again be in play.
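To make that description concrete, the sketch below is a purely illustrative toy in Python: it correlates hypothetical radiation readings with seismic disturbances at the same sites and flags a possible massive strike for a human operator to review. The data model, thresholds, and decision rule are assumptions invented for illustration and bear no relation to Perimeter's actual, classified design.

```python
# Purely illustrative sketch of the sensor-fusion logic described above.
# All fields, thresholds, and rules are invented; this is not Perimeter.
from dataclasses import dataclass

@dataclass
class SiteReading:
    radiation_level: float    # normalized ionizing/EM radiation reading, 0-1
    seismic_magnitude: float  # seismic disturbance recorded at the same site

def infer_massive_strike(readings, radiation_threshold=0.9,
                         seismic_threshold=4.0, min_corroborated_sites=3):
    """Flag a 'massive strike' only when several point sources of intense
    radiation are corroborated by seismic disturbances at the same sites."""
    corroborated = [
        r for r in readings
        if r.radiation_level >= radiation_threshold
        and r.seismic_magnitude >= seismic_threshold
    ]
    return len(corroborated) >= min_corroborated_sites

# In a human-on-the-loop design, this inference would be surfaced to an
# operator, who retains the ability to halt any automated response.
readings = [SiteReading(0.95, 5.1), SiteReading(0.97, 4.8)]
print(infer_massive_strike(readings))  # False: only two corroborated sites
```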
Perimeter is but one instance of the quest for the “perfect” nuclear deterrent. There has been a series of technological interventions, from the first test of a thermonuclear bomb in 1952 to the MIRVed Minuteman III ballistic missile first tested in 1968, each promising to make a nuclear force more lethal, more effective, and more secure.
Military interest in artificial intelligence (AI) and machine learning (ML), while not new, has taken on new dimensions in the decades since. First, AI research today is a rapidly evolving field, making massive strides in speed and efficiency each year and leaving past AI winters behind. For instance, in 2018, researchers from Sony trained a ResNet-50 model on ImageNet to 75 percent accuracy in around four minutes. By February 2019, researchers from China’s SenseTime had leveraged network optimization techniques to get ImageNet training down to a minute and a half.
Second, the incorporation of these technologies into nuclear systems is part of a broader trend driven by rising global geopolitical tensions, particularly between the United States and China, but with imprints in other nuclear rivalries as well.
There are several potential applications of AI/ML being explored, and assessments of their impact on deterrence stability oscillate between “new toys, same old game” and “AI in nuclear systems will lead to the end of the world.” Applications range from predictive maintenance to AI-enabled intelligent decision support systems to autonomous delivery, targeting, and swarming. In reality, the impact of these technologies on deterrence stability will likely be more nuanced: The way AI/ML is integrated into nuclear systems will be driven primarily by how states view the utility of their nuclear deterrent, which in turn can warp and evolve over time alongside their geopolitical context.
Nuclear Instability in South Asia?
While adoption of advanced AI in the Indian military is still five to 10 years away, the cascading effect of nuclear “intelligentization” led by China has triggered a gradual but unmistakable shift within the Indian strategic community. India’s then-Chief of the Army Staff General Bipin Rawat said in January 2019, “Our adversary on the northern border is spending huge amounts of money on artificial intelligence and cyber warfare. We cannot be left behind.”
In July 2022, the Ministry of Defense launched more than 70 AI-enabled applications at the inaugural AI in Defense Symposium. These spanned autonomous systems; command and control; intelligence, surveillance, and reconnaissance (ISR); logistics and supply chain management; simulators; and natural language processing. The same year, the government established the Defense AI Council and the Defense AI Project Agency to facilitate AI adoption in the armed forces.
In South Asia, strategic stability often defies explanation. Many of the dilemmas that existed in 1998 have compounded over the past two decades: India’s fear of encirclement by China; the role of performative politics in aggravating crises; and the unpredictable interplay of deterrence among the region’s three nuclear powers, India, Pakistan, and China. This three-legged dynamic raises the question: Would emerging technologies further strain the already-fractious relationship between the three nuclear powers?
The Promise Is the Peril
There are two schools of thought on nuclear stability in South Asia. One position holds that the absence of full-scale war despite potentially escalatory events, such as the Kargil War in 1999 and the 2001 attack on India’s Parliament, is evidence of the “stability-instability paradox”: an absence of high-intensity conflict, but a greater probability of low-intensity conflict. Others counter that nuclear weapons have led to instability at the broader strategic level as well. Furthermore, an inextricable component of this dynamic is China, Pakistan’s “all-weather friend” and India’s other geostrategic adversary, whose own nuclear force posture is directed toward the United States.
There is some promise in AI-enabled nuclear systems, particularly when it comes to nuclear safety. In Pakistan’s case, theft and nuclear terrorism are frequently cited concerns. Predictive maintenance, an Internet of Things (IoT)-enabled technology that can assess and predict when a machine will break down, could theoretically aid in maintaining a safe nuclear arsenal through preemptive fixes.
China, India, and Pakistan all keep the land and air legs of their nuclear triad in a disassembled state, with missiles stored separately from warheads across multiple secret facilities. While there is next to no publicly available information on the exact number, location, and operating procedures of these facilities, there would be a practical application for a network of sensors that continuously monitors nuclear components and alerts operators when components, including trigger mechanisms, need to be replaced.
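As a minimal sketch of what such continuous monitoring could look like in software, the example below checks hypothetical wear readings against invented thresholds, predicts when a component will cross its limit, and alerts operators within a chosen horizon. The component names, readings, and limits are all assumptions for illustration, not descriptions of any real arsenal.

```python
# Hypothetical sketch of predictive-maintenance monitoring: periodic sensor
# readings for stored components are checked against simple wear thresholds,
# and operators are alerted before a predicted failure. All component names,
# readings, and limits are invented for illustration.
from datetime import date, timedelta

WEAR_LIMITS = {"trigger_mechanism": 0.80, "guidance_unit": 0.70}

def predict_replacement(component, wear_now, wear_rate_per_day):
    """Estimate when a component crosses its wear limit, assuming a linear
    degradation trend fitted elsewhere (e.g., by a regression model)."""
    limit = WEAR_LIMITS[component]
    if wear_now >= limit:
        return date.today()  # already due
    days_left = (limit - wear_now) / wear_rate_per_day
    return date.today() + timedelta(days=int(days_left))

def check_and_alert(readings, horizon_days=90):
    """Return alerts for components predicted to need replacement soon."""
    alerts = []
    for component, (wear_now, wear_rate) in readings.items():
        due = predict_replacement(component, wear_now, wear_rate)
        if (due - date.today()).days <= horizon_days:
            alerts.append(f"{component}: predicted replacement by {due}")
    return alerts

print(check_and_alert({"trigger_mechanism": (0.72, 0.001),
                       "guidance_unit": (0.30, 0.0005)}))
```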
Another safety-oriented application would be AI-enabled degraded modes of operation, a “reduced level of service invoked by equipment outage or malfunction.” In other words, should a maintenance system or an internal AI unit detect equipment failure or unauthorized use, it would issue a warning and trigger a fail-safe.
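One way to picture a degraded mode in software is as a small state machine: a detected fault drops the system to a reduced level of service, unauthorized use locks it down entirely, and operators are warned at each transition. The states and events below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of a degraded mode of operation: on a fault or
# unauthorized action, the system warns operators and falls back to a
# reduced level of service. States and events are invented.
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    DEGRADED = auto()   # reduced level of service after an equipment outage
    LOCKED = auto()     # fail-safe: all sensitive functions disabled

def next_mode(current, event):
    """Transition rules: faults degrade service; unauthorized use locks it."""
    if event == "unauthorized_use":
        return Mode.LOCKED
    if event == "equipment_fault" and current is Mode.NORMAL:
        return Mode.DEGRADED
    if event == "fault_cleared" and current is Mode.DEGRADED:
        return Mode.NORMAL
    return current

mode = Mode.NORMAL
for event in ["equipment_fault", "fault_cleared", "unauthorized_use"]:
    mode = next_mode(mode, event)
    print(event, "->", mode.name)   # operators are warned on each transition
```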
A second potential application is AI as an adviser. An intelligent decision support system (IDSS) is an AI-enabled “model-based set of procedures for processing data and judgments to assist decision makers to solve semi-structured and unstructured decision tasks.” AI-augmented military decision-making has already been researched in China as a way to enhance speed and efficiency on the battlefield. India’s Centre for Artificial Intelligence and Robotics (CAIR) has also developed the Command Information and Decision Support System (CIDSS), which “facilitates storage, retrieval, processing (filtering, correlation, fusion) and visualization of tactical data and provides effective decision support to the commanders.”
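As a purely illustrative example of what “decision support” means at the software level, the sketch below fuses a few hypothetical tactical indicators into a weighted score and ranks courses of action for a human commander to review. The indicators, weights, and options are invented; this is not a representation of CIDSS or any fielded system.

```python
# Illustrative-only sketch of a decision support step: fuse hypothetical
# tactical indicators into a weighted score and rank options for a human
# decision maker. Weights, indicators, and options are all invented.
def score_option(indicators, weights):
    """Weighted sum of normalized indicators (all values in [0, 1])."""
    return sum(weights[k] * indicators[k] for k in weights)

def rank_options(options, weights):
    """Return options sorted by score, highest first, for human review."""
    return sorted(options.items(),
                  key=lambda kv: score_option(kv[1], weights),
                  reverse=True)

weights = {"expected_effect": 0.5, "confidence_in_intel": 0.3, "risk": -0.2}
options = {
    "hold_and_observe": {"expected_effect": 0.2, "confidence_in_intel": 0.9, "risk": 0.1},
    "reposition_assets": {"expected_effect": 0.6, "confidence_in_intel": 0.6, "risk": 0.4},
}
for name, indicators in rank_options(options, weights):
    print(name, round(score_option(indicators, weights), 2))
```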
There are limitations to such a system in the nuclear realm. “AI,” says one analyst, “shines in problems where the goals are understood, but the means are not. It is easy to connect one of your actions to an observable consequence in the outside world, so you can easily figure out what did and did not work.” Nuclear deterrence is psychological, and its success is unverifiable. In South Asia in particular, there is no unified vision shared by all sides of what constitutes strategic stability. Finally, deterrence is an interplay of not just military but also diplomatic and economic elements, as well as audience costs (i.e., how citizens respond to conflict). This was evident in the Pulwama crisis and Balakot strikes, where Twitter played a central role in shaping narratives and the government response.
A second barrier is perception: a nuclear-armed adversary may read the employment of intelligent decision support as lowering the threshold for nuclear use. Furthermore, the comprehensive ISR capabilities needed for intelligent decision support may themselves be perceived as a threat, since they could expose a country’s nuclear triad to counterforce strikes.
Conclusion
It is difficult to predict how emerging technologies in nuclear systems will interact with an ever-changing political and geostrategic context. The one fact we can project into the future is that India, Pakistan, Israel, and even North Korea will each view the applications of AI/ML in the nuclear domain very differently, and almost certainly in ways that diverge from how Russia, the United States, and China deploy these technologies.
Another predictable dynamic is that whatever intent a state ascribes to a certain technology in its nuclear systems, it is the perception of that capability that will drive the responses of other states. India’s BrahMos supersonic cruise missile is an example: It is not, explicitly, a nuclear-tipped missile, but it is usually categorized as “nuclear-capable” and is a frequent centerpiece of arguments by India’s neighbor for enhancing nuclear deterrence.
The Authors
Trisha Ray is deputy director of the Centre for Security, Strategy and Technology at the Observer Research Foundation. Her research focuses on geotech, the security implications of emerging technologies, AI governance and norms, and cybersecurity. Trisha is also a member of UNESCO’s Information Accessibility Working Group and a 2022 International Strategy Forum Asia Fellow with Schmidt Futures.