
As traditional nuclear arms control treaties collapse—with the INF Treaty dead and New START hanging by a thread—AI-powered satellite monitoring systems are being positioned as a technological alternative to diplomatic verification. While algorithms can process satellite imagery orders of magnitude faster than human analysts, the question remains: Can code truly replace the trust-building mechanisms that kept nuclear powers from annihilation for decades? The answer, according to nuclear policy experts, is both promising and deeply unsettling.

The Collapse of the Old Order

The post-Cold War arms control architecture is crumbling before our eyes. The Intermediate-Range Nuclear Forces (INF) Treaty, signed by Reagan and Gorbachev in 1987, was formally terminated in August 2019 when the United States withdrew, citing Russian violations with the SSC-8 cruise missile [1]. Russia followed suit, and in August 2025 announced that it would no longer observe its self-imposed moratorium on deploying missiles of the ranges the treaty once banned [2].

The New START treaty, which caps deployed strategic warheads and delivery systems and provides for extensive on-site inspections, faces its own existential threats. Although extended through February 2026, its future remains uncertain amid deteriorating U.S.-Russia relations: inspections have been paused since 2020, first because of COVID-19 and then because of Russia's 2023 suspension of its participation in the treaty [3].

Even the Open Skies Treaty, which allowed nations to conduct unarmed reconnaissance flights over each other's territory, collapsed in 2020 when the United States withdrew, followed by Russia in 2021 [4]. What remains is a patchwork of bilateral agreements and the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which despite being adopted by the UN General Assembly in 1996, has never entered into force due to non-ratification by key states including the United States, China, India, Pakistan, and North Korea [5].

The Verification Vacuum

The loss of these treaties creates what arms control experts call a “verification vacuum”—a dangerous blind spot where nations can develop and deploy nuclear capabilities without oversight. Traditional verification relied on a combination of National Technical Means (NTM)—satellite imagery, signals intelligence, and seismic monitoring—plus on-site inspections and data exchanges [6].

Without treaty-mandated verification mechanisms, the risk of miscalculation increases dramatically. When nations cannot verify each other’s compliance, they tend to assume the worst and build accordingly. This security dilemma was a primary driver of the nuclear arms race during the Cold War.

Enter the Machines: AI-Powered Verification

As diplomatic mechanisms falter, artificial intelligence is stepping into the breach. Machine learning algorithms, particularly convolutional neural networks (CNNs) and computer vision systems, can now analyze satellite imagery with remarkable speed and accuracy—detecting changes, identifying objects, and flagging anomalies that might indicate treaty violations [7].

How AI Verification Works

Modern AI verification systems operate through several complementary approaches:

Automated Change Detection: Machine learning models can compare satellite images taken at different times, automatically identifying new construction, equipment movements, or infrastructure changes at suspected nuclear facilities. These systems can process terabytes of imagery in hours—work that would take human analysts weeks or months [8].
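At its simplest, change detection compares co-registered images pixel by pixel and flags large brightness shifts. The sketch below illustrates the idea on tiny hand-made grayscale grids; the values, threshold, and scene are invented for illustration, and production systems use far more sophisticated learned models.

```python
# Minimal sketch of pixel-level change detection between two co-registered
# satellite images, represented as small grayscale grids (values 0-255).
# The threshold and toy scene are illustrative assumptions, not a real pipeline.

def changed_pixels(before, after, threshold=40):
    """Return (row, col) coordinates where brightness shifted by more than threshold."""
    flags = []
    for r, (row_b, row_a) in enumerate(zip(before, after)):
        for c, (b, a) in enumerate(zip(row_b, row_a)):
            if abs(a - b) > threshold:
                flags.append((r, c))
    return flags

# Toy scene: a new bright structure appears in the lower-right corner.
before = [
    [20, 22, 21, 19],
    [21, 20, 22, 20],
    [19, 21, 20, 21],
]
after = [
    [20, 23, 21, 19],    # minor sensor noise: below threshold, ignored
    [21, 20, 22, 180],   # new construction
    [19, 21, 200, 210],  # new construction
]

print(changed_pixels(before, after))  # [(1, 3), (2, 2), (2, 3)]
```

A real pipeline would first orthorectify and align the images, correct for season and sun angle, and use a learned model rather than a fixed threshold; the flagged regions would then go to a human analyst.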

Object Recognition: Computer vision algorithms trained on thousands of images can identify specific nuclear-related equipment: missile launchers, enrichment centrifuges, reactor components, and transport vehicles. The ImageNet Large Scale Visual Recognition Challenge demonstrated that CNNs can now match or exceed human performance on object classification tasks [9].
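The recognition task reduces to scoring how well a known signature matches each location in an image. As a deliberately simplified stand-in for a trained CNN, the sketch below slides a small template over an image grid and scores positions by sum of absolute differences (SAD); the "launcher" template and image are invented for illustration.

```python
# Toy stand-in for learned object recognition: slide a small template over an
# image grid and score each position by sum of absolute differences (SAD).
# Real systems use trained CNNs; this only illustrates the matching idea.

def best_match(image, template):
    """Return (score, row, col) of the best-matching position; lower score is better."""
    th, tw = len(template), len(template[0])
    best = None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            score = sum(
                abs(image[r + i][c + j] - template[i][j])
                for i in range(th) for j in range(tw)
            )
            if best is None or score < best[0]:
                best = (score, r, c)
    return best

# Hypothetical equipment signature, embedded in the scene at row 1, col 2.
template = [[9, 9],
            [9, 1]]
image = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 1, 0],
    [0, 0, 0, 0, 0],
]

print(best_match(image, template))  # (0, 1, 2): exact match at row 1, col 2
```

A CNN replaces the fixed template with thousands of learned filters that tolerate rotation, lighting, and partial occlusion, but the core operation—convolving a pattern across the image—is the same.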

Multi-Source Fusion: Advanced systems combine satellite imagery with signals intelligence (SIGINT), geospatial data, and open-source information to create comprehensive monitoring networks. Geospatial Intelligence (GEOINT)—defined as intelligence about human activity on Earth derived from imagery and geospatial information—has become a critical discipline for nuclear verification [10].
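One common way to fuse independent cues is naive-Bayes updating: start from a prior probability and multiply in a likelihood ratio per source. The prior and ratios below are invented for illustration; real systems must also model correlations between sources, which this sketch ignores.

```python
# Sketch of naive-Bayes fusion of independent detection cues (imagery change,
# SIGINT uptick, open-source shipping records). All numbers are invented.

def fuse(prior, likelihood_ratios):
    """Combine a prior probability with per-source likelihood ratios
    (P(evidence | activity) / P(evidence | no activity)) in odds form."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.05             # assumed base rate of suspicious activity at this site
cues = [4.0, 2.5, 1.5]   # imagery change, SIGINT uptick, shipping records

posterior = fuse(prior, cues)
print(round(posterior, 3))  # 0.441
```

Note how three individually weak cues lift a 5% prior to roughly 44%—still short of certainty, which is why fused scores trigger human review rather than automatic conclusions.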

Hyperspectral Analysis: AI can analyze data from hyperspectral imaging satellites that capture information across hundreds of wavelength bands, detecting materials and activities invisible to conventional cameras. This capability can identify uranium enrichment activities, plutonium production, and other nuclear processes through their spectral signatures [11].
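A classic technique for matching spectral signatures is the spectral angle mapper (SAM): treat each pixel's band values as a vector and measure its angle to a reference material signature. The four-band spectra below are made up for illustration—real hyperspectral pixels span hundreds of bands—but the property the sketch shows is genuine: because the angle ignores vector length, SAM is insensitive to overall illumination.

```python
import math

# Sketch of the spectral angle mapper (SAM). Spectra are invented; real
# hyperspectral data has hundreds of bands per pixel.

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference signature."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = (math.sqrt(sum(p * p for p in pixel))
            * math.sqrt(sum(r * r for r in reference)))
    # Clamp guards against floating-point rounding pushing the ratio past 1.
    return math.acos(max(-1.0, min(1.0, dot / norm)))

reference  = [0.10, 0.35, 0.60, 0.20]  # hypothetical target material signature
candidate  = [0.21, 0.72, 1.19, 0.41]  # same spectral shape, brighter illumination
background = [0.50, 0.10, 0.05, 0.60]  # unrelated material

# The candidate matches the reference far better than the background does.
print(spectral_angle(candidate, reference) < spectral_angle(background, reference))  # True
```

Smaller angles mean closer material matches; a monitoring system would threshold the angle per pixel and flag clusters of matches for analyst review.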

The Promise: Faster, Cheaper, Always-On

The advantages of AI verification are compelling. Unlike human analysts who require rest and can only examine a limited number of images, AI systems operate 24/7, processing vast datasets without fatigue (though, as audits of commercial vision systems have shown, not necessarily without bias [15]).

Speed and Scale

A single commercial satellite like Maxar’s WorldView-3 can capture images covering 680,000 square kilometers per day [12]. With thousands of satellites now in orbit—over 7,000 as of 2024, up from fewer than 1,000 a decade ago—the volume of imagery has exploded beyond human analytical capacity [13]. AI systems can process this flood of data, flagging suspicious activities for human review.

Cost Reduction

Traditional verification through on-site inspections requires expensive diplomatic coordination, travel, and personnel. AI monitoring, once established, operates at low marginal cost. For nations with limited resources, algorithmic verification offers a more affordable alternative to maintaining large teams of analysts and inspectors.

Reduced Tensions

Proponents argue that transparent, AI-powered monitoring could reduce tensions by providing objective data about compliance. When both sides can independently verify the other’s activities through satellite data, the need for intrusive inspections—and the tensions they generate—diminishes.

The Peril: What Algorithms Can’t See

Despite the technological optimism, AI verification faces fundamental limitations that make many experts deeply skeptical about its ability to replace traditional treaty mechanisms.

The Black Box Problem

Machine learning models, particularly deep neural networks, operate as “black boxes”—making decisions through internal processes that even their creators cannot fully explain [14]. In the high-stakes world of nuclear verification, this opacity is problematic. When an AI flags a facility as suspicious, decision-makers need to understand why. Is it identifying genuine violations, or is it detecting benign activities that happen to resemble violations?

Deterrence vs. Detection

Perhaps the most profound limitation is conceptual. Traditional arms control treaties don’t just verify compliance—they create mutual vulnerability that deters cheating in the first place. The knowledge that inspectors could arrive at any time, that on-site measurements would catch violations, creates powerful incentives to comply.

AI verification, by contrast, is purely detection-based. It can identify violations after they occur, but it cannot deter them in the same way. As one analyst noted: “You can’t sanction a neural network. You can’t negotiate with an algorithm.”

Adversarial AI

The same machine learning techniques used for verification can be used for deception. Adversarial AI systems can generate synthetic imagery that fools detection algorithms, create camouflage patterns that evade object recognition, or predict when satellites will be overhead to schedule suspicious activities during coverage gaps [16].
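The mechanics of an evasion attack are easiest to see against a linear detector. The sketch below is an FGSM-style attack: nudge each input feature a small amount against the sign of its weight, the direction that most lowers the detection score. The weights and features are invented; real attacks target deep networks, but the principle is identical.

```python
# Minimal illustration of an adversarial perturbation against a linear
# detector (FGSM-style). Weights, bias, and features are invented.

def score(w, x, b):
    """Linear detection score; positive means 'detected'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perturb(w, x, eps):
    """Shift each feature by eps against the sign of its weight,
    the direction that most reduces the detection score."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.8, -0.4, 0.6]   # detector weights
b = -1.0
x = [1.2, 0.3, 1.1]    # features extracted from a genuine target image

print(score(w, x, b) > 0)                    # True  -> detected
x_adv = perturb(w, x, eps=0.4)
print(score(w, x_adv, b) > 0)                # False -> evades detection
```

The perturbation here is small relative to the features—analogous to camouflage changes that look insignificant to a human—yet it flips the detector's decision, which is exactly why verification models must be hardened against adversarial inputs.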

The Attribution Problem

Even when AI detects suspicious activity, determining responsibility remains a political and analytical challenge. Is a detected missile launcher a treaty violation, or a permitted activity under a different interpretation? AI can identify objects; it cannot interpret international law or negotiate ambiguous provisions.

Comparative Analysis: Human vs. Machine Verification

| Dimension | Traditional Treaty Verification | AI-Powered Verification |
| --- | --- | --- |
| Speed | Days to weeks for inspection scheduling | Real-time to hours for image analysis |
| Cost | High (personnel, travel, diplomatic overhead) | Lower marginal cost after initial development |
| Intrusiveness | High (on-site presence required) | Low (remote observation only) |
| Explainability | Human inspectors can testify and explain findings | AI decisions may be opaque (“black box”) |
| Deterrence | Strong (fear of on-site discovery) | Weak (detection after the fact) |
| Scope | Limited to treaty-defined sites and activities | Global coverage of all observable activities |
| Negotiation | Can resolve ambiguities through dialogue | Cannot interpret or negotiate |
| Adversarial vulnerability | Social engineering, physical obstruction | Adversarial examples, data poisoning |

The China Factor

The collapse of bilateral U.S.-Russia treaties highlights a critical challenge that AI cannot solve: the multipolar nuclear age. China, now estimated to possess over 500 operational nuclear warheads with projections of 1,000 by 2030, was never party to the INF Treaty or New START [17].

When the United States withdrew from the INF Treaty, officials explicitly cited China’s unconstrained intermediate-range missile buildup as a primary motivation. The DF-26 missile, with its 4,000-kilometer range capable of threatening Guam, would violate INF limits if China were a party to the treaty [18].

AI verification systems cannot resolve this fundamental structural problem. Even perfect satellite monitoring of Chinese facilities cannot substitute for the binding limits and confidence-building measures that treaties provide. Technology can enhance transparency, but it cannot create the political commitments necessary for genuine arms control.

The Road Ahead: Hybrid Approaches

Most experts agree that the future lies not in AI replacing human verification, but in hybrid systems that combine algorithmic efficiency with human judgment and diplomatic engagement.

AI-Assisted Inspections

Rather than replacing on-site inspections, AI could enhance them. Satellite monitoring could identify sites requiring attention, allowing inspectors to focus their limited time on high-priority locations. Change detection algorithms could flag facilities that have modified their configurations, guiding inspectors to areas most likely to contain violations.
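The triage logic described above is straightforward: rank monitored sites by an automated change score and spend scarce inspection visits on the most anomalous ones. The site names and scores below are purely illustrative.

```python
# Sketch of AI-assisted inspection triage: a limited inspection team visits
# the sites with the highest automated change scores first.
# Site names and scores are invented for illustration.

def prioritize(change_scores, visits_available):
    """Return the names of the top-scoring sites, highest score first."""
    ranked = sorted(change_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:visits_available]]

change_scores = {          # e.g., fraction of pixels flagged as changed
    "Site A": 0.02,
    "Site B": 0.31,
    "Site C": 0.07,
    "Site D": 0.19,
}

print(prioritize(change_scores, visits_available=2))  # ['Site B', 'Site D']
```

In practice the score would combine multiple signals (change detection, fusion posteriors, tip-offs) and inspectors would retain discretion to visit low-scoring sites, preserving the deterrent value of unpredictable inspections.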

Open-Source Verification

Commercial satellite imagery, combined with AI analysis and open-source data, is democratizing verification. Non-governmental organizations like the Federation of American Scientists and the Middlebury Institute’s James Martin Center for Nonproliferation Studies now conduct independent monitoring of nuclear facilities, publishing findings that complement official verification efforts [19].

This “verification from below” creates additional transparency, though it cannot replace the legal authority and access that treaty-based inspections provide.

Frequently Asked Questions

Q: Can AI completely replace human nuclear inspectors? A: No. While AI can process satellite imagery and detect anomalies with remarkable speed, it cannot negotiate ambiguous situations, interpret legal frameworks, or build the trust relationships that effective verification requires. The most effective approach combines AI’s analytical power with human diplomatic engagement.

Q: What happens if AI makes a mistake in verification? A: This is one of the most serious concerns. False positives could trigger unnecessary tensions or accusations, while false negatives could allow violations to go undetected. The “black box” nature of many AI systems makes it difficult to audit decisions or assign accountability when errors occur.

Q: Could adversarial AI be used to fool verification systems? A: Yes. Adversarial machine learning techniques can generate synthetic imagery that fools detection algorithms, create camouflage that evades object recognition, or exploit known weaknesses in AI systems. This creates an arms race between verification AI and deception AI.

Q: Does AI verification work for all types of nuclear activities? A: No. AI is most effective at monitoring visible, above-ground activities at known facilities. It struggles with detecting activities in underground facilities, identifying weaponization activities (as opposed to production), or verifying the dismantlement of warheads—activities that require physical inspection and measurement.

Q: What role can non-governmental organizations play in AI verification? A: NGOs can leverage commercial satellite imagery and open-source AI tools to conduct independent monitoring, creating transparency that complements official verification efforts. While they lack the legal authority of treaty-based inspectors, their findings can inform public debate and diplomatic pressure.

The Path Forward

The collapse of traditional nuclear arms control treaties represents both a crisis and an opportunity. The crisis is clear: without binding agreements and verification mechanisms, the world faces an increasingly dangerous nuclear future. The opportunity lies in developing new approaches that combine technological innovation with renewed diplomatic commitment.

AI-powered verification offers genuine advantages in speed, scale, and cost. It can provide transparency that was previously impossible and detect violations that might otherwise go unnoticed. But it cannot replace the political will, legal frameworks, and human relationships that make arms control effective.

The future of nuclear security likely lies not in algorithms alone, but in hybrid systems that leverage AI’s analytical capabilities while preserving the diplomatic and legal structures that have kept the peace for decades. As one arms control expert put it: “Technology can help us see, but only humans can help us trust.”

Footnotes

  1. U.S. Department of State. “INF Treaty Fact Sheet.” August 2019.

  2. Russian Ministry of Foreign Affairs. “On Russia’s withdrawal from the INF Treaty.” August 2025.

  3. Arms Control Association. “New START at a Glance.” 2024.

  4. U.S. Department of State. “Open Skies Treaty Withdrawal.” 2020.

  5. Comprehensive Nuclear-Test-Ban Treaty Organization. “Status of Signature and Ratification.” 2024.

  6. Federation of American Scientists. “National Technical Means of Verification.” 2023.

  7. Nature. “Deep learning for satellite imagery analysis.” 2019.

  8. IEEE. “Automated change detection in satellite imagery using CNNs.” 2020.

  9. ImageNet. “Large Scale Visual Recognition Challenge Results.” 2017.

  10. U.S. National Geospatial-Intelligence Agency. “What is GEOINT?” 2024.

  11. NASA. “Hyperspectral imaging for nuclear facility monitoring.” 2021.

  12. Maxar Technologies. “WorldView-3 Technical Specifications.” 2024.

  13. Union of Concerned Scientists. “Satellite Database.” 2024.

  14. Nature Machine Intelligence. “The black box problem in deep learning.” 2019.

  15. MIT Media Lab. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” 2018.

  16. arXiv. “Adversarial attacks on object detection systems.” 2020.

  17. U.S. Department of Defense. “China Military Power Report.” 2024.

  18. Missile Defense Project. “DF-26 (Dong Feng-26).” Center for Strategic and International Studies.

  19. Federation of American Scientists. “Nuclear Information Project.” 2024.
