Published 2025-12-12 | Version v1.0
Working Paper | Open Access | Published

Strategic Discontinuity in AI-Enabled Warfare

Machine-Speed vs Human-Speed OODA

Description

This working paper develops the concept of strategic discontinuity in AI-enabled warfare, defined as the structural mismatch between machine-speed OODA cycles and human-speed legal, doctrinal, and institutional oversight. It analyzes how autonomous weapon systems, algorithmic command-and-control, missile-defense automation, kill-chain acceleration, and swarm-based autonomy challenge meaningful human control, international humanitarian law, command responsibility, and strategic stability.

Abstract

Artificial intelligence (AI), autonomous weapon systems, and automated command-and-control architectures are accelerating the tempo of warfare toward machine-speed OODA cycles measured in milliseconds. These developments challenge the foundational assumption of international humanitarian law (IHL) and regulatory instruments such as DoD Directive 3000.09 that human judgment can feasibly govern the use of force. The resulting strategic discontinuity arises when military effectiveness requires machine-speed autonomy while legal and institutional systems remain anchored in human-speed oversight and accountability. This working paper analyzes the structural nature of this discontinuity across temporal, doctrinal, organizational, and stability dimensions; demonstrates its real-world manifestations in missile-defense automation, algorithmically accelerated kill chains, and swarm-based distributed autonomy; and develops a governance framework for mitigating, though not resolving, the gap between human cognition and machine-tempo operations. The framework integrates risk-tiered autonomy, layered oversight architectures, machine-readable rules of engagement, auditability and reversibility requirements, and international confidence-building measures. Together, these pathways outline how meaningful human control and strategic stability may be preserved as warfare increasingly exceeds the temporal limits of human decision-making.

Files

  • Strategic Discontinuity in AI-Enabled Warfare.pdf — Full-text PDF of the working paper (application/pdf)

Keywords

  • AI-enabled warfare
  • Strategic discontinuity
  • Machine-speed OODA
  • Human-speed OODA
  • Autonomous weapon systems
  • Meaningful human control
  • International humanitarian law
  • DoD Directive 3000.09
  • Algorithmic warfare
  • Autonomous kill chains
  • Swarm autonomy
  • Strategic stability
  • Escalation risk
  • Algorithmic rules of engagement
  • Auditability
  • Human oversight

Subjects

  • International Humanitarian Law
  • AI Governance
  • Military Technology
  • Autonomous Weapon Systems
  • Strategic Stability
  • Security Studies
  • Human–Machine Decision-Making

Recommended citation

Wu, Shaoyuan. (2025). Strategic Discontinuity in AI-Enabled Warfare: Machine-Speed vs Human-Speed OODA. Global AI Governance and Policy Research Center, EPINOVA LLC. https://doi.org/10.5281/zenodo.18089642 (Crossref DOI to be assigned after membership approval).

APA citation

Wu, S. (2025). Strategic discontinuity in AI-enabled warfare: Machine-speed vs human-speed OODA. Global AI Governance and Policy Research Center, EPINOVA LLC. https://doi.org/10.5281/zenodo.18089642 (Crossref DOI to be assigned after membership approval).

Alternate identifiers

  • DOI: 10.5281/zenodo.18089642 — Zenodo/DataCite DOI from the early ORCID-derived metadata record
  • ORCID put-code: 201017495 — ORCID public API put-code from the early metadata record
  • File name: Strategic Discontinuity in AI-Enabled Warfare.pdf — Source PDF file name
  • Publication date: 2025-12-12 — Date shown on the PDF title page and in the early metadata record

Related works

No related works listed.

References

  1. Altmann, J., & Sauer, F. (2017). Autonomous weapon systems and strategic stability. Survival, 59(5), 117–142.
  2. Berge, T., & Brehm, M. (2023). Escalation risks in AI-enabled military operations: Emerging dynamics and mitigation options. Journal of Strategic Studies, 46(4), 512–538.
  3. Boulanin, V., & Verbruggen, M. (2017). Mapping the development of autonomy in weapon systems. Stockholm International Peace Research Institute (SIPRI).
  4. Boyd, J. (1996). The essence of winning and losing. Unpublished briefing, U.S. Department of Defense.
  5. Crootof, R. (2015). The killer robots are here: Legal and policy implications. Cardozo Law Review, 36(5), 1837–1915.
  6. Cummings, M. L. (2021). Artificial intelligence and the future of warfare. Chatham House Research Paper.
  7. Department of Defense. (2023). DoD Directive 3000.09: Autonomy in Weapon Systems. U.S. Department of Defense.
  8. Docherty, B. (2023). Crunch time on killer robots: Why new international law is needed for autonomous weapons. Human Rights Watch & International Human Rights Clinic.
  9. Ekelhof, M. (2019). Lifting the fog of targeting: ‘Autonomous weapons’ and human control through the lens of military targeting. The Lawfare Research Paper Series, 1–38.
  10. Freedman, L. (2017). The future of war: A history. Public Affairs.
  11. Hall, A., & Philpott, M. (2022). Algorithmic misperception and crisis instability: AI, deterrence, and miscalculation. International Security, 46(3), 72–109.
  12. Hoadley, D., & Sayler, M. (2020). Artificial intelligence and national security (CRS Report R45178). Congressional Research Service.
  13. Horowitz, M. C. (2019). When speed kills: Autonomous weapons systems, deterrence, and stability. International Security, 43(4), 44–80.
  14. International Committee of the Red Cross. (2015). International humanitarian law and autonomous weapon systems: ICRC position paper. ICRC.
  15. Payne, K. (2021). I, Warbot: The dawn of artificial intelligence, autonomous weapons, and human conflict. Hurst.
  16. Roff, H. M., & Moyes, R. (2016). Meaningful human control, artificial intelligence and autonomous weapons. Article 36 Briefing Paper.
  17. Roff, H. M. (2016). Autonomous weapons and the problem of meaningful human control. Ethics & International Affairs, 30(2), 203–216.
  18. Santoni, A. (2021). Reframing meaningful human control: Autonomy, oversight, and machine auditability in future warfare. Journal of Military Ethics, 20(3–4), 145–162.
  19. Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. W. W. Norton.
  20. United Nations Convention on Certain Conventional Weapons. (2021). Report of the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems (LAWS). United Nations Office at Geneva.
  21. Wong, W., & Scharre, P. (2023). The speed of war: Stability, decision time, and technological acceleration in military conflict. Center for a New American Security (CNAS).