Ethical, Legal and Operational Challenges of AI-Driven Warfare and Autonomous Systems

  • Published by JSOU/NATO SOF HQ

As AI gains the capacity to make decisions, and as compressed reaction times pressure SOF leaders and planners to remove human-in-the-loop safeguards, what are the ethical, legal, and operational challenges of deploying AI-driven warfare and autonomous systems in United States, partner nation, or NATO SOF? How can these challenges be addressed to ensure compliance with international law and ethical standards? How do we balance the need for security with the preservation of freedom and human rights? What legal frameworks govern the use of AI and autonomous systems in warfare? How can accountability be maintained when using autonomous systems in operations? What strategies can be employed to ensure ethical compliance and operational effectiveness?


  • Casey, Maj. Keith R., "The Ethical Implications of the Increased Use of AI and ML in Operations," AF Fellows Paper (RAND PAF Strategy and Doctrine), 2023, 37 pgs.  
    • This paper highlights the operational reality that autonomous systems can respond to attacks faster than a military force can decide and act, which drives the pressure for their deployment. It answers the question by identifying the lack of governance over LAWS and the ambiguity surrounding international definitions of "meaningful human control." To ensure ethical compliance, the author suggests focusing research on explainable artificial intelligence and robust training data sets, which will allow operators to understand AI behaviors and mitigate algorithmic biases that could lead to unintended civilian casualties or swarm malfunctions.
  • Dalrymple, Dash, "Applying Ethics in an Artificial Intelligence Arms Race," AFGC thesis, 2024, 45 pgs. 
    • This paper addresses the pressure to remove human operators from the decision-making loop to increase the speed of the Observe, Orient, Decide, Act (OODA) loop in combat, noting that human intervention severely lengthens the military decision cycle. It answers your question by highlighting that AI lacks a human conscience and cannot instinctively apply "situational ethics" to balance military goals with moral rules. To address these challenges and ensure compliance, the author recommends programming AI agents to operate according to a strict ethical framework based on Kant's categorical imperative, implementing formal ethical training and testing for human operators, and developing a feedback loop to identify and correct unethical AI actions.
  • Gonzalez, Maj. Jorge, "Adapting to the Challenges of Modern Technology in Warfare: Defining the Outer Limits of AI/ML in Joint Operations," AFGC thesis, 2024, 43 pgs.
    • This paper discusses the ethical and legal risks of removing human decision-makers from lethal engagements, noting that unchecked AI could be used without respect for human life. It answers the question by exploring how Lethal Autonomous Weapons Systems (LAWS) challenge Just War theory and Kantian ethics regarding human dignity. To address these issues, the paper recommends establishing strict ethical boundaries during the software development phase and ensuring that fully autonomous lethal weapons undergo rigorous self-regulation and testing to prevent them from operating in an ethical gray zone.
  • Iwanenko, Lt. Col. Tanya, "Bridging the Gap: Strategic Leadership at the Intersection of AI, Ethics and the Laws of War," AWC/CWRTF paper, 2025, 33 pgs. 
    • ​​​​​​​​​​​​​​This paper explores the tension between the necessity for speed in machine-enabled decision-making and the ethical requirement for human oversight. It answers the question by examining the "accountability gap" and the legal ambiguities created because traditional laws of war were designed for human, not automated, decision-making. To balance security with ethical standards, the paper suggests that strategic leaders must keep humans in the loop for high-stakes decisions, embed Judge Advocate (JA) officers into AI innovation cycles to ensure compliance with international law, and adapt international legal frameworks to keep pace with advancing technology.
  • Orr, Michael, "Killing without Control--Isaac Asimov's Influence on Future Law of War Policies," AFGC thesis, 2024, 40 pgs. 
    • ​​​​​​​​​​​​​​This paper focuses on the transition toward "Human-Out-Of-The-Loop" (HOOTL) weapons, where AI executes the kill chain because human judgment can no longer react in time to defeat threats. It answers the question by analyzing how the principles of precaution, discrimination, and proportionality must be applied to Sentient AI to protect non-combatants and adhere to the Geneva Conventions. The paper argues against an outright ban on autonomous systems, instead recommending that the military adopt statutory guidance to ensure that Sentient AI-enabled weapons are programmed with ethical constraints and held accountable to the Laws of Armed Conflict.
  • Pantaleon, Maj. Bridget K. and Maj. E. Minnenne Holloway, "Warfare in the Age of AI: Upholding International Humanitarian Law amid Technological Advancements," AF Fellows portfolio (MIT Lincoln Labs), 2024, 62 pgs. 
    • ​​​​​​​​​​​​​​This paper examines how Autonomous Weapons Systems (AWS) challenge the foundational principles of International Humanitarian Law (IHL)—specifically distinction, proportionality, and humanity. It answers the question by asserting that while existing IHL is broad enough to govern AI, significant gaps exist in accountability and attribution due to the opaque "black box" nature of complex AI algorithms. To ensure ethical compliance and accountability, the author recommends equipping governing bodies with the technical expertise needed to properly attribute AI actions and proposes new international treaties to mandate transparency and interpretability during the engineering phase of AI systems
  • Young, Maj. Ryan H., "Beyond Bans: Crafting Ethical Guidelines for Autonomous Technology," ACSC elective paper (Robots, Drones and Artificial Intelligence), 2024, 13 pgs.
    • ​​​​​​​​​​​​​​This paper addresses the global debate over whether to ban or regulate fully autonomous weapons that lack meaningful human control. It answers the question by exploring the accountability gap that arises when an autonomous robot performs unpredictably, complicating civil and military liability under international humanitarian law. To ensure operational effectiveness and ethical compliance, the author argues against a broad ban that would stifle innovation, proposing instead the implementation of comprehensive regulatory frameworks. Strategies to balance security with human rights include classifying autonomous robots similarly to military working dogs to establish legal accountability, and programming robots to precisely target enemy weapons rather than humans to minimize civilian harm.