I. Introduction

The development of weapons and warfare technology inevitably raises new moral and legal questions, and the advent of autonomous weapon systems is no exception. Salient concerns arise when weapons are designed to be increasingly autonomous, especially with regard to liability in the event of international humanitarian law violations. Once weapon technology develops to the point at which human actors are removed from the decision about what to target and when to strike, establishing responsibility when things go wrong becomes complicated, as the weapon’s nearest potentially liable human actor may have only a remote connection to the harm.

II. What are autonomous weapon systems?

The International Committee of the Red Cross (ICRC) defines autonomous weapons as: “[a]ny weapon system with autonomy in its critical functions – that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.”1 This type of weapon system essentially removes humans from the act of selecting and attacking targets, substituting them with computer-controlled processes.2 The use of robots in warfare presents significant advantages, including potentially lower costs and casualty rates, as well as increased speed and accuracy in situations where a “human operator is incapable of making a timely decision.”3 However, the speed and distance at which they are able to operate, in addition to advanced artificial intelligence software that enables them to adapt and learn from their environment, mean that their continued development may be synonymous with the elimination of meaningful human control over warfare.

It is important to note that fully autonomous weapons are not yet in use. That being said, weapons development is surely moving in the direction of incorporating increasingly sophisticated artificial intelligence, and absent a comprehensive treaty preventing their development, the future of warfare could likely be characterized by the use of these “killer robots.”4 Some of the world’s most powerful militaries, most notably the US, UK, Australia, Russia, and Israel, have already voiced their opposition to an autonomous weapons ban, and at the moment it appears that an autonomous weapons race may develop.5 For example, although US policy on such weapons currently states that “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force,”6 the US Army recognized in its 2017 Robotic and Autonomous Systems Strategy that “[b]ecause enemies will attempt to avoid our strengths, disrupt advanced capabilities, emulate technological advantages, and expand efforts beyond physical battlegrounds… the Army must continuously assess [Robotic and Autonomous Systems] efforts and adapt.”7 This demonstrates that although there is an awareness of the importance of maintaining a certain level of human control, there is also an urgency not to be left behind in the development of autonomous weapons.

Currently, several States use only weapons with limited autonomous functions, such as anti-materiel defensive weapons and loitering munitions.8 According to a Scientific and Policy Advisor of the Arms Unit of the ICRC, “[t]here is general agreement among Convention on Certain Conventional Weapons (CCW) States Parties that ‘meaningful’ or ‘effective’ human control, or ‘appropriate levels of human judgement’ must be retained over weapon systems and the use of force.”9 However, a number of countries are moving in the direction of developing weapons that possess a growing degree of autonomy, as well as increasingly sophisticated adaptation and “learning” software mechanisms.10

III. The Liability Gap

There are various degrees and forms of control that humans can exercise over semi-autonomous and autonomous weapon systems, including: 1) at the development and testing stage, such as the programming of artificial intelligence software; 2) at the time that a commander takes the decision to activate the weapon in an armed conflict; and 3) during the operation of the weapon, which typically takes the form of intervening and canceling an already-initiated attack.11

As weapon systems become increasingly autonomous, either because they are able to operate at a speed that surpasses a human’s ability to monitor them, or because they are able to independently adapt to new situations pursuant to complex software systems, human interaction becomes increasingly remote. For example, if a weapon is able to identify and strike a target faster than a human can determine whether it is a legitimate military objective, then the operator’s ability to intervene is rendered meaningless. As the remoteness between the closest human actor and the weapon increases, the liability gap widens: wrongdoing during armed conflict may go unpunished merely because responsibility cannot be adequately assigned to a natural person, or even a legal person.

The development of increasingly autonomous weapon systems presents a number of challenges with regard to establishing liability in the event of international humanitarian law violations, including grave breaches of the Geneva Conventions amounting to war crimes. As Tim McFarland and Tim McCormack observed, “a highly autonomous weapons system has the potential to partly or fully displace combat personnel from the roles they have traditionally occupied, so that accountability for proscribed acts committed via such systems may not be easily ascribed to those personnel or to their commanders.”12 This is particularly true when the degree of control exercised by the system over actions subject to legal regulation has increased to the point where it is no longer reasonable to assert that a human operator alone made the decision to strike.13

There are several ways in which the use of autonomous weapons may result in a liability gap. First, while combatants in an armed conflict are tasked with complying with the rules of international humanitarian law, including the application of the principles of distinction and proportionality, there are no clear answers on what happens if artificial intelligence is left to conduct these analyses.14 There is a general consensus that human judgement and a sense of human morality are at the heart of these determinations, and thus machines cannot be trusted to determine whether the use of force is justified, especially in close-call situations where civilian lives are at stake.15 Relatedly, because autonomous weapons are not human, many of the goals of justice and accountability, including punishment and deterrence, would have no bearing on them.16

Second, establishing liability for humanitarian law breaches, in particular under (international) criminal law, requires the existence of a guilty mind. Because autonomous weapon systems are robots without the ability to form intent, the elements of a crime may be impossible to establish.17 Even modes of secondary liability, such as command responsibility, require the existence of an underlying crime, and a lack of the requisite mens rea by the “primary perpetrator” (the robot) might preclude liability.18

Third, the more autonomous a weapon system is, the less predictable it may be. As weapons become more flexible in the types of tasks they complete and more mobile across time and space, it becomes more difficult to predict when and where specific attacks may take place.19 Moreover, as artificial intelligence algorithms become increasingly complex, the software system’s ability to learn and adapt produces more sophisticated responses to new environments and situations, including reactions that are unexpected.20 These challenges are further compounded when multiple autonomous weapons work together and feed off each other’s information.21 A lack of predictability could preclude a finding of liability under criminal law, including for secondary (human) actors accused of aiding and abetting or commanding war crimes, because the weapon’s unpredictability would interfere with establishing the knowledge held by the secondary actor. Similarly, under civil law, the unpredictability of the weapon could prevent a finding of negligence if the closest reasonable human actor was unable to foresee the consequences of developing or deploying the weapon.

IV. Are There Potential Paths to Liability? An Analysis of International Criminal Law

The dawn of autonomous weapon systems presents a serious risk of a liability gap. The following section illustrates this by analyzing potential paths to legal liability under international criminal law in the event of war crimes caused by an autonomous weapon. Because the future of autonomous weapons will likely eliminate the role of the operator, only the liability of the commander who launches the weapon and the developer or programmer who designed the weapon will be contemplated here.

The Rome Statute of the International Criminal Court (ICC) lays out a number of modes of liability available in cases brought before the Court.22 Relevant here are 1) perpetration as an individual, co-perpetration, and perpetration through another under Article 25(3)(a); 2) aiding and abetting under Article 25(3)(c); and 3) command responsibility under Article 28.

With regard to perpetration as provided by Article 25(3)(a), the jurisprudence of the ICC has developed a “control” theory of liability, under which the accused is considered a principal perpetrator if he exercised control over the relevant crime, regardless of whether he physically committed the relevant actus reus “as an individual”.23 While direct perpetration requires that the accused “physically carried out an objective element of the offense,”24 he may also be considered a principal perpetrator (as opposed to an accessory) through an indirect act, as long as he exercised a certain degree of control over the direct perpetrator.

To be liable as a co-perpetrator, there must exist a common plan or agreement between two or more persons that will result in the commission of a crime if implemented, and the accused must provide an essential contribution to this common plan.25 The mental element is two-fold, requiring that the accused and another perpetrator intended the crime or were at least aware that implementing the common plan would lead to the commission of the crime in the ordinary course of events, and that the accused was aware of his essential contribution to the implementation of the common plan.26

An accused is liable for perpetrating a crime through another when he uses another person to commit the crime, making him an indirect perpetrator. The degree of control and domination that the accused exercises over the principal perpetrator, which enables him to use the direct actor as an instrument of his crime, is central to the notion of perpetration through another. The ICC stated that “[t]he underlying rationale of this mode of criminal responsibility is that the perpetrator behind the perpetrator is responsible because he controls the will of the direct perpetrator.”27 The accused must also possess knowledge and intent, including the intent that the crime would be committed as well as the intent to use the direct perpetrator as an instrument.28

The ICC has combined the modalities of co-perpetration and perpetration through another to create indirect co-perpetration, also called perpetration through an organization. This arises when the accused was part of a common plan or agreement, but did not directly perform the criminal conduct; instead, he controlled the outcome of the crime through an organized apparatus of power.29 Here, the degree of control that the accused exercises over the apparatus of power is crucial: even if he does not control each individual within the organization and a member of the organization refuses to comply, he can be certain that, based on the automated structure of the organization and his control over it, another member will step in and execute orders.30

A person aids or abets a crime when he provides practical or material assistance to a principal perpetrator “for the purpose of facilitating the commission of […] a crime.”31

Command responsibility arises when a military commander fails to take necessary and reasonable measures to prevent or punish the commission of crimes. The accused must first exercise effective command and control over the forces that committed the crime; a commander cannot “fail” to exercise control properly without actually having effective control in the first place.32 Furthermore, the commander’s duty to take all reasonable and necessary measures is not boundless; it is “intrinsically connected” to the commander’s actual material ability to prevent or repress the commission of a crime in light of the proportionality and feasibility of such measures at the time.33 The mens rea element for command responsibility requires that the accused either knew, or owing to the circumstances at the time, should have known that the forces were committing or about to commit one or more of the crimes.34 The ICC considered that “the ‘should have known’ standard requires more of an active duty on the part of the superior to take the necessary measures to secure knowledge of the conduct of his troops and to inquire, regardless of the availability of information at the time on the commission of the crime.”35

When attempting to apply the existing international criminal modes of liability to the scenario of autonomous weapons used in war crimes, a number of shortcomings come to light. First, liability may be contingent on the existence of an underlying crime, and since the autonomous weapon is unable to form intent, the elements of an underlying crime cannot be met.36 For example, perpetration through another requires that an underlying crime be committed by a person, and perpetration through an organization still requires that a person who is a member of the organization that the accused controls ultimately perpetrate the crime.37 If an autonomous weapon independently renders a decision that results in war crimes, direct perpetration cannot be assigned to a human.

Second, the accused, whether he is a commander, an aider and abettor, or an indirect perpetrator, must meet at least a knowledge and intent requirement, or even a purpose standard. Unless the programmers or activators deliberately designed the machines to violate the laws of war, they are likely not liable under existing law.38 If, for instance, an autonomous weapon that continues to learn from its environment as it roams “mis-learns” a key indicator and targets a group of civilians it mistakes for combatants, this does not reflect the intent of the commander who made the decision to launch the weapon, nor of the programmer who designed it. Because the intent element requires dolus directus rather than dolus eventualis, which would have allowed for the imposition of liability if the commander or programmer was conscious of the likelihood of illegal ramifications stemming from his decision to design or launch the weapon, the requisite intent, and with it liability, cannot be established.

Third, for perpetration through another, perpetration through an organization, or command responsibility, the accused must exercise a certain degree of control over the direct perpetrator, which in this context would be the autonomous weapon. The control factor is of particular importance if the accused acted indirectly (through another or through an organization), as liability is justified “when the indirect perpetrator has the power to decide whether and how the crime will be committed.”39 However, in the context of autonomous weapon systems, factors such as speed, distance, and advanced artificial intelligence mean that the level of control exercised by the nearest human actor may be minimal, likely below the requisite threshold for establishing liability.

Finally, the Elements of Crimes for Article 8 of the Rome Statute require that war crimes take place in the context of, and in association with, an armed conflict. At least in the case of programmer liability, the design work would likely take place before the commencement of the armed conflict, meaning that this contextual element cannot be met.40

V. Conclusion

The development of increasingly autonomous weapons is inevitable, yet the law needed to keep the use of these weapons in check is far from developed. As the analysis of international criminal liability demonstrates, the law is designed for human actors and does not contemplate future advances in artificial intelligence that will enable weapons to act independently of human control. As weapon systems become more and more autonomous, the law will need to adjust in order to fill the liability gap. Possible modifications include an adjustment of the intent requirement so that programmers and commanders who recklessly design and deploy weapon systems capable of committing atrocity crimes can be brought to account. Furthermore, in order to establish liability for indirect perpetration, control over an autonomous weapon should be understood differently than control over another person or an organization, so that liability is not precluded merely because the weapon operated more quickly than the closest human counterpart, or in unpredictable ways due to its learning software.

Similar shortcomings exist under civil modes of liability, as well as, potentially, under the law of State responsibility. These will be analyzed in an upcoming blog post.

 

  1. International Committee of the Red Cross (ICRC), “Views of the ICRC on Autonomous Weapon Systems”, Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 11-15 April 2016, Geneva, p. 1.
  2. Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, pp. 8-9; ICRC, “Views of the ICRC on Autonomous Weapon Systems”, 11-15 April 2016, Geneva, p. 2; Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 6.
  3. See Michael T. Klare, “Autonomous Weapons Systems and the Laws of War”, Arms Control Today, March 2019, p. 2; Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 7; Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, 90 International Legal Studies 361, 368 (2014).
  4. Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 6; Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, 90 International Legal Studies 361, 371 (2014).
  5. Damien Gayle, “UK, US and Russia Among Those Opposing Killer Robot Ban”, TheGuardian.com, 29 March 2019.
  6. US Department of Defense, Directive 3000.09, “Autonomy in Weapon Systems”, 21 November 2012.
  7. Michael T. Klare, “Autonomous Weapons Systems and the Laws of War”, Arms Control Today, March 2019, p. 4.
  8. ICRC, “Views of the ICRC on Autonomous Weapon Systems”, 11-15 April 2016, Geneva, p. 2.
  9. Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, p. 11.
  10. ICRC, “Views of the ICRC on Autonomous Weapon Systems”, 11-15 April 2016, Geneva, p. 2.
  11. Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, p. 12; Damien Gayle, “UK, US and Russia Among Those Opposing Killer Robot Ban”, TheGuardian.com, 29 March 2019 (reporting that the UK just invested £2.5m in autonomous weapon development).
  12. Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, 90 International Legal Studies 361, 366 (2014).
  13. Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, 90 International Legal Studies 361, 370 (2014).
  14. See Michael T. Klare, “Autonomous Weapons Systems and the Laws of War”, Arms Control Today, March 2019; Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, p. 7; Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 2.
  15. Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), pp. 6-7.
  16. See Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 19.
  17. Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 19; Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, 90 International Legal Studies 361, 365-66 (2014).
  18. Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), pp. 21-22.
  19. Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, p. 16; ICRC, “Views of the ICRC on Autonomous Weapon Systems”, 11-15 April 2016, Geneva, p. 3.
  20. Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, p. 16; ICRC, “Views of the ICRC on Autonomous Weapon Systems”, 11-15 April 2016, Geneva, p. 3; Peter M. Asaro, “The Liability Problem for Autonomous Artificial Agents”, Association for the Advancement of Artificial Intelligence, 2016, pp. 191-92.
  21. ICRC, “Views of the ICRC on Autonomous Weapon Systems”, 11-15 April 2016, Geneva, p. 3.
  22. Not all of these are reflected identically under international criminal law or the case law of the ad hoc international criminal tribunals.
  23. Jocelyn Courtney and Christodoulos Kaoutzanis, “Proactive Gatekeepers: The Jurisprudence of the ICC’s Pre-Trial Chambers”, 15 Chicago Journal of International Law 518, 529 (2015); ICC, Judgment pursuant to Article 74 of the Statute (Lubanga Trial Judgement), 14 March 2012, para. 1003; ICC, Lubanga Decision on the Confirmation of Charges, 29 January 2007, para. 330.
  24. ICC, Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges of the Prosecutor Against Bosco Ntaganda, 9 June 2014, para. 136.
  25. ICC, Lubanga Trial Judgement, 14 March 2012, para. 1006.
  26. ICC, Lubanga Trial Judgement, 14 March 2012, para. 1013.
  27. ICC, The Prosecutor v. Katanga and Chui, Decision on the confirmation of charges, 30 September 2008, para. 497; see also ICC, The Prosecutor v. Charles Blé Goudé, Decision on the confirmation of charges (PTC), 11 December 2014, para. 135 (“The Appeals Chamber considers that the most appropriate tool for conducting such an assessment is an evaluation of whether the accused had control over the crime, by virtue of his or her essential contribution to it and the resulting power to frustrate its commission, even if that essential contribution was not made at the execution stage of the crime.”).
  28. ICC, Judgment pursuant to article 74 of the Statute (Katanga Trial Judgement), 7 March 2014, paras. 1413-1415.
  29. ICC, The Prosecutor v. Katanga and Chui, Decision on the confirmation of charges, 30 September 2008, para. 510.
  30. ICC, Katanga Trial Judgement, 7 March 2014, paras. 1408-09.
  31. ICC Rome Statute, Article 25(3)(c).
  32. See e.g. ICC, Bemba Gombo Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges, 15 June 2009, paras. 412-17, 422; see also ICC, Judgment on the appeal of Mr Jean-Pierre Bemba Gombo against Trial Chamber III’s “Judgment pursuant to Article 74 of the Statute” (Bemba Appeals Judgement), 8 June 2018, para. 167.
  33. ICC, Bemba Appeals Judgement, 8 June 2018, paras. 167-170.
  34. ICC Rome Statute, Article 28(a)(i).
  35. ICC, Bemba Gombo Decision Pursuant to Article 61(7)(a) and (b) of the Rome Statute on the Charges, 15 June 2009, paras. 432-434. In the Bemba Appeals Judgement, the Appeals Chamber clarified that a finding of failure to impose necessary and reasonable measures must be predicated upon a consideration of measures that were actually at the commander’s disposal, and that hypothetical measures presented ex post facto should not be used to evaluate the reasonableness or necessity of the conduct. The Appeals Chamber made no proclamations regarding the constructive knowledge standard, but its holding supports the argument that “should have known” must also be evaluated in light of the commander’s material ability to inquire and access knowledge of criminal conduct by his subordinates.
  36. Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), pp. 21-22.
  37. ICC, The Prosecutor v. Katanga and Chui, Decision on the confirmation of charges, 30 September 2008, paras. 488-95.
  38. Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, p. 17; Human Rights Watch, “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), p. 2.
  39. ICC, Katanga Trial Judgement, 7 March 2014, para. 1396.
  40. Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, 90 International Legal Studies 361, 373-74, 379-80 (2014). This is particularly so because Article 22(2) requires the Rome Statute to be strictly construed in favor of the defendant.