By Ana P Rumualdo
- Do Robots Have a License to Kill During War?
In his article ‘The Paradox of Riskless Warfare’, Paul Kahn (2002) argues that the justifying condition for combat is the fact that opponents on both sides put themselves at risk and act in self-defence:
“… combatants are allowed to injure each other just as long as they stand in a relationship of mutual risk. (…) The morality of the battlefield, accordingly, is a variation on the morality of individual self-defense. The soldier’s privilege of self-defense is subject to a condition of reciprocity.” (Kahn, 2002, p.3)
For as long as “the only alternative to a combatant’s own injury or death may be the successful injuring of another” (Kahn, 2002, p.2), a space is created in which combatants are granted a license to kill (Kahn, 2002): they are authorised to injure or kill their opponents. Still, each army will try to create an asymmetrical situation in which the enemy runs a greater risk of injury while its own forces stay safe (Kahn, 2002). The quest for asymmetry is very likely to put an army in a favourable position, but when it undermines reciprocity “the paradox of riskless warfare arises” (Kahn, 2002, p.2). Kahn (2002) therefore claims that, since reciprocity of risk is the core requirement for combatants to injure each other while acting in self-defence, any injury outside the limits of self-defence is excessive and should be forbidden because it is morally unjustifiable. For that reason he considers robots an example of unjustifiable force that lacks reciprocity and lies outside the realm of war ethics:
“A regime capable of targeting and destroying others with the push of a button, with no human intervention but only the operation of the ultimate high tech weapon, propels us well beyond the ethics of warfare. Such a deployment of force might be morally justified (…) but we cannot appeal to the morality of warfare to justify this mode of combat.” (Kahn, 2002, p.3).
If this were true, it would be necessary to outlaw the deployment of robots on the battlefield, since they bear no ‘real’ risk, nor do they act in self-defence. However, those claims are based on outdated reasoning from the fifteenth and sixteenth centuries. To prove it, this essay will argue that self-defence and risk, as represented by Paul Kahn, are neither the justification for combatants to injure the enemy nor the moral justification of combat. It will also argue that no changes to the current legal framework are needed to allow robots on the battlefield, and it therefore urges that they not be banned.
- A Few Good Men (of Honour)
Kahn (2002) bases his argument on the existence of mutual risk and self-defence and states that without them, warfare lacks the possibility of chivalry. But how relevant is chivalry in the context of modern war? Vale (1981) considers chivalry an idea of honour in a medieval form: a romantic ideal to which men (or, more appropriately, knights) aspired in their quest for honour, loyalty, courage and generosity.
The basic meaning of chivalry refers to the behaviour of people who ride horses (Braudy, 2005), and it is linked to cavalry. In the past, having a horse trained for war was a feature of the warrior class that also served to mark social differences and, moreover, gave a noble character to the battles fought by knights, enshrining them rather than portraying them as slaughters (Braudy, 2005). Cavalry also served to affirm military masculinity while evoking traditional power (Braudy, 2005). Given its importance as the expression of a brave spirit, along with its status (Vale, 1981), heavy cavalry was still used during the fifteenth century even though it “had outlived its usefulness yet still provided the core and the most prestigious arm of every army” (Vale, 1981, p. 101). For the chivalric ideal, skilful use of close-combat weapons, such as the sword, elevated the value of a knight, whose greatest honour was to fight in hand-to-hand battles (Braudy, 2005). It was this fascination with romantic ideals and old-school combat that fuelled the initial resistance to the use of firearms, even though armies later developed a strong interest in making modern weapons:
“A passion for single combat produced a widespread resistance to military innovation and invention and that new and more lethal weapons (such as firearms) were resisted on moral grounds (…) In their desire to perpetuate chivalrous warfare, the later mediaeval nobility refused to make use of such weapons or to adapt themselves to the changing techniques of warfare.” (Vale, 1981, p. 102.)
Chivalry is clearly linked to the idea of risk during war since, as Vale (1981) argues, it proves men’s value and honour in a situation of great personal danger. Theoretically, honour is an inner code of personal behaviour (Braudy, 2005). In war, a display of individual honour helps “men make themselves men in the eyes of other men and in their own” (Braudy, 2005, p.56) and justifies violence. This approach to masculinity suggests the formation of men’s character. Hence, if war became impersonal, its function as a school for knightly character would be ruined (Vale, 1981). Braudy (2005) argues that chivalric masculinity was used by governments as propaganda during World War I to evoke the Arthurian ideal and its quest for the Holy Grail, demonstrating that civilisations and individuals hold onto archaic types of masculinity in the same way armies refused to adopt new technologies on the grounds of masculine myths. Those images not only contrasted with the bleak panorama of war, but also clung to a “single story” (Adichie, TED 2009) of masculinity that pays lip service to the ‘just war’ theory, as we will see below.
Images of knights in shining armour have been said (Braudy, 2005) to support the impression that, unlike modern wars, past wars were more honourable and humane. A shallow impression nonetheless, given that past wars were extremely violent, as the following passage shows:
“By the outbreak of the Hundred Years’ War, military masculinity had become a mixture of the most appalling slaughter in which men on horseback fought, chopped one another down in battle, stripped the armour from the dead, and often cut off their heads (to ensure a proper tally) – with a deeply held belief that there was an overarching chivalry fellowship (…) and, whereas courtesy towards women might be part of an ideal chivalric code of honour, this behaviour hardly existed in wartime practice…” (Braudy, 2005, p. 84).
This raises questions about Kahn’s reasons for regretting chivalry’s absence in a war fought with the help of robots. Is it because that absence allegedly undermines reciprocity, or because it undermines masculinity?
Turning to self-defence: in the context of international relations, it applies not to individuals but to a nation or a group of nations:
“… under certain conditions set by international law, a State acting unilaterally – perhaps in association with other countries – may respond with lawful force to unlawful force or, minimally, to the imminent threat of unlawful force.” (Dinstein, 2001, p.159).
The conditions for exercising the right to self-defence are set out in Article 51 of the United Nations (UN) Charter 1945 and refer to an armed attack. Nonetheless, it is a legal concept that does not always play a key part in the hands-on business of war and can be used as a political excuse for the deployment of military forces (Dinstein, 2001). As Kahn (2002) himself pointed out, there are reasons why a combatant may be morally innocent, such as being a victim of the regime or complying with a legal obligation. In both cases, contenders on either side may lack freedom of choice. Equally, their first concern is survival: soldiers in the trench do not think about self-defence, but about staying alive. In this regard, Braudy (2005) claims that combatants act in the name of national interests. In light of these factors, one could say that combatants have been used as errand boys who deliver messages on behalf of their nations and whose personal objective, free of self-defence considerations, is survival.
Unlike ideals, war is a reality covered in blood. Still, based on the medieval cult of chivalry and honour, Kahn (2002) considers that a high-tech war is incompatible with the self-defence principle of a just war and eliminates the possibility of chivalry. In the media, this thought has been depicted as potentially causing a backlash, as reported by The Atlantic (2011), because “our enemies mark us as dishonourable and cowardly for not willing to engage them man to man (…) Terrorists would use that resentment to recruit more supporters and terrorists”, and oversimplified from a macho point of view: “the use of automated weapons in the battlefield is seen as sending machines to fight instead of real soldiers”, as reported by The Economist (2010). Considering itself under attack, the myth of masculinity reacts by overlooking the benefits and emphasising the risks of deploying robots on the battlefield, and stands against them because they are likely to take away war’s honour. Apparently, the honour of war does not care about saving soldiers’ lives if they are not brave enough to fight a war in person and face death. At this point it is worth asking about the identity of the cannon fodder sent to war. Are they mature, high-ranking soldiers or young, frightened boys? Most likely the latter, and in that case sending those expendables to war does not seem a better option than sending robots.
- The ‘just war’ theory
According to Kahn (2002), asymmetrical high-tech warfare embodies a profound challenge that takes us out of the ethics of warfare and eliminates the expectation of loss as a restraint on military action. In fact, his reasoning poses a challenge to the just war theory. Traditionally, the just war theory required the reasons for going to war to be examined first, followed by an analysis of the way force is used. What Kahn (2002) is doing is using the latter (the deployment of modern weapons) to justify or condemn the reasons for going to war; in other words, “the employment of jus in bello to determine the absence of jus ad bellum” (Johnson, 1984, p. 75). However, both must be considered when engaging in a war, bearing in mind that jus ad bellum has priority over jus in bello because “only after the fundamental question is answered about the moral justification of employing force to protect values does the second question, about the morally requisite limits governing the use of that force, arise in turn” (Turner, 1992, p. 56). It is possible that the deployment of high-tech military forces makes the traditional concept of just war obsolete (Steele and Heinze, 2014).
The fact that one of the biggest objections to the deployment of such force is the lack of loss demonstrates an inability to think beyond past war experiences and impedes the development of new, useful doctrines (Singer, 2009). This point is well illustrated by how poorly the British army adapted to the airplane in comparison to the German army:
“… there was no plan to coordinate ground operations with (…) the airplane. The British Army had little interest in what officers described as “infernal machines” (…) Having lost the previous war, the Germans were a bit more open to change. (…) They developed new doctrines based not only in what had worked in the past, but also on what could work in the future. The force soon centered on a doctrine that would later be called blitzkrieg.” (Singer, 2009, p. 209).
It is important to think about war in terms of its suitability to current times instead of staying attached to what have been considered necessary circumstances of war. As Singer (2009) explains, this is the first time that geographical limits and personal risk have been removed from the war experience with the new generation of “cubicle warriors” (Singer, 2009, p. 330). Such a warrior may not meet the traditional definition of a soldier, so a paradigm change is required to keep up with the pace of fast-moving technology (Singer, 2009). This new warfare may have positive consequences and should not be obstructed by the old-fashioned values automatically attributed to a soldier (Singer, 2009), especially considering, as we saw above, that those values are romantic ideals.
- Legal framework
As Sharkey (2008) noted, unmanned weapons are not new. However, there are no legal provisions allowing or prohibiting them, nor guidelines for their use. It is probably possible, though, to cover them with existing provisions.
According to Article 35 (2) and (3) of the Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) 1977, weapons that cause “superfluous injury or unnecessary suffering” are prohibited. Superfluous injury is related to perfidy; unnecessary suffering is not a clear term and, according to the International Committee of the Red Cross (ICRC) (1987, p. 408), “it seems impossible at the present stage of medical knowledge to objectively define suffering or to give absolute values permitting comparisons between human individuals.” However, aiming to avoid unnecessary suffering, Article 48 of the 1977 Protocol provides that all parties to a conflict must “distinguish between the civilian population and combatants”, and Article 51 prohibits indiscriminate attacks. Robots are not meant to cause superfluous injury or unnecessary suffering; on the contrary, as reported by The Wall Street Journal (2013), their intention is “to reduce suffering and protect human lives (…) and may reduce risks to civilians.”
Regarding new weapons, Article 36 of the 1977 Protocol provides that “[i]n the study, development, acquisition or adoption of a new weapon, means or method of warfare”, it is necessary for the parties to determine whether employment of the weapon would be prohibited by the aforementioned Protocol or any other applicable international law. According to Human Rights Watch (HRW) (2012), “there is no existing treaty that prohibits [robotic weapons] as a class.” However, the same document highlights the alleged importance of the following commentary:
“The use of long distance, remote control weapons, or weapons connected to sensors positioned in the field, leads to the automation of the battlefield in which the soldier plays an increasingly less important role. (…) [A]ll predictions agree that if man does not master technology, but allows it to master him, he will be destroyed by technology.” (ICRC 1987, p. 428)
This worrying commentary makes clear that we have not foreseen all the potential benefits that robotic technology may bring; instead we have concentrated on the commonplace opposition rooted in an apocalyptic, Skynet-like scenario. The commentary does, however, make a very interesting point about the banning of new weapons, stressing that such weapons have helped societies move on from obsolete systems. Yet this sensible observation comes alongside an unhelpful reiteration of the complexity of the problem:
“…there have been a number of attempts in the past aimed at prohibiting certain weapons for disinterested humanitarian motives (…) These attempts did not have the desired effect (…) Means as simple as the bow and arrow or the crossbow, or as sophisticated for the time of the European Middle Ages as gunpowder, became temporarily subject to prohibitions. Nevertheless, they were used and have contributed to the disintegration of an outdated social order. This makes it difficult to pass judgment.” (ICRC, 1987, p. 401)
Turning to precautionary measures, Article 57 of the 1977 Protocol provides that “[i]n the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects”. This is the principle of discrimination in war. Its objective, as noted by the ICRC (1987), is to impose on the parties to a conflict a duty to distinguish between civilians and combatants, and between military and civilian objectives. In this regard, Sharkey (2008) indicates that robots are not yet ready to take up such a challenge because the technology governing them is not at that point: “no visual or sensing systems are up to that challenge” (Sharkey, 2008, p. 87). However, it could be argued that humans are not up to it either. Commenting on the same Article 57, the ICRC (1987) noted that the distinction between military and civilian objectives responds to errors that occurred during the Second World War:
“The history of the Second World War and subsequent conflicts contains numerous cases of attacks which were launched in error against non-military objectives, or against objectives whose destruction produced only an insufficient military advantage compared with the losses inflicted on civilians.” (ICRC, 1987, p. 680)
During that war, humans conducted the attacks and errors still took place, meaning that potential mistakes are not reason enough to consider robots a greater danger on the battlefield than humans. Moreover, given the limits of human sensory perception, it is improbable that in an adverse environment human combatants could discriminate between civilians and combatants perfectly, or even more accurately than a robot.
With respect to the humanitarian regulations contained in the Hague Convention (IV) Respecting the Laws and Customs of War on Land 1907 and the Geneva Conventions 1949, it has been said that robots are unable to comply with them because they do not have human qualities (HRW, 2012). However, treatment given by humans does not guarantee humane treatment. This point is confirmed by the existence of human rights violations such as those committed at Abu Ghraib, as reported by The New York Times (2005).
Finally, since robots are not illegal, banning them would not have the desired effect for the cause of peace. It could actually be detrimental, since outlawing something, as Kahn (2002) suggests doing, ignores its potential benefits and emphasises only its risks. It is not possible to know beforehand the impact that the deployment of robots on the battlefield would have, but the advantages are worth exploring (Anderson and Waxman, 2013).
- A Vision From the Other Side
Let’s imagine for a moment that self-defence and reciprocity of risk really are the moral justifications for attacking enemy combatants and that chivalry is an ideal for soldiers’ behaviour. Let’s focus only on the alleged moral wrong of allowing a robot to kill people. First of all, according to one of the five principles of robotics (Bryson, 2012), robots can be designed as weapons for security reasons. This is because robots are seen as tools and, as such, are not limited to one type of use. They pose as many risks as guns or other weapons but can be generally accepted and, like any tool, may expand our current capabilities:
“Tools have more than one use. (…) Knives can be used to spread butter or to stab people. In most societies, neither guns nor knives are banned but controls may be imposed if necessary (…) to secure public safety. Robots also have multiple uses. Although a creative end-user could probably use any robot for violent ends, (…) we view it as an essential principle for their acceptance as safe in civil society.” (Engineering and Physical Sciences Research Council, 2010).
Kahn forgot to consider the possibility of both sides fighting a war with robots. In that case, is there also a lack of self-defence and reciprocity of risk? Let’s not forget that, once developed, automated weapons such as robots are a cheap and mass-replicable technology, so the enemy can simply buy them and even improve them (Singer, 2009). Weighed against our existing options, even a robotic war may represent a better way to fight because there would be no human casualties (Asaro, 2008). This should not represent a danger but a convincing argument for nations to stop sending humans to war, and to end the possibility of children becoming ruthless soldiers. Thus, as opposed to Kahn’s (2002) idea, “it would not seem to be immoral to develop and use that technology, and we might go even further and say it is morally required for that nation to protect its children from becoming soldiers if it is within their technological capacity to do so.” (Asaro, 2008, p.14).
It is important to bear in mind that robots are not what makes a war fair or unfair, and even if they make it easier to go to war, that fact alone does not make war fair or unfair (Asaro, 2008). Moreover, “[t]here is no reason to believe that zero-casualty, zero-risk, zero-defect warfare actually result in a safer world” (Ignatieff, 2001, p. 212). The deployment of robots on the battlefield is not unlawful or unethical in itself; the only necessary thing, therefore, is to use them responsibly and effectively (Anderson and Waxman, 2013). Robots may provide safety and minimise or eliminate casualties, but that possibility does not make war less terrible. It is simply a better option than the existing one.
We have already stated that young, frightened soldiers are far less likely to think about self-defence or reciprocity during war. They are not thinking about becoming men in the eyes of their peers and superiors. They are wrestling with death, “the most unexciting contest you can imagine. (…) [W]ithout spectators, without clamour, without glory, without the great desire of victory, (…) without much belief in your own right, and still less in that of your adversary” (Conrad, 2007). So if robots are a possibility, why are they not widely accepted? What kind of war are soldiers fighting? A mythic war, possibly. Events that occur during a mythic war are permeated with false meanings that allow people to fight for absolutes, which make sense when considered part of a heroic path led by a greater will (Hedges, 2003).
There may be other reasons to disagree with the use of robots in war, but the argument from self-defence and lack of reciprocity can be exposed as an attempt to defend the mythical masculinity involved in the act of fighting a war, which “is essential to justify the horrible sacrifices required for war” (Hedges, 2003, p. 26). Hence, if a war were fought by robots, perhaps all the destruction and chaos would lack justification.
The quote “only the dead have seen the end of war” is attributed to Plato without much accuracy (Bernard, 2002). However, it is true that war has been a constant in the history of humanity, justified throughout time with unrealistic ideals such as chivalry, an archaic archetype that exalts manly values. Kahn (2002) reiterates the idea that war is for men, that war is the ideal space to display bravery. Since robots are not men and are unable to display bravery, war is not for robots. Such an impoverished vision of robotic technology and its potential benefits is one of the deterrents to deploying robots on the battlefield.
Self-defence in the context of war refers to nations, not individuals. Hence it has nothing to do with the moral justification of war, nor with the ‘license to kill’ that enemy combatants supposedly hold. Nations claim that license to kill when they enter war. They do not symmetrically transfer individual licenses; they employ it in the most convenient way to win the war. Self-defence and reciprocity of risk instead serve as manly excuses to keep the traditional concept of war and to maintain the myth of masculinity as the necessary paradigm of war and warriors alike. We could even say that the myth is required to maintain war as a social order. Current laws do not prohibit the deployment of robots on the battlefield, hence no changes are needed; but as long as macho ideals are embedded in war, it is highly unlikely that robotic technology on the battlefield will be properly assessed, weighing its benefits against its risks.
Braudy, L. (2005) From chivalry to terrorism: War and the changing nature of masculinity. New York: Knopf Doubleday Publishing Group.
Conrad, J. (2007) Heart of darkness. Coyote Canyon Press.
Dinstein, Y. (2001) War, aggression and self-defence. 3rd edn. United Kingdom: Cambridge University Press.
Hedges, C. (2003) War is a force that gives us meaning. New York: Knopf Doubleday Publishing Group.
Ignatieff, M. (2001) Virtual war: Kosovo and beyond. London: Vintage.
Johnson, J. T. (1984) Can modern war be just? New Haven: Yale University Press.
Singer, P. W. (2009) Wired for war: The robotics revolution and conflict in the 21st century. New York: Penguin Press.
Steele, B. J. and Heinze, E. A. (2014) ‘From Smart to Autonomous Weapons’, in Gentry, C. E. and Eckert, A. E. (eds.) The future of just war: New critical essays. United States: University of Georgia Press.
Vale, M. G. A. (1981) War and chivalry: Warfare and aristocratic culture in England, France, and Burgundy at the end of the middle ages. London: Gerald Duckworth & Co.
Bryson, J. J. (2012) ‘The Making of the EPSRC Principles of Robotics’, AISB Quarterly, (133).
Kahn, P. (2002) ‘The Paradox of Riskless Warfare’, Faculty Scholarship Series. Paper 326.
Sharkey, N. (2008) ‘Grounds for Discrimination: Autonomous Robot Weapons’, Rusi Defence Systems, pp. 86–89.
Geneva Conventions 1949.
Hague Convention (IV) Respecting the Laws and Customs of War on Land 1907.
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) 1977.
Reports and other sources
Human Rights Watch (2012) Losing Humanity: The Case against Killer Robots. Available at: https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots (Accessed: 20 December 2015).
International Committee of the Red Cross (1987) Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949. Edited by Yves Sandoz, Christophe Swinarski and Bruno Zimmermann. Geneva: Martinus Nijhoff.
Adichie, C. N. (2009) The danger of a single story. Available at: https://www.ted.com/talks/chimamanda_adichie_the_danger_of_a_single_story?language=en (Accessed: 21 December 2015).
Anderson, K. and Waxman, M. (2013) Killer robots and the laws of war. Available at: http://www.wsj.com/articles/SB10001424052702304655104579163361884479576 (Accessed: 17 December 2015).
Bernard, S. (2002) Did plato write: ‘Only the dead have seen the end of war’? Available at: http://plato-dialogues.org/faq/faq008.htm (Accessed: 20 December 2015).
Engineering and Physical Sciences Research Council (2010) Principles of robotics. Available at: https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ (Accessed: 20 December 2015).
Zernike, K. (2005) Detainees describe abuses by guard in Iraq prison. Available at: http://www.nytimes.com/2005/01/12/world/detainees-describe-abuses-by-guard-in-iraq-prison.html (Accessed: 21 December 2015).