Human Control in Lethal Autonomous Weapons (Term Paper Sample)
Name
Instructor
Course
Date

Human Control in Lethal Autonomous Weapons

Introduction

The advent of automated technology has long been a subject of discussion, with different parties citing the ethical, legal, humanitarian, and operational issues that this technology poses. The debate has intensified with the emergence of lethal autonomous weapons systems (LAWS), which, unlike earlier automated systems, can independently apply computer algorithms in new environments to detect targets, employ weapons, and engage and destroy a target without human influence (Gill et al. 59-98). These weapons now carry the decision between life and death in combat and other military operations, a prospect that raises grave concerns because technology cannot apply reasonable judgment as humans do and is therefore indiscriminate. Some organizations, including the Campaign to Stop Killer Robots, advocate a complete ban on LAWS, while others, such as the United Nations, argue that a degree of human input should apply in the operation of these weapons to help discriminate between civilians and combatants and preserve international human rights. Although various governments, including the United States and the United Kingdom, have made clear through their defense ministries that lethal autonomous weapons require some degree of human intervention, the extent of that involvement remains debatable. This paper analyzes the literature on human control as a public policy in the use of lethal autonomous weapons, with specific emphasis on its advantages and disadvantages and on whether this policy is the best way to regulate LAWS.

Human Control

Human beings are not perfect, especially when making split-second decisions or processing vast amounts of information simultaneously.
Still, humans possess meta-cognition, empathy, and logical reasoning that set them apart from computerized machines such as LAWS. This reasoning forms the basis of the human-control public policy, which calls for a certain degree of human judgment or influence over autonomous weapons. The term was first used officially in the 2012 US autonomous weapons policy, which states that lethal autonomous weapons should be designed to allow operators and commanders to exercise necessary human judgment and control over the use of force (Firlej and Taeihagh). However, the term "human control" is sometimes perceived as too restrictive, threatening to strip lethal autonomous weapons of their independence and make them objects of human manipulation, thereby defeating the purpose of automation. According to Firlej and Taeihagh, human control can be either direct or indirect, depending on certain human factors. Direct human control involves the dependency of autonomous weapons on human input, including whether human judgment can override the weapon's decision (Firlej and Taeihagh). Also known as "finger on the button" control, this type applies directly in real-time situations, such as overriding a termination decision made by the weapon. Here, human and computer capabilities interact throughout the stages of an operation. Indirect human control, on the other hand, concerns the level of power or trust placed in lethal autonomous weapons, as determined during design. Design control includes making the system reliable and predictable and allowing human judgment at different stages. This control also involves letting the weapons system be guided by a human in ethical situations, such as when an accident is unavoidable (Ekelhof). Determining the appropriate level of human control, whether direct or indirect, is complex and debatable, but it should rest on what computers can do better than humans and what humans can do better than computers.
As Santoni de Sio and van den Hoven explain, the theory of human control in LAWS calls for letting go of mythical dreams of full autonomy and focusing instead on the design and deployment challenges of 21st-century warfare (Santoni de Sio and van den Hoven). For instance, while computers are better at calculation, processing large volumes of information, rapid response, and routine tasks, humans can reason deliberately, identify novel patterns, apply meta-cognition, and reason inductively. Human control is therefore a partnership between computerized weapons systems and human judgment, aimed at promoting effective combat while ensuring a more humane impact.

Benefits and Doubts on the Subject of Human Control: Literature Analysis

Human control in lethal autonomous weapons remains a contested topic owing to the benefits and doubts raised by different researchers and stakeholders. According to research, direct human control is limited compared to indirect control, which is established during weapon design (Firlej and Taeihagh). The authors further note that if an operator can intervene only in real time, such as by pressing a button to engage or disengage, then the operator's power is limited. The main locus of human control lies in design, because that is where the trustworthiness of a lethal autonomous weapon is determined. As the article indicates, trust is a core factor in combat, and indirect human control therefore helps build trust in the weapons. Direct human control, on the other hand, is beneficial for bringing the humanitarian factor into military operations. For instance, a human can distinguish a civilian from a combatant or terrorist and guide the weapon to execute operations accordingly. In this case, the human can override the weapon's decision, acting as last-resort insurance against mistakes, an arrangement also known as "human-on-the-loop."
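The "human-on-the-loop" arrangement described above can be illustrated in code. The sketch below is purely hypothetical and drawn from no real weapons system; every name (`ProposedEngagement`, `human_on_the_loop`, `cautious_supervisor`) is invented for illustration. It shows only the control-flow idea: the automated system proposes an engagement, and a human reviewer may veto it before any force is used, regardless of machine confidence.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedEngagement:
    """An engagement proposed by the automated system (illustrative only)."""
    target_id: str
    classified_as: str   # onboard classification, e.g. "combatant" or "civilian"
    confidence: float    # classifier confidence in [0.0, 1.0]

def human_on_the_loop(proposal: ProposedEngagement,
                      human_review: Callable[[ProposedEngagement], bool]) -> str:
    """Return 'engage' only if the automated proposal survives human review.

    The human acts as last-resort insurance: any veto (review returning
    False) aborts the engagement, no matter how confident the machine is.
    """
    return "engage" if human_review(proposal) else "abort"

# A supervisor who vetoes anything not confidently classified as a combatant.
def cautious_supervisor(p: ProposedEngagement) -> bool:
    return p.classified_as == "combatant" and p.confidence > 0.95

print(human_on_the_loop(ProposedEngagement("T-1", "combatant", 0.99),
                        cautious_supervisor))  # engage
print(human_on_the_loop(ProposedEngagement("T-2", "civilian", 0.99),
                        cautious_supervisor))  # abort
```

The key design point, matching the text, is that the veto sits outside the automated decision path: the machine cannot engage unless the human callback affirms, which is precisely what makes the human's role "insurance" rather than mere observation.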
According to the authors, then, both forms of human control offer a degree of predictability, reliability, and insurance in the use of lethal autonomous weapons. Yet while human control may insure against the limitations of autonomous weapons technology, it also reintroduces human error. Bills argues that human control brings emotions, biases, self-interest, and many other human limitations into the process, undermining the technology's advantages. He explains that humans are likely to let fear and confusion affect their decision-making in the face of danger or dilemma, which would not happen if LAWS were left free of human interference (Bills). Under direct human control, moreover, it is easy for human bias to enter the identification and engagement of targets. For instance, humans might hold the perception that civilians will run when confronted in war; such bias could lead to the killing of civilians who, for whatever reason, do not flee an exchange of fire. The author also highlights how human control limits force multiplication and force projection, because humans must remain near the weapons to monitor and judge their actions, which results in more casualties and less projected force in combat. At the same time, human control in lethal autonomous weapons helps exploit the strengths, and compensate for the weaknesses, of both the weapons and the humans. Sharkey, in chapter two of the book Autonomous Weapons Systems: Law, Ethics, Policy, agrees that human control is necessary as a supervisory mechanism to mitigate the inadequacies of LAWS (Bhuta and Sharkey 176-208). More specifically, the chapter highlights how human control helps ensure humanitarian considerations. Rather than designing lethal autonomous weapons to kill and destroy targets more effectively, these weapons should be designed with humanitarian considerations in mind (Bhuta and Sharkey 176-208).
According to the author, LAWS should be designed to eliminate civilian casualties and to encourage combatants to surrender rather than terminating everyone. This level of sophistication can be achieved only by incorporating human control, particularly at the design stage. Here, the primary advantage of human control is that it brings humanitarian considerations into combat. It also helps establish responsibility and legality in the use of the weapons: if LAWS are left unsupervised, it will be difficult to hold anyone responsible for errors, and the absence of human influence makes it hard to ensure the weapons' legality, including the enforcement of policies governing their operation. Human control is, moreover, a broad concept; terms such as "meaningful control," "effective control," and "necessary human judgment" are all used to describe the level of human control required in lethal autonomous weapons. Lewis argues in his report that human control should not be limited to the moment the trigger is pulled but should be incorporated across military operations. The report presents the International Committee of the Red Cross (ICRC) and ICW provisions on required human control, which should begin during the creation and testing of the weapons, known as the development stage (Lewis). Human control should also feature in the activation phase and, finally, in the operational stage, where the decision to attack is made. However, the report also critically cites combat incidents in which human error led to civilian casualties. For example, it recounts a US soldier who attacked and destroyed a convoy of women and children in Afghanistan, believing it to be a militants' convoy. This limitation shows that although human involvement in autonomy is beneficial on ethical, humanitarian, and legal fronts, it does not eliminate such judgment errors.
For this reason, the report suggests that autonomous weapons should not rely on human control only at the execution stage but should be designed with humanity in mind. Amoroso and Tamburrini, in their study, agree with the authors above that human control is vital to resolving the ethical and legal concerns about lethal autonomous weapons. As the authors point out, meaningful human control over computer...