Ethics of Autonomous Weapons

  • Category
    Ethics
  • Published
    7th Jun, 2021

Context

The French defense ethics committee has identified guiding principles and made recommendations relating to methodology, research, design, use and training concerning the development of techniques contributing to autonomy in weapon systems. The proposal is being scrutinised by the Ministry.

Background

Automated devices have long been used in weapon systems employed in military operations on land, at sea and in the air. However, the concept of "autonomous weapon systems", currently invoked in civil society to condemn the very principle of such weapons and, in certain international fora, to restrict their use, evokes in all cases the idea of a clear disruption in both technology and ethics.

Due to the prospects offered by the development of robotics, the use of lethal weapon systems described as "autonomous" raises ethical questions linked to the very foundations of the military function:

  • How can the operational superiority of our armed forces be maintained without losing our values?
  • What role should be reserved for humans in warfare?
  • How do we preserve the moral integrity of combatants?
  • To what extent will humans be responsible in the conduct of war?

Analysis

What is a Lethal Autonomous Weapon System (LAWS)?

  • There is no agreed definition of lethal autonomous weapon systems that is used in international fora.
  • However, it can be understood as a lethal weapon system programmed to be capable of changing its own rules of operation, particularly as regards target engagement, beyond a determined framework of use, and capable of computing decisions to perform actions without any assessment of the situation by human military command.

What is a Partially Autonomous Lethal Weapon System (PALWS)?

  • A lethal weapon system integrating automation and software:
    • to which, after assessing the situation and under their responsibility, the military command can assign the computation and execution of tasks related to critical functions such as identification, classification, interception and engagement, within time and space limits and under certain conditions;
    • which includes technical safeguards or intrinsic characteristics to prevent failures, misuse and relinquishment by the command of two vital duties, namely situation assessment and reporting.

Reasons for renouncing the use of LAWS

Use of LAWS would:

  • break the chain of command;
  • run counter to the constitutional principle of having liberty of action to dispose of the armed forces;
  • not provide any assurance as to compliance with the principles of international humanitarian law (IHL);
  • be contrary to our military ethics and the fundamental commitments made by Indian soldiers, i.e. honour, dignity, controlled use of force and humanity.

What are the associated ethical issues?

  • Moral acceptability of using force without human intervention
    • Confidence in our armed actions could be undermined by the feeling of being watched by weapons, by fear for safety, and by concerns about the absence of humans in the loop or the risk of a technological error.
  • Risks of Machine Learning
    • A system designed to identify, designate or even neutralise targets without being able to provide intelligible explanations for its proposals or choices could be regarded with mistrust.
    • Without appropriate control over what the system "learns", it could lead to unexpected and unwanted behaviour outside the intended framework of use.
  • Risk of blurring responsibility in the event of an incident
    • In the event of an incident involving a LAWS causing unacceptable damage or unwanted firing, establishing responsibilities by means of an ex-post investigation could prove difficult.
    • It also raises questions: does responsibility lie with the operational chain of command, from the decision-maker to the soldier who used the system? Who decided to use this system in this environment? Were they aware of the potential for error? Was the deployment compliant with doctrine? Were the risks known, and were they documented during design?
  • Vulnerabilities induced by Digital Technology
    • As with other weapon systems, governments and other actors are acquiring offensive cyber capabilities with which they could take control of a system, alter its integrity, or change its targeting functions. The lack of ultimate human control over the open-fire function could facilitate such diversion.
  • Exogenous Risks
    • There is a risk of civil society rejecting the PALWS employment framework for philosophical or religious reasons.
    • There is a permanent risk of proliferation for this type of weapon, as for all others, requiring control and regulation.

What should be the guiding principles for the use of these weapons?

  • The risks of alteration in human control and the acceptability of assigning use of force to a machine should be systematically assessed during research, design, development and use of PALWS.
  • The consequences of lethal actions carried out by a PALWS must be systematically evaluated by the command. In particular, only the chain of command shall have authority to change the targets of a mission in progress or to cancel the mission.
  • The command should define a framework to transpose doctrine, i.e. targets to be engaged, space and time limits, constraints and rules of engagement, for each mission performed by a PALWS. A PALWS should never be operated without an employment framework and should never have the capacity to depart from it without intervention by the chain of command.
  • In any urgent operational situation, the chain of command must be alerted and must explicitly validate any new PALWS employment framework.
  • A PALWS should not be enabled to assign to another PALWS a mission that departs from the initial framework without prior validation by the chain of command.
  • The conditions under which continuous machine learning during a mission can be implemented for on-line computation of new tasks should be clearly specified.
  • When drafting doctrine on the use of weapon systems, appropriate information on automated decision-making functions should be provided. The conditions and limits of use should be clarified relying on technical and operational performance criteria and relevant ethical considerations.
  • The chain of responsibility involved in the definition, design, development, qualification and use of a PALWS should be formally defined in order to clearly identify the respective responsibilities of all the parties involved.

Conclusion

India should foster research in the fields of defence artificial intelligence and weapon systems automation for several reasons: first, to avoid the country losing ground in the scientific and technological fields; second, to counter enemy development of LAWS; and finally, to be able to defend ourselves against this type of weapon in the likely event of their use by an enemy State or terrorist group against our troops or population. Such research must be governed by a strict ethical and legal framework and be conducted in compliance with legal review mechanisms.
