Reports detailing technological breakthroughs that may seem like science fiction are all around us. Can the law of armed conflict, a sometimes nebulous body of international law developed incrementally over centuries through both custom and treaty, keep up with the unprecedented pace of development in military technology and weaponry? Global powers are infiltrating the civilian electrical power grids and other critical infrastructure of potential adversaries, perhaps in part to prepare opportunities to deploy cyber weapons in a future armed conflict. Artificial intelligence algorithms designed to engage in air warfare have begun competing with, and may one day surpass, human military pilots. Weapons that have existed for decades are rapidly gaining more effective capabilities; missiles that travel far faster than sound, for example, may require defense systems that detect and respond more quickly than humans can.

For the law of armed conflict, or LOAC, to remain operationally relevant and effective in its purpose, military leaders must be thoughtful about how the law will apply to these new weapons and capabilities. For example, consider a missile defense system designed to use AI to detect and respond to hypersonic missile attacks. Many potential legal issues arise in this example. To examine just one, the use of such a system must comply with the LOAC’s foundational principle of distinction, which requires the parties to an armed conflict to “distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly direct their operations only against military objectives.” Implicit in this obligation is the prohibition against indiscriminate attacks, which includes, among other things, attacks that are incapable of being directed at a specific military objective.
Thus, in deploying a missile defense system that uses artificial intelligence, military leaders must evaluate the accuracy and reliability of the technology. For example, the LOAC requires that the AI be capable of distinguishing between an incoming missile and civilian objects, such as commercial aircraft. Furthermore, the AI must demonstrate that capability consistently and reliably.

Important questions also arise in this context about human oversight of the development and deployment of such a system. Would it be enough that the AI demonstrated reliable compliance with the principle of distinction during development, or must humans remain involved in monitoring the system after deployment? If the latter, what level of human involvement would the law require? For example, would the deployment of such a system comply with the principle of distinction if the only human involvement occurred during post-strike assessments? Or, alternatively, does the law require that a human be “in the loop” throughout the detection and response phases of the system’s defense measures?
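To make the structural difference between these two oversight models concrete, consider the following minimal sketch. It is purely illustrative: the names (`Track`, `decide_engagement`, `request_human_confirmation`) and the 0.99 confidence threshold are hypothetical, not drawn from any fielded system, and a real defense system would involve far more than a single gating function.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Label(Enum):
    HOSTILE_MISSILE = auto()
    CIVILIAN_OBJECT = auto()   # e.g., a commercial aircraft
    UNKNOWN = auto()


@dataclass
class Track:
    """A hypothetical radar track under evaluation."""
    track_id: str
    label: Label        # the AI's classification of the track
    confidence: float   # the AI's confidence in that label, 0.0-1.0


def request_human_confirmation(track: Track) -> bool:
    """Stand-in for an operator's release decision.

    In a human-in-the-loop design, engagement blocks on this call;
    in a post-strike-review design, this gate does not exist and
    oversight happens only after the fact. This stub conservatively
    withholds authorization.
    """
    return False


def decide_engagement(track: Track,
                      human_in_the_loop: bool,
                      release_threshold: float = 0.99) -> bool:
    """Return True only if the track may be engaged."""
    # Distinction: never engage anything not classified as hostile.
    if track.label is not Label.HOSTILE_MISSILE:
        return False
    # Reliability: withhold engagement on low-confidence classifications.
    if track.confidence < release_threshold:
        return False
    # Oversight: the legal question above is whether this branch
    # may ever be skipped during the detection-and-response phases.
    if human_in_the_loop:
        return request_human_confirmation(track)
    return True
```

Whether the `human_in_the_loop` branch may lawfully be disabled, and what post-deployment monitoring must accompany either configuration, are precisely the open questions the principle of distinction raises here.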
Notably, similar questions arise when evaluating such a system from the perspective of military operational effectiveness. Military leaders will decline to adopt and deploy technologies and capabilities that do not respond accurately and reliably during combat. Aside from the obvious and tragic potential humanitarian consequences of fielding an unreliable system, operational effectiveness demands that weapons and tools reliably contribute to achieving military objectives.