By Daw Hla Myet Chell (International Law)
THE integration of Artificial Intelligence (AI) into modern warfare has revolutionized military strategies, introducing unprecedented capabilities while presenting profound legal and ethical challenges. From autonomous drones to AI-driven cyber defence systems, these advancements have outpaced existing international legal frameworks, creating a pressing need to address gaps and ambiguities in laws governing AI in conflict. This article explores the use of AI in military operations, focusing on its application in the Middle East and Ukraine, and examines the challenges it presents under international law.
Evolving Legal Frameworks: Old Principles Meet New Technologies
The rules governing warfare, such as the Geneva Conventions and the Hague Conventions, are grounded in principles like proportionality, distinction, and necessity. However, these frameworks were crafted in an era when AI was not a consideration, leaving significant gaps in addressing its unique challenges.
The emergence of autonomous weapons systems (AWS) poses one of the most significant tests for international law. These systems operate with minimal human intervention, blurring lines of accountability and complicating the enforcement of principles like distinction and proportionality. AI’s ability to independently analyze vast data sets and execute operations challenges the traditional understanding of state and individual accountability under international humanitarian law (IHL).
Innovations and Implications
AI’s military applications are prominently displayed in the Middle East and Ukraine, regions marked by complex and dynamic conflicts.
In the Middle East, autonomous drones and AI-powered targeting systems are central to military operations. For instance, Israel employs cutting-edge AI programs such as “Daddy’s Home”, “Gospel”, and “Lavender”. These systems integrate machine learning to enhance target identification, surveillance, and missile defence, with “Lavender” playing a crucial role in the Iron Dome’s precision defence capabilities.
“Daddy’s Home”
“Daddy’s Home” is a high-precision targeting system that utilizes advanced machine learning algorithms to assist in identifying and prioritizing military targets. By analyzing vast datasets from multiple intelligence sources, including satellite imagery, drone feeds, and human intelligence, “Daddy’s Home” offers real-time insights to decision-makers. It is designed to minimize civilian casualties and ensure compliance with IHL. The system is particularly effective in urban warfare settings, where distinguishing between combatants and civilians is critical. However, critics have raised concerns about the system’s reliance on data accuracy, as errors in input data could lead to unintended consequences.
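How such a data-fusion pipeline might work is easiest to see in miniature. The Python sketch below is purely illustrative: the source names, weights, and review threshold are all invented for this article, since the actual design of systems like “Daddy’s Home” is not public. It fuses per-source confidence scores into a single ranking and defers low-confidence candidates to a human analyst.

    from dataclasses import dataclass

    # Hypothetical fusion weights per intelligence source; real systems
    # and their parameters are classified, so these values are invented.
    SOURCE_WEIGHTS = {"satellite": 0.40, "drone": 0.35, "humint": 0.25}
    REVIEW_THRESHOLD = 0.8  # below this fused score, defer to a human analyst

    @dataclass
    class Candidate:
        name: str
        scores: dict  # per-source confidence in [0, 1]

    def fused_score(candidate: Candidate) -> float:
        """Weighted average of the per-source confidence scores."""
        return sum(weight * candidate.scores.get(source, 0.0)
                   for source, weight in SOURCE_WEIGHTS.items())

    def triage(candidates: list) -> tuple:
        """Rank candidates, then split them into an auto-prioritized queue
        and a queue that must be reviewed by a human analyst."""
        ranked = sorted(candidates, key=fused_score, reverse=True)
        prioritized = [c for c in ranked if fused_score(c) >= REVIEW_THRESHOLD]
        needs_review = [c for c in ranked if fused_score(c) < REVIEW_THRESHOLD]
        return prioritized, needs_review

Even this toy version makes the critics’ concern concrete: a single corrupted input score silently shifts the fused ranking, so the quality of every downstream decision is capped by the quality of the data feeding it.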
“Gospel”
The “Gospel” program focuses on real-time surveillance and intelligence gathering. Using sophisticated AI algorithms, it processes video and sensor data from drones, satellites, and ground-based sensors. “Gospel” excels in pattern recognition, enabling it to detect unusual movements, potential threats, or hidden combatants that may escape human observation. This capability allows military commanders to act on intelligence with speed and precision. However, the autonomy of such systems raises ethical and legal questions, particularly regarding their use in environments where accurate differentiation between civilian and military targets is essential.
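Pattern recognition of this kind often reduces to flagging behaviour that departs sharply from recent history. The toy Python function below is a stand-in rather than a description of “Gospel” itself: it flags moments when a tracked object’s speed deviates from its recent average by more than a chosen number of standard deviations, with the window size and threshold as arbitrary illustrative values.

    import statistics

    def unusual_movements(speeds: list, window: int = 20,
                          z_threshold: float = 3.0) -> list:
        """Flag indices where a tracked object's speed deviates sharply
        from its own recent history. A toy stand-in for the pattern
        recognition the article attributes to AI surveillance systems."""
        flagged = []
        for i in range(window, len(speeds)):
            history = speeds[i - window:i]
            mean = statistics.mean(history)
            spread = statistics.stdev(history) or 1e-9  # avoid division by zero
            if abs(speeds[i] - mean) / spread > z_threshold:
                flagged.append(i)
        return flagged

The legal difficulty is visible here too: the function knows nothing about why a movement is unusual, which is precisely the contextual judgment that distinction under IHL requires.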
“Lavender”
The “Lavender” project integrates AI with Israel’s renowned Iron Dome missile defence system, adding an extra layer of predictive analytics to existing capabilities. “Lavender” enhances the system’s ability to evaluate threats by considering multiple factors such as the trajectory, size, and payload of incoming projectiles. This rapid analysis enables the Iron Dome to prioritize which missiles to intercept, ensuring optimal resource use. In addition to missile defence, “Lavender” is applied in offensive operations, leveraging AI to determine the most effective strike points. The speed and accuracy of this system are unparalleled, but it also faces scrutiny for the potential lack of human oversight in critical decisions.
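The prioritization step described here is, at its core, a scoring-and-selection problem: rank incoming threats and spend scarce interceptors on the highest-ranked ones. The Python sketch below illustrates only that shape; the scoring formula, field names, and inputs are invented, as the real Iron Dome logic is not public.

    import heapq
    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: int
        time_to_impact: float    # seconds until predicted impact
        predicted_harm: float    # 0..1, estimated from size and payload
        populated_area: bool     # does the trajectory end in a populated area?

    def threat_score(track: Track) -> float:
        """Invented scoring rule: sooner impacts, larger payloads, and
        populated target areas all raise the score."""
        urgency = 1.0 / max(track.time_to_impact, 1.0)
        exposure = 1.0 if track.populated_area else 0.1
        return urgency * exposure * track.predicted_harm

    def assign_interceptors(tracks: list, interceptors: int) -> list:
        """Engage only the highest-scoring threats when interceptors are scarce."""
        return heapq.nlargest(interceptors, tracks, key=threat_score)

Selection of this kind happens in seconds, which is exactly why critics question whether a human can meaningfully oversee each individual decision.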
These programs showcase Israel’s commitment to leveraging AI for military advantage while adhering to the principles of precision and proportionality as mandated by international law. Nevertheless, they raise broader concerns about transparency, accountability, and the ethical implications of AI-driven warfare.
While these technologies aim to minimize civilian casualties, their autonomous nature raises questions about compliance with IHL principles, especially in distinguishing between combatants and civilians.
Similarly, in Ukraine, AI systems are employed to counter cyber threats, improve surveillance, and enhance missile defence capabilities. Ukraine’s innovative use of AI to predict and respond to Russian military strategies showcases the technology’s potential for real-time decision-making. However, such systems, if not properly regulated, risk violating IHL’s prohibition of indiscriminate force, particularly when algorithms lack contextual judgment.
Who is Liable?
Determining responsibility for AI-driven military actions is a critical legal and ethical challenge. Under current IHL, states are accountable for their armed forces’ conduct. However, when lethal decision-making is delegated to machines, it becomes difficult to attribute blame for unlawful acts, such as targeting errors or civilian casualties.
The lack of clear accountability mechanisms undermines the very foundation of IHL, necessitating urgent legal reforms. Proposals such as the concept of “meaningful human control” over AWS emphasize the need for human oversight in all AI-driven military actions to ensure ethical and legal compliance.
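What “meaningful human control” could look like in software is worth making concrete. In the hypothetical Python sketch below (the function names and data shape are invented for illustration), the system may recommend an engagement but cannot act on it without explicit operator approval; this is one possible reading of the principle, not an established standard implementation.

    from enum import Enum

    class Decision(Enum):
        APPROVED = "approved"
        REJECTED = "rejected"

    def request_human_approval(recommendation: dict) -> Decision:
        """Placeholder for the operator's review step. A real interface would
        present the evidence and confidence behind the recommendation."""
        answer = input(f"Approve engagement of {recommendation['target']}? [y/N] ")
        return Decision.APPROVED if answer.strip().lower() == "y" else Decision.REJECTED

    def engage(recommendation: dict) -> None:
        # The system may recommend, but it never acts without explicit,
        # informed human authorization.
        if request_human_approval(recommendation) is Decision.APPROVED:
            print(f"Engagement of {recommendation['target']} authorized by operator.")
        else:
            print("Recommendation logged and discarded; no action taken.")

The design choice matters: the human is a required gate in the control flow, not an observer who can merely veto after the fact.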
Bridging the Gap: Proposals for Regulation
To address the challenges posed by AI in warfare, international law must evolve to balance technological innovation with ethical accountability. Possible approaches include:
• Updating Existing Treaties: Expanding the scope of treaties like the Convention on Certain Conventional Weapons (CCW) to include specific provisions on AI technologies.
• New Legal Instruments: Crafting treaties that regulate autonomous decision-making, mandate human oversight, and limit the deployment of AWS in sensitive conflict zones.
• Regulation of Private Sector Involvement: Establishing guidelines for private companies developing military AI technologies to ensure compliance with IHL principles.
A Call for Action
AI’s integration into military operations offers both unparalleled advantages and complex challenges. While its potential to enhance precision and reduce human casualties is undeniable, its autonomous nature raises ethical and legal concerns that current frameworks are ill-equipped to address.
Efforts by organizations like the United Nations and the International Committee of the Red Cross to address these issues demonstrate global recognition of the urgency of regulating AI in warfare.
For AI to be used responsibly in warfare, the international community must urgently establish comprehensive legal standards that uphold the principles of accountability, proportionality, and distinction. Only through proactive regulation can the global community ensure that AI serves as a tool for enhancing security rather than exacerbating the horrors of war.
References
1. United Nations. (2018). International Law and the Use of Force. Retrieved from https://www.un.org
2. Schmitt, M. N. (2013). The Regulation of Autonomous Weapons in Armed Conflict. International Law Studies, 89(1), 87-108.
3. Sharkey, N. (2018). The Ethics of Autonomous Weapons Systems. International Review of the Red Cross, 100(909), 387-406. https://doi.org/10.1017/S1816383119000325
4. Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.
5. Cummings, M. L. (2017). Artificial Intelligence and the Future of Warfare. Chatham House Report. Retrieved from https://www.chathamhouse.org
6. International Committee of the Red Cross (ICRC). (2019). Autonomous Weapon Systems and International Humanitarian Law: A View from the ICRC. Retrieved from https://www.icrc.org
7. Binns, L. (2019). AI in Warfare: The Global Arms Race and Ethical Dilemmas. Journal of Strategic Studies, 42(5), 640-658.
8. Elbit Systems. (2022). Iron Dome and AI-Powered Targeting Systems. Retrieved from https://www.elbitsystems.com
9. Israel Defense Forces (IDF). (2022). Use of AI in Israel’s Military Operations. Retrieved from https://www.idf.il
10. United States Department of Defense. (2020). Artificial Intelligence Strategy. Retrieved from https://www.defense.gov