Edited by

Gary Corn

Document Type

Article

Publication Date

4-2026

Abstract

This paper, written as a draft chapter for the Lieber Institute for Law and Land Warfare's forthcoming book International Law and Artificial Intelligence in Armed Conflict: The AI-Cyber Interplay, explores the international humanitarian law (IHL) implications of a specific subfield of AI: Generative AI (GenAI). This new and rapidly evolving technology does not merely analyze or classify data; it also generates original image, audio, and video content. Such synthetic content can be highly deceptive and manipulative, as in the case of so-called "deepfakes." Combined with other information and communications technologies (ICT) and AI capabilities, GenAI has the potential to transform information operations (IO) from a labor-intensive process of limited scope and reach into automated operations capable of significantly reshaping the information environment and deceiving intended and unintended audiences with unprecedented speed, scale, and sophistication. Employing this new technology, especially when combined with the distributive power of other ICTs, presents real but not yet fully understood risks of harm to those whom IHL seeks to protect. For example, the growing capacity for hyper-realistic simulation, psychological targeting, and mass dissemination at low cost lowers the barrier to belligerents spreading terror among civilians or inciting unlawful violence. Yet like IO itself, the use of GenAI as an IO capability is not per se unlawful; quite the opposite. As a non-forcible method of war, IO is an activity for which IHL leaves substantial leeway. Moreover, definitive legal analysis of GenAI-enhanced or GenAI-enabled IO is complicated by the rapidly evolving, non-monolithic nature of the technology and by the legal gray zones into which it is and will be deployed. How best to understand and manage these risks is the focus of this chapter.
