Military AI and X-Risk
Author: Natasha Karner
Summary: This paper explores how risks from Artificial Intelligence (AI) discussed in the existential risk (or x-risk) community can be identified in the military domain. Whilst extreme risks to humanity posed by AI are often discussed in x-risk or Global Catastrophic Risk (GCR) literature, this piece seeks to draw attention to their application in the military domain, a context where risks of human harm inherently exist. In particular, this paper utilises the case study of the application of AI to Autonomous Weapons Systems (AWS). Conversely, whilst advocacy groups and researchers working on AWS frequently discuss how fully autonomous weapons, or “killer robots”, will fundamentally change humanity, there is little convergence with x-risk materials. This paper is a humble attempt to connect these two communities. For its discussion, this paper offers three considerations on AI risk and AWS: misalignment, malevolence, and misperception. It also incorporates recent developments, such as AI and AWS applications in current conflicts. Given the limited time and scope of this project, this piece should be considered an “introduction” to some of these ideas, with the aim of further expansion in future works. Overall, it is hoped that this piece inspires further discussion and collaboration on the topic of AI x-risk and military applications of AI.