How to Operationalize Responsible AI in the Military Domain
Responsible AI (RAI) is an emerging approach to AI governance that encompasses a range of normative tools, including principles and ethical risk assessment frameworks, to guide the lawful, safe, and ethical design, development, and use of AI. While it is widely debated as a fitting approach to AI governance, RAI remains a young and evolving field of research and practice, particularly in the defense sector, where only a handful of States and intergovernmental organizations have publicly adopted principles, standards, and/or risk assessment frameworks for military uses of AI technologies. As an increasing number of States embrace RAI, this roundtable discussion will examine what kinds of responsible AI tools are needed for the military domain and how they can be effectively and sustainably operationalized.