How to regulate irresponsible AI in cyber operations
Determining what responsible AI in the military domain is also, by default, determines what ‘irresponsible AI’ might be. Which (possible) uses of AI in military cyber operations would be considered outside the scope of legitimate use, and is it possible or feasible to develop a framework of responsible state behavior around the spread and use of such AI applications?
However, the nature of the ‘weapon’ (not a traditional weapon, but lines of code) and the secretive character of the market for zero-days and exploits make the governance of AI-enabled military applications challenging. What, then, are possible governance solutions for this problem? Are export control regimes that deal with digital technology – such as the Wassenaar Arrangement – a fit and, if so, how? How can arms control regimes be adapted to the problem of AI-enabled offensive cyber operations?
Speakers
Kerstin Vignard, Research Scholar for Science Diplomacy and Tech Policy in the Institute for Assured Autonomy, Senior Analyst at the Johns Hopkins University Applied Physics Laboratory
Marietje Schaake, International Policy Director at the Stanford University Cyber Policy Center
Dennis Broeders, Professor of Global Security and Technology at Leiden University and Senior Fellow of The Hague Program on International Cyber Security
Tal Mimran, Academic Coordinator of the International Law Forum and Research Director at the Federmann Cyber Security Research Center at the Hebrew University of Jerusalem