How to regulate irresponsible AI in cyber operations
Determining what constitutes responsible AI in the military domain also determines, by default, what ‘irresponsible AI’ might be. What (possible) uses of AI in military cyber operations would be considered outside the scope of legitimate use, and is it possible or feasible to develop a framework of responsible state behavior around the spread and use of such AI applications?
However, the nature of the ‘weapon’ (not a traditional weapon, but lines of code) and the secretive character of the market for zero-days and exploits make the governance of AI-enabled military applications challenging. What, then, are possible governance solutions to this problem? Are export control regimes that cover digital technology – such as the Wassenaar Arrangement – a fit and, if so, how? How can arms control regimes be adapted to the problem of AI-enabled offensive cyber operations?