10:30 am - 11:45 am (CET)
Thursday 16 February 2023
Breakout session
Yangtze 2
Leiden University

How to regulate irresponsible AI in cyber operations

Determining what responsible AI in the military domain is, by default, also determines what ‘irresponsible AI’ might be. Which (possible) uses of AI in military cyber operations would be considered outside the scope of legitimate use, and is it feasible to develop a framework of responsible state behavior around the spread and use of such AI applications?

However, the nature of the ‘weapon’ (not a traditional weapon, but lines of code) and the secretive character of the market for zero-days and exploits make the governance of AI-enabled military applications challenging. So what are possible governance solutions to this problem? Do export control regimes that cover digital technology – such as the Wassenaar Arrangement – fit here and, if so, how? How can arms control regimes be adapted to the problem of AI-enabled offensive cyber operations?


Speakers

  • Kerstin Vignard
    Research Scholar for Science Diplomacy and Tech Policy in the Institute for Assured Autonomy, Senior Analyst at the Johns Hopkins University Applied Physics Lab
  • Marietje Schaake
    International Policy Director at Stanford University Cyber Policy Center
  • Dennis Broeders
    Professor of Global Security and Technology at Leiden University and Senior Fellow of The Hague Program on International Cyber Security
  • Tal Mimran
    Academic Coordinator of the International Law Forum and Research Director at the Federmann Cyber Security Research Center at the Hebrew University of Jerusalem
