Implementing Policies to Ensure the Responsible Use of Artificial Intelligence and Autonomy for the U.S. Department of Defense
The United States Department of Defense (DoD) has adopted many policies and implemented a wide range of practices across the Department to encourage responsible behavior in the development, deployment, and use of artificial intelligence (AI), autonomy, and other technologies. For example, in 2022, the DoD signed and released the Responsible AI (RAI) Strategy and Implementation Pathway, which outlines the Department’s strategic approach for operationalizing the DoD AI Ethical Principles. In January 2023, the Department updated its policy on autonomy in weapon systems, DoD Directive 3000.09, reflecting the DoD’s strong and continuing commitment to being a transparent global leader in establishing responsible policies regarding military uses of autonomous systems and AI.
In the first portion of this session, the DoD will provide an overview of its RAI strategy, discuss implementation progress, share lessons learned from its experience developing and implementing responsible AI policies that may be useful for other states considering or developing their own policies, and highlight selected efforts towards operationalization. The second portion of this break-out session will cover the Department’s recent update of DoD Directive 3000.09, Autonomy in Weapon Systems. It will include an overview of the requirements established in the Directive and the most relevant changes made to the policy.
Speakers
Diane Staheli, Chief of Responsible Artificial Intelligence, Chief Digital and Artificial Intelligence Office (CDAO)
Dr. Michael Horowitz, Director of the Emerging Capabilities Policy Office in the Office of the Under Secretary of Defense for Policy