Prioritizing AI Risk Analysis through DOE’s AI Risk Management Playbook

The U.S. Department of Energy’s (DOE) AI Risk Management Playbook (AI RMP) examines 138 risks and provides recommended mitigations for each. It predates the NIST AI Risk Management Framework (whose second draft was released for public comment on August 18, 2022).

The AI development core principles I identified in my post The NIST AI Risk Management Framework and AI Classification are referred to in the AI RMP as “Primary Principles”; they are listed below. (NIST calls its core principles “Guiding Principles.”)

Some of the AI RMP primary principles map directly (green) to the core principles, others map partially (orange), and others do not map at all (red). The list below notes the relationship between them, along with the number of risks each principle addresses.

  1. Accountable (14)
  2. Accurate, reliable, and effective (46)
  3. Lawful and respectful of our Nation’s values (15)
  4. Purposeful and performance-driven (11)
  5. Regularly monitored (8)
  6. Responsible and traceable (8)
  7. Safe, secure, and resilient (25)
  8. Transparent (5)
  9. Understandable (6)

It is immediately evident that items 2 and 7 receive the most attention. This does not necessarily mean that the principles with fewer risks are less important or less risky, but it does reveal how the DOE thinks about AI risk. Developers should factor this emphasis into their design considerations, and for lawyers it can serve as a helpful guide for prioritizing risk analysis in AI deals.
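As a quick check on the emphasis described above, the per-principle counts sum to the playbook’s 138 risks, and principles 2 and 7 together account for just over half of them. A minimal Python sketch (the counts are taken from the list above; the dictionary is my own illustration, not part of the playbook):

```python
# Risk counts per AI RMP primary principle, copied from the list above
counts = {
    "Accountable": 14,
    "Accurate, reliable, and effective": 46,
    "Lawful and respectful of our Nation's values": 15,
    "Purposeful and performance-driven": 11,
    "Regularly monitored": 8,
    "Responsible and traceable": 8,
    "Safe, secure, and resilient": 25,
    "Transparent": 5,
    "Understandable": 6,
}

total = sum(counts.values())  # matches the 138 risks the playbook examines
top_two = (counts["Accurate, reliable, and effective"]
           + counts["Safe, secure, and resilient"])

print(total)                     # 138
print(f"{top_two / total:.0%}")  # 51%
```

That concentration — two of nine principles covering about half the catalogued risks — is what makes the count distribution a useful prioritization signal.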