The AI Resilience Act
No, such an Act doesn’t exist. But it could. I’ll show you what I’m thinking about.
The EU has a new Cyber Resilience Act (CRA). As I was reading it, I thought, “we can learn from this and apply it to AI.” The next step was to distill what the CRA requires of manufacturers and place it in the context of AI. Below is a (very) rough draft of what an “AI Resilience Act” (AIRA) could look like.
But before we go further, a couple of preliminary notes:
- “Resilience” is one of 37 life cycle core principles. You can check out all the principles, but here’s what “resilience” is composed of: the application is failure-recovery capable, and the greater its ability to recover autonomously (i.e., without manual patching), the more resilient it is; the model resists attack vectors that pollute learning sets; it resists misinformation prompts; it maps to the reliability core principle; and it references ISO/IEC CD TS 8200. (I sketch what autonomous recovery could look like in code right after these notes.)
- AIRA makes the case for two things: a development permit (“permit” is, by the way, another life cycle core principle) and following the AI Data Stewardship Framework, or a similar framework. There’s a lot I have to say about the permit part, but I’ll limit it to this: a developer who does not comply with AIRA would have their permit suspended and, in more extreme cases, revoked. No permit, no development. Simple.
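To make the “failure recovery capable” idea from the first note concrete, here’s a minimal sketch of what autonomous recovery might look like inside an application. Everything here is my own illustration (the function names, the retry count, the fallback model are all hypothetical); the principle itself doesn’t prescribe an implementation.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("resilience")

def resilient_predict(
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
    prompt: str,
    max_retries: int = 2,
) -> str:
    """Try the primary model; recover autonomously if it fails.

    "Autonomous recovery" here means the application degrades to a
    known-good fallback on its own, without waiting for a manual patch.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return primary(prompt)
        except Exception as exc:  # a real system would catch narrower errors
            log.warning("primary model failed (attempt %d): %s", attempt, exc)
    # Retries exhausted: recover by degrading gracefully, not by crashing.
    log.info("recovering via fallback model")
    return fallback(prompt)
```

In a production system, the failure would also be recorded for the incident-reporting obligation we’ll get to below; the point here is only that recovery happens without manual intervention.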
And now, let’s take a look.
Developer’s Obligations Under the AIRA:
- Ensure Application Security: Developers are primarily responsible for designing and developing AI applications that meet the security requirements described in the “Security” life cycle core principle.
- AI Risk Assessment: Developers need to conduct a thorough AI risk assessment for each application and incorporate the findings into the design and development phases. The assessment should be documented and included in the application’s technical documentation (see the first sketch after this list).
- Vulnerability Management: Developers must establish and maintain effective vulnerability-handling processes throughout the application’s life cycle, including measures for identifying, assessing, and remediating vulnerabilities, potentially backed by coordinated vulnerability disclosure policies that encourage external parties to report what they find (see the second sketch after this list).
- Incident Reporting: Developers are required to promptly report actively exploited vulnerabilities and any security incidents affecting their applications to the Federal Trade Commission. This reporting enables the Commission to assess and address emerging threats.
- Information Provision: Developers need to provide clear and comprehensive information to end users, including instructions for secure operation of the application. (Note: This would blow up the current practice of drafting terms and conditions that disclaim anything in the known and unknown galaxies.)
- Corrective Actions: If an application fails to meet the essential security requirements, developers are obligated to take corrective action.
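On the AI Risk Assessment obligation: AIRA would require the findings to be documented and shipped with the technical documentation. Here’s a minimal sketch of what a machine-readable risk-assessment record could look like; the field names and the example finding are my own invention, not anything the AIRA (or the CRA) prescribes.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class RiskFinding:
    """One identified risk and how the design addresses it."""
    risk: str            # e.g., "training-set poisoning"
    severity: str        # e.g., "high" / "medium" / "low"
    mitigation: str      # how the design and development phases respond
    principle: str       # which life cycle core principle it maps to

@dataclass
class RiskAssessment:
    application: str
    assessed_on: str
    findings: list[RiskFinding] = field(default_factory=list)

    def to_documentation(self) -> str:
        """Serialize for inclusion in the technical documentation."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example:
assessment = RiskAssessment(
    application="support-chatbot",
    assessed_on=str(date.today()),
    findings=[
        RiskFinding(
            risk="training-set poisoning",
            severity="high",
            mitigation="provenance checks on all ingested data",
            principle="resilience",
        ),
    ],
)
print(assessment.to_documentation())
```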
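And on Vulnerability Management: the obligation covers identifying, assessing, and remediating vulnerabilities across the life cycle. One way to make that auditable is to track each vulnerability through an explicit set of states; the states and transitions below are illustrative, not mandated.

```python
from enum import Enum, auto

class VulnState(Enum):
    REPORTED = auto()     # e.g., arrived via a coordinated-disclosure channel
    ASSESSED = auto()     # severity and exploitability evaluated
    REMEDIATED = auto()   # fix shipped
    DISCLOSED = auto()    # reporter credited, advisory published

# Legal transitions in the handling process; anything else is a process error.
ALLOWED = {
    VulnState.REPORTED: {VulnState.ASSESSED},
    VulnState.ASSESSED: {VulnState.REMEDIATED},
    VulnState.REMEDIATED: {VulnState.DISCLOSED},
    VulnState.DISCLOSED: set(),
}

def advance(current: VulnState, target: VulnState) -> VulnState:
    """Move a vulnerability to the next state, rejecting illegal jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target

state = VulnState.REPORTED
state = advance(state, VulnState.ASSESSED)   # fine
# advance(state, VulnState.DISCLOSED)        # raises: must remediate first
```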
Every one of these obligations maps to multiple life cycle core principles. You already got an idea of how that works with the application security obligation, so let me offer just one more for you to consider: the AI Risk Assessment requirement maps to accountability, reliability, metrics, and transparency. (And that’s just a sampling; there are others.)
So, what do you think?