The European Union’s AI Bill: How It Syncs with XAI and Licensing

The EU’s recent AI bill (the proposed Artificial Intelligence Act) seeks to “turn Europe into the global hub of trustworthy Artificial Intelligence.” The bill sorts AI applications into four risk categories: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk.

This bill ties in with the AI application taxonomy and iterative liability concepts I discussed here. It also supports the case for independent XAI monitoring apps discussed in my post The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information. Furthermore, the bill’s application categories sync with the AI licensing concept (think: permit), which is also discussed in the XAI post.

Combining the principles of iterative liability and licensing/permitting yields a framework in which AI applications in the unacceptable-risk and high-risk categories will (or at least should) require an application development permit. Alternatively, and perhaps less ideally, a self-insuring mechanism could be worked out for some developers, though with that variable in play the risk of failing to achieve and maintain a trustworthy AI ecosystem might ultimately prove too high.
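To make the permitting rule concrete, here is a minimal, purely hypothetical sketch. The risk category names come from the bill itself; the permit rule, the `PERMIT_REQUIRED` set, and the function names are my own illustrative assumptions, not anything the bill specifies.

```python
from enum import Enum


class RiskCategory(Enum):
    """The four risk categories identified in the EU's AI bill."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical policy mapping: under the licensing framework sketched
# above, applications in these categories would need a development permit.
PERMIT_REQUIRED = {RiskCategory.UNACCEPTABLE, RiskCategory.HIGH}


def requires_permit(category: RiskCategory) -> bool:
    """Return True if an application in this category would need an
    application development permit (illustrative rule only)."""
    return category in PERMIT_REQUIRED


if __name__ == "__main__":
    for category in RiskCategory:
        print(category.value, "->", requires_permit(category))
```

Encoding the rule this explicitly is part of the point: a permit requirement keyed to a published taxonomy is easy to audit, and a self-insuring alternative would simply be another branch in the same decision logic.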

*Postscript*

June 3, 2022: The Digital Regulation Cooperation Forum (DRCF) recently issued a discussion paper titled “Auditing algorithms: the existing landscape, role of regulators and future outlook” (April 28, 2022). There is no doubt that internal auditing (QA) is important for mitigating risk for all stakeholders, and there are well-documented best practices (e.g., peer review) for doing so. External algorithmic audits, on the other hand, are somewhat more problematic, and the paper provides a good discussion of them. One element missing from all of this, however, is the role AI licensing can play (see also the Licensing Powerful and Complex AI post). The bottom line is that licensing should be viewed as a key feature of the algorithmic regulatory framework: it helps mitigate harm by making the assignment of liability and the operation of compensation mechanisms more efficient.

June 10, 2021: Controller-to-controller transfers under Clause 10, Module 1 of the EU’s new Standard Contractual Clauses appear to contemplate the use of AI and XAI. This is hinted at in terms such as “automated processing” and “automated decision,” with XAI (possibly) making an appearance in subsection (d)(i)’s reference to the “logic involved.” As it relates to AI and XAI, however, the language is too vague to be practical or meaningful for the parties and, by extension, for data subjects.