The European Union’s AI Bill: How it Syncs with XAI and Licensing

The EU’s recent AI bill seeks to “turn Europe into the global hub of trustworthy Artificial Intelligence.” The bill identifies four risk categories for AI applications: (1) Unacceptable risk; (2) High risk; (3) Limited risk; and (4) Minimal risk.

This bill ties in with the AI application taxonomy and iterative liability, which I discussed here. It also supports the case for independent XAI monitoring apps discussed in my post, *The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information*. Furthermore, the AI bill’s application categories sync with the AI licensing concept (think: permit), which is also discussed in the XAI post.

Combining the principles of iterative liability and licensing/permitting yields a framework in which AI applications in the unacceptable-risk and high-risk categories will (or at least should) require an application development permit. Alternatively, though perhaps less ideally, a self-insurance mechanism could be worked out for some developers; the risk that such a variable poses to achieving and maintaining a trustworthy AI ecosystem, however, might ultimately prove too high. A minimal sketch of this mapping appears below.
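To make the combined framework concrete, here is a minimal, purely illustrative sketch in Python. The four risk tiers come from the bill itself, but the permit/self-insurance rules and every name in the code (`RiskCategory`, `PERMIT_REQUIRED`, `may_develop`) are my own assumptions, not anything the bill prescribes:

```python
from enum import Enum, auto

class RiskCategory(Enum):
    """The four risk tiers identified in the EU's AI bill."""
    UNACCEPTABLE = auto()
    HIGH = auto()
    LIMITED = auto()
    MINIMAL = auto()

# Hypothetical policy table: which tiers would require a development
# permit under the combined liability + licensing framework. The mapping
# is an assumption for illustration, not the text of the bill.
PERMIT_REQUIRED = {RiskCategory.UNACCEPTABLE, RiskCategory.HIGH}

def may_develop(category: RiskCategory, has_permit: bool,
                self_insured: bool = False) -> bool:
    """Illustrative gate: a permit (or, less ideally, self-insurance)
    is needed for the unacceptable- and high-risk tiers."""
    if category not in PERMIT_REQUIRED:
        return True  # limited/minimal risk: no permit needed
    return has_permit or self_insured  # self-insurance as a fallback

# Example: a high-risk application, no permit, but self-insured
print(may_develop(RiskCategory.HIGH, has_permit=False, self_insured=True))
```

The `self_insured` fallback is deliberately modeled as a weaker substitute for a permit, mirroring the concern above that allowing it may undermine a trustworthy AI ecosystem.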

*Postscript*

June 10, 2021: Controller-to-controller transfers under Clause 10, Module 1 of the EU’s new Standard Contractual Clauses appear to contain a reference to the use of AI and XAI. This is hinted at by terms such as “automated processing” and “automated decision,” with XAI (possibly) making an appearance in subsection (d)(i) through the phrase “logic involved.” As it relates to AI and XAI, the language is too vague to be practical or meaningful for the parties and, by extension, data subjects.