Mitigating Liability with XAI: The Case for Standardization

The legal value of XAI can be significant, especially (though by no means exclusively) in mitigating developer and end-user liability.¹

This discussion naturally tees up the need for XAI standardization. Though it is somewhat early to claim that the perfect information model introduced in The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect Information” is a mature standard, the model does possess the necessary qualities to be reasonably viewed as a proto-standard. On that basis, a properly developed XAI is one that possesses, at a minimum, all the attributes of perfect information. Once that parameter is fixed, the XAI can be deemed properly developed and ready to provide a variety of risk-mitigation benefits.
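To make the “fixed parameter” idea concrete, here is a minimal sketch of the kind of conformance gate such a proto-standard implies. The four attribute names below are hypothetical placeholders, not the actual attributes of the perfect information model (which are set out in the referenced post); the point is only that conformance becomes a mechanical, checkable condition.

```python
from dataclasses import dataclass

# Hypothetical, illustrative attribute set; the real "perfect information"
# attributes are those defined in the referenced XAI post.
REQUIRED_ATTRIBUTES = {"accurate", "complete", "timely", "intelligible"}

@dataclass
class Explanation:
    """A single XAI output accompanying an AI decision."""
    decision_id: str
    accurate: bool
    complete: bool
    timely: bool
    intelligible: bool

def is_properly_developed(explanation: Explanation) -> bool:
    """Under this sketch, an XAI output is 'properly developed' only if
    every required perfect-information attribute is satisfied."""
    return all(getattr(explanation, attr) for attr in REQUIRED_ATTRIBUTES)
```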

One example of how this can work is in dispositive-centric efforts, including the crafting of safe harbors. Consider, for example, the Department of Housing and Urban Development’s (HUD) rulemaking effort that would reduce, perhaps even eliminate, liability in certain credit discrimination lawsuits. It’s one thing to use AI for credit decisions (see also the AI bias post here), but being in a position to leverage an XAI that acts as a liability shield/diffuser is not merely a desirable feature; it is a game changer for end users and developers. But it has to be done correctly, and for that to happen there is a need for standardization.

Standardizing XAI is an important effort because it promotes outcome predictability and reliability. It also helps eliminate reliance on external, third-party audits of algorithmic performance and compliance, such as those being suggested by HUD. This is not to say that auditing as a practice is entirely unnecessary; it isn’t. But removing this external dependency disrupts the potential emergence of a (redundant) cottage industry that thrives on providing XAI certification, one that would only make an important liability-mitigating function more cumbersome. Stated differently, use of a properly developed, standardized XAI sufficiently automates and internalizes (within the AI app) the algorithmic auditing function, rendering human intervention and external oversight unnecessary and undesirable.
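As a rough illustration of what “internalizing the audit function” could mean architecturally, here is a minimal sketch. It assumes a hypothetical model interface that returns a (decision, explanation) pair and a conformance check like the one sketched above; none of this is an actual implementation or a prescribed design, only one way the audit step could live inside the app rather than with an outside certifier.

```python
import json
import time

class SelfAuditingModel:
    """Illustrative wrapper: every prediction passes through a built-in,
    hard-coded audit step before it is released, so the audit trail is
    produced inside the app rather than by a third-party auditor."""

    def __init__(self, model, conformance_check):
        self.model = model                          # underlying AI model
        self.conformance_check = conformance_check  # e.g. is_properly_developed

    def predict(self, features):
        # Assumed interface: the model returns the decision together
        # with its explanation object.
        decision, explanation = self.model(features)
        if not self.conformance_check(explanation):
            # A non-conforming explanation blocks the decision entirely;
            # any liability shield attaches only to standardized output.
            raise ValueError("explanation fails the XAI standard")
        # Internal, append-only audit record in place of external review
        # (assumes the decision value is JSON-serializable).
        audit_record = {"ts": time.time(), "decision": decision,
                        "explanation": vars(explanation)}
        with open("xai_audit.log", "a") as log:
            log.write(json.dumps(audit_record) + "\n")
        return decision
```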

____

¹ This also ties in with the concept of iterative liability. All entity iterations of AI would be subject to an XAI “audit” function that is hard-coded: a fixed-feature, iteration-independent capability. For an additional discussion of XAI, see here.

**Postscript**

December 3, 2019: Use of AI in applications that implicate privacy concerns stands to become increasingly vulnerable to bias scrutiny, especially in light of the growing momentum of governmental and regulatory forces driving privacy laws. This scrutiny can be expected to generate pressure to standardize XAI, which in turn can help foster innovation in AI generally (not just in privacy-related applications).