The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information

Before presenting my thoughts on XAI as a behavior-regulating feature, it is important to recap an excerpt from what I wrote last month in my “five observations” post:

Regulating AI behavior is necessary in order to mitigate harm. One approach for achieving this is imposing a legal requirement that, prior to deployment, the AI must be certified as having passed training on what constitutes acceptable behavior. (Another way to think of this certification is that once the AI passes this training, it is in effect licensed to operate.) The AI’s acceptable-behavior framework, the learning set, is constructed from a variety of universally accepted criteria, including, for example, applicable international standards, which helps yield uniform application and operational performance. The AI’s acceptable-behavior model is then algorithmically isolated in the application (be it cyber or cybernetic) and hard-coded, meaning it is made operationally independent from the AI’s capabilities, rendering it immune to iterative code changes. This acceptable-behavior approach dynamically disciplines the AI’s behavior: it enables real-time deterrence and makes it possible to regulate AI behavior.
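
To make the isolation point in that excerpt concrete, here is a minimal Python sketch of one way an acceptable-behavior policy might sit outside the AI’s capabilities. The class names, the rule format, and the sample speed rule are hypothetical illustrations, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)  # frozen: the policy object cannot be mutated after certification
class AcceptableBehaviorPolicy:
    """Behavior rules derived from the externally certified learning set."""
    rules: tuple  # each rule: Callable[[dict], bool]; True means the proposed action is allowed

    def permits(self, proposed_action: dict) -> bool:
        return all(rule(proposed_action) for rule in self.rules)


class CertifiedAIApplication:
    """Wraps the AI's capabilities; the policy is injected, never retrained in place."""

    def __init__(self, model: Callable[[dict], dict], policy: AcceptableBehaviorPolicy):
        self._model = model    # the AI's capabilities, free to evolve across releases
        self._policy = policy  # isolated from the model's code and weights

    def act(self, observation: dict) -> dict:
        proposed = self._model(observation)
        if not self._policy.permits(proposed):
            # Real-time deterrence: a non-compliant action is blocked, not executed.
            return {"action": "refuse", "reason": "policy violation"}
        return proposed


# Hypothetical usage: a rule capping speed, applied to whatever the model proposes.
policy = AcceptableBehaviorPolicy(rules=(lambda a: a.get("speed", 0) <= 25,))
app = CertifiedAIApplication(
    model=lambda obs: {"action": "drive", "speed": obs["target_speed"]},
    policy=policy,
)
print(app.act({"target_speed": 40}))  # -> {'action': 'refuse', 'reason': 'policy violation'}
```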

XAI plays an important part in the makeup of the acceptable-behavior framework, so much so that its absence may reasonably be viewed as not merely curious, but arguably negligent and, from a licensee’s perspective, contractually unacceptable. Of course, the XAI interface must be designed to efficiently overcome vulnerabilities in presenting information, which means that the XAI delivers to the human user what can be regarded as “perfect” information. “Perfect” information is information that is: (1) relevant, (2) easily understood, and (3) not prone to misrepresentation. As to the third element, this also encompasses data integrity, meaning that the information delivered is unalterable, which suggests that in some AI applications, especially those in heavily regulated sectors, the use of a blockchain infrastructure becomes necessary.
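
On the data-integrity point, the sketch below illustrates one lightweight way to make delivered explanations tamper-evident by chaining hashes of each record. It is a minimal stand-in for, not a substitute for, the blockchain infrastructure mentioned above; the class and method names are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone


class ExplanationLedger:
    """Append-only log of delivered explanations; alteration breaks the hash chain."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, explanation: dict) -> str:
        """Store one JSON-serializable explanation and return its tamper-evident hash."""
        payload = {
            "explanation": explanation,               # the relevant, easily understood content
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,             # chaining makes later alteration detectable
        }
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self._entries.append((digest, payload))
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; False means a stored explanation was altered."""
        prev = "0" * 64
        for digest, payload in self._entries:
            if payload["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```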


***Postscript***

December 8, 2020

XAI has the potential to promote trust, understanding and effective management of an ML application. We have reached a use-maturity point where XAI should be thought of as a must-have feature in ML apps, a feature that is applicable in a broad spectrum of applications, including, but not limited to, those that are bias-vulnerable. Contemporary conceptualization holds XAI as a native feature; i.e., one that resides within the parent ML application. But that is only one possible iteration. A more intriguing possibility is one where the explainability function is delivered via a separate, independent XAI that monitors a given ML application (a “target”). And even though, in the short term, limited application compatibility will likely restrict which ML apps can be monitored, that should not serve as a reason not to pursue this independent-monitor model. In certain settings (particularly those that are bias-sensitive), an independent XAI has the potential to offer important legal benefits, enhancing audit and other license-enforcement efforts (from a licensee’s perspective).
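
As a rough illustration of the independent-monitor model, the sketch below assumes the target application exposes only a narrow prediction interface, and it uses a simple perturbation-based sensitivity measure as the post-hoc explanation. The interface and class names (Predictor, IndependentExplainer, explain) are hypothetical.

```python
from typing import Callable, Dict, List

# The target exposes only predict(features) -> score; its internals stay opaque.
Predictor = Callable[[List[float]], float]


class IndependentExplainer:
    """Lives outside the target ML application and sees only its prediction interface."""

    def __init__(self, target_predict: Predictor):
        self._predict = target_predict  # no access to the target's weights or code

    def explain(self, features: List[float], delta: float = 1e-3) -> Dict[int, float]:
        """Return a per-feature sensitivity estimate for one prediction."""
        baseline = self._predict(features)
        sensitivities = {}
        for i in range(len(features)):
            perturbed = list(features)
            perturbed[i] += delta
            sensitivities[i] = (self._predict(perturbed) - baseline) / delta
        return sensitivities


# Usage against any compatible target, e.g. a scoring function exposed by a vendor app:
# explainer = IndependentExplainer(vendor_app.predict)
# print(explainer.explain([0.2, 1.5, 3.0]))
```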

October 7, 2020

Portland, Boston and San Francisco are among the cities that have banned or heavily curtailed the use of facial recognition. The controversy centers on its efficacy: critics argue facial recognition is flawed; proponents argue it helps solve crime. A careful review of the contours of this controversy highlights a valuable point: standardization and proper deployment of Perfect information in XAI could reduce the efficacy uncertainty to the point that facial recognition becomes a trusted crime-solving tool.

September 4, 2020

Last month, NIST published a draft white paper, “Four Principles of Explainable Artificial Intelligence” (Draft NISTIR 8312). These principles are: (1) Explanation, (2) Meaningful, (3) Explanation Accuracy, and (4) Knowledge Limits.

The NIST principles map to the Perfect information elements as follows:

Explanation <—> Relevant

Meaningful <—> Easily Understood

Explanation Accuracy <—> Relevant

Knowledge Limits <—> Relevant

All four of the NIST principles are missing the third Perfect information element: not prone to misrepresentation. Any system output that is vulnerable to misrepresentation is suspect, regardless of, for example, its initial relevancy.

July 12, 2020

XAI does not absolve humans of oversight. Yes, XAI will provide an understanding of what outcome occurred and why, but keep in mind that the outcome has already occurred and the attendant harm may be difficult or impossible to reverse. Therefore, human oversight in AI applications needs to be a fundamental functional (i.e., application-level) and policy-and-procedure requirement for legally reasonable AI deployment. At the application level, it involves, at a minimum, a UI that enables meaningful operator intervention; what that means depends on the type of application being used (at the most basic level, however, it means that effective intervention can be performed in a timely manner).
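
As a rough illustration of application-level intervention, the sketch below shows a hypothetical gate that holds a proposed AI action until the operator decides, and falls back to a safe default if no decision arrives in time. The names, the timeout, and the reject-on-timeout policy are illustrative assumptions, not a prescribed design.

```python
import queue


class OperatorGate:
    """Holds a proposed action for human approval; timely intervention is enforced by a timeout."""

    def __init__(self, timeout_seconds: float = 5.0):
        self._decisions = queue.Queue()
        self._timeout = timeout_seconds

    def submit_decision(self, approved: bool) -> None:
        """Called from the operator UI when the human approves or rejects the action."""
        self._decisions.put(approved)

    def execute(self, proposed_action: dict, do_action, safe_default) -> None:
        """Run the action only if approved in time; otherwise take the safe default."""
        try:
            approved = self._decisions.get(timeout=self._timeout)
        except queue.Empty:
            approved = False  # no timely decision: treat as a rejection, do not proceed
        if approved:
            do_action(proposed_action)
        else:
            safe_default(proposed_action)
```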

At the policy and procedural level, oversight entails licensing. Licensing is useful for the AI application (as discussed above) and equally important for the AI operator. Requiring that an AI operator be licensed (with periodic recertification) to operate the AI application promotes effective, executable risk mitigation. The AI operator license is granted (and reissued) once the operator meets the requirements set by standard-setting organizations (ISO, IEC, IEEE, etc.), which should also be synchronized, in relevant part, with the AI certification discussed above.

Human oversight is not required in all AI applications. But identifying which class of AI applications would benefit from implementing the UI and licensing requirements will help ensure harm is more effectively avoided.

June 1, 2020

Last month, the U.S. Government Accountability Office published its Forensic Technology: Algorithms Used in Federal Law Enforcement technology assessment. The report examines algorithmic evidence analysis as applied to probabilistic genotyping, latent print analysis, handwriting recognition, face recognition, iris recognition and voice recognition. The report addresses the use of AI as a way to reduce human error and bias, but it does not touch on the most important and difficult problem: algorithmic bias. The GAO will most certainly have to delve into that analysis if it is to attain any relevancy (not to mention justify the tax dollars spent on the report’s production, but I digress). The next phase of the GAO’s analysis should incorporate the Perfect information principles discussed above. Incorporating XAI as a must-have feature in law enforcement’s use of AI will help alleviate problematic algorithmic bias and also help inject much-needed transparency into a process that will otherwise remain shrouded in shadows. Stated differently, the Perfect information feature set will help generate and maintain public goodwill toward law enforcement’s use of AI. Emphasis on this feature set is necessary in these times, when substantial, robust, and long-term efforts need to be directed to rebuilding trust in law enforcement.

October 18, 2019

If the California Privacy Rights and Enforcement Act (CPREA) comes into effect, it will essentially make the use of XAI a requirement for businesses that profile consumers using AI. XAI will need to be capable of delivering Perfect information in order for businesses to comply.