Explainable AI (XAI) Impact on Bias, Law Enforcement, and Transparency

  • Use of AI in applications that implicate privacy is increasingly subject to bias scrutiny, especially as governmental and regulatory momentum drives the proliferation of privacy laws. This scrutiny is beneficial: it should generate pressure to standardize XAI, which in turn can foster innovation in AI generally, not just in privacy-related applications but also in the medical, financial, and autonomous-vehicle sectors.
  • The U.S. Government Accountability Office’s Forensic Technology – Algorithms Used in Federal Law Enforcement technology assessment examines algorithmic evidence analysis as applied to probabilistic genotyping, latent print analysis, handwriting recognition, face recognition, iris recognition, and voice recognition. The report addresses the use of AI as a way to reduce human error and bias, but it does not touch on the most important and difficult problem: algorithmic bias. The GAO will almost certainly have to take up that analysis if the report is to attain any relevance (not to mention justify the tax dollars spent on its production, but I digress). To that end, its analysis should incorporate the Perfect Information principles I discussed in The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information. Incorporating XAI as a must-have feature in law enforcement’s use of AI will help mitigate problematic algorithmic bias and inject much-needed transparency into a process that will otherwise remain shrouded in shadows. Stated differently, the Perfect Information feature set will help generate and maintain public goodwill toward law enforcement’s use of AI. Emphasis on this feature set is necessary at a time when substantial, robust, and long-term efforts must be directed toward rebuilding trust in law enforcement.
  • XAI has the potential to promote trust, understanding, and effective management of an ML application. We have reached a use-maturity point where XAI should be thought of as a must-have feature in ML apps, one applicable across a broad spectrum of applications, including, but not limited to, those that are bias-vulnerable. The contemporary conceptualization holds XAI to be a native feature, i.e., one that resides within the parent ML application. But that is only one possible iteration. A more intriguing possibility is one where the explainability function is delivered via a separate, independent XAI that monitors a given ML application (a “target”), as sketched after this list. And even though, in the short term, limited application compatibility will likely constrain which ML apps can be monitored, that should not serve as a reason not to pursue this independent-monitor model. In certain settings (particularly those that are bias-sensitive), an independent XAI has the potential to offer important legal benefits, enhancing audit and other license-enforcement efforts (and not just from a licensee’s perspective).
  • XAI also has the potential to counter the potential (even partial) irrelevance of transparency requirements in Level D AI applications, where the value of the original algorithm is diminished by its age. In Level D applications, the more iterations the AI has gone through, the less meaningful the original developer’s disclosures become.
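
To make the independent-monitor model concrete, the following is a minimal Python sketch of an external XAI component that audits a target ML application through black-box access only, i.e., by calling the target’s scoring interface and observing its outputs. The target_predict function, the feature layout, and the permutation-based attribution method are illustrative assumptions on my part, not a prescribed design.

    # A minimal sketch of the independent-monitor model: the XAI component lives
    # outside the target ML application and relies only on black-box access to
    # the target's scoring interface. All names here are hypothetical.
    import numpy as np

    def permutation_attribution(predict_fn, X, n_repeats=10, seed=0):
        """Estimate how strongly each input feature drives the target's output,
        using only calls to predict_fn (no access to model internals)."""
        rng = np.random.default_rng(seed)
        baseline = predict_fn(X)
        scores = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            diffs = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's signal
                diffs.append(np.mean(np.abs(predict_fn(X_perm) - baseline)))
            scores[j] = np.mean(diffs)
        return scores

    # Hypothetical target: the monitor never imports the target's code; it only
    # calls an exposed scoring function (or, in practice, a network endpoint).
    def target_predict(X):
        return 0.7 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * X[:, 2]

    X_audit = np.random.default_rng(1).normal(size=(500, 3))
    print(permutation_attribution(target_predict, X_audit))

Because the monitor needs nothing more than the ability to submit inputs and read outputs, it can in principle be developed, licensed, and operated independently of the target, which is what gives this model its audit and license-enforcement appeal.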

***Postscript***

3-14-22: Another audit model could be based on the Control Objectives for Information and Related Technology (COBIT) framework, which IT auditors use to assess Sarbanes-Oxley (SOX) compliance.

12-23-21: Beginning on January 1, 2023, New York City will require employers that use AI in their hiring decisions to comply with a number of steps. Among these is a requirement to conduct an algorithmic bias audit and publicize the results. The audit must be conducted by an “independent auditor,” but the law does not define that term, and it is unclear what audit methodologies must be followed.

There are a number of ways to deal with this, such as modeling the process after the Qualified Security Assessor (QSA) model used to audit compliance with the Payment Card Industry Data Security Standard (PCI DSS), but a more intriguing method is to use XAI. Initially, a bias audit by the application’s own XAI module (presuming it has one; if not, this should be seen as a call to add one) might be sufficient to count as “independent,” so long as there is a legally binding representation and warranty by the developer to that end. This approach also makes sense because it follows the self-attestation model used in PCI DSS compliance. Additionally, or alternatively, an independent XAI application could be used to audit the AI application, as sketched below. Finally, and this is more of a long-term consideration, it will likely be desirable to whitelist certain independent XAI apps and tie that whitelist to emerging standards that identify which XAI apps are deemed compliant with best practices.
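
As one illustration of what an XAI-driven bias audit might actually report, the following minimal Python sketch computes selection rates by group and an impact ratio relative to the most-selected group. The field names, sample data, and the four-fifths (0.8) flagging threshold are assumptions for illustration only; they are not the methodology mandated by the New York City law.

    # A minimal sketch of one metric a bias audit might report: per-group
    # selection rate and impact ratio versus the most-selected group.
    # Field names, data, and the 0.8 threshold are illustrative assumptions.
    from collections import defaultdict

    def impact_ratios(records, group_key="group", selected_key="selected"):
        counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
        for r in records:
            counts[r[group_key]][0] += 1 if r[selected_key] else 0
            counts[r[group_key]][1] += 1
        rates = {g: sel / tot for g, (sel, tot) in counts.items()}
        best = max(rates.values())
        return {g: (rate, rate / best) for g, rate in rates.items()}

    audit_log = [
        {"group": "A", "selected": True}, {"group": "A", "selected": True},
        {"group": "A", "selected": False}, {"group": "B", "selected": True},
        {"group": "B", "selected": False}, {"group": "B", "selected": False},
    ]
    for group, (rate, ratio) in impact_ratios(audit_log).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

Whether such output comes from the application’s native XAI module or from an independent XAI monitor, publishing the metric together with an explanation of how it was computed is what would give the audit the transparency the law appears to be reaching for.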