Solving Algorithmic Bias via Artificial Intelligence Computational Law Applications

Algorithmic bias presents a variety of vexing challenges. Key among these is informational transparency, or more precisely, the lack thereof. This “black box” phenomenon creates an operational reality in which some AI becomes an obfuscatory force. Instead of being a force for positive informational use, the black box algorithm becomes yet another impenetrable data layer. It heaps on additional difficulties in building and maintaining trust that the information it presents is indeed as valuable as the end user expects it to be. This lack of transparency is essentially like a drug that does not disclose its side effects. Go ahead and use it, but you don’t know exactly what you’re getting yourself into, and you may not like what happens next.

Since there is no universal standard for tagging algorithmic bias, and since it is unlikely that AI developers will voluntarily disclose their algorithmic bias profiles, end users find themselves dealing with AI applications that have zero accountability. Not only can use of these applications yield untrustworthy data, but the end user might not even be aware of that risk. Compounding all of this are essentially two things: (1) virtually all of these AI applications can be expected to barricade themselves behind as-is warranties and other pro-developer legal mechanisms that end users invariably agree to without ever having a meaningful opportunity to understand what’s at stake; and (2) the algorithmic bias is shrouded in secrecy, bolstered by trade secret provisions, non-circumvention (e.g., no reverse engineering) obligations, and other features that make it difficult to detect.

The AI computational law applications described in the “Artificial Intelligence and Computational Law: Democratizing Cybersecurity” post (the Prometheus Project at Stanford Law School) could offer a solution. The Prometheus application is not limited to aiding in the democratization of the cybersecurity ecosystem; its task can be broadened to the analysis of AI algorithmic bias by, for example, comparing a particular AI application’s operational characteristics to those of other, similar applications by referencing a mission-centric ontology. By tagging and developing the required bias profile for a given application, the Prometheus app could help the end user determine whether continued use of that particular AI application is desirable; if it is, that determination helps build and maintain trust in the information received from the AI application.
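To make the idea concrete, here is a minimal sketch of the kind of peer comparison described above: a bias profile for one application is checked against similar applications, with the relevant bias dimensions drawn from a mission-centric ontology. Every name here (BiasProfile, MISSION_ONTOLOGY, compare_to_peers, the sample apps and scores) is a hypothetical illustration, not part of the actual Prometheus application.

```python
from dataclasses import dataclass, field

# Toy "mission-centric ontology": each mission maps to the bias
# dimensions that matter for that mission.
MISSION_ONTOLOGY = {
    "credit_scoring": ["age", "gender", "zip_code"],
    "resume_screening": ["gender", "education", "name_origin"],
}

@dataclass
class BiasProfile:
    """Per-application bias scores by dimension (0 = none, 1 = severe)."""
    app_name: str
    mission: str
    scores: dict = field(default_factory=dict)

def compare_to_peers(target: BiasProfile, peers: list[BiasProfile]) -> dict:
    """Flag dimensions where the target scores worse than the peer average."""
    flags = {}
    for dim in MISSION_ONTOLOGY.get(target.mission, []):
        peer_scores = [p.scores.get(dim, 0.0) for p in peers if p.mission == target.mission]
        baseline = sum(peer_scores) / len(peer_scores) if peer_scores else 0.0
        if target.scores.get(dim, 0.0) > baseline:
            flags[dim] = {"target": target.scores.get(dim, 0.0), "peer_avg": round(baseline, 3)}
    return flags

# Example: an end user checks a credit-scoring app against two peers.
target = BiasProfile("AppX", "credit_scoring", {"age": 0.4, "gender": 0.1, "zip_code": 0.7})
peers = [
    BiasProfile("AppY", "credit_scoring", {"age": 0.2, "gender": 0.2, "zip_code": 0.3}),
    BiasProfile("AppZ", "credit_scoring", {"age": 0.3, "gender": 0.1, "zip_code": 0.4}),
]
print(compare_to_peers(target, peers))  # flags "age" and "zip_code" as outliers
```

The point of the sketch is the shape of the output: a per-dimension flag list that an end user could weigh before deciding whether continued use of the application is desirable.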

*** Postscript ***

September 26, 2019 – Solving algorithmic bias depends on: (1) effective XAI (see also the discussion here), (2) the availability of an effective XAI auditor (e.g., the Prometheus app), and (3) prompt and efficient execution of corrective action.
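Read together, those three dependencies form a single loop: explain, audit, correct. The sketch below illustrates that loop under stated assumptions; the function names (explain_decision, audit_explanation, apply_fix) are placeholders, not a real XAI or Prometheus API.

```python
def explain_decision(decision: dict) -> dict:
    """(1) Effective XAI: produce a rationale for a single decision."""
    return {"decision": decision["outcome"], "top_factors": decision.get("factors", [])}

def audit_explanation(explanation: dict, protected_factors: set) -> list:
    """(2) The XAI auditor: flag factors that should not drive the outcome."""
    return [f for f in explanation["top_factors"] if f in protected_factors]

def apply_fix(flags: list) -> str:
    """(3) Prompt corrective action: here, simply report what must change."""
    return "retrain without: " + ", ".join(flags) if flags else "no action needed"

decision = {"outcome": "deny", "factors": ["income", "zip_code"]}
flags = audit_explanation(explain_decision(decision), protected_factors={"zip_code"})
print(apply_fix(flags))  # -> "retrain without: zip_code"
```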

June 28, 2019 – The NYC Council’s “Automated Decision Systems Used by Agencies” law provides that whenever an automated decision system (read “AI”) is used, the algorithm’s source code must be publicly disclosed. The challenge with this is access, and ultimately, when it comes to evolutionary algorithms, this approach will be rendered irrelevant.

January 19, 2019 – The Emerging Irrelevance of Algorithmic Transparency in AI. Suppose AI developers agree to be “transparent” and are willing to disclose their algorithms. Ultimately, in some instances, this willingness may dispel the allure we normally accord to transparency, because when we are dealing with machine learning AI applications, the value of the disclosed algorithm is diminished by its age. Stated differently, the more iterations the AI has gone through, the more meaningless what the original developer can disclose becomes. So while we may be able to take a look under the hood, our desire to understand the “why” of what happened may not be satisfied. This, in turn, might bring us to the (uncomfortable) conclusion that we simply don’t understand, or at least don’t fully understand (as much as we’d like to), why the AI produced the result that it did. With that, we will have to learn to be satisfied that the actions of machine-learning AI applications cannot be fully understood. This observation ties directly into the issue of developer liability. I first discussed the concept of iterative liability in my post from July 2011 (“The Case for Robot Personal Liability: Part II – Iterative Liability”), where I wrote that “evolutionary algorithms” make it difficult, and ultimately impractical, to assign liability to the original developer. Taking this one step further and tying it together with my observations in “Artificial Intelligence App Taxonomy and Iterative Liability” produces the next-step conclusion that transparency is rendered irrelevant when we are dealing with Level D AI applications.
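A toy illustration of why a disclosed algorithm ages poorly: the snippet below, a deliberately simplified online learner with a single weight (not a model of any particular AI application), keeps updating itself after the “disclosure,” so the disclosed copy explains less and less of the live system’s behavior with each iteration.

```python
import random

random.seed(0)
weight = 1.0
disclosed_weight = weight  # what the developer handed over on day one

for iteration in range(1, 1001):
    x = random.uniform(-1, 1)      # incoming data point
    error = (3.0 - weight) * x     # the live system keeps learning toward a "true" weight of 3.0
    weight += 0.01 * error * x     # online update made after the disclosure
    if iteration % 250 == 0:
        drift = abs(weight - disclosed_weight)
        print(f"iteration {iteration}: live weight {weight:.2f}, "
              f"disclosed {disclosed_weight:.2f}, drift {drift:.2f}")
```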