A Brief XAI Recap from SLS’ 17th Annual Digital Economy Conference

The following is a brief recap of my comments on XAI during our AI panel – Day 2 of Stanford Law School’s 17th Annual Digital Economy Conference.

  • We live in a Narrow AI world. But public perception of AI is heavily influenced by science fiction, which virtually always depicts Artificial General Intelligence or Artificial Superintelligence. (For what it's worth, the average estimate out there is that we reach the former around 2099, and the latter at some point after that.) Public perception is also highly pervasive: it frequently dictates, for example, how lawyers talk about AI. (I didn't mention the following during the presentation, but I will here: the incessant chatter that AI needs rights, such as being a named inventor on a patent application, is anchored in a science-fiction mindset. Let's talk in 2099; it may be relevant then.) So even though pop-culture depictions of AI can be fascinating, they derail relevant and substantive efforts to deal with the topic. (All of this was offered to make clear that XAI belongs in the Narrow AI world. It is relevant now.)
  • XAI is (currently) mostly relevant to machine learning, whether supervised or unsupervised, including deep learning approaches such as Convolutional Neural Networks.
  • XAI plays a critical role in promoting ethics and mitigating bias. But it is not just about ethics: XAI also promotes trust, understanding, and effective management of the AI application. As such, XAI is a desirable feature that should be demanded in applications that call for those capabilities.
  • XAI could be an independent application or a feature within the AI application itself. As an independent application, it could serve an important audit function, which is especially desirable for applications that are more vulnerable to bias and other harmful outcomes. (A rough sketch of that audit idea appears below.)
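
To make the "independent audit" point concrete, here is a minimal sketch (not something presented at the panel) of a model-agnostic explanation pass using scikit-learn's permutation importance. The dataset and model are hypothetical stand-ins; the point is that the auditor needs only the model's predictions, not access to its internals.

```python
# Illustrative sketch: auditing a black-box classifier with a
# model-agnostic explanation technique (permutation importance).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for a deployed model we want to audit.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The "audit": measure how much each input feature drives predictions
# by shuffling it and observing the drop in accuracy. A feature that
# should be irrelevant (say, a proxy for a protected attribute) showing
# high importance is a red flag worth investigating.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, mean in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean:.3f}")
```

Because this kind of check treats the model as a black box, it can run as a standalone application, which is exactly what makes it useful as an independent audit layer.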