AI Liability: When “Intelligent Deviation” is Undesirable

In June 2019 I highlighted the need and desirability of synchronizing AI law with standard-setting organizations and thought leaders (e.g., ISO, IEC, IEEE, NIST, IARPA, DARPA). Then, in an update posted in October 2019, I introduced a concept called “intelligent deviation,” which refers to the capability of certain AI applications to deviate from strict adherence to their objective when doing so is optimal.

Enabling intelligent deviation capabilities in AI needs to be considered carefully from a variety of perspectives. From a liability perspective, for example, the AI’s deviation threshold (i.e., the allowance within which the AI application can vary its actions) can make risk mitigation difficult for the developer and the end user. Is the AI’s action range predictable? If not, is that the product of an intentional design choice?
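
To make the deviation threshold concrete, here is a minimal, purely illustrative sketch in Python. The class name `DeviationBoundedPolicy`, the one-dimensional action, and the 0.5-meter figure are hypothetical assumptions rather than a description of any real system; the point is simply that the allowance can be expressed as an explicit, auditable parameter rather than left implicit.

```python
from dataclasses import dataclass


@dataclass
class DeviationBoundedPolicy:
    """Hypothetical wrapper: any 'intelligent deviation' must stay inside a fixed,
    auditable allowance around the nominal (objective-following) action."""
    deviation_threshold: float  # maximum permitted departure from the nominal action

    def act(self, nominal_action: float, proposed_action: float) -> float:
        """Clamp the proposed action into [nominal - threshold, nominal + threshold]."""
        lower = nominal_action - self.deviation_threshold
        upper = nominal_action + self.deviation_threshold
        return max(lower, min(upper, proposed_action))


# Illustrative example: a lane-keeping controller allowed to deviate by at most 0.5 m
# from the planned path (figures are made up for the sketch).
policy = DeviationBoundedPolicy(deviation_threshold=0.5)
print(policy.act(nominal_action=0.0, proposed_action=1.2))  # -> 0.5, deviation capped
```

An explicit parameter of this kind is also what makes the liability questions above answerable: the developer can state in advance what the AI’s action range is, and whether an observed deviation fell inside or outside it.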

In some applications, such as autonomous vehicles, the deviation threshold should be predictable. If it is not, effectively managing the attendant harms becomes more difficult, which in turn makes the AI less attractive to deploy. If, for instance, operating the AI becomes inefficient or altogether impractical because it is uninsurable, then enabling intelligent deviation is undesirable.

**Postscript**

January 15, 2021: “Interactive teaching” is what Amazon calls Alexa’s (currently unique) capability of asking questions about commands it has never heard before. This allows Alexa to grow dynamically and relieves Amazon’s engineers of having to update it manually. Interactive teaching also intersects with intelligent deviation. How does it affect the AI’s operational latitude, and to what extent will that capability strain the programmer’s liability parameters? Stated somewhat differently, the operational thresholds of interactive teaching should be carefully managed so that they stay in sync with the intelligent deviation allowance rather than exceed it.
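
One way to picture that management problem is the toy sketch below. It is not Amazon’s implementation, and none of the names (`TeachableAssistant`, `approved_actions`, `teach`) come from any real API; it is only an assumption-laden illustration of gating newly taught commands by a pre-approved operational latitude, so that interactive teaching cannot silently widen the deviation allowance.

```python
class TeachableAssistant:
    """Toy model of 'interactive teaching' gated by an intelligent-deviation allowance:
    unknown commands trigger a clarifying question, and a taught behavior is accepted
    only if it maps onto an action the operator has already approved."""

    def __init__(self, approved_actions: set[str]):
        self.approved_actions = approved_actions      # the operational latitude (allow-list)
        self.learned_commands: dict[str, str] = {}    # command -> approved action

    def handle(self, command: str) -> str:
        if command in self.learned_commands:
            return f"executing: {self.learned_commands[command]}"
        # Unknown command: ask instead of guessing (the 'interactive teaching' step).
        return f"question: what should '{command}' do?"

    def teach(self, command: str, action: str) -> str:
        # Learning is capped by the deviation allowance: actions outside it are rejected.
        if action not in self.approved_actions:
            return f"rejected: '{action}' exceeds the approved operational latitude"
        self.learned_commands[command] = action
        return f"learned: '{command}' -> '{action}'"


# Illustrative usage with made-up commands and actions.
assistant = TeachableAssistant(approved_actions={"turn on lights", "play music"})
print(assistant.handle("movie night"))                   # asks a clarifying question
print(assistant.teach("movie night", "turn on lights"))  # within latitude: accepted
print(assistant.teach("movie night", "unlock door"))     # outside latitude: rejected
```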

April 2, 2020: AI can predict what people will click on in cyberspace. Similar supervised learning models could be constructed to predict where people will go in the physical world. In our current pandemic-seized world, there is a lot of noise about tracking people, and it seems somewhat inevitable that this will become part of our new reality. The Kansas Department of Health and Environment, for example, is reportedly tracking residents through their cell phones. Using AI to predict where people will physically go is cell phone tracking on steroids. And while it is technically possible, perhaps the most important question is whether it is a good idea in the first place. It may sound good (to some) now, but what about the day after? If the decision is to go ahead with it, what type of controls will be built in to minimize harm (which extends beyond privacy)? For instance, will controls similar to the geo-fencing found in drones be used to limit the tracking? If so, what will they look like? For one, they could take the form of intelligent deviation, which is discussed above.
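
For illustration only, the sketch below shows what a drone-style geo-fence could look like if applied to location prediction: output is simply suppressed outside a designated zone. The `GeoFence` class, the `release_prediction` function, and the coordinates are hypothetical assumptions, not a reference to any actual tracking system or public-health program.

```python
from dataclasses import dataclass


@dataclass
class GeoFence:
    """Axis-aligned bounding box used as a hard limit on where tracking/prediction
    output may be produced (analogous to the geo-fencing built into consumer drones)."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)


def release_prediction(fence: GeoFence, lat: float, lon: float):
    """Suppress any predicted location outside the fence rather than reporting it."""
    return (lat, lon) if fence.contains(lat, lon) else None


# Illustrative example with made-up coordinates: only predictions inside a designated
# zone are released; everything else is dropped.
zone = GeoFence(min_lat=38.9, max_lat=39.2, min_lon=-95.8, max_lon=-95.5)
print(release_prediction(zone, 39.05, -95.68))  # inside the fence -> released
print(release_prediction(zone, 40.00, -95.68))  # outside the fence -> suppressed (None)
```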