A Recap of Key Observations from AI Presentations on August 23, 2018
I gave two AI presentations yesterday. The first was a review of AI-powered computational law applications at Mitchell Hamline School of Law’s “IP is Everywhere” presentation series. For the second, I partnered with Daniel Garrie (Law and Forensics) and Dennis Garcia (Microsoft) in a Thomson Reuters webinar on AI in the legal profession. Below is a sampling of the key observations I offered:
- The persistent AI common law vacuum strengthens the role of applicable standards, giving them a “surrogate” quality in defining what ultimately is and is not acceptable or permissible in AI design. This void elevates their importance, and the longer it persists, the more likely these standards will eventually morph into common law, with only a few perceptible changes, mostly linguistic adaptations to a legal framework. A similar phenomenon can be seen in the Federal Trade Commission’s cybersecurity enforcement, which has turned NIST’s Cybersecurity Framework into de facto common law.
- Iterative liability (a concept I introduced in 2011 in “The Case for Robot Personal Liability: Part II – Iterative Liability”) ties the programmer’s liability to the evolution of a Level D application’s algorithm, not to the AI application’s chain of custody and use.
- While much of the focus in AI discourse centers on building intelligent entities, not all such entities need, by default, a human-like thinking scale to qualify as AI. There is ample room for entities with “sub-human” computational capabilities.
- AI holds promise for delivering robust data-security and privacy capabilities. Even though recent privacy-centered legal frameworks, specifically the GDPR and CCPA, do not call for the use of AI, it is not inconceivable that they will be amended to follow a HIPAA-like principle, making AI adoption an “addressable” implementation (as HIPAA does with encryption) and, in its absence, requiring covered organizations to implement compensating controls.
- The key to successfully introducing an AI application in a firm is clearly defining what problem(s) it will solve.