NIST and the Development of Artificial Intelligence (Common) Law

Last August I wrote here that the persistent AI common law vacuum strengthens the role of AI-centric standards, lending them a “surrogate” quality in ultimately defining what is and isn’t acceptable or permissible in AI design. The longer this void persists, the more likely these standards are to morph into common law, with only a few perceptible changes, mostly reflecting linguistic adaptation to a legal framework. A similar phenomenon can be seen in the Federal Trade Commission’s cybersecurity enforcement, which has turned NIST’s Cybersecurity Framework into de facto common law.

Under the Executive Order on Maintaining American Leadership in Artificial Intelligence, NIST is now charged with producing “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” Given its solid track record on cybersecurity, we can expect a healthy portion of what NIST develops here to make its way into the legal system.

What will this look like? Among the NIST deliverables, I think we can expect to see an AI application taxonomy. It may not look exactly like the one I presented at the 2012 Intellectual Property Scholars’ Conference at SLS, but it will embody similar principles.

***Postscript***

July 22, 2019: The AI standards ecosystem is robust and continues to grow. A quick look at what’s currently brewing at ISO reveals 18 standards. Under JTC 1/SC 42, three have already been published (focused on big data) and 10 are under development (covering big data, concepts and terminology, machine learning, bias, trustworthiness, robustness, governance, and risk management). Several other ISO committees and subcommittees have 15 works in progress, spanning, for example, personal identification, information security, cybersecurity and privacy protection, biometrics, IoT, and robotics. A number of other standards-setting organizations, such as the Institute of Electrical and Electronics Engineers (IEEE), are also busy drafting AI standards, and the (expected) overlap among them is apparent. All of this activity can be expected to help yield a rich body of information from which the AI common law will emerge.