Artificial Intelligence and Computational Law: Democratizing Cybersecurity

A few years ago, I was invited to Minnesota Public Radio to speak about various legal issues related to cybersecurity. To my left was Bruce Schneier, a famous and respected cybersecurity researcher and prolific author. There wasn’t much disagreement between us during the interview, though I recall placing a bit more emphasis on the FTC’s cybersecurity efforts, noting that I thought the agency was doing a pretty good job in the current regulatory vacuum, building a de facto common law as it went along.

Fast forward to today. In his latest book, “Click Here to Kill Everybody,” Schneier argues, among other things, that there is a systemic lack of security in all things computer (something he calls “Internet+,” essentially an extension of IoT) and that government intervention is what is needed to fix it. (I’ll use his “Internet+” term here to make a clearer connection between my views and his.)

Schneier’s call for intervention comes in the form of a new government agency, one that has the ability to “coordinate and advise with other agencies” on the Internet+. While I don’t disagree with the role the government can play (I still think the FTC does a good job), I think that relying on it to fix this vexing, systemic insecurity is somewhat myopic, and government-led efforts suffer from anemic public credibility and confidence. Something else is needed here. After all, an epistemic configuration that relies on legacy principles to try to fix new problems is likely to yield solutions that fall short. An effective solution to a new problem has to incorporate different thinking, an epistemic reconfiguration, if you will. That thinking manifests in novel tools, combined with effective deterrence in the form of penalties for non-compliance. Going back to the different-thinking variable, I think a good part of an effective solution lies in shifting the focus from regulatory efforts to end-user empowerment, essentially enabling end-users to become smarter consumers in the Internet+ ecosystem.

This is where artificial intelligence and computational law applications come into the picture. (Again, they are not the entire solution, but a good part of it.) Together they can provide the platform for this novel tool set, one that essentially works to democratize cybersecurity, enabling end-users to make meaningful, timely decisions about their Internet+ purchase choices. In my law review article on this topic, “Rise of the Intelligent Information Brokers: Role of Computational Law Applications in Administering the Dynamic Cybersecurity Threat Surface in IoT,” 19 Minn. J.L. Sci. & Tech. 337 (2018), I describe the role of these AI-driven applications and how they can work to deliver this wave of cybersecurity democratization. The bottom line is that once Internet+ end-users can actually make better choices about what they use (that is, once they have the technical means to do so), it is reasonable to expect that over time the Internet+ ecosystem will come to be dominated by products and services that have best-in-class security features built in and do not suffer from the systemic insecurity we see today.

***Postscript***

May 19, 2023: Another method for enabling meaningful presentation of the Notice and Explanation principle is through a simple certification label. The label is deemed “simple” because it is intended to quickly convey important information without requiring the end user to read through a complex narrative. This then brings us to the question of who issues the label. This cannot be a free-for-all, and public trust in the issuer is essential. To succeed, therefore, the label issuer needs to function like an accrediting entity, as is the case with the Better Business Bureau (BBB) or the International Organization for Standardization (ISO). If the accreditor is already well known (again, like the BBB or ISO), then the task of conveying a meaningful Notice and Explanation is relatively simple, as opposed to when the label comes from an unknown entity. Consider, for example, a label that classifies a chatbot application as “Friendly AI,” coupled with the accrediting entity’s logo. The developer can display this label on the application’s login screen or in another conspicuous location. The AI classification can change depending on the nature of the AI application. For example, an application that is more security oriented might benefit more from a “Secure AI” classification. Other designations may be selected from the AI Life Cycle Core Principles. (Note: The narrative provided for each of the core principles can help in the classification process.)
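
As a rough illustration only, here is what a machine-readable version of such a certification label might contain. The `AICertificationLabel` structure, its field names, and the classification set are my own assumptions for the sketch, not an existing standard or accreditation scheme:

```python
from dataclasses import dataclass
from datetime import date

# Assumed classification set; in practice, designations would be drawn
# from the AI Life Cycle Core Principles.
CLASSIFICATIONS = {"Friendly AI", "Secure AI"}


@dataclass
class AICertificationLabel:
    """Minimal, machine-readable certification label an accrediting
    entity might issue for an AI application (illustrative only)."""
    application_name: str
    classification: str     # e.g., "Friendly AI" or "Secure AI"
    accreditor: str         # e.g., a BBB- or ISO-style accrediting entity
    issued_on: date
    verification_url: str   # where an end user can confirm the label is genuine

    def badge_text(self) -> str:
        """Plain-language text a developer could display on the login screen."""
        return (f"{self.classification} - certified by {self.accreditor} "
                f"on {self.issued_on.isoformat()}")


# Example: a chatbot application displaying a "Friendly AI" badge.
label = AICertificationLabel(
    application_name="ExampleChat",
    classification="Friendly AI",
    accreditor="Example Accreditation Board",
    issued_on=date(2023, 5, 19),
    verification_url="https://accreditor.example/labels/examplechat",
)
print(label.badge_text())
```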

May 11, 2023: One of the principles in the White House Blueprint for an AI Bill of Rights is Notice and Explanation. It begins with: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” The principle calls for the disclosure to come in the form of “clear, timely, and accessible” documentation that is provided in “plain language,” with “clear descriptions” of how the system functions, what it is intended for, and who is responsible for it, and that is periodically updated so it remains current. This is all good, but what the Blueprint does not address is the method of presentation. That is an important variable: the more effective the format, the better the result, meaning the information provided to the end user becomes actionable; i.e., the end user is empowered to make informed decisions based on it. One way to accomplish this is through the AI Fact Label (see below), which offers an effective format for presenting the information.

January 21, 2021: On December 18, 2020, the UK House of Lords published a report warning against governmental “complacency” towards AI. Chapter 2 of the report, “Living with Artificial Intelligence,” leads with the statement that “[i]t is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally.” Yes, of course, but the problem here is that being “aware of” (and its synonymous variants, such as “well-versed in”) how AI is used is much too vague. So is the statement that the government should “explain to the general public the use of their personal data by AI.” Explain how? What is missing in all of this is ensuring that the “aware of” (and its variants) matures into a practical, actionable end-user capability. This means making that awareness meaningful, which is the subject of the discussions throughout this post and others. Whether through AI-powered computational law apps, the AI Fact Label, or other similar methods, the key principle is to deliver relevant information in an easy-to-understand manner, coupled with easy-to-use tools that execute the end user’s choices.

December 10, 2019: Algorithmic transparency and accountability comprise the lion’s share of current AI lawmaking efforts in the U.S. Getting AI developers to comply with these requirements is one thing. Getting users to be aware, in an effective, actionable way, that a particular activity or transaction they are involved in exposes them to AI is much more challenging. One way to help deliver this is through an “AI Fact Label,” similar in principle to the FDA’s Nutrition Facts label. The AI Fact Label would disclose to the user the AI features and capabilities being used in an application, and could even contain easy-to-understand guidance on the risks, whether an opt-out (from the AI) is available, and the consequences of opting out for the activity or transaction.
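
To make the nutrition-label analogy concrete, here is a minimal sketch of what an AI Fact Label could contain and how it might be rendered as plain text. The structure, field names, and example content are my own assumptions offered for illustration, not a proposed standard:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AIFactLabel:
    """Illustrative AI Fact Label, loosely modeled on the FDA's
    Nutrition Facts label (fields are assumptions, not a standard)."""
    application: str
    ai_features: List[str]      # AI capabilities in use, e.g., "automated scoring"
    risk_notes: List[str]       # plain-language guidance on risks
    opt_out_available: bool     # can the user opt out of the AI?
    opt_out_consequence: str    # effect of opting out on the activity/transaction

    def render(self) -> str:
        """Render the label as plain text, the way a nutrition label reads."""
        lines = [f"AI FACTS - {self.application}", "-" * 30]
        lines += [f"Uses: {feature}" for feature in self.ai_features]
        lines += [f"Risk: {note}" for note in self.risk_notes]
        lines.append(f"Opt-out available: {'Yes' if self.opt_out_available else 'No'}")
        lines.append(f"If you opt out: {self.opt_out_consequence}")
        return "\n".join(lines)


# Example: a loan-application portal that scores applicants with a model.
fact_label = AIFactLabel(
    application="Example Loan Portal",
    ai_features=["automated credit scoring", "document classification"],
    risk_notes=["Scores may reflect biases present in historical lending data."],
    opt_out_available=True,
    opt_out_consequence="Your application is routed to a human reviewer (slower).",
)
print(fact_label.render())
```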

August 10, 2019: Fashwell develops deep-learning image-recognition AI for products. Its AI “automatically recognizes products in images” and makes them “instantly shoppable.” I discussed a similar approach in 2012, in relation to effectively dealing with complex boilerplate, in my post “Maximizing Representative Efficacy: Part II.” Essentially, a method similar to Fashwell’s can be used to power the Prometheus computational law AI engine. Here, for example, an IoT product’s cybersecurity profile could be extracted from its image and presented to the user in a simple-to-understand interface.
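
A highly simplified sketch of how such a pipeline might be wired together appears below. Here `recognize_product` is a placeholder standing in for an image-recognition model (Fashwell-style or otherwise), and the profile data is invented for illustration; neither reflects the actual Prometheus implementation:

```python
from typing import Dict

# Invented example data: cybersecurity profiles keyed by product identifier.
# A real system would pull these from a maintained database, not a literal dict.
PROFILES: Dict[str, Dict[str, str]] = {
    "acme-smart-lock-v2": {
        "default password": "unique per device",
        "encryption": "TLS 1.2 for all traffic",
        "update policy": "automatic firmware updates for 5 years",
    },
}


def recognize_product(image_bytes: bytes) -> str:
    """Placeholder for an image-recognition model that maps a product
    photo to a product identifier (hypothetical; not a real API)."""
    return "acme-smart-lock-v2"


def cybersecurity_summary(image_bytes: bytes) -> str:
    """End-to-end sketch: product photo in, plain-language security profile out."""
    product_id = recognize_product(image_bytes)
    profile = PROFILES.get(product_id)
    if profile is None:
        return f"No cybersecurity profile found for {product_id}."
    lines = [f"Cybersecurity profile for {product_id}:"]
    lines += [f"  - {attribute}: {value}" for attribute, value in profile.items()]
    return "\n".join(lines)


# Usage: the shopper snaps a photo of the device on the store shelf.
print(cybersecurity_summary(b"<photo bytes>"))
```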

April 5, 2019: California’s IoT laws (which go into effect January 1, 2020) create a foggy cybersecurity-requirements landscape. The laws are intentionally silent on which features qualify as “reasonable” for the purpose of achieving “appropriate” device cybersecurity functionality, leaving it up to manufacturers to decide. This approach perpetuates legacy-type thinking about novel problems, which renders the laws vulnerable to being ineffective (to say the least). What we can see here, however, is further (albeit indirect) endorsement of the use case for the AI-enabled computational law apps I described above, which are the subject of the ongoing “Prometheus” project at Stanford Law School’s CodeX.