AI Risk Ratio: Standardizing IoT Cybersecurity Settings

Determining how much security is “enough” for a specific IoT device is not a decision that should be left solely to the device manufacturer. It should initially and primarily be set by an objective standard/metric that the manufacturer must comply with. This approach helps curb the adoption of arbitrary or inaccurate security settings. It is also legally efficient, as it helps a reviewing court frame its liability analysis more accurately.

Part of this security-labeling effort begins with assigning a risk value to a particular IoT device. Since my area of interest is primarily AI, I focus on AI-capable IoT devices, and the risk-value algorithm I propose is something I call the “AI Risk Ratio” (ARR).

As I have written here before, the ARR stands for the proposition that the greater the computing power of the AI integrated into and used within an IoT device, the greater the probability that the device will be capable of generating, storing, and transmitting higher-quality/value data, thereby garnering a higher target-value score[1] and requiring stronger security protections.

Each security level is assigned a numerical value, ranging from Level 1 (the lowest security requirement) to Level 5 (the highest). (This security-level scale dovetails with the AI taxonomy I first proposed in 2012 and expanded on here.)
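
To make the scale concrete, here is a minimal sketch (in Python) of how an ARR level might be derived from two hypothetical 0-to-1 inputs, an on-device AI compute score and a data-value score. The function name, equal weighting, and thresholds are my own illustrative assumptions, not part of any published standard.

```python
from enum import IntEnum

class ARRLevel(IntEnum):
    """Illustrative ARR security levels (1 = lowest requirement, 5 = highest)."""
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5

def assign_arr_level(compute_score: float, data_value_score: float) -> ARRLevel:
    """Map hypothetical 0-1 scores for on-device AI compute and data value
    to an ARR level. Weighting and thresholds are placeholders."""
    target_value = 0.5 * compute_score + 0.5 * data_value_score  # assumed equal weighting
    if target_value < 0.2:
        return ARRLevel.LEVEL_1
    if target_value < 0.4:
        return ARRLevel.LEVEL_2
    if target_value < 0.6:
        return ARRLevel.LEVEL_3
    if target_value < 0.8:
        return ARRLevel.LEVEL_4
    return ARRLevel.LEVEL_5

# Example: a device with modest AI compute but relatively valuable data.
print(assign_arr_level(compute_score=0.3, data_value_score=0.7).name)  # LEVEL_3
```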

The ARR security-level assignment can be set through a number of mechanisms. One possible model, arguably the favored one for now, follows the FICO credit score system. Unlike FICO, the ARR would not, at least initially, need to rely on multiple reporting sources (as is the case with the three large credit bureaus). A central entity with ISO-like status, for example, could take on the task, eventually broadening the scope of data points it uses and certifying additional entities to supply it with evaluative data.
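
As a rough sketch of that FICO-like mechanism, assume a central registry that initially files the only evaluative report and later blends in reports from certified evaluators. The reporter names and weights below are hypothetical.

```python
from statistics import fmean

def aggregate_arr_score(reports, weights=None):
    """Combine 0-100 evaluative scores from reporters into a single ARR score,
    FICO-style. Reporter names and weights here are hypothetical.

    reports: dict of reporter name -> score (0-100)
    weights: optional dict of reporter name -> weight
    """
    if weights is None:
        # Single/central source (or equal trust in all sources): simple average.
        return fmean(reports.values())
    total_weight = sum(weights.get(name, 0.0) for name in reports)
    return sum(score * weights.get(name, 0.0)
               for name, score in reports.items()) / total_weight

# Initially a single central entity might file the only report;
# later, certified evaluators could be weighted in.
print(aggregate_arr_score({"central_registry": 72.0}))  # 72.0
print(round(aggregate_arr_score({"central_registry": 72.0, "lab_a": 80.0},
                                weights={"central_registry": 0.7, "lab_a": 0.3}), 1))  # 74.4
```

The design choice mirrors the point above: a single central source suffices at the outset, and the same function accommodates additional certified sources later.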

Once the ARR score is set, it is used by AI-enabled computational law (CLAI) applications to set their default communication protocols. Since these apps can deliver actionable information to the user in either a simple (e.g., signaling) or a complex (e.g., chat) format, the ARR score determines their default setting. Consider the following illustration: a NEST thermostat has an ARR Level 1 score and an implantable medical device has an ARR Level 4 score. In this example, the CLAI communication protocol for the NEST user defaults to “signal,” whereas for the medical device it defaults to “chat.”
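
A sketch of that default-setting logic follows; the cutoff between “signal” and “chat” is my own illustrative assumption, not a fixed rule.

```python
def default_clai_protocol(arr_level: int) -> str:
    """Pick a CLAI app's default communication protocol from the ARR level.
    The Level 2 cutoff is assumed purely for illustration."""
    return "signal" if arr_level <= 2 else "chat"

print(default_clai_protocol(1))  # "signal"  (e.g., the Level 1 thermostat)
print(default_clai_protocol(4))  # "chat"    (e.g., the Level 4 implantable device)
```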

Additional refinements to this standardization methodology will be forthcoming.

___________

[1] From a data thief’s perspective.

***Postscript***

Update September 4, 2019: To be effective/practical, the ARR needs to possess dynamic representational capabilities. It needs to be designed around an explicitly uncertainty-capable algorithm, meaning it can effectively deal with a wide variety of AI applications and with even infinitesimally likely, unexpected situations, adjusting its score accordingly. Note also that such capability correlates with the “perfect” information discussed in my XAI post (see here), specifically as it relates to the first element, the “relevant” data characteristic, which is itself a subset of the AI’s representational efficacy.
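
One way to read “dynamic representational capabilities” in code is an ARR score that carries an explicit uncertainty estimate and updates as new evidence about the device arrives. The sketch below uses a simple incremental mean/variance seeded with a prior and bins a conservative upper bound into Levels 1-5; all of these choices are illustrative assumptions, not a specification.

```python
import math
from dataclasses import dataclass

@dataclass
class DynamicARR:
    """Toy sketch of an uncertainty-aware ARR: the score carries an
    uncertainty band, and the effective level is taken conservatively
    from the band's upper edge."""
    mean: float = 50.0       # ARR score estimate on a 0-100 scale
    variance: float = 400.0  # prior uncertainty (standard deviation of 20)
    n: int = 1

    def observe(self, new_evidence: float) -> None:
        """Fold in a new 0-100 risk observation using an incremental
        mean/variance update seeded with the prior above."""
        self.n += 1
        delta = new_evidence - self.mean
        self.mean += delta / self.n
        self.variance += (delta * (new_evidence - self.mean) - self.variance) / self.n

    def effective_level(self) -> int:
        """Conservative level: score plus one standard deviation, binned
        into Levels 1-5 (20-point bins)."""
        upper = min(100.0, self.mean + math.sqrt(self.variance))
        return min(5, int(upper // 20) + 1)

arr = DynamicARR()
arr.observe(65.0)   # an unexpected exposure raises the estimate
print(arr.mean, arr.effective_level())  # 57.5 4
```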

Update August 26, 2019: Security is a matter of degree. It resides on a spectrum and changes depending on the absence or presence of relevant variables, which also affects the ARR score. Fine-tuning the ARR score can become increasingly complicated in an ever-expanding ecosystem; i.e., the more AI proliferates (don’t forget, it is also going to get cheaper) and the more ubiquitous it becomes, the more the challenge intensifies. A possible solution is endowing the AI with a game-theory capability, which can help ensure the ARR score it delivers remains relevant in any particular set of circumstances.
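
A toy reading of that game-theory capability uses a minimax rule: the ARR setter picks the candidate security level whose worst-case loss against a set of hypothetical attacker strategies is smallest. The loss matrix, attack names, and candidate levels below are invented purely for illustration.

```python
# loss[level][attack]: expected loss (arbitrary units) if the device is set
# to that candidate ARR level and the attacker chooses that attack.
LOSS = {
    3: {"credential_stuffing": 9, "firmware_tamper": 12},
    4: {"credential_stuffing": 4, "firmware_tamper": 7},
    5: {"credential_stuffing": 3, "firmware_tamper": 6},
}

def minimax_level(loss) -> int:
    """Return the candidate ARR level with the smallest worst-case loss."""
    return min(loss, key=lambda level: max(loss[level].values()))

print(minimax_level(LOSS))  # 5, under this invented loss matrix
```

A fuller model would also weigh the cost of imposing higher requirements, so the answer would not always be the highest candidate level.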

Update August 11, 2019: In one of the chapters I wrote in The Law of Artificial Intelligence and Smart Machines, I noted that “[b]y using ARR it becomes possible to see that hacking risks are properly regarded as being ‘elevated’ in those devices that garner a high ARR score.” It is important to note that the ARR is not limited to cybersecurity considerations. The same principles it embodies can also be ported to broader operational security considerations. A high ARR score can signal or drive a requirement, for example, that a prospective user/consumer hold a license to purchase the specific AI item. The ARR score can also signal to an insurer that the AI item represents a higher risk and that a concomitant increase in premium will be required. From a more general legal perspective, the ARR score can help, for example, calibrate judicial review/attitude and the level of liability that should attach to the manufacturer.

Update November 9, 2018: The Food and Drug Administration’s Content of Premarket Submissions for Management of Cybersecurity in Medical Devices (published Oct. 18, 2018) discusses, in the part relevant to this post, a two-tier cybersecurity risk framework for medical devices. Tier 1 is “Higher Cybersecurity Risk” and Tier 2 is “A medical device for which the criteria for a Tier 1 device are not met.” This tiered approach is similar in concept to the ARR score, and synchronizing the two could emerge as a cybersecurity best practice for medical devices. Additionally, the FDA’s tiers could also be evaluated and implemented in relation to the AI taxonomy and its liability corollaries that I have previously described.
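
If the two frameworks were synchronized, the mapping could be as simple as a threshold rule; the Level 3 cutoff below is my assumption for illustration, not anything stated by the FDA.

```python
def fda_tier_from_arr(arr_level: int, tier1_threshold: int = 3) -> int:
    """Map an ARR level onto the FDA's two-tier framework: Tier 1
    ("Higher Cybersecurity Risk") vs. Tier 2 (Tier 1 criteria not met).
    The Level 3 threshold is assumed for illustration only."""
    return 1 if arr_level >= tier1_threshold else 2

print(fda_tier_from_arr(1))  # Tier 2 (e.g., the Level 1 thermostat)
print(fda_tier_from_arr(4))  # Tier 1 (e.g., the Level 4 implantable device)
```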