Determining how much security is “enough” for a specific IoT device is not a decision that should be left solely to the device manufacturer. It should initially and primarily be determined by an objective standard or metric that the manufacturer must comply with. This approach helps curb the adoption of arbitrary and inadequate security settings. It is also legally efficient, as it helps a reviewing court frame its liability analysis more accurately.
Part of this security-labeling effort begins with assigning a risk value to a particular IoT device. Since my area of interest focuses primarily on AI, I will stick to AI-capable IoT devices; the risk-value algorithm I propose is something I call the “AI Risk Ratio” (ARR).
As I have written here before, the ARR stands for the proposition that the greater the computing power of the AI integrated into an IoT device, the greater the probability that the device will be capable of generating, storing, and transmitting higher-quality, higher-value data. That data earns the device a higher target-value score, which in turn requires stronger security protections.
Each security level is assigned a numerical value, from Level 1 at the lowest end of the security-requirement scale to Level 5 at the highest. (This security-level scale dovetails with the AI taxonomy I first proposed in 2012 and expanded on here.)
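The compute-to-security-level relationship can be sketched in code. Note that the function name, the weights, and the thresholds below are my illustrative assumptions, not part of any published ARR standard; the point is only that the level is a monotonic function of AI computing power and data value.

```python
# Hypothetical sketch of the ARR idea: higher AI computing power implies
# higher-value data, a higher target score, and a higher security level.
# All weights, thresholds, and parameter names here are illustrative
# assumptions, not a published specification.

def arr_level(ai_compute_tflops: float, data_value_score: float) -> int:
    """Map a device's AI compute and data-value score to an ARR level (1-5).

    ai_compute_tflops: the device's AI computing power (assumed normalized
        so that ~10 TFLOPS and above saturates the scale).
    data_value_score: an assumed 0-1 estimate of the value of the data
        the device generates, stores, and transmits.
    """
    # Compute is treated as a proxy for data quality/value; both terms
    # are capped at 1.0 so the combined target score stays in [0, 1].
    target_score = (0.6 * min(ai_compute_tflops / 10.0, 1.0)
                    + 0.4 * min(data_value_score, 1.0))
    # Bucket the 0-1 target score into five security levels.
    return min(5, int(target_score * 5) + 1)
```

Under these assumptions, a low-compute sensor lands at Level 1 and a high-compute device handling valuable data lands at Level 5; only the monotonicity, not the particular weights, reflects the ARR proposition.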
The ARR security level can be assigned through a number of mechanisms. One possible model, arguably the favored one for now, follows the FICO credit-score system. Unlike FICO, the ARR would not, at least initially, have to rely on multiple reporting sources (as is the case with the three large credit bureaus). A central entity with an ISO-like status, for example, could take on the task, eventually broadening the scope of the data points it uses and certifying additional entities to supply it with evaluative data.
Once the ARR score is set, AI-enabled computational law (CLAI) applications use it to set their default communication protocols. Since these apps can deliver actionable information to the user in a simple (e.g., signaling) or a complex (e.g., chat) format, the ARR score determines the default setting. Consider the following illustration: a NEST thermostat has an ARR Level 1 score, while an implantable medical device has an ARR Level 4 score. In this example, the CLAI communication protocol for the NEST user defaults to “signal,” whereas the medical device defaults to “chat.”
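The default-protocol rule above reduces to a simple mapping from ARR level to communication format. A minimal sketch, assuming a cutoff at Level 3 (the exact threshold is my assumption; the post only fixes Level 1 to “signal” and Level 4 to “chat”):

```python
# Hedged illustration of the CLAI default-protocol rule: low-ARR devices
# get a simple "signal" channel, high-ARR devices a richer "chat" channel.
# The Level-3 cutoff is an assumption chosen to match the two examples
# in the text (Level 1 -> signal, Level 4 -> chat).

def default_protocol(arr_level: int) -> str:
    """Pick the CLAI communication default from a device's ARR level (1-5)."""
    if not 1 <= arr_level <= 5:
        raise ValueError("ARR level must be between 1 and 5")
    return "chat" if arr_level >= 3 else "signal"

# A NEST thermostat at ARR Level 1 defaults to signaling, while an
# implantable medical device at ARR Level 4 defaults to chat.
```

The user could presumably override the default, but anchoring it to the ARR score gives every CLAI app a consistent, auditable starting point.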
Additional refinements to this standardization methodology will be forthcoming.
*From a data thief’s perspective.