A previous post focused on AI code writing, suggesting that privacy laws should be seeded with AI developers, who in turn would use them to enhance AI-enabled veiled-identity applications. This post looks more broadly at how these AI applications can augment privacy. To get into that, we need to take a look at what is unquestionably the most fertile privacy framework: the Fair Information Practices (FIPS).
FIPS is old; it dates to the early 1970s. But it is monumentally important. It is built around seven principles required of entities that collect and process personal information: (1) placing limits on information use; (2) formalizing data minimization; (3) limiting disclosure of personal information; (4) collecting and using only information that is accurate, relevant, and up-to-date; (5) enabling individuals with notice, access, and correction rights; (6) building transparent data-processing systems; and (7) providing security for personal information.
There are at least two ways to consider the role of AI as it relates to FIPS, and both tie to the AI-enabled veiled application that I described in my paper, Application of an Autonomous Intelligent Cyber Entity as a Veiled Agent. The first is where this type of application is used to monitor, inform, and enforce remedies. The second involves a different approach, one in which FIPS, at least as it is currently understood, is in a certain sense rendered inert.
Let’s begin with a look at the second way, because it informs an understanding of the first. FIPS was designed around the understanding that the goal is protecting an identifiable individual from fraud, reputational ruin, and the like. But once a veiled agent application is in use, that understanding fundamentally changes. Like a firewall, the application stands between the hordes of personal-information collectors and the individual to whom the information belongs. FIPS’ notice, access, and correction rights, for example, become largely irrelevant when the individual is no longer personally identified with and in the transaction. In this setting, the concept of “personal information” is no longer operative. It has been subsumed (in a good way) by the veiled application.
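The firewall analogy can be made concrete with a minimal sketch. Everything here is hypothetical (the class name, the field list, the pseudonym scheme are illustrative, not drawn from the paper): a veiled agent intercepts an outgoing transaction, strips the direct identifiers, and forwards only what the collector actually needs, under a pseudonym.

```python
import uuid

# Hypothetical sketch: a veiled agent standing between the individual
# and a data collector, substituting a rotating pseudonym for identity.
class VeiledAgent:
    # Fields that are never forwarded to the collector.
    DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

    def __init__(self):
        # A fresh pseudonym per session, so the collector cannot link
        # transactions back to the individual.
        self.pseudonym = f"veiled-{uuid.uuid4().hex[:12]}"

    def veil(self, transaction: dict) -> dict:
        """Return a copy of the transaction with direct identifiers removed."""
        veiled = {k: v for k, v in transaction.items()
                  if k not in self.DIRECT_IDENTIFIERS}
        veiled["buyer_id"] = self.pseudonym
        return veiled

agent = VeiledAgent()
order = {"name": "Jane Doe", "email": "jane@example.com",
         "item": "sku-1234", "qty": 2}
print(agent.veil(order))  # identity stripped, pseudonym substituted
```

The transaction still goes through; only the link to the identifiable individual is severed, which is exactly why the notice, access, and correction rights lose their grip.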
But what is the effect on the personal-information collectors and users (the advertisers, etc.)? Will they be disadvantaged? Will the use of a veiled application hamper their sales efforts or render them obsolete? Surprisingly, maybe not. If a seller can still effectively sell their goods or services, what difference does it make whether they know the actual identity of the buyer? Take it one step further: if the seller can predict, with the same or a largely equivalent level of accuracy, what the veiled application will purchase, that is all that matters…or should matter.
Now let’s look at the first way the veiled application interacts with FIPS. Here the application’s monitor, inform, and enforce capabilities can be seen as FIPS-enabling. For example, limits on personal-information use, data collection (with an eye toward minimization), and disclosure could be monitored more effectively by this type of application than by the individual the information belongs to.
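A hedged sketch of the monitor-and-enforce idea (the purposes, field names, and policy are illustrative assumptions, not from the paper): the agent compares what a collector asks for against a per-purpose allowlist and flags any over-collection, automating a data-minimization check no individual could realistically perform at scale.

```python
# Illustrative sketch: the veiled agent enforcing data minimization
# by flagging requests that exceed a purpose-bound allowlist.
ALLOWED_FIELDS = {
    "shipping": {"item", "qty", "drop_point"},
    "billing": {"item", "qty", "payment_token"},
}

def enforce_minimization(purpose: str, requested_fields: set) -> set:
    """Return the requested fields that exceed the allowlist (empty = compliant)."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return requested_fields - allowed

# A collector over-asking for "email" during shipping is flagged:
excess = enforce_minimization("shipping", {"item", "qty", "email"})
print(excess)  # {'email'}
```

The same pattern extends to the other FIPS principles: the agent can log every disclosure, track retention periods, and refuse requests automatically rather than relying on after-the-fact human complaints.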
For the conclusion, let’s turn again to the second way. Even here, FIPS may not be completely rendered irrelevant. (I hinted at this with the “largely” modifier: the rights become “largely irrelevant,” not entirely so.) These seven principles, with some modifications, could still be applied to the veiled application’s metadata. For example, requiring the data collector to maintain security for this type of data could remain desirable as a way to guard against decreasing entropy (the gradual narrowing, through accumulated metadata, of the set of people the veiled application could plausibly represent), which, at least in theory, would eventually harm the veiled application’s end-user. The bottom line: AI-enabled veiled applications are not only consistent with FIPS; they can serve to make it more effective.
April 23, 2021: Most states are driving hard to put privacy laws in place. Some, like Florida (HB 969), aim to stand out by adopting a tougher stance than California’s CCPA/CPRA. All of these efforts rely on the legacy protection scheme discussed above. How effective these laws will ultimately be at protecting privacy will be difficult to measure accurately. Will it be based on the number of class actions with meaningful payouts? Does that really protect privacy? No, it does not; it just makes it more expensive for some companies to operate. Will it be based on how many consumers exercise their rights of deletion, restriction, portability, opt-out, and so on? Maybe. But it would not be surprising if those numbers are disappointing. The bottom line is that a different mindset is needed for protecting privacy. The conceptual framework for making that work is available in the form of the AI-enabled veiled entity discussed above.