Using XAI to Comply with the California Privacy Rights Act

XAI has made a first-of-its-kind (and likely not last) appearance in privacy legislation, and the influence of standard-setting organizations (well, at least one of them) on the law’s approach is plainly visible.

XAI makes its inaugural legislative appearance in the California Privacy Rights Act (CPRA). This law requires that businesses provide “meaningful” information about the logic used in automated “decisionmaking [sic] technology.” (Section 1798.185(a)(16).)

Though the CPRA does not explicitly name XAI, it is sufficiently clear that XAI is what is being referenced. The telltale sign is the use of the term “meaningful.” This exact term appears in NIST’s “Four Principles of Explainable Artificial Intelligence” (Draft NISTIR 8312). Coincidence? Highly unlikely. NIST’s publications carry significant weight in the development of technology-related legislation and regulations, so echoing one in the CPRA makes complete sense. For example, the Federal Trade Commission has publicly pointed to NIST, namely NIST SP 800-53 (“SP” stands for “special publication”), as the benchmark for measuring whether legally reasonable cybersecurity practices have been put in place.

Now, it will be interesting to see how regulators and courts interpret “meaningful” information; that is, what characteristics must a communication have to satisfy the “meaningful” requirement for a consumer? This is not an easy task. Information that is “meaningful” to a consumer needs to be both easily understood and relevant. But these parameters are slippery and difficult to define.

But there is a way to alleviate this challenge. It begins with using XAI to glean the relevant information from the application and to present it to the consumer in a way that is easily understood. (Note that the XAI can be a feature of the decision-making application or an independent application, which I discuss here.) This approach ties in with my introduction and discussion of the concept of Perfect Information and how it maps to NISTIR 8312. Again, Perfect Information is defined as information that is: (1) relevant, (2) easily understood, and (3) not prone to misrepresentation. (For more on this, see The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information.)
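To make this concrete, here is a minimal sketch of the “independent application” idea: a standalone explainer that wraps an opaque decision function and measures how much each input factor moved the output. Everything here is hypothetical and illustrative; score_applicant stands in for the business’s actual decision-making application, and the occlusion technique shown is just one simple way an XAI layer might glean per-decision attributions.

```python
from typing import Callable, Dict

def occlusion_attribution(
    score_fn: Callable[[Dict[str, float]], float],
    applicant: Dict[str, float],
    baseline: Dict[str, float],
) -> Dict[str, float]:
    """Attribute the score to each input factor by replacing that
    factor with a population-baseline value and measuring the change."""
    full_score = score_fn(applicant)
    contributions = {}
    for name in applicant:
        occluded = dict(applicant)
        occluded[name] = baseline[name]  # neutralize this factor's signal
        contributions[name] = full_score - score_fn(occluded)
    return contributions

# Hypothetical opaque scoring function standing in for the
# business's decision-making application.
def score_applicant(a: Dict[str, float]) -> float:
    return 600 + 2.0 * a["on_time_payments"] - 15.0 * a["recent_inquiries"]

applicant = {"on_time_payments": 48, "recent_inquiries": 3}
baseline = {"on_time_payments": 40, "recent_inquiries": 1}
print(occlusion_attribution(score_applicant, applicant, baseline))
# {'on_time_payments': 16.0, 'recent_inquiries': -30.0}
```

The explainer never needs access to the model’s internals, which is what lets it sit outside the decision-making application.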

The “easily understood” quality is addressed by presenting the information in a familiar, proven format. The one I think is suitable for the task is what we see in the consumer credit report. Credit reports are easily understood. They do not explain the precise correlation between a specific credit event and its precise impact on the FICO score. (Even if disclosed, that correlation would likely be difficult to understand.) All they tell the consumer is that a certain, identified event caused their FICO score to increase or decrease by a number of points, and that this is why a certain financial decision was made; there is no explanation of why the score moved by x points and not y. Thus, using XAI to identify the parameters (not the methodology) used by the decision-making application, and presenting them in a way similar to a credit report, would likely comply with the CPRA’s “meaningful” requirement.
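As an illustration, attributions like those gleaned above could be rendered in a credit-report-style disclosure: the consumer learns which identified factor moved the score, in which direction, and by how many points, while the model’s methodology stays undisclosed. The factor labels and point values below are made up for the example.

```python
# Illustrative per-decision attributions (factor -> points of impact),
# as might be produced by an XAI layer; the values are hypothetical.
contributions = {
    "Number of recent credit inquiries": -30.0,
    "History of on-time payments": 16.0,
}

def credit_style_report(contributions: dict, decision: str) -> str:
    """Render attributions the way a credit report would: which factor,
    which direction, how many points -- but not the model's methodology."""
    lines = [f"Decision: {decision}", "Key factors:"]
    # List factors in order of impact, largest first.
    for factor, points in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        direction = "lowered" if points < 0 else "raised"
        lines.append(f"  - {factor} {direction} your score by {abs(points):.0f} points.")
    return "\n".join(lines)

print(credit_style_report(contributions, "Application declined"))
# Decision: Application declined
# Key factors:
#   - Number of recent credit inquiries lowered your score by 30 points.
#   - History of on-time payments raised your score by 16 points.
```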