Maximizing Representative Efficacy: Part II

The folks behind ToS;DR, Privacy Icons, and Open Notice share the common and important goal of digesting ToS and privacy policies/notices into manageable, meaningful consent mechanisms (collectively, “ToS”). Launched in June 2012, ToS;DR is the newcomer. They employ signaling icons, but their core competency is site-grading on an A-to-E scale. Sites with “very good” ToS earn a “Class A” rating, while those with “very bad” ones earn a “Class E.” (They have quite a lot of grading to get through.)

As its name implies, Privacy Icons signals critical ToS provisions with graphical representations. (See also Walter Effross, Logos, Links and Lending: Towards Standardized Privacy and Use Policies for Banking Web Sites, 24 Ohio N.U. L. Rev. 747 (1998).) And for its part, Open Notice is taking on a moderator role, seeking to “help projects find and talk to each other.” As far as I can tell, the former two rely on a manual analytical process (their methodology is unknown), and all three are now looking for volunteers.

Part 1 of my post examined efficient representation of information, specifically in contractual settings.  Signaling ToS is an important example, especially in light of their ubiquity and the fact that, every second, they create enforceable contractual obligations for countless users.  But more immediately important here is that they are not user-friendly; they are difficult to understand and overbearing in nature.  Thus it’s no surprise that people don’t read them and appear to have given up trying to wade through them.  In Privacy Merchants, for example, Amitai Etzioni cites a DoubleVerify survey of five billion ads showing that privacy-policy signaling icons were clicked on only 0.002% of the time.  Of those clicks, only 0.00002% of users opted out of the targeted advertising. L. Gordon Crovitz argues this dismal rate is evidence that people don’t care because they simply can’t; they find the provisions “impenetrable.”

En-masse surrender to ToS impenetrability does not dispose of the need for a solution.  On the contrary: site owners aren’t giving up on ToS, and users may belatedly find they are entering into some pretty onerous, enforceable legal commitments.  So how do we go about cracking this impenetrability nut?  The answer: we make ToS user-friendly, i.e., accessible and understandable.  How do we accomplish that?  With AI-powered computational law apps.

The company that efficiently leverages AI will win, at least in the short term.  To be clear, efficiency will be the game-decider: the company that can efficiently represent to a user whether a given ToS is “acceptable” will have the winning solution.  This acceptability result would be the product of an AI agent that delivers relevant information through a personalized risk-tolerance profile algorithm (RTPA).  The algorithm is also dynamic, learning and adapting as the user’s online interactions accumulate.  This approach is a vast improvement over the manual ToS-signaling efforts.  And as Steve Jobs was fond of saying: “Oh, and one more thing.”   The company that efficiently integrates AI with an RTPA app residing on our smartphones, tablets, and other computing platforms will dominate the space (for some time). Hint? Ok, here you go: Apple.
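To make the RTPA idea concrete, here is a minimal sketch. The clause categories, the threshold test for “acceptable,” and the learning rule are all my own illustrative assumptions; nothing here reflects a published design by Apple or any of the ToS projects:

```python
from dataclasses import dataclass, field

# Hypothetical clause categories a ToS analyzer might score (0 = benign, 1 = onerous).
CATEGORIES = ["data_sale", "content_ownership", "arbitration", "unilateral_changes"]

@dataclass
class RiskProfile:
    # Per-category tolerance, 0.0 (intolerant) .. 1.0 (indifferent); start neutral.
    tolerance: dict = field(default_factory=lambda: {c: 0.5 for c in CATEGORIES})
    learning_rate: float = 0.1

    def acceptable(self, tos_risks: dict) -> bool:
        # A ToS is "acceptable" when no flagged clause exceeds the user's tolerance.
        return all(risk <= self.tolerance.get(cat, 0.5)
                   for cat, risk in tos_risks.items())

    def record_decision(self, category: str, accepted: bool) -> None:
        # Dynamic part: nudge tolerance toward the user's observed behavior,
        # so the profile adapts as online interactions accumulate.
        current = self.tolerance[category]
        target = 1.0 if accepted else 0.0
        self.tolerance[category] = current + self.learning_rate * (target - current)
```

For instance, a user who repeatedly accepts sites that sell data would see their `data_sale` tolerance drift upward, so fewer alerts fire for that category over time.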

So what is Apple thinking about?  We can glimpse its creative thinking through its published patent applications.  There are currently (at least) 11 such applications related to Siri (7 of which were obtained through Apple’s acquisition of Siri, Inc.). When you read the embodiments in, for example, US 20070100790, it is possible to extrapolate to other embodiments that employ an RTPA.  Siri could employ one as it feeds from Apple’s server farms storing an ontology for every known ToS. This data repository would have a static component, made up of ToS deposited by participating members, and a dynamic component constructed by data-mining bots.  In line with my posts on Siri here and here, and the computational lawyering aspects I wrote about here, I see Siri as the single most compelling platform/engine for maximizing representative efficacy, and Apple’s patent applications suggest that Siri with an RTPA is not far-fetched.

Imagine the following example:  I ask Siri whether Twitter’s ToS is acceptable. She comes back and advises me that “it’s ok, but avoid posting pictures because Twitter will tell you they own them.”  And it doesn’t stop there. After I’ve asked the question once, Siri will update me whenever Twitter makes changes that I care about. For example, she will inform me: “Twitter’s ToS has just become more user-friendly!  They no longer claim to own your pictures!” Siri will also be on the lookout for alternatives to Twitter (however unwieldy that might currently seem), which renders her advice on ToS more powerful and far more multi-dimensional than current manual ToS-signaling efforts.
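That change-alert behavior amounts to diffing successive ToS snapshots and flagging only the categories the user cares about. A minimal sketch, assuming a hypothetical analyzer that produces per-category risk scores (0 = benign, 1 = onerous):

```python
def tos_changes(old: dict, new: dict) -> dict:
    """Per-category risk deltas between two ToS snapshots."""
    categories = set(old) | set(new)
    return {c: new.get(c, 0.0) - old.get(c, 0.0)
            for c in categories
            if new.get(c, 0.0) != old.get(c, 0.0)}

def alerts(changes: dict, watched: set) -> list:
    """Turn risk deltas into user-facing notices, but only for watched categories."""
    messages = []
    for category, delta in sorted(changes.items()):
        if category in watched:
            direction = "more user-friendly" if delta < 0 else "riskier"
            messages.append(f"{category}: the ToS just became {direction}")
    return messages
```

So a drop in the (hypothetical) `content_ownership` risk score between two crawls of Twitter’s ToS would surface exactly the kind of “they no longer claim to own your pictures” notice described above.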

While Siri is currently the only viable candidate, additional contenders will undoubtedly emerge.  Their challenge will be to sufficiently differentiate themselves from her.  That’s a tough task.  Developing and launching a viable alternative in the current über copy-cat environment doesn’t bode well for those waiting for something other than Siri. Aside from full-blown Siri-like contenders, we will see smaller, niche players, such as Ask Ziggy.  (They recently landed $5M in venture financing.)  I wouldn’t be surprised if Ask Ziggy gets acquired in the next year or two and integrated into a larger AI engine.  (On a side note: Ask Ziggy is working closely with Nuance Communications, a company with roots that trace back to SRI and Ray Kurzweil.  Yep, the same SRI that gave birth to Siri.  Also of interest, assuming it is true, is that Nuance’s servers are already feeding Siri’s magic-like knowledge.  So, is Apple a potential buyer?)

As noted in Part 1 of this post, “our legal system, particularly in the realm of contract law, has fallen short of properly dealing with the ‘practical issues involved in regulating information.’”  AI, coupled with an RTPA, is poised to change that and render ToS user-friendly.


Update November 8, 2017

Integrating computational law AI apps (CLAI) into augmented reality (AR) devices is the next iteration in empowering users’ informed decision-making. While interfacing with an intelligent assistant (e.g., Siri) currently requires a voice command to initialize, an AR device with CLAI capabilities could provide relevant, meaningful, and actionable information virtually instantly, depending on what the user is looking at. The more attractive and user-friendly AR devices become, the broader consumer adoption will be, helping herald the age in which these two technologies merge. Will Apple lead the way?

Update June 29, 2017

Six years ago I first introduced the concept of computational law AI applications (CLAI). The discussion back then was limited to Siri, but it is clear that we have more options today. In the context of healthcare apps, the CLAI should work independently of the health app to empower users. Within that configuration, users will be able to reach informed decisions on a number of relevant aspects of their app. From a privacy perspective, they would be able to intelligently assess whether or not to trust a certain provider. That decision would be driven by the CLAI combining a cybersecurity score garnered, for example, from the Principles of Fair and Accurate Security Ratings (recently advanced by the U.S. Chamber of Commerce) with other publicly available “beacons,” such as SOC 3 certificates and other sources. As the app environment becomes more complex and capable, so does the potential role for CLAI.
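As a sketch of how a CLAI might combine such beacons, here is a simple weighted aggregation. The beacon names, the normalization of each signal to a 0–1 scale, and the weights are my own illustrative assumptions, not part of any published ratings scheme:

```python
def trust_score(beacons: dict, weights: dict) -> float:
    """Combine normalized beacon scores (each 0..1) into one weighted trust score.

    `beacons` might hold, e.g., a security rating derived from a ratings
    scheme plus a flag for a current SOC 3 attestation. Beacons with no
    assigned weight are ignored rather than guessed at.
    """
    total_weight = sum(weights.get(name, 0.0) for name in beacons)
    if total_weight == 0:
        return 0.0  # no weighted evidence -> no basis for trust
    weighted = sum(score * weights.get(name, 0.0) for name, score in beacons.items())
    return weighted / total_weight
```

For example, a provider with a 0.8 security rating (weight 0.75) and a current SOC 3 attestation scored 1.0 (weight 0.25) would earn a combined trust score of 0.85, which the CLAI could then compare against the user’s risk-tolerance profile.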

Update, June 6, 2017

The recent Government Accountability Office (GAO) IoT report identifies privacy as a major challenge. Section 4.2.2, for instance, discusses user-notification practices, stating that they should comply with the “openness” principle of the OECD Fair Information Practices (FIPS). While important as a principle, “openness” is insufficient as a practical matter; it fails to solve the Crovitz “impenetrability” problem. Think of this failure from a regulatory perspective (discussed above and in Part I). From that perspective, the openness principle does little, if anything, to remedy the information deficiency that continues to plague contemporary contract practices. That deficiency can be remedied, however, by updating FIPS to mandate the integration of computational law apps into IoT devices. These apps would serve as “informational intermediaries” with the primary goal of providing users with actionable, and by extension empowering, information that would make privacy notices meaningful.

Update, July 23, 2015

The SPY Car Act (Sens. Blumenthal and Markey) seeks, among other things, to protect drivers from security and privacy risks through the development of a “cyber dashboard” rating system. The dashboard concept is representative of the need to provide consumers with an efficient representation of information; in this case, how well the vehicle protects the driver from cyber-related risks.

As I have written here before, part of the inquiry is not only the vehicle’s protection score. The more interesting question is whether that score should be acceptable to the driver. The RTPA, integrated into the user’s smartphone, enables a better-informed decision by the driver, who can independently assess the representations made by the manufacturer, the FTC, and the NHTSA.
