Why Writing Law Should be Like Writing Source Code: Artificial Intelligence and the Future of Privacy Laws

During the 2010 Spring Symposium of the Association for the Advancement of Artificial Intelligence, I presented a privacy-focused paper: “Application of an Autonomous Intelligent Cyber Entity as a Veiled Agent.” It described using an AI application as a public-facing avatar, a kind of privacy shield. The setup would be accomplished through a mechanism similar to the one used to establish corporations.

Now here we are, more than ten years later, and a great deal has changed in the privacy law ecosystem. We have witnessed a massive expansion of privacy jurisprudence, giving rise to new frameworks such as the GDPR and the CCPA. Industry-driven efforts to use encryption as a privacy shield have also been exposed to vigorous public debate, as in the aftermath of the 2015 San Bernardino shooting, when the legal battle between the DOJ/FBI and Apple highlighted complex public policy and privacy questions that remain unresolved to this day.

But privacy law can only go so far. Yes, it can generally do a good job of setting out the dos and don’ts of data collection practices. Yet protecting privacy, really protecting it, means protecting sensitive data from public exposure. As such, there is, to borrow from the world of software, an execution flaw: at the end of the day, the very people the law is supposed to protect don’t really understand their legal rights (think: CCPA) and won’t take the time to learn them (it is, after all, a daunting task). And so we have a legal framework where most people will not be reasonably successful in enforcing their legal rights, not so much in the sense of cashing out in litigation-driven settlements, but in the sense of effectively keeping their private data private.

The AI application I described in the symposium paper can serve as a useful privacy-enhancing companion, not only for people who want to take their privacy practice to the next level, but also for lawmakers and AI application developers. For this to work, it means, among other things, adopting a novel approach to writing privacy law (or any other law, for that matter) and bringing it more in line with the practice of writing source code. This entails, and requires, maintaining a synchronous, cooperative relationship, one in which new or updated privacy laws are periodically “seeded” to AI developers, who, in turn, build and maintain the veiled-identity AI applications with an eye toward maximizing the user’s utility. This approach alleviates the execution flaw in the current state of privacy law drafting.
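To make the idea concrete, here is a minimal sketch of what “law as source code” might look like in practice. Everything here is an illustrative assumption, not a real statute or an actual system: the rule names, fields, and categories are hypothetical. The point is that a legislature could publish privacy rules in a machine-readable form, and a veiled-agent application could consult the latest rule seed before exposing any of its user’s data.

```python
# Hypothetical sketch: machine-readable privacy rules "seeded" to an AI
# veiled agent. Rule identifiers and data categories are invented for
# illustration; no real statute is encoded here.

from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyRule:
    rule_id: str           # hypothetical identifier (e.g., a statute section)
    data_category: str     # category of personal data the rule covers
    disclosure_allowed: bool
    requires_consent: bool

# A "seed" of rules, as a legislature might periodically publish them.
RULE_SEED = [
    PrivacyRule("rule-001", "email", disclosure_allowed=True, requires_consent=True),
    PrivacyRule("rule-002", "ssn", disclosure_allowed=False, requires_consent=False),
]

def may_disclose(rules, data_category, user_consented):
    """Return True only if a rule permits disclosing this data category,
    honoring any consent requirement. Unlisted categories default to deny."""
    for rule in rules:
        if rule.data_category == data_category:
            if not rule.disclosure_allowed:
                return False
            return user_consented or not rule.requires_consent
    return False  # deny by default: no rule covers this category

# The veiled agent checks the current seed before exposing user data.
print(may_disclose(RULE_SEED, "email", user_consented=True))   # True
print(may_disclose(RULE_SEED, "ssn", user_consented=True))     # False
```

In this sketch, when an updated seed arrives, the agent’s behavior changes immediately, with no action required from the user, which is precisely how it would alleviate the execution flaw: the user’s rights are exercised for them rather than left for them to study.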