Note: This post was originally written with cybersecurity in mind and was published on LinkedIn. Even as I was writing it, I could see the growing synapses to computational law. At first I thought I should write another post that is computational law specific, but I decided to publish it here as-is instead. The legal compliance challenges deep neural AI will pose (tied in with my thoughts on quantum computing published here) are ripe for addressing by similarly-powered computational law apps. (I first wrote about those in 2012, and you can read that here.)

***

Google AlphaGo's defeat (4-1) of Go master Lee Se-Dol marks an important milestone on the road to mainstreaming deep neural network artificial intelligence (AI). These applications successfully tackle disambiguation, think through alternatives, and use something akin to intuition to select an optimal mode of operation, and they will irreversibly change how we think and operate in cybersecurity. The threat spectrum is deep and wide. Take, for example, the risk such applications pose to big health data. Existing deanonymization capabilities will be supercharged once these AI applications become more accessible (which is only a matter of time).
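To make the deanonymization threat concrete, here is a minimal sketch of the classic linkage attack that machine learning stands to scale up dramatically: joining a "de-identified" health record set to a public roster on shared quasi-identifiers. Every record, name, and field below is hypothetical, chosen purely for illustration.

```python
# "De-identified" health data: direct identifiers stripped,
# quasi-identifiers (zip, birth year, sex) left in place.
health_records = [
    {"zip": "02138", "birth_year": 1945, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "90210", "birth_year": 1980, "sex": "M", "diagnosis": "diabetes"},
]

# Publicly available roster (e.g., a voter list) with names attached.
public_roster = [
    {"name": "Jane Doe", "zip": "02138", "birth_year": 1945, "sex": "F"},
    {"name": "John Roe", "zip": "90210", "birth_year": 1980, "sex": "M"},
]

QUASI_IDS = ("zip", "birth_year", "sex")

def reidentify(health, roster):
    """Link each health record to roster entries sharing all quasi-identifiers."""
    linked = []
    for rec in health:
        key = tuple(rec[q] for q in QUASI_IDS)
        matches = [p["name"] for p in roster
                   if tuple(p[q] for q in QUASI_IDS) == key]
        # A unique match re-identifies the patient behind the diagnosis.
        if len(matches) == 1:
            linked.append((matches[0], rec["diagnosis"]))
    return linked

print(reidentify(health_records, public_roster))
# Both hypothetical patients are uniquely re-identified.
```

The point is that the join itself is trivial; what deep neural AI adds is the ability to find such linkable signals in messier, higher-dimensional data where no obvious quasi-identifiers exist.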

The HIPAA Security Rule, which requires covered entities and their business associates to continuously evaluate risks and vulnerabilities and implement appropriate security controls to address them, is written broadly enough to accommodate this change. But the question is whether covered entities and their business associates will be able to meet the challenge. As it is, the health care sector is already extremely susceptible to cyberattack. This reality recently triggered action by the U.S. Department of Health and Human Services' Health Care Industry Cybersecurity Task Force, which was formed under the Cybersecurity Information Sharing Act (CISA). So now these entities will need to recalibrate their policies and procedures to account for this looming threat capability.

Consider, for instance, the impact these AI apps have on the requirements under 45 CFR 164.306(b)(2)(iv). The burden here is far from trivial, as the covered entity and its business associate are directed to analyze the "probability and criticality of potential risks" posed to ePHI. The threat assumptions we have relied on so far are rendered virtually obsolete by these AI apps. Bundle this directive with best practices borrowed from the security control selections under FIPS 200 and NIST SP 800-53, and choosing baseline security controls becomes a much more difficult and expensive task.
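A minimal sketch of what that "probability and criticality" analysis can look like in practice, scored on simple ordinal scales. The risk entries, the 1-5 scales, and the triage threshold below are all hypothetical illustrations, not regulatory guidance or an official methodology.

```python
# Each entry: (risk description, probability 1-5, criticality 1-5).
risks = [
    ("AI-driven deanonymization of released ePHI datasets", 4, 5),
    ("Phishing credential theft at a business associate",    5, 3),
    ("Physical theft of encrypted backup media",             2, 2),
]

def prioritize(entries, threshold=12):
    """Rank risks by probability x criticality; flag those over threshold."""
    scored = sorted(((p * c, name) for name, p, c in entries), reverse=True)
    return [(name, score, score >= threshold) for score, name in scored]

for name, score, needs_control in prioritize(risks):
    action = "select baseline control" if needs_control else "monitor"
    print(f"{score:2d}  {action:22s}  {name}")
```

Note how the AI-driven risk dominates the ranking once its probability is revised upward: the arithmetic is simple, but the hard (and newly expensive) part is justifying the probability estimates when the adversary's capability is a moving target.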

So far, I have only briefly illustrated the impact of these AI apps on ePHI, but as I mentioned above, the threat spectrum is deep and wide. And because we live in a legal analytical framework that increasingly borrows from and references industry standards (e.g., PCI DSS in Nevada's NRS 603A.215), what constitutes a reasonable cybersecurity posture will likewise carry far-reaching requirements.

Throw into the mix my thoughts from my post “Eviscerating the Paradigm of Harm: Quantum Computing and the Remodeling of Article III Standing” and it is easy to appreciate that the field of cybersecurity is going to get very, very interesting.