Reining in the Drones

Three years ago, I introduced (here) the concept of computational law AI applications (CLAI). Briefly, CLAI describes smartphone applications, fueled by AI, that provide legal advice and can do so in a jurisdictionally sensitive manner (through GPS and other location-enabling mechanisms). The same principle naturally extends from apps to autonomous and semi-autonomous vehicles. But where the apps deliver legal advice to a human, the vehicles adjust their operational behavior to local laws and rules in a manner that intentionally bypasses the user.

But this is all moving too quickly, so let’s slow down for a bit. Imagine autonomous and semi-autonomous vehicles (terrestrial and aerial) harnessing the power of computational law. I call these vehicles, collectively, “ACLVs.” ACLVs are always connected to transportation-centric legal ontologies (TCOs), from which they download real-time operational rules. The TCO infrastructure also includes a variety of behavioral “beacons,” which businesses (like stadiums and movie sets) can use to signal an ACLV to stay away. (This is similar, in concept, to what robots.txt does to thwart unwanted web crawlers.) The net result is that just as ACLVs can efficiently negotiate difficult terrain or crowded, sensitive airspace, they can just as efficiently conform to the legal restrictions and behavioral rules in force wherever they are deployed.
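To make the beacon idea concrete, here is a minimal sketch, assuming a hypothetical beacon format and exclusion-zone check; nothing here is a real protocol, only an illustration of how a venue-broadcast "stay away" signal might be evaluated on board:

```python
import math
from dataclasses import dataclass

@dataclass
class Beacon:
    venue: str        # e.g., a stadium or movie set (hypothetical fields)
    lat: float
    lon: float
    radius_m: float   # exclusion radius the venue broadcasts
    directive: str    # e.g., "no-entry"

def must_avoid(beacon: Beacon, vehicle_lat: float, vehicle_lon: float) -> bool:
    """Return True if the vehicle is inside the beacon's exclusion zone."""
    # Crude equirectangular distance; adequate over short ranges.
    dlat = math.radians(beacon.lat - vehicle_lat)
    dlon = math.radians(beacon.lon - vehicle_lon) * math.cos(math.radians(beacon.lat))
    dist_m = 6_371_000 * math.hypot(dlat, dlon)
    return dist_m <= beacon.radius_m and beacon.directive == "no-entry"

stadium = Beacon("stadium", 37.7786, -122.3893, 500.0, "no-entry")
print(must_avoid(stadium, 37.7790, -122.3890))  # vehicle ~50 m away -> True
```

Like a robots.txt directive, the beacon only works because the vehicle's firmware, not the operator, decides whether to honor it.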

Using ACLVs helps create a positive operational environment for manufacturers and operators alike. The operators are blind to the behind-the-scenes workings, and they do not need to know about them. In fact, in this case, their ignorance is a significant blessing: their lack of knowledge, combined with their inability (short of jailbreaking) to modify the settings, affords them a much needed liability shield. In a drone scenario, for instance, by removing the operator’s ability to overrule the drone’s (dynamic) operational parameters, an ACLV drone designer can ensure that the drone does not rise above 300 feet within 5 miles of any airport. The manufacturers and designers also benefit from this technology and design methodology, because it means they can continue to develop exciting new products without incurring debilitating liability.
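The altitude example above can be sketched in a few lines. This is an illustration only: the 300-foot / 5-mile figures come from the scenario in the text, the airport list and default ceiling are assumptions, and no actual FAA rule is being encoded here:

```python
import math

AIRPORTS = [(40.6413, -73.7781)]  # example airport coordinates (here, JFK)
CAP_FT = 300                      # cap near airports, per the scenario above
RADIUS_MI = 5.0

def miles_between(a, b):
    """Haversine distance between two (lat, lon) points, in statute miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(h))

def clamp_altitude(position, requested_ft, ceiling_ft=400):
    """Return the altitude the drone will actually fly; the operator cannot override."""
    near_airport = any(miles_between(position, ap) <= RADIUS_MI for ap in AIRPORTS)
    limit = CAP_FT if near_airport else ceiling_ft
    return min(requested_ft, limit)

# An operator requesting 350 ft about two miles from the airport gets 300 ft.
print(clamp_altitude((40.66, -73.78), 350))  # -> 300
```

The liability shield comes from the design choice that `clamp_altitude` sits below the operator interface: the request is an input, never an override.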

The laws and rules governing the operation of ACLVs are complex and jurisdictionally sensitive. Again taking drones as an example, we have watched the Federal Aviation Administration (FAA) constantly play catch-up, throwing all sorts of regulations up into the air (pun intended) in the hope of controlling drone user behavior. This is far from an ideal environment. From a user’s perspective, understanding and complying with this legal potpourri quickly becomes a nightmare. Pushing these rules and laws into the drone’s operational manual is a non-starter: the likelihood that a user will read them is, at best, only a few (very few) percentage points better than the likelihood of web surfers reading terms of use (browsewraps or clickwraps) before visiting a site. The “brave” operators ignore the rules and laws, while the others simply give up, not wanting to risk fines and other nasty sanctions.

ACLVs, in contrast, can read these laws and rules. They are not hampered by humans’ microscopic attention spans or inability to digest directions. ACLVs can quickly adjust their operational parameters and comply in near real time. They know their coordinates thanks to GPS and can therefore access the relevant local TCOs. Thus, in one scenario, we see an ACLV drone logging in to the TCO prior to launch to confirm that its operational database is current with the rules governing its location and to download any necessary updates. Once that process is complete, the ACLV drone is ready to receive operator orders (though some orders might be ignored if they are determined to be outside the scope of what is permitted). Once airborne, the ACLV drone stays away from stadiums and does not rise more than 300 feet (because there is an airport nearby).
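The pre-flight sequence described above can be sketched as two steps: sync the local rule set from the TCO, then filter operator orders against it. The TCO interface, rule schema, and version field below are all hypothetical stand-ins, since no real TCO exists yet:

```python
LOCAL_RULES = {"version": 3, "max_altitude_ft": 300, "no_fly": {"stadium"}}

def tco_fetch(lat, lon):
    # Stand-in for a network call to the jurisdiction's TCO at these coordinates.
    return {"version": 4, "max_altitude_ft": 300, "no_fly": {"stadium", "movie_set"}}

def preflight_sync(lat, lon, local):
    """Confirm the on-board rule set is current; download updates if not."""
    remote = tco_fetch(lat, lon)
    return remote if remote["version"] > local["version"] else local

def accept_order(order, rules):
    """Reject operator orders outside the scope of what the current rules permit."""
    if order.get("zone") in rules["no_fly"]:
        return False
    if order.get("altitude_ft", 0) > rules["max_altitude_ft"]:
        return False
    return True

rules = preflight_sync(40.66, -73.78, LOCAL_RULES)
print(accept_order({"zone": "park", "altitude_ft": 250}, rules))       # True
print(accept_order({"zone": "movie_set", "altitude_ft": 100}, rules))  # False
```

Note that the operator never sees the rule set; orders simply succeed or are silently declined, which is exactly the "ignorance as blessing" posture described earlier.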

It will be exciting to see how the ACLV concept is implemented. There is at least one company already thinking in that direction: Verifly.

****

Update October 26, 2017: Roughly nine years ago, I was invited by Dr. Sven Beiker (then Executive Director of the Center for Automotive Research at Stanford) to join him in a meeting at Stanford with leaders from Honda, GM, Ford, Chrysler and Nissan. We spoke at length about a plethora of technical and legal issues surrounding the design and adoption of autonomous vehicles. Nearly a decade has gone by, and the key issue that worried these car manufacturers the most back then remains stubbornly intact to this day: liability. Specifically, how to minimize it. Now looking back, I connect the proverbial dots, tracing my thoughts on this topic to my Computational Law Applications and the Unauthorized Practice of Law (Jan 2012) post, which then gave birth to the Reining in the Drones post (Dec 2014). In both of these posts, computational law served as an enabler. And while we frequently think of it in the context of aiding the consumer’s access to law, it is clear from these two posts that computational law can serve car manufacturers as well. So today, almost nine years after that meeting with Dr. Beiker, I can say that the manufacture of autonomous computational law vehicles (ACLVs) offers a key method by which to mitigate autonomous vehicle manufacturer liability.

Update April 16, 2017: Predictive video algorithms (Scene Dynamics) could play a useful contributory data-feeding role in building the TCO. In their paper “Generating Videos with Scene Dynamics,” MIT researchers demonstrated a generative adversarial network that predicts “plausible futures” from still images using a spatio-temporal convolutional architecture. Scene Dynamics’ minimal-supervision internal learning capability could be augmented by the TCO, which could take on a “supervisor” role, validating predictive learnings as “true” when they match known conditions. Combining this spatio-temporal convolutional architecture with fractal representations that deliver algorithmic objective differentiation (AOD) could help yield better, more accurate predictions, a much desired upgrade from predictions that are merely “plausible,” and, ultimately, strengthen the data credibility of the TCO.

Update: With Intel’s announcement today that it is buying Mobileye, it joins Tesla and Nvidia among the top three companies in the autonomous vehicle market. (Mobileye was a supplier to Tesla, among other companies.) Intel is paying $15 billion for Mobileye, and this can be seen as a direct investment, one of many to come, in creating the TCO I discussed here. Ultimately, it will be important for all market players to standardize on a single TCO so we don’t end up with siloed environments, which would be operationally difficult and legally problematic. The desired standard could come in an ISO format, and incompatibility with it would (and should) expose a manufacturer to legal liability.

Update: TCOs can also incorporate socially sourced data, similar to Waze. Here, ACLVs report their experiences while in operation, which are then broadcast to other ACLVs, helping them avoid hazards.
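A toy sketch of that Waze-style loop, with the TCO acting as a publish/subscribe hub; the class names and report format are illustrative only, not a proposed design:

```python
class TCO:
    """Hypothetical hub: collects hazard reports and rebroadcasts them."""
    def __init__(self):
        self.subscribers = []
        self.hazards = []

    def subscribe(self, vehicle):
        self.subscribers.append(vehicle)

    def report(self, hazard):
        self.hazards.append(hazard)
        for vehicle in self.subscribers:
            vehicle.on_hazard(hazard)

class ACLV:
    def __init__(self, name):
        self.name = name
        self.known_hazards = []

    def on_hazard(self, hazard):
        # In practice this would feed route planning; here we just record it.
        self.known_hazards.append(hazard)

tco = TCO()
a, b = ACLV("drone-a"), ACLV("drone-b")
tco.subscribe(a)
tco.subscribe(b)
tco.report("downed power line at 5th & Main")
print(b.known_hazards)  # ['downed power line at 5th & Main']
```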

Update: Convolutional deep neural network object recognition capabilities, which closely match those of humans, can feed data to the TCO. This means ACLV operational reports will gain one additional data type and source, which should improve navigational accuracy.

Update: The Digital Millennium Copyright Act (DMCA) prohibits the circumvention of technological protection measures (TPMs) in copyrighted works. There are some exceptions, however. As of October 2016, the DMCA allows circumvention of TPMs in (in relevant part) autonomous vehicles. The exemption permits circumvention to “allow the diagnosis, repair or lawful modification of a vehicle function.” This exemption, particularly its “lawful modification” language, has a potential negative effect, rendering ACLVs much more vulnerable to hacking.