Monitoring, Biometrics and Robotics: AI in the ‘Day-After’ COVID-19

My last post, AI and COVID-19: Securing the Intensified Reliance on AI Prime Operational Qualities, discussed “safe” and “efficient” operation as the “prime operational” qualities desirable in an environment of intensified reliance on AI.

Even before all of this, but certainly in the ‘day-after’ COVID-19, we can expect AI (in varying flavors) to be integrated into countless applications where its capabilities enhance their function. For now, I offer observations on the following three high-level application areas:

  • Monitoring – AI will be used to predict the physical movement of people. Monitoring will target specific people (not just those already diagnosed), geographies, and other parameters. See also the April 2 update to AI Liability: When “Intelligent Deviation” is Undesirable. Biometrics can also be expected to play a role in enhancing monitoring.
  • Biometrics – Growing aversion to “touch” interfaces will increase reliance on alternative input mechanisms. This translates to more emphasis on, for example, image and voice recognition. Will biometric data “leak” to enhance AI monitoring? Probably.
  • Robotics – Increased reliance on robots to carry out activities traditionally handled by humans will turbocharge the need to ensure our legal system can manage this surge. Iterative liability and other AI-related liability models will be needed to help deliver predictable remedies. (For more on “iterative liability,” see Artificial Intelligence App Taxonomy and Iterative Liability.)

***Postscript***

May 28, 2020: AI-powered monitoring raises accountability, ethics, fairness, and nondiscrimination issues. Developers and business customers should incorporate relevant guideposts into their policies and procedures to help ensure the AI applications they build and use are legally compliant (the FTC’s “Using Artificial Intelligence and Algorithms” is a useful reference). Organizations purchasing AI-powered monitoring applications should consider the following vendor gating questions during the contract negotiation phase: (i) what application controls are in place to ensure the data set is representative? (ii) does the data model account for bias, how is that bias reported, and to what extent is it auditable? (iii) what is the application’s accuracy threshold? (iv) what other ethics and fairness controls are integrated into the application? As a matter of policy, the organization should also have its own misuse-and-abuse controls to further reduce the risk of adverse application functionality. That said, there is no legal requirement to achieve 100% certainty in controlling for these issues; what matters is being able to reasonably articulate and demonstrate that a policy aiming for it is in place.
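To make the “auditable” part of those gating questions a bit more concrete, here is a minimal, purely illustrative Python sketch of the kind of check a purchaser might ask a vendor to support: per-group accuracy against an agreed threshold and a representativeness check of the data set. The field names, thresholds, and sample data are my own assumptions for illustration, not any vendor’s actual interface.

```python
# Hypothetical sketch: auditing a monitoring model's per-group accuracy and
# data-set representativeness. Names, thresholds, and data are illustrative only.
from collections import Counter

ACCURACY_THRESHOLD = 0.90        # assumed contractual accuracy floor
REPRESENTATION_TOLERANCE = 0.05  # assumed allowable gap vs. reference population share

def audit_accuracy_by_group(records):
    """Report accuracy per demographic group and flag groups under the threshold."""
    stats = {}
    for rec in records:
        group = rec["group"]
        correct = rec["prediction"] == rec["label"]
        total, hits = stats.get(group, (0, 0))
        stats[group] = (total + 1, hits + correct)
    report = {}
    for group, (total, hits) in stats.items():
        accuracy = hits / total
        report[group] = {"accuracy": accuracy, "flagged": accuracy < ACCURACY_THRESHOLD}
    return report

def audit_representativeness(records, reference_shares):
    """Compare each group's share of the data set with a reference population share."""
    counts = Counter(rec["group"] for rec in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed_share": observed,
            "expected_share": expected,
            "flagged": abs(observed - expected) > REPRESENTATION_TOLERANCE,
        }
    return report

if __name__ == "__main__":
    # Illustrative records: each carries a group label, model prediction, and ground truth.
    sample = [
        {"group": "A", "prediction": 1, "label": 1},
        {"group": "A", "prediction": 0, "label": 0},
        {"group": "B", "prediction": 1, "label": 0},
        {"group": "B", "prediction": 1, "label": 1},
    ]
    print(audit_accuracy_by_group(sample))
    print(audit_representativeness(sample, {"A": 0.5, "B": 0.5}))
```

The particular metrics matter less than the principle: answers to questions (i) through (iii) should reduce to checks the customer can re-run, and document, over the life of the contract.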

April 21, 2020: Forbidding re-identification of anonymized data (absent a countervailing, legally sanctioned need) is necessary for curbing abuse. While this can be easily accomplished in data-use applications where a contract exists between the parties, mass data usage applications can present a tougher challenge. COVID-19-related data-gathering applications qualify as “mass” data usage, and protecting anonymized data from re-identification in this setting needs to be thought through differently. When it comes to AI-driven collection applications, it will be beneficial to also build in data monitoring capabilities that can alert the proper authorities (a state AG, for example) to re-identification attempts. But for this to be a truly effective protection mechanism, it will be necessary to adopt a universal requirement for this type of monitoring capability. Paired with automatic fines (even discounted for imperfect enforcement), AI-driven re-identification monitoring can be an important safeguard.
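By way of illustration only, a built-in monitor of the kind contemplated above might watch for query patterns characteristic of linkage attacks (repeated, very narrow queries combining quasi-identifiers) and raise an alert. The attribute names, thresholds, and alerting path in this sketch are hypothetical assumptions; a real deployment would report to a regulator-designated endpoint rather than a local log.

```python
# Hypothetical sketch: in-application monitoring for re-identification attempts
# against an anonymized data set. The query model, thresholds, and the alert
# destination are illustrative assumptions, not a reference design.
import logging

QUASI_IDENTIFIERS = {"zip_code", "birth_date", "gender"}  # classic linkage attributes
MIN_RESULT_SIZE = 5      # assumed k-anonymity-style floor for any query result
NARROW_QUERY_LIMIT = 3   # assumed number of narrow queries tolerated per requester

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("reid-monitor")

class ReidentificationMonitor:
    def __init__(self):
        self.narrow_queries = {}  # requester id -> count of suspiciously narrow queries

    def check_query(self, requester, filters, result_size):
        """Flag queries that combine quasi-identifiers and return very small cohorts."""
        uses_quasi_ids = len(QUASI_IDENTIFIERS & set(filters)) >= 2
        too_narrow = result_size < MIN_RESULT_SIZE
        if uses_quasi_ids and too_narrow:
            count = self.narrow_queries.get(requester, 0) + 1
            self.narrow_queries[requester] = count
            if count >= NARROW_QUERY_LIMIT:
                self.alert(requester, filters)
                return False  # block the query
        return True

    def alert(self, requester, filters):
        # In practice this might notify a regulator-designated endpoint;
        # here it simply logs the event.
        log.warning("Possible re-identification attempt by %s using filters %s",
                    requester, sorted(filters))

if __name__ == "__main__":
    monitor = ReidentificationMonitor()
    for _ in range(3):
        allowed = monitor.check_query("analyst-42", ["zip_code", "birth_date"], result_size=2)
    print("last query allowed:", allowed)
```

Making such a monitor mandatory, and tying its alerts to automatic fines, is what would give the anonymity promise some teeth.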

April 16, 2020: AI can help make sense of the massive amount of COVID-19-related data. Monitoring physical movement can provide certain insights, but will that really be useful in this fight? Let’s pretend, for the moment, that end-user privacy is in fact effectively protected (Google says its Community Mobility Reports data, for example, is aggregated and anonymized). Will knowing where people of interest are and, ultimately, where they are going to be actually help curb infection in a meaningful way? The speed of infection, after all, exceeds the speed of intervention. Let’s further pretend that this data will actually be useful in the fight. (A very big leap of faith is necessary here, but bear with me.) Now, the promise of anonymity is a big one (do we believe it?), difficult to keep, and even more difficult to enforce in the long term. So what’s the solution? Well, for starters, we need to consider hard-coding well-known purpose-limitation principles, such as the OECD’s fair information practice principles, into the AI applications themselves – like firmware-based geo-fencing in drones. Taking this step may prove useful in dampening the risk that a data-hungry party (not necessarily Google, Apple, or even the feds) will get its grubby hands on the data. In the rush to ‘do something’ about COVID-19, it is important to think a few steps ahead and ensure that we’re not creating additional (though not necessarily bigger) problems that will dilute the picture of success.
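As a rough illustration of what “hard-coding” a purpose-limitation principle could look like (the software analogue of firmware geo-fencing), consider the following sketch. The allow-listed purpose, the permitted fields, and the function names are hypothetical assumptions of mine; the idea is simply that data leaves the application only for a compiled-in purpose and only in a minimized form.

```python
# Hypothetical sketch: a purpose-limitation gate "hard-coded" into a collection
# application, loosely analogous to firmware geo-fencing in drones. The purposes,
# field names, and exception type are illustrative assumptions.
ALLOWED_PURPOSES = {"public_health_aggregation"}     # immutable, compiled-in allow-list
PERMITTED_FIELDS = {"coarse_location", "timestamp"}  # data minimization: no raw identifiers

class PurposeViolation(Exception):
    """Raised when data is requested for a purpose outside the allow-list."""

def release_data(records, purpose, fields):
    """Release only permitted fields, and only for an allow-listed purpose."""
    if purpose not in ALLOWED_PURPOSES:
        raise PurposeViolation(f"purpose '{purpose}' is not permitted")
    disallowed = set(fields) - PERMITTED_FIELDS
    if disallowed:
        raise PurposeViolation(f"fields not permitted: {sorted(disallowed)}")
    return [{f: rec[f] for f in fields} for rec in records]

if __name__ == "__main__":
    data = [{"coarse_location": "zone-7", "timestamp": "2020-04-16T10:00", "device_id": "abc"}]
    print(release_data(data, "public_health_aggregation", ["coarse_location"]))
    try:
        release_data(data, "ad_targeting", ["device_id"])
    except PurposeViolation as err:
        print("blocked:", err)
```

The point of baking the allow-list into the application itself, rather than into a policy document, is that a later, data-hungry change of purpose requires changing the code, not just the promise.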

April 9, 2020: New York magazine’s Intelligencer reports that drones are being used in some US cities to enforce social distancing. Now, read the first bullet again. Load up drones with AI-powered monitoring capabilities and they will be able to go to where people “of interest” will be before they get there, or right when they get there, helping direct needed resources into hot spots before they become “hot.” Of course, balancing privacy with health and safety concerns will remain a challenge; results will not be perfect, but if done correctly, the benefits will be worth it.