AI and COVID-19: Securing the Intensified Reliance on AI Prime Operational Qualities

AI is playing a central role in aiding and speeding up the search for treatments and a cure for COVID-19. The intensified energy and focus placed on AI these days (and in the coming months) is likely to have an interesting and positive reverberating effect. We can expect this to further accelerate and deepen AI’s propagation and implementation across many sectors, not just health care.

Concomitant with this trend, we are likely to see an intensified reliance emerge on ensuring that AI applications possess two prime operational qualities: that they are safe and efficient. (We can parse the “safe” and “efficient” prime qualities into various sub-qualities. For example, “certified for operation,” as discussed in the Role of Explainable AI post, would belong to the safe prime quality, and “accurate” or “perfect information” to the efficient quality.) With this intensified reliance, it will be important to have effective tools that mitigate the risk of erosion of the safe and efficient prime qualities.

The level of reliance on the safe and efficient prime qualities will fluctuate depending on what the AI application is used for and the reasonably foreseeable collateral effects that will need to be dealt with. Use in settings that are prone to raise privacy concerns, for example, may require, as a condition precedent to deployment, the hard coding of acceptable algorithmic behavior. One such AI application could be, for example, population screening. Decisions arrived at through reliance on these types of AI applications can be expected to have significant collateral effects on both the individual and the greater community. Unless the safe and efficient prime qualities are properly accounted for prior to deployment (in the algorithmic design and in the contractual obligations), the potential for harm is only incompletely and ineffectively abated, rendering their erosion inevitable.
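
To make this concrete, below is a minimal sketch of what hard-coded acceptable behavior as a condition precedent to deployment might look like for a hypothetical population-screening model. The policy names, thresholds, and model-card fields are illustrative assumptions, not regulatory values or a prescribed standard.

```python
from dataclasses import dataclass

# Hypothetical deployment policy: thresholds and required properties are
# illustrative assumptions, not values drawn from any regulation or standard.
@dataclass(frozen=True)
class AcceptableBehaviorPolicy:
    min_sensitivity: float = 0.95          # "efficient": catches true positives
    max_false_positive_rate: float = 0.05  # "safe": limits collateral harm
    require_certified: bool = True         # "certified for operation" sub-quality
    require_consent_flag: bool = True      # screening only on consented records


def may_deploy(model_card: dict, policy: AcceptableBehaviorPolicy) -> bool:
    """Condition precedent to deployment: every check must pass,
    or the screening model is not put into operation at all."""
    checks = [
        model_card.get("sensitivity", 0.0) >= policy.min_sensitivity,
        model_card.get("false_positive_rate", 1.0) <= policy.max_false_positive_rate,
        (not policy.require_certified) or model_card.get("certified", False),
        (not policy.require_consent_flag) or model_card.get("consent_verified", False),
    ]
    return all(checks)


if __name__ == "__main__":
    # Hypothetical validation results for a candidate population-screening model.
    candidate = {
        "sensitivity": 0.97,
        "false_positive_rate": 0.03,
        "certified": True,
        "consent_verified": False,  # the missing consent check blocks deployment
    }
    print("Cleared for deployment:", may_deploy(candidate, AcceptableBehaviorPolicy()))
```

The design choice the sketch illustrates is that the policy lives in code rather than in after-the-fact review: a failed check blocks deployment outright, which is what it means for acceptable behavior to be a condition precedent rather than a mitigation applied later.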

***Post Script***

September 21, 2021: The iOS 15.1 beta was introduced today. With it, iPhone users can add their COVID-19 vaccination status to the Health app, which will generate a vaccination card in Apple Wallet. This card can then be shown at venues that require proof of vaccination before allowing access. Before I go any further, it is important to note that Apple has, relatively speaking, a stellar privacy record. No other company has so publicly and so practically (in terms of its technology) devoted itself to championing privacy. That said, it is important to keep in mind that Apple will likely have hundreds of millions of vaccination records at its disposal. Even if Apple were not interested in leveraging this data, and I believe it is not, it is impossible to ignore how attractive this information could be to hackers. Now, the Health app is not AI, but AI might be used to leverage this data indirectly. For example, one could track how many people used their Health app to gain admittance to a certain venue and then use AI to extrapolate and provide value-add observations/predictions, which would undoubtedly amount to a violation of privacy under the CCPA, the GDPR, and other similar regimes.
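
As a purely illustrative sketch, with fabricated numbers and hypothetical venue identifiers, the snippet below shows how little machinery is needed to turn such check-in counts into the kind of value-add prediction described above. The point is not the model, which is just a toy linear trend, but that the derived inference is exactly the sort of secondary use the CCPA, the GDPR, and similar regimes are meant to constrain.

```python
from collections import defaultdict

# Fabricated, purely illustrative check-in events: (venue, week, admittances
# via a vaccination card). Any real pipeline of this kind would be the
# secondary use the post warns about.
check_ins = [
    ("venue_a", 1, 120), ("venue_a", 2, 150), ("venue_a", 3, 185),
    ("venue_b", 1, 60),  ("venue_b", 2, 58),  ("venue_b", 3, 61),
]

# Aggregate weekly counts of card-based admittances per venue.
weekly = defaultdict(list)
for venue, week, admitted in check_ins:
    weekly[venue].append((week, admitted))

def linear_trend(points):
    """Ordinary least-squares fit y = a + b*x over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# "Value-add" extrapolation: predicted card-based admittances next week.
for venue, points in weekly.items():
    a, b = linear_trend(points)
    next_week = max(x for x, _ in points) + 1
    print(f"{venue}: predicted week-{next_week} admittances ~ {a + b * next_week:.0f}")
```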