The AI “Weak Link” Problem: A Distortive Variable in the EU’s AI Bill

In stark contrast to the plethora of AI applications launched every day, you will be hard-pressed to find any trace of meaningful design standards. It is pretty much a free-for-all. Yes, there is talk of ethical design. NIST, IEEE, ISO, and others publish “guidelines,” and some developers pay attention. But at the end of the day, all of this pales to near nonexistence against the proliferation of unregulated AI apps, which will continue to grow rapidly and outpace the soft, borderline inert (i.e., non-legal) regulatory voices.

The “weak link” phenomenon arises from the proliferation of AI apps and the concomitant regulatory vacuum in which they operate. The lack of meaningful regulation, combined with the decreasing cost of development and deployment, can lead to a situation where less sophisticated AI apps “infect” similar and even relatively more sophisticated AI apps. This type of pollutive effect could negatively impact, for example, unsupervised machine learning applications, where one app could “feed” another corrupt or low-quality data.
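To make the pollutive effect concrete, here is a minimal sketch of that feeding relationship. Everything in it is hypothetical illustration: the app names, the toy data, and the corruption pattern are invented for the example, and the downstream side is stood in for by an ordinary k-means clusterer.

```python
# Minimal sketch (all names hypothetical): a downstream unsupervised
# app clusters whatever an upstream app emits. If the upstream "weak
# link" starts emitting corrupted records, the downstream model's
# learned structure drifts without any error ever being raised.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def upstream_app(n=500, corrupted=False):
    """Upstream app emitting 2-D feature records.

    With corrupted=True it plays the weak link: a slice of junk
    records far outside the true distribution is mixed into the feed.
    """
    clean = np.vstack([
        rng.normal(loc=(0, 0), scale=0.5, size=(n // 2, 2)),
        rng.normal(loc=(5, 5), scale=0.5, size=(n // 2, 2)),
    ])
    if not corrupted:
        return clean
    junk = rng.uniform(low=-50, high=50, size=(n // 5, 2))
    return np.vstack([clean, junk])

def downstream_app(records):
    """Downstream unsupervised app: clusters whatever it is fed."""
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
    return model.cluster_centers_

print("centroids on clean feed:\n", downstream_app(upstream_app()))
print("centroids on polluted feed:\n",
      downstream_app(upstream_app(corrupted=True)))
# The polluted centroids drift away from the true cluster centers even
# though the downstream app never changed: the distortion came from
# upstream, and nothing in the pipeline flags it.
```

The point of the sketch is that the downstream code is blameless and unchanged; the degradation enters entirely through the data interface, which is exactly where a design standard, if one existed, would have to bite.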

The weak link phenomenon could also have a distortive effect on AI classification schemas. Take the EU AI bill, for example. It identifies four AI application risk categories: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk. An AI application deployed under a “Limited Risk” design framework could be distorted by a less sophisticated app (i.e., the weak link), whether through intentional or unintentional activity. Essentially, the weak link would push the Limited Risk app into High Risk or Unacceptable Risk territory, and the developer and/or end user might never know, or might find out only when it is too late and they are already in damage-assessment and control mode.
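A deliberately simplified sketch of that tier drift follows. Only the four tier names come from the EU bill; the numeric thresholds, the “observed harm score,” and every function and field name are invented for illustration, and the bill prescribes no such formula.

```python
# Hypothetical illustration: an app certified as "Limited Risk" is
# periodically re-scored against its observed behavior. A polluted
# upstream feed can silently push the observed score across a tier
# boundary. Thresholds and metrics are invented for the sketch.
from dataclasses import dataclass

# Tier boundaries on an invented 0-1 "observed harm" score.
TIERS = [
    (0.25, "Minimal Risk"),
    (0.50, "Limited Risk"),
    (0.75, "High Risk"),
    (1.01, "Unacceptable Risk"),
]

@dataclass
class AppAssessment:
    name: str
    certified_tier: str
    observed_harm_score: float  # e.g., an error/abuse rate from monitoring

def observed_tier(score: float) -> str:
    for upper_bound, tier in TIERS:
        if score < upper_bound:
            return tier
    return "Unacceptable Risk"

def reassess(app: AppAssessment) -> None:
    tier = observed_tier(app.observed_harm_score)
    if tier != app.certified_tier:
        print(f"{app.name}: certified {app.certified_tier}, "
              f"behaving like {tier} -- possible weak-link distortion")
    else:
        print(f"{app.name}: behavior consistent with {app.certified_tier}")

# The same app before and after ingesting a polluted upstream feed.
reassess(AppAssessment("chat-helper", "Limited Risk", 0.31))
reassess(AppAssessment("chat-helper", "Limited Risk", 0.62))
```

The mismatch between the certified tier and the observed tier is the gap the weak link exploits: the paper classification stays “Limited Risk” while the deployed behavior no longer is.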

One solution to this challenge is a legally enforceable development-and-operation permitting structure. I have written about this in various posts, including the discussions in Quantum Computing and AI Algorithmic Bias and in The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information.