Licensing Powerful and Complex AI

The AI power:complexity ratio (for more on this, see here) can serve as a guide for determining which AI apps (cyber and cybernetic) require a license; not the covenant-not-to-sue type, but the license to develop and operate. To recap: the more powerful and complex an AI application is, the stronger the need for a license.

What makes a given AI “powerful and complex?” One benchmark is how it is used. Consider, for example, an AI medical application used for diagnosing cancer. Its “power” is a function of the computing resources (CPUs) required to run it. The “complexity” variable reflects its method of operation: what type of machine learning platform is driving it, and what sort of potential vulnerabilities might plague it, such as those relating to bias or the absence of explainability (XAI) functionality.
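As a purely hypothetical illustration of how such an assessment could be operationalized (the factor names, weights, and threshold below are invented for this sketch, not drawn from any existing standard), the power/complexity test might look like a simple scoring heuristic:

```python
# Hypothetical sketch of a power/complexity licensing heuristic.
# All factors, weights, and the threshold are illustrative assumptions.

def power_score(compute_units: float) -> float:
    """'Power' as a function of the computing resources required to run the app."""
    return compute_units  # e.g., normalized CPU/GPU capacity


def complexity_score(opaque_model: bool, bias_risk: bool, has_xai: bool) -> float:
    """'Complexity' reflecting method of operation and potential vulnerabilities."""
    score = 1.0
    if opaque_model:   # e.g., a deep-learning platform rather than a rule-based system
        score += 2.0
    if bias_risk:      # known exposure to biased training data
        score += 1.0
    if not has_xai:    # no explainability (XAI) functionality
        score += 1.0
    return score


def requires_license(compute_units: float, opaque_model: bool,
                     bias_risk: bool, has_xai: bool,
                     threshold: float = 6.0) -> bool:
    """The more powerful and complex the application, the stronger the case for a license."""
    return power_score(compute_units) * complexity_score(
        opaque_model, bias_risk, has_xai) >= threshold


# A cancer-diagnosis application: heavy compute, opaque model, bias risk, no XAI.
print(requires_license(3.0, opaque_model=True, bias_risk=True, has_xai=False))
```

The point of the sketch is only that both axes feed the decision: a low-compute, transparent tool falls below the threshold, while a compute-heavy, opaque diagnostic application clears it easily.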

Licensing is jurisdictional, but it need not be confined to a single country’s borders. It can and should be broader when it comes to powerful and complex AI applications. One way to execute this effectively is to adopt a treaty format, which expands the pool of participants and also bolsters adherence to (and deters divergence from) an accepted AI ethics standard. Since a license is granted and revoked based on an individual’s qualifications and actions, adherence to the ethical triad (safe, reliable, robust) can be periodically monitored and corrective action taken as required.

Another benefit of a licensing scheme is that it helps mitigate harm: when harm results from unlicensed use of these types of AI applications, liability is easier to assign and compensation easier to obtain. Continuing with the medical example from above, a hospital would (and should) be loath to make unlicensed use of such a diagnostic AI application. The liability is simply too great.