Fixing the Galactic Disclaimer Problem in AI Warranties
We need to get serious.
There is broad agreement that AI applications need to align with the life cycle core principles. But we need to get serious about operationalizing this fundamental concept. How can we expect AI applications to be Trustworthy, Explainable, Secure, Reliable, and so on, when developers are allowed to disclaim anything and everything under the known (and unknown) galaxies? Yes, contracts rarely contain meaningful warranties and remedies for AI applications. But here’s the thing: this is precisely where we need to ensure developers comply with the life cycle core principles.
I was recently asked whether law students should use generative AI. I answered in the affirmative, but qualified it with “as long as they understand what they’re getting into.” I also brought in the ABA Model Rules of Professional Conduct and Formal Opinion 512 as I expanded my answer. But understanding what we’re getting into when using these apps is not as straightforward as it might initially seem.

If you’re old enough to remember what a user manual is, you’ll nod your head in agreement that you haven’t seen one in a long time. That’s because there really isn’t such a thing anymore; not that lawyers, or any other consumers for that matter, ever read them anyway. These days, AI applications are wrapped in shiny marketing hype, decorated with grandiose claims designed to do one thing: lure end users. All the while, the really important information, what the developer actually thinks about the application, sits in the warranty and remedies provisions, tucked away like an embarrassing relative. It’s definitely not something any developer wants to wave around. Why? Because that’s where the developer tells the end user it has zero confidence in the app.
What I want to point out here is that this warranty and disclaimer practice is not something we should just shrug our shoulders at and accept. No. If we’re really serious about ensuring AI apps align with the life cycle core principles, we need to make sure developers are contractually obligated to do so. This can get pretty complicated, as it implicates the Permit principle, which I don’t want to get into here, so I’ll limit my recommendation to the following. We need to require that developers draft warranties and remedies consistent with at least the following core principles: Accuracy, Explainability, Ethics, Resiliency, Safety, Reliability, Objectivity, and Security. End users can reference these principles as they negotiate the contract with the developer and use them to flesh out the developer’s obligations in a way that mitigates the end user’s risk.
Developers aren’t going to do this of their own free will. We need to require that they do. We need to get serious.