GPT-3 and the Unauthorized Practice of Law

In the early days of 2012, as Siri was taking its first steps into iPhones, I described here how AI-driven computational law apps (CLAIs) could legally deliver legal advice. Back then my thinking was linear and iterative: Siri's capabilities could (and should) be scaled to the point where it would do just that. At first glance, this kind of activity might be deemed (especially by lawyers) to run up against the principles underlying the body of law that governs the unauthorized practice of law (UPL). An AI giving legal advice? No thank you. But now, nearly a decade after writing that, we have GPT-3. AI giving legal advice just got very real.

OpenAI, the developer of GPT-3, claims that more than 300 apps are already using it, with a waiting list for access. (By the way, GPT-3's semantic search capabilities are demonstrated here and are worth viewing to understand just how powerful the model is and the range of computational law applications it could drive.)

Now, applying the principles I discussed in the 2012 post to GPT-3 brings me to the following conclusion: so long as a GPT-3-driven CLAI contains the necessary UPL safeguards (see here), releasing it is beneficial and should not be vulnerable to UPL challenges.

Take, for instance, GPT-3's use in Augrented, which helps renters deal with a variety of situations, including eviction prevention. Here, Augrented helps renters write a "clear formal" rent negotiation letter to their landlords. Other current uses include dealing with lease fraud, and the app will likely be extended to areas such as lease negotiation (e.g., how do I know my lease is good?).

Augrented's developers currently steer clear of the UPL quagmire by making it clear that the app does not replace the need for an attorney in some situations. But I think that as long as future iterations of Augrented meet the UPL safeguards, this will no longer be an issue.
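To make the safeguard idea concrete, here is a minimal sketch, in Python, of one way such a gate might work: queries that touch high-risk territory are deflected to an attorney referral rather than answered by the model. Everything here (the term list, the function name `upl_gate`, the referral wording) is hypothetical and illustrative only; it does not describe Augrented's actual implementation.

```python
from typing import Optional, Tuple

# Hypothetical high-risk terms; a real product would use a vetted,
# jurisdiction-aware list (or a trained classifier), not keyword matching.
HIGH_RISK_TERMS = {"eviction hearing", "lawsuit", "court order", "criminal"}

REFERRAL = ("This question may call for legal advice from a licensed "
            "attorney, which this app cannot provide.")

def upl_gate(query: str) -> Tuple[bool, Optional[str]]:
    """Return (allowed, message).

    allowed=False means the query is routed to an attorney referral
    instead of being passed to the language model.
    """
    lowered = query.lower()
    if any(term in lowered for term in HIGH_RISK_TERMS):
        return False, REFERRAL
    return True, None
```

The design point is that the safeguard sits in front of the model, so the deflection does not depend on the model's own output behaving well.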


December 12, 2021: A release date has not yet been announced, but GPT-4 will reportedly have 100 trillion parameters, making it roughly 500 times bigger than GPT-3. The capabilities of GPT-4 as an AI-powered computational law application (CLAI) could thus be seen as augmented by a factor of 500, but this does not necessarily translate into an equivalent increase in risk, so long as effective design safeguards are put in place. It does mean that CLAI design best practices, and the legal safeguards and requirements surrounding them (consider the licensing requirement I discuss in the Licensing Powerful and Complex AI post), should be contemplated with the capabilities of GPT-4 (and beyond) in mind.