In early 2012, as Siri was making its way onto iPhones, I described here how AI-driven computational law applications (CLAIs) could lawfully deliver legal advice. Back then my thinking was linear and iterative: Siri’s capabilities could (and should) be scaled to the point where it would do just that. At first glance this kind of activity might be seen (especially by lawyers) as running up against the principles underlying the body of law governing the unauthorized practice of law (UPL). An AI giving legal advice? No thank you. But now, more than 11 years after writing that, we have GPT-3. AI giving legal advice just got very real.
OpenAI, the developer of GPT-3, reports that over 300 apps are using it, with a waiting list for access. (By the way, GPT-3’s semantic search capabilities are demonstrated here and are worth viewing to understand just how powerful it is and the range of computational law applications it could drive.)
Now, applying the principles I discussed in the 2012 post to GPT-3 brings me to the following conclusion: so long as a GPT-3-driven CLAI contains the necessary UPL safeguards (see here), releasing it as a CLAI is beneficial and should not be vulnerable to UPL challenges.
Take, for instance, GPT-3’s use in Augrented, which helps renters deal with a variety of situations, including eviction prevention. Here, Augrented helps renters write a “clear formal” rent negotiation letter to their landlords. Other current uses include dealing with lease fraud, and the app will likely be extended to areas such as lease negotiation (e.g., how do I know my lease is good?).
Augrented’s developers currently steer clear of the UPL quagmire by making it plain that the app does not replace the need for an attorney in some situations. But I think that as long as Augrented meets the UPL safeguards in future iterations, this will no longer be an issue.