Generative AI: Its Impact on AI Legal Advice and AI Computational Law Apps

Generative AI apps frequently raise concerns about copyright infringement. But there is more to it. Another issue, one I first wrote about in 2012, concerns the unauthorized practice of law (UPL). The recent media frenzy around ChatGPT prompted me to revisit it.

So let’s start with the conclusion: (1) AI apps will augment, not replace, lawyers; and (2) Generative AI apps like ChatGPT, no matter how powerful they become, will play an important, but limited, role in enabling AI computational law apps.

As I set out to write this post, I decided to run the following query through ChatGPT: “Can you provide legal advice?” It responded with:

“AI can be used to assist with legal research and document analysis, but it is not currently capable of providing legal advice on its own. Providing legal advice requires a deep understanding of the law and its application to specific situations, as well as the ability to evaluate the risks and potential outcomes of different courses of action. This is a complex task that requires knowledge and judgment that can only be provided by a trained and licensed legal professional. AI can be used to assist with tasks such as searching through legal documents and identifying relevant cases, but it cannot replace the expertise and judgment of a human lawyer. In the future, it is possible that AI may be able to assist with more complex tasks, such as predicting the outcomes of legal cases, but it is unlikely that AI will be able to fully replace human lawyers in the near future.”

Overall, this is pretty cool, and the response was generated in less than 10 seconds. But that’s about it. Even without close inspection, you can discern the “seams” in the response; it reads like a bunch of Google search results stitched together (go ahead, Google it, and you’ll see what I mean).

But is ChatGPT giving us a glimpse into a future where robot lawyers replace human lawyers? No. Even through the most optimistic and enthusiastic lens, this is not going to happen. But let’s take this a step further. If we begin with the (remarkably grand) assumption that the technological requirements for such a machine can be met, that the development cost is reasonable, that the usage fee is within reach of many, that the warranty and liability issues are accounted for, and that the legal advice is actually sound, we can see that this is still not enough. What’s missing is trust. Trust is a dispositive variable. If people don’t trust an AI lawyer, they will not use it, so it doesn’t matter that one exists. We also need to ask whether we should even trust an AI lawyer. How much? In every context, or only in limited scenarios? It is easy to appreciate that it will be virtually impossible to trust an AI lawyer in a legal dispute where the stakes are high (you lose, you go to jail). But if the stakes are sufficiently low, as with traffic violations, trusting an AI app with legal advice should not be too difficult. (DoNotPay already does this.)

Now let’s set aside the assumptions above and turn to the questions around liability. The central issue here is UPL. It is pretty straightforward that a developer who builds an AI tool that provides legal advice is exposed to liability for UPL. But what if the developer is also a lawyer or the development was supervised by a lawyer? Maybe in those scenarios liability for UPL can be avoided.

At the end of the day, ChatGPT is a neat trick; it’s entertainment with an undeniable cool factor. But it does not portend a future without human lawyers. Instead, I find that ChatGPT offers a preview of tools lawyers may use in our practice. Here I can see similar applications integrated into legal research platforms such as Westlaw and Lexis, and into Microsoft Word. But outside of generative AI apps used or developed by lawyers, it is important that developers like OpenAI be cautious about enabling their apps to engage in activity that is legally proscribed.

***PostScript***

1-17-2023: VALL-E, Microsoft’s “neural codec language model,” can reportedly simulate a person’s voice from a three-second sample. Along with ChatGPT and other generative AI apps, the arsenal of human-like applications is growing. Telling their output apart from human-generated output (verbal and text-based) is increasingly going to be important to curb harms from intellectual property infringement, deception (e.g., deepfakes, impersonation), privacy violations, etc. This is certainly the subject for a dedicated post, but for now I will note only that it raises the need for a way to quickly and easily identify when content is generated by these types of apps. Doing so efficiently will mean legally requiring the developer to “hard wire” an automatic identifier, one that cannot be removed by the end user. In the case of an application like VALL-E, it would come in the form of a verbal disclaimer such as “AI Synthesized Audio.” In the case of ChatGPT and similar apps, the identifier would take the form of metadata or a conspicuous watermark-like symbol.
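
To illustrate the metadata option, here is a minimal sketch in Python (the function and field names are hypothetical, of my own invention, not any vendor’s actual API) of how a generation pipeline could attach a provenance record to its output before the text ever reaches the end user. Because the tag is applied inside the pipeline, the end user never gets a chance to strip it.

```python
import json
from datetime import datetime, timezone


def tag_generated_output(text: str, model_name: str) -> dict:
    """Attach a provenance record to model output at generation time.

    Hypothetical sketch: the field names and the sidecar-JSON format are
    illustrative only. The point is that the identifier is "hard wired"
    into the pipeline, so the content and its AI-generated label are
    delivered together.
    """
    return {
        "content": text,
        "provenance": {
            "generated_by": model_name,
            "label": "AI-Generated Text",
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }


# Example: tagging a response before it is shown to the user.
record = tag_generated_output(
    "AI can be used to assist with legal research and document analysis...",
    "example-llm-1",
)
print(json.dumps(record, indent=2))
```

A real mandate would, of course, need to settle on a standard format and a way to verify that the tag has not been tampered with, but the basic mechanics are that simple.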