A Supervisory AI Agent Approach to Responsible Use of GenAI in the Legal Profession

by Megan Ma, Asst. Director of Codex and John Nay, Codex Affiliate and Founder & CEO of Norm.ai

As the infamous lawyer that used ChatGPT to draft his brief made headline news,[1] serious attention was brought to the use and integration of emerging large language model (LLM) technology for legal work. Even amid techniques that have improved the accuracy of these models,[2] the continued pervasiveness of hallucinations[3] remains a deep-seated concern of the legal community. What began as a visceral fear,[4] and now measured hesitation, around generative AI soon transitioned to a questioning of the role of legal professionals and their particular use of these tools. That is, as risk-averse and emblematic keepers of our laws, how should legal professionals be engaging with foundation models[5]?

Legal Services Disruption

The legal community was witnessing first-hand not only the impact of generative AI on their own work, but also the widespread impact to the clients they advise. As a result, legal professionals are simultaneously asked to counsel on how to sufficiently mitigate risks for their clients and stakeholders, while also undergoing the exact same tensions.

The significance of this technological disruption has been made clear through Chief Justice John Roberts’s Annual Report[6] at the end of 2023. In his report, Chief Justice Roberts centered on the advent of AI and the associated promises and perils that come with it. He contextualizes this technical intervention through the historical innovations that have made their mark in both courts and across the legal domain. Yet, unlike other technologies, his core thesis focused on the careful balance of human-machine collaboration, specifically in aspects of reasoning. Chief Justice Roberts describes the fundamental importance of nuance that cannot be replaced. How then do we appropriately parameterize notions of nuance for humans and nuance that machines can manage? What mechanisms must be in place to ensure and continuously verify that appropriate expert human oversight and accountability exists?

Here, we outline the natural progression and innovative potential of regulating professional and responsible conduct via an integrated human-machine approach, what we call AI agent-assisted human supervision of AI. We consider first how professional conduct is currently framed and discussed in this space, and how these conversations effectively set the stage for tangible technical interventions. We then define more specifically how supervisory AI agents can play an important role in the future regulation of responsible behavior. We provide a few examples of how duties of responsible use can be upheld through the intervention of an agent-assisted approach. Finally, we conclude with next steps and a brief discussion of forthcoming research.

Defining Responsible Use

After ChatGPT made its debut November 2022, mounting questions of responsible and accountable oversight triggered sufficient unease such that the MIT Task Force, consisting of both legal practitioners and academics, felt necessary to convene on ways to offer direction around responsible use of generative AI. What began as an exploratory brainstorm at the Stanford Generative AI and Law Workshop[7] became active and ongoing solicitations of input from legal  and technical experts across a spectrum of industry verticals; and eventually, evolving to a formalized set of guiding principles.[8] The outcomes of the initial report demonstrated that existing responsibilities of legal professionals (e.g. ABA Rules of Professional Conduct) largely account for how lawyers should be using these technologies. Yet, what differed is their specific application. Providing relatable examples of consistent and inconsistent practice clarified how these tools interact with the day-to-day work of legal professionals and what actions they must be aware of. This report offered preliminary guidance that translated known duties of professional conduct into responsible behavior for the generative AI era.

On November 16, 2023, the Committee on Professional Responsibility and Conduct (COPRAC) of the State Bar of California developed and adopted as Practical Guidance a set of duties that carefully reconciled existing laws with directives on how lawyers should act when using generative AI tools. The Practical Guidance is an extension of the MIT Task Force principles and guidelines.[9] COPRAC diligently drafted the Practical Guidance and maintained a high level of specificity. Explicit descriptions of the types of acts that would be seen to breach the outlined duties were clarified in the text, illustrative of the intention of the Committee to ground principles to actions. Though these texts have generated immense value in defining what is inconsistent behavior, what remains outstanding is precisely how to implement and effectuate these in practice.

An analogy that often surfaces is of another technological wonder: the autonomous vehicle. What initially began as amorphous questions of safety and tortious liability quickly pivoted to practical discussions on what devices are needed to alert human drivers of their need to “take back the wheel.” A recent empirical study, for example, highlighted the relative effectiveness of auditory alarms in re-engaging inattentive drivers.[10] Advances in this direction continued to develop increasingly complex alarm systems.[11] The coupling of intelligent “alarms” that continuously monitor and account for live conditions of the road and alert drivers accordingly must be weighed against the explicit, written rules of the road. In practice, successful oversight and accountability are frequently a hybrid human-machine collaboration inclusive of three key components: (1) an automated continuous monitoring mechanism; (2) a flagging or alert system; and (3) defined guidelines for verification.

We see the potential to extend the analogy of autonomous vehicles to the generative AI space. That is, while governance has largely taken qualitative, descriptive forms, there are few approaches that support the transition from policy to practice. These include open-source libraries, such as Guardrails AI (GAI)[12] and Phoenix[13]. These technical interventions both involve a combination of the aforementioned three key components. GAI is a Python library that allows users to add structure, type, and quality boundaries to the outputs of foundation models. This means that users may specify and validate how a foundation model should behave and attempt remedial action when the model behaves outside of these predetermined rules. This includes, for example, preventing security vulnerabilities by verifying responses that do not contain confidential information. These toolkits act as safeguards that provide more behavioral bounds. Leading law firms like DLA Piper recommend other technical practices, such as red teaming to audit and assess AI applications against legal vulnerabilities.[15]

Supervisory AI Agents

The ideal approach dynamically integrates LLM guardrails and legal analysis in a way that continuously captures the nuance of the legal profession. We consider the role of supervisory AI agents, assistants that provide not only continuous monitoring of real-time engagements with foundation models, but also offer an initial triaging capability that facilitates intervention and defers decision-making to the human expert. Put differently, supervisory AI agents may conceivably be the “intelligent alarm systems” we say with autonomous vehicles for the governance and implementation of responsible practice.

Zooming out, an “AI agent” is an autonomous program that can take actions to achieve goals. Generalist agents that can take actions across domains and contexts in fluid manner like a human are not possible with today’s underlying models; however, AI agents specific to certain tasks have just been made possible with significant orchestration of carefully crafted calls to LLMs, e.g. in the regulatory AI platform, Norm Ai.[16] Agents can be built-for-purpose in sufficiently narrowly scoped contexts; for instance, to specifically complete guardrail checking tasks for other AI agents.

Supervisory AI agents can be designed to evaluate whether proposed content or actions are compliant with relevant regulations or guidance. They can serve as a round of checks and point to what may be problematic with what rule, enabling human oversight from experts that possess the requisite legal expertise to quickly understand and potentially intermediate an interaction between an AI and a user (whether that user is a human or another AI).

In this way, supervisory AI agents can serve as a critical unblock for safe deployment of AI systems in areas of the economy that have professional codes of conduct and important certification regimes, such as medicine, financial planning, and legal services. Duties of professional care are imposed on powerful service providers (e.g., directors of corporations, investment advisers, lawyers, and doctors) to guide their behavior toward the wellbeing of the humans for which they are providing services (e.g., corporate shareholders, investment and legal clients, and healthcare patients). Concepts like fiduciary duty are core to building trust between the providers of critical services and those receiving those services. Legal services, in particular, are ripe for the productive deployment of LLMs, but are bound by strict existing guardrails and professional obligations, such as attorney-client privilege.

Case Study

In this case study, we explore leveraging supervisory AI agents for a chat-based deployment of an LLM built to answer regulatory questions for a human user. Figure 1 visualizes this AI agent-assisted human supervision system for GenAI use within legal and regulatory-related services.

We demonstrate four types of supervision: (1) determining if a question a human asks the chat AI system is within scope of that particular AI system, and if not sending the question and AI-produced answer to a lawyer for review; (2) determining if an input is a potential malicious attack, and if so, ending the interaction; and (3) determining if personal data is input into the system, and if so, ending the interaction. Fourth, we illustrate one aspect of the user interface that enables better supervision: the system clearly labeling when a human or an AI is producing content.

A Supervisory AI Agent's Approach to Responsible Use of GenAI in the Legal Profession 5
Figure 1: Overall workflow

An AI supervisory agent leverages LLMs to determine if a question the human asks is within the scope of a chat AI system, and if not sends the question and chat-AI-produced answer to a human lawyer for review. This is illustrated in Figure 2, where, in this case, the primary AI system is designed to answer questions about a specific regulatory area of law If the question is predicted by the supervisory AI agent to be out of the scope of that area of legal expertise, the question is automatically routed for human review.

A Supervisory AI Agent's Approach to Responsible Use of GenAI in the Legal Profession 6
Figure 2: Out of scope response detected by a supervisory AI agent

In this case study, a second supervisory AI agent determines if an input is a potential malicious attack, and if so, ends the interaction. This is illustrated in Figure 3.

A Supervisory AI Agent's Approach to Responsible Use of GenAI in the Legal Profession 7
Figure 3: A malicious attack detected by a supervisory AI agent

Another supervisory AI agent determines if it is at least moderately confident that personal data may have been input into the system, and if so, ends the interaction. This is illustrated in Figure 4.

A Supervisory AI Agent's Approach to Responsible Use of GenAI in the Legal Profession 8
Figure 4: Personal data detected by a supervisory AI agent

Finally, the system clearly labels when a human or an AI is producing content, illustrated in Figure 5.

A Supervisory AI Agent's Approach to Responsible Use of GenAI in the Legal Profession 9
Figure 5: Human vs. AI response labeling

Next Steps

This case study illustrates that the potential for supervisory AI agents is not only promising, but currently underexplored. The technical infrastructure is capable of managing automated AI monitoring, and we anticipate deep intersections with how they can support legal professionals to ensure their commitment with responsible use of generative AI tools. Diving further into the case study, we see clear points of engagement for lawyers. That is, there are signposts that signal precisely when human experts can and should intervene, and when they may defer to machines. Moreover, these examples align with the professional and ethical duties of attorneys. For instance, Figure 3 on preventing malicious attack correlates with the duty to comply with the law; and Figure 5 correlates with the duty of disclosure and communication of generative AI use. As state bar associations continue to adopt guidance on lawyers’ use of generative AI,[17] future research directions involve collaborating with law.MIT.edu, as part of the next phase of the MIT Task Force on Responsible Use of Generative AI, to explore how technical approaches, specifically supervisory AI agents, would play a role in the verification and assurance of state bar rules of professional conduct.

 

—————————————————————————————————————————————————————————–

[1] Mata v. Avianca Inc., No. 1:2022cv01461 – Document 54 (S.D.N.Y. 2023) https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/.

[2] Kim Martineau, “What is Retrieval Augmented Generation?” IBM Blog (August 22, 2023), available at: https://research.ibm.com/blog/retrieval-augmented-generation-RAG.

[3] Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel Ho, “Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive,” January 11, 2024, available at:  https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive?fbclid=IwAR3OINcdoxptTBKLI85p9iyls_YFkDdQ_NcM7FggUdy9M7Wm3B6tuB2zA2o.

[4] Megan Cerullo, “Texas judge bans filings solely created by AI after ChatGPT made up cases,” CBS News (June 2, 2023), available at:  https://www.cbsnews.com/news/texas-judge-bans-chatgpt-court-filing/. See also, Rabihah Butler, “Generative AI and the courts: Balancing efficiency and legal obligations”, Thomson Reuters (August 28, 2023), available at: https://www.thomsonreuters.com/en-us/posts/government/generative-ai-courts/.

[5] See Center for Research on Foundation Models, https://crfm.stanford.edu/; see also for more detail, Mike Murphy, “What are foundation models?”, IBM Blog (May 9, 2022), available at:  https://research.ibm.com/blog/what-are-foundation-models.

[6] Chief Justice John G. Roberts, 2023 Year-End Report on the Federal Judiciary, https://www.supremecourt.gov/publicinfo/year-end/2023year-endreport.pdf

[7] Generative AI Workshop, CodeX FutureLaw 2023, https://conferences.law.stanford.edu/futurelaw2023/generative-ai-workshop/

[8] MIT Task Force on Responsible Use of Generative AI for Law, MIT Computational Law Report,  https://law.mit.edu/ai.

[9] Recommendations from Committee on Professional Responsibility and Conduct on Regulation of Use of Generative AI by Licensees, The State Bar of California (November 16, 2023), available at: https://board.calbar.ca.gov/docs/agendaItem/Public/agendaitem1000031754.pdf

[10] Ji-Hyeok Han and Da-Young Ju, Advanced Alarm Method Based on Driver’s State in Autonomous Vehicles, Electronics 2021, 10(22), 2796; https://doi.org/10.3390/electronics10222796.

[11] “Engineering team develops autonomous alarm system for work zone vehicles,” Mizzou Engineering (February 21, 2023), available at: https://engineering.missouri.edu/2023/engineering-team-develops-autonomous-alarm-system-for-work-zone-vehicles/.

[12] Guardrails AI, https://www.guardrailsai.com/docs/

[13] Phoenix, Arize AI, https://github.com/Arize-ai/phoenix

[14] NeMo Guardrails, https://github.com/NVIDIA/NeMo-Guardrails.

[15] See David Jolly, “DLA Piper Audits the Robots as Big Law Digs into AI Work,” Bloomberg Law (December 13, 2023), available at: https://news.bloomberglaw.com/artificial-intelligence/dla-piper-audits-the-robots-as-big-law-digs-into-ai-work. See also their practical guide, Red teaming: A Practical Guide to testing AI models, DLA Piper (2023).

[16] Norm Ai, https://www.norm.ai/platform.

[17] On January 19, 2024, the Florida Bar Association adopted guidelines and principles similar to the California State Bar on confidentiality, oversight, and regulatory compliance when using generative AI. See Florida Bar Ethics Opinion, (January 19, 2024), available at: https://www.floridabar.org/etopinions/opinion-24-1/.