From Fine Print to Machine Code: How AI Agents Are Rewriting the Rules of Engagement

Part 2 of 3

by Diana Stern and Dazza Greenwood, CodeX Affiliate

Your AI shopping assistant is humming along, finding deals and making purchases for your customers. Then one day, it happens: the bot buys 100 self-heating mugs instead of 1, maxes out a customer’s credit card on duplicate Xbox orders, or shares your customer’s shipping address with an unauthorized third party. As the company behind this digital dealmaker (the “Transactional Agent Provider”), what happens when your AI assistant makes mistakes?

As a refresher, in our prior post, we defined Transactional Agents and uncovered why Transactional Agent Providers should be thoughtful about whether they serve as a legal agent for their customers (fiduciary duties abound!). We also identified a new business opportunity for third parties to take on this role.

Mistakes and Errors – at AI Scale

At a practical level, given the myriad possible contract permutations, the Transactional Agent could easily overstep its intended authority by filling in gaps where specific direction has not been programmed, resulting in unintended obligations for the user (like ponying up enough cash to keep 100 self-heating mugfuls of matcha tea going at once). Will these agreements be binding if the Transactional Agent makes a mistake or exceeds its intended scope of authorization?

The Uniform Electronic Transactions Act (UETA) is a broadly adopted commercial law in the United States with provisions specifically addressing errors made during automated transactions conducted by Transactional Agents. For example, one such provision permits the user to reverse a transaction if the Transactional Agent did not provide a means to prevent or correct the error. Transactional Agent Providers should understand this provision carefully and ensure their process flows and user interactions provide adequate means to prevent or correct these types of errors.

Likewise, under another provision of UETA, if the parties had an agreed security procedure in place and one party failed to follow it, and following it would have caught the error, then the other party may be able to reverse the transaction. Even with this uniform law, the legal and practical implications of such changes and errors are complex and largely untested. Would these provisions mean that no transaction conducted by a Transactional Agent should be considered final until its user has had an opportunity to review it and determine that no error requires correction? How long a review period would be reasonable?
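UETA leaves “security procedure” open-ended, so what follows is only a minimal sketch (in Python, with entirely hypothetical names and values) of one familiar approach: both parties verify a keyed checksum over the order details, making changes or errors detectable before a transaction is treated as final.

```python
import hashlib
import hmac
import json

# Hypothetical "agreed security procedure": both parties verify a keyed
# checksum (HMAC) over the order details, so a change or error in the
# record is detectable before the transaction is treated as final.
SHARED_KEY = b"key-agreed-between-the-parties"  # placeholder value

def sign_order(order: dict) -> str:
    payload = json.dumps(order, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_order(order: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_order(order), signature)

order = {"item": "conference ticket", "city": "Paris, TX", "qty": 1}
signature = sign_order(order)

# A party that skips this check, and would have caught the error by
# running it, risks the other party reversing the transaction.
assert verify_order(order, signature)
```

The specifics matter less than the structure: an agreed, verifiable procedure that either party can run, and a record of whether they ran it.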

If a Transactional Agent Makes a Mistake, Who is on the Hook?

If a Transactional Agent doesn’t stick to customer instructions and makes a purchasing mistake, several different issues could come up in court. While tort law claims could fill their own textbook (we’ll leave those for our litigator friends), let’s zoom in on the contract law side of things.

In terms (heh) of contract formation, the mistake doctrine could apply. Under the Restatement (Second) of Contracts § 153, a mistake by one party could allow her to get out of the contract if:

  • The mistake was about a basic assumption on which she made the contract;
  • The mistake had a material effect on the agreed exchange of performances that was adverse to her;
  • She does not bear the risk of the mistake; and
  • The other party knew or had reason to know of the mistake, or the effect of the mistake would make the contract unconscionable (extremely one-sided or unjust) to enforce.

Whew, that was a mouthful.

Let’s bring this to life. Say you as the Transactional Agent Provider are acting as your customer’s legal agent, as explained in our last post. The actions your Transactional Agent takes within its scope of authority bind the customer. Now suppose your Transactional Agent books your customer a trip to Paris, France instead of buying time-sensitive tickets to a conference in Paris, Texas. Your customer assumed the bot would book destinations accurately, and she is adversely affected by having plans in France instead of Texas. Even assuming refundable bookings, she might miss her conference in Texas or have to pay higher room rates to rebook.

Does the risk of the Transactional Agent booking a trip to the wrong city fall on the customer (does she bear the risk)? What if the Transactional Agent Provider had disclaimers that the customer would bear the risk? Is that enough? Is the risk of Transactional Agents not following instructions so well known that customers bear the risk just by using them? Is that a desirable policy outcome?

And when is the Transactional Agent’s mistake so obvious that the other party should have known? What if the Transactional Agent left a reservation note for the French hotel saying the customer was coming for the annual cryptocurrency conference in Paris, Texas? These answers will emerge as industry norms and expectations evolve.

Fortunately, there are ways for Transactional Agent Providers to mitigate some of these risks. As we discussed earlier, UETA Section 10(2) offers a powerful tool in this regard. It allows customers to reverse a transaction if the Transactional Agent did not provide a means to prevent or correct the error. By implementing a user interface and process flow that lets customers review and correct transactions before they are finalized, providers not only comply with UETA but also establish a strong argument for ratification: if a customer has the opportunity to correct an error but chooses not to, she has arguably adopted the transaction as final. Moreover, this provision of UETA cannot be varied by contract, so the rule allowing customers to reverse transactions applies even if providers insert disclaimers or other terms insisting that the customer bears all responsibility and liability for the Transactional Agent’s mistakes and errors.
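To make this pattern concrete, here is a minimal sketch in Python of a review-and-confirm gate. Everything here is hypothetical: `user_approves` stands in for whatever review interface the product actually uses. The point is simply that nothing executes until the customer has had a genuine chance to prevent or correct an error, and that the opportunity is logged.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProposedTransaction:
    """A transaction the agent wants to make, held pending user review."""
    description: str
    quantity: int
    total_price_usd: float
    confirmed: bool = False
    audit_log: list = field(default_factory=list)

def log(txn: ProposedTransaction, event: str) -> None:
    txn.audit_log.append((datetime.now(timezone.utc), event))

def request_confirmation(txn: ProposedTransaction, user_approves) -> bool:
    """Present the transaction for review before anything is finalized."""
    log(txn, "presented_for_review")
    if user_approves(txn):
        txn.confirmed = True
        log(txn, "user_confirmed")
        return True
    log(txn, "user_rejected")
    return False

def execute(txn: ProposedTransaction) -> None:
    # Refuse to finalize anything the customer has not reviewed.
    if not txn.confirmed:
        raise RuntimeError("transaction not confirmed; refusing to execute")
    print(f"Placing order: {txn.quantity} x {txn.description}")

# The runaway 100-mug order is caught at review and corrected to 1.
txn = ProposedTransaction("self-heating mug", quantity=100, total_price_usd=2499.00)
if not request_confirmation(txn, user_approves=lambda t: t.quantity == 1):
    txn.quantity = 1  # the customer corrects the error during review
    request_confirmation(txn, user_approves=lambda t: t.quantity == 1)
execute(txn)
```

The audit log matters as much as the gate itself: a record showing that the customer saw the order and confirmed it, or declined to correct it, is what supports the ratification argument above.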

Given that this is the law of the land in the U.S., with UETA enacted in 49 states, it is prudent to take these rules seriously. This design pattern – proactively building in error prevention and correction mechanisms – is therefore not just about legal compliance; it is a fundamental aspect of responsible Transactional Agent development that helps define the point of finality and clarify the allocation of risk. It is also just good practice and a fair rule. Providers that implement these mechanisms can significantly reduce their risk of liability, but perhaps the most valuable benefit will not be avoiding liability for reversed transactions: it will be legitimately earning customers’ trust in, and reliance upon, this new technology and way of doing business.

Enter the Regulators

Depending on how frequently and severely Transactional Agents’ mistakes harm customers, regulators like state attorneys general might investigate whether such conduct constitutes unfair or deceptive practices under consumer protection statutes.

Privacy issues add another layer of complexity. When Transactional Agents follow their open-loop model to complete tasks, they may use information in unexpected ways. Your friendly neighborhood shopping assistant might leverage information from your customer’s health-related queries to recommend products for purchase. This raises thorny questions about contextual integrity, consent, and compliance with privacy frameworks like the GDPR, especially because these systems can make complex inferences about customers from seemingly innocuous data.

Designing Transactional Agents for compliance with existing laws is further complicated by certain regulators’ shift toward new, AI-specific laws. For example, last year, Regulation (EU) 2024/1689 (the “EU AI Act”) became the first AI-specific legal framework across the EU. While the EU AI Act nods to existing EU privacy regulations, stating that they will not be modified by the Act, companies may find it challenging to comply with both if inconsistencies between the two bodies of law arise as more varied Transactional Agents are deployed. In the U.S., California’s Assembly Bill 2013 (Generative Artificial Intelligence: Training Data Transparency) will require builders to publish summaries of their training datasets, including whether aspects of those datasets meet certain privacy law definitions, increasing compliance overhead.

And this is just the tip of the agentic iceberg. The legal challenges posed by Transactional Agents bear some resemblance to those faced when open-source software first emerged. Just as the legal and developer communities grappled with novel issues surrounding open-source licensing – such as who is liable for a bug in the code – we’re now confronting unprecedented questions about Transactional Agents and liability.

What About Missteps between the Transactional Agent Provider and LLM Provider?

Another persnickety contract-related risk lies in the terms of service between the Transactional Agent Provider and its LLM provider. In our research, we observed that many LLM providers place a great deal of liability on the Transactional Agent Provider, saddling them with one-way indemnities and uncapped liability for certain claims. Others take a more even-handed approach. One commonality is that these terms impose broad principles the Transactional Agent Provider must follow, because LLM providers need to account for the innumerable edge cases that emerge when Transactional Agents are released in the wild. These principles range from restrictions against building competing services and circumventing safeguards to compliance with law. While useful for LLM-side lawyers drafting around a large set of risks posed by a rapidly developing technology, these principles become quite complicated when Transactional Agent Providers consider how to make them programmable: you would need to account for thousands of areas of law in multiple jurisdictions around the world, all in the context of an open-loop interaction where you cannot predict outputs. Some of this uncertainty can be managed through thoughtful technical architecture that appropriately uses deterministic outputs to mitigate risk, but it’s not the only way.
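One way to picture that kind of architecture is a deterministic policy layer sitting between the model’s proposal and any execution, so the hard rules evaluate the same way on every run regardless of what the LLM generates. The sketch below is illustrative only; every limit, category, and name is a hypothetical stand-in for whatever obligations a provider’s actual terms impose.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    kind: str                  # e.g. "purchase" or "share_data"
    amount_usd: float = 0.0
    category: str = ""
    recipient: str | None = None

# Hard rules live in plain code, not in the prompt: they are auditable,
# testable, and evaluate identically on every run. Values are illustrative.
MAX_ORDER_USD = 500.00
BLOCKED_CATEGORIES = {"weapons", "prescription_drugs"}
APPROVED_DATA_RECIPIENTS = {"merchant_of_record", "payment_processor"}

def violates_policy(action: AgentAction) -> str | None:
    """Return a human-readable reason if the action breaks a hard rule."""
    if action.kind == "purchase":
        if action.amount_usd > MAX_ORDER_USD:
            return f"order exceeds the ${MAX_ORDER_USD:.2f} cap"
        if action.category in BLOCKED_CATEGORIES:
            return f"category '{action.category}' is blocked"
    if action.kind == "share_data" and action.recipient not in APPROVED_DATA_RECIPIENTS:
        return f"recipient '{action.recipient}' is not on the approved list"
    return None

# The non-deterministic model proposes; the deterministic layer disposes.
proposed = AgentAction(kind="purchase", amount_usd=2499.00, category="kitchenware")
if (reason := violates_policy(proposed)):
    print(f"Blocked: {reason}")  # escalate to the customer instead of executing
```

This does not make thousands of areas of law programmable, but it shrinks the space in which unpredictable model outputs can create legal exposure, and it gives lawyers and engineers a shared artifact to review.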

Stay tuned for our third and final post, where we’ll share more solutions for managing Transactional Agent legal risks. We’ll explore everything from clear delegation frameworks to zero-knowledge proofs.

—————————————————————————————————————————————————————————————————————————–

Diana Stern is Deputy General Counsel at Protocol Labs, Inc. and Special Counsel at DLx Law. Dazza Greenwood runs the consultancy Civics.Com, founded and leads law.MIT.edu, and heads the Agentic GenAI Transaction Systems research project at Stanford’s CodeX.

Thanks to Sarah Conley Odenkirk, art attorney and founder of ArtConverge, and Jessy Kate Schingler, Law Clerk, Mill Law Center and Earth Law Center, for their valuable feedback on this post.