From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement: Part 1 of 3

by Diana Stern and Dazza Greenwood, CodeX Affiliate

Picture this: you’ve just developed a sleek new AI shopping assistant. It’s ready to scour the internet for the best deals, compare prices faster than you can say “discount,” and make purchases quicker than you can reach for your wallet. But wait, there’s a catch. How do you ensure this digital dealmaker doesn’t make mistakes that could bind you or your customer to a bad deal, create liability under privacy laws, or violate terms of service that it (and, let’s face it, probably you) never actually read?

This three-part series will identify U.S. legal issues raised by this type of AI agent and how to address them. In this post, we’ll start by level-setting on AI agent terminology. Next, we’ll dispel the misconception that liability can be pushed to the AI agents themselves and explain why the company offering services like this AI shopping assistant to customers could be left holding the bag o’ risks. Finally, we’ll touch on how software companies can leverage principal-agent law to manage this risk.

What is a Transactional Agent?

AI agents are an umbrella category of AI systems that execute tasks on behalf of users. In addition to your AI shopping bot that purchases goods online, think of virtual assistants that book flights or event tickets and meeting schedulers that reserve tables at restaurants. There are a variety of AI agents with diverse capabilities.

This series focuses on what we’ll call “Transactional Agents”: AI agent systems that conduct transactions involving monetary or contractual commitments. These systems leverage large language models (LLMs) to move beyond basic query-response interactions. What makes them special is their ability to perform dynamic, multi-step reasoning and take action without human review or approval. Imagine your shopping bot doesn’t just find products but compares prices across retailers, checks reviews, confirms availability, and makes purchases – all while sticking to your customer’s specified budget and preferences. Transactional Agents achieve this through key capabilities such as:

  • Tool use: Accessing external services like payment processors or APIs
  • Memory management: Retaining context and user preferences across interactions
  • Iterative refinement: Learning from past decisions to improve future outcomes

Their ability to make binding commitments, including payments, differentiates Transactional Agents from simple chatbots and other types of AI agents. These systems can spend real money or enter into contracts on their users’ behalf. Let’s say your company provides an AI shopping bot consumer app powered by a third-party LLM. On the surface, this seems like it could be a straightforward SaaS offering, but it has hidden challenges and risks related to security, authorization, and trust. How do you ensure the app follows your customers’ requests? How do you prevent errors? Misuse? These are some of the challenges we’ll explore in this series.
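To make the authorization problem concrete, here is a minimal sketch of one common guardrail: the agent only executes a purchase within the spending authority the customer granted, and escalates for human review otherwise. Everything here is hypothetical – the names (`Mandate`, `find_offers`, `execute_purchase`) and the hard-coded offers are illustrative stand-ins for real tool calls (search APIs, payment processors), not any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    retailer: str
    item: str
    price: float

@dataclass
class Mandate:
    """The customer's instructions: what to buy and a hard spending cap."""
    item: str
    budget: float

def find_offers(item: str) -> list[Offer]:
    # Stand-in for real tool use (price-comparison services, retailer APIs).
    return [
        Offer("ShoeHub", item, 420.00),
        Offer("LuxeMart", item, 389.99),
    ]

def execute_purchase(offer: Offer) -> str:
    # Stand-in for a payment-processor call; this is where money actually moves.
    return f"purchased {offer.item} from {offer.retailer} at ${offer.price:.2f}"

def run_agent(mandate: Mandate) -> str:
    offers = find_offers(mandate.item)
    best = min(offers, key=lambda o: o.price)
    # Guardrail: never commit funds beyond the authority the customer granted.
    if best.price > mandate.budget:
        return (f"escalate: best offer ${best.price:.2f} "
                f"exceeds budget ${mandate.budget:.2f}")
    return execute_purchase(best)
```

With a $400 budget, `run_agent(Mandate("heels", 400.00))` completes the purchase; with a $300 budget it escalates instead of spending. The legal questions in the rest of this series arise precisely because checks like this one define – imperfectly – the scope of authority the agent actually exercises.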

Your Transactional Agent Is Not A Legal Agent, But You Might Be

Your Transactional Agent cannot be held liable or enter agreements itself because it’s not a legal entity – it’s software! So how is it able to buy the perfect pair of Jimmy Choos for your customer right when they go on sale? Under the Uniform Electronic Transactions Act, which we will discuss further in a future post, it is well settled that Transactional Agents can form contracts on behalf of their users, but principal-agent law may also be operating in the background.

If you’ve bought a house, a real estate agent may have acted on your behalf to find the property, negotiate the price, and handle paperwork. Not all principal-agent relationships are created through an express agreement like the one in real estate. They can also be implied, as with a whiskey bar manager who is in charge of curating the menu and enters into agreements on the bar’s behalf to buy mocktail supplies in January. In addition, a principal-agent relationship can be based on “apparent authority,” which arises when a third party reasonably believes an agent has the authority to act on the principal’s behalf – for example, when the bar manager tells a non-alcoholic spirits distributor that she is authorized to enter into agreements for new products on the bar’s behalf.

Under state common law (law primarily developed through court cases), a common law agent has a fiduciary duty to the principal (legal nerds can see Restatement (Third) of Agency § 8.01). This is a big deal! A fiduciary duty is one of the highest standards of care imposed by law. It is a legal obligation to act in the best interests of the other party within the scope of the business relationship. The agent owes other duties as well, including avoiding conflicts of interest and acting in line with the agency agreement.

When a company offering a Transactional Agent to customers (“Transactional Agent Provider”) operates the Transactional Agent, a principal-agent relationship *may* exist. If the customer went to court, they could argue there was a principal-agent relationship between them and the Transactional Agent Provider in order to get the Transactional Agent Provider on the hook. The court would likely look at the customer’s actions in deploying and configuring the Transactional Agent as well as the terms they agreed to, among other factors.

Apparent authority may be a particularly relevant consideration for the court, since third parties interacting with the AI may not know the actual instructions given to the Transactional Agent by the user, but rather, are relying on what they see from the Transactional Agent. The court would consider how the Transactional Agent Provider’s authority was communicated to third parties, including representations, disclaimers, and industry standards.

Even if a Transactional Agent Provider exceeded its authority, a court might analyze whether the customer ratified the action, meaning the customer essentially gave the Transactional Agent Provider authority to do that action after the fact.

In short, when it comes to Transactional Agents, the customer could be the principal delegating authority to the Transactional Agent Provider as their agent. Et voilà, the Transactional Agent Provider would become legally liable under principal-agent law.

Making Agency (or Alternatives) Work For You

Agency law is a familiar legal framework for courts and can potentially clarify liability issues, so in some cases it might be advantageous to state in a Transactional Agent Provider’s terms of service that an agency relationship exists. We have already seen this in our review of existing Transactional Agent Provider terms of service. At the same time, because the standard of care for an agent is so high, Transactional Agent Providers may instead wish to structure these relationships as independent contractor relationships – provided they can ensure that the terms, and the way the customer interacts with the Transactional Agent, align with that characterization. Likewise, there may be a competitive advantage in embracing some fiduciary duties as a Transactional Agent Provider to build and retain customer trust.

In addition, there’s a potential business opportunity here. Transactional Agent Providers may look to third parties to take on the responsibility of being the customer’s legal agent. This already happens in the payments industry where some companies act as the “merchant of record” and take on some liability for the actual provider or manufacturer of products and services sold.

In conclusion, as more Transactional Agents with increasingly advanced capabilities come online every day, customers should choose their Transactional Agent Providers wisely, and Transactional Agent Providers should be proactive in determining the principal-agent legal strategy appropriate for their business.



Diana Stern is Deputy General Counsel at Protocol Labs, Inc. and advises clients in her role as Special Counsel at DLx Law. Dazza Greenwood runs Civics.Com consultancy services, and he founded and leads law.MIT.edu and heads the Agentic GenAI Transaction Systems research project at Stanford’s CodeX.

Thanks to Sarah Conley Odenkirk, art attorney and founder of ArtConverge, and Jessy Kate Schingler, Law Clerk, Mill Law Center and Earth Law Center, for their valuable feedback on this post.