Terms of Use in the Age of AI: UI/UX Prototyping of Consumer-Facing AI Products
by Dr. Megan Ma, Associate Director, Codex and Yan Luo, Partner, Covington & Burling
The technological landscape surrounding large language models (LLMs) has evolved significantly, bringing not only remarkable capabilities but also complex risks. In response, companies in the LLM sector have intensified efforts to establish robust responsible AI frameworks to mitigate these risks. Beyond broader concerns around hallucinations and misuse, a persistent criticism of LLMs has been the blurred line of accountability between those developing and those deploying the technology. Moreover, a tension has arisen around the chatbot as a medium of user engagement, one that inadvertently fosters trust.
A plethora of AI governance frameworks from corporations and other related organizations have since emerged, aiming to promote best practices within the industry. Legislative actions such as the EU Artificial Intelligence Act also aim to nudge companies to design, develop, and deploy AI systems with a focus on documenting aspects of system design such as risk assessment, data governance, transparency and explainability, human oversight, and record-keeping and reporting.
Nevertheless, the primary focus of current risk mitigation efforts in AI governance is on reinforcing the trustworthiness of the technology itself. This involves a heavy emphasis on embedding safeguards directly within the technology to ensure it operates as intended and according to its design specifications. Such efforts are largely process-oriented, concentrating on the development stages of LLMs, where rigorous testing and evaluation of the models already exist. However, this technology-centric approach risks neglecting the equally crucial aspect of user interaction. By concentrating on technological development at the model layer, it may overlook how users realistically engage with and experience the technology, leaving a critical gap in the user-centric aspects of AI governance.
Let us consider the pivotal lessons learned from privacy governance. As privacy governance has developed over the past two decades, it has become evident that an effective privacy program is structured around three fundamental components: (1) privacy by design; (2) user experience and user interface (UX/UI) innovations; and (3) comprehensive privacy documentation for transparency and accountability. Each of these elements serves a distinct but complementary function.
First, “privacy by design” is an essential framework that embeds privacy into the very architecture of the product design process. It involves deliberate choices about what data to collect and the potential impacts of that data collection, right from the inception of the product design. This proactive approach ensures that privacy considerations are integral and not an afterthought.
Second, UX/UI features, such as dashboards or other privacy tools, provide users with practical means to control their personal data. These interfaces empower consumers by offering clear choices about what information they share with companies, thereby enhancing user autonomy and trust.
Finally, privacy policies and other documentation serve to enhance accountability and transparency. They provide crucial information about data handling practices, informing users about how their data is used and safeguarded. Together, these documents form the backbone of privacy assurance and help form a relationship of trust between companies and their users.
The prevailing AI governance framework places substantial emphasis on establishing robust safeguards for AI systems, resonating with the concept of “privacy by design.” This method embeds responsible AI considerations directly into the development and deployment of technology, ensuring that these protections are foundational rather than supplemental.
In contrast to the robust privacy governance framework discussed above, there appears to be a gap in how AI governance frameworks help consumers fully understand their options when interacting with AI technologies, and the consequences of those choices. For example, the FTC has recently raised concerns that AI companies "quietly" changing their Terms of Service (ToS) agreements may be problematic for the average user. Specifically, there is a scarcity of user experience (UX) and user interface (UI) tools that empower consumer decision-making.
Several factors could contribute to this oversight. First, enhancing UX and UI design has not been prioritized within the existing AI governance frameworks. This may be due in part to the early development stage of consumer-facing AI products. Presently, consumer interaction with AI is primarily limited to AI-powered chatbots, and other AI-driven products are still in their formative stages. This limited deployment could explain the minimal focus on refining UX/UI, as the market has yet to fully mature and unveil the full extent of user needs and complexities.
Additionally, there is significant uncertainty surrounding liability in the use of AI products. With the anticipated proliferation of AI consumer products in the coming years, consumer interactions with these technologies are expected to become increasingly varied. Moreover, as the ecosystem expands, numerous new deployers, each with their own liability concerns, will enter the market. As a result, many existing ToS and end-user license agreements (EULAs) contain ambiguous language regarding the definitions of harm, risk, hazards, and failure modes in order to avoid future liabilities.
This lack of clarity necessitates a reevaluation of the consumer documentation and interfaces that companies develop, aiming to ensure that users are thoroughly informed about the functionalities and potential risks associated with AI products.
The research project led by Dr. Megan Ma and Yan Luo seeks to stimulate further discussion in this area. It plans to build on interdisciplinary efforts spanning the legal, policy, and responsible AI domains. We consider how contractual agreements such as ToS and EULAs can become crucial tools in delineating the conditions and requirements for identifying misuse and, consequently, potential harms. Meanwhile, enhancements in UX/UI design are poised to improve consumer choice and engagement, bridging the gap between technology and user-centric governance. We anticipate developing prototypes to explore and experiment with questions of user engagement and how we may better empower individual users to understand the risks and choices involved in the use of consumer-facing AI products.