The Power of Persuasion (“Captology”) in the Age of AI and Quantum Computing

We are in the initial stages of entering a new era, one where the power of technology to persuade us is uncharted. Why so? Because the technology we are now (and soon will be) facing comes with an unprecedented “augmentation” factor fueled by AI and quantum computing (more on that below). Befitting their power, we can expect that these technologies, individually or in combination, will throw some interesting challenges and surprises at us when it comes to their power to persuade; some of which we might find useful, sometimes pleasant, and certainly some that bear none of those characteristics. So what does this mean from a legal and regulatory perspective? As I see it, the current picture is not all that great. We already have some early-stage laws, regulations and standards, but most of these fall short of the mark. Many are riddled with shallow, vague, broad, unrealistic, and merely aspirational (even naïve to the point of being Pollyannaish) language. This setting is fertile ground for creating a lot of buzz, but that is just about it. Very little attention is paid to the looming “augmentation” effects on persuasion. Now, to be clear, none of this is surprising. The law and the regulations (and all of the standards and best practices that feed into them) are virtually always quite a few steps behind rapid technological developments and their exponential pace of adoption.

At a high level, “augmentation” initially referred to the power and capability of AI and/or quantum technology to generate unpredictable outcomes from the data on which these technologies are employed. It recognized that the greater the power/capability of a given AI and/or quantum computing application, the more likely it is that the latent value of the data on which it is employed will increase. I found that this tends to create a destabilizing transactional effect. Data that wasn’t initially seen as all that valuable by its owner, and so was allowed to be used with little or no restriction, suddenly receives a massive face lift when exposed to augmentation by AI and quantum computing. The destabilizing transactional effect comes into play when the data owner, newly aware of that value, takes an interest in placing use restrictions — but by then it may be too late.

Let’s now turn to the question of persuasion and how it fits with augmentation. The power of technology to persuade people was the subject of a book by Stanford Professor B.J. Fogg: Persuasive Technology: Using Computers to Change What We Think and Do. In this book, Professor Fogg used the term “Captology” (derived from “computers as persuasive technologies”) to describe an ecosystem in which interactive computer systems are designed to change people’s attitudes and behaviors. Fogg’s Captology is represented by a “functional triad” in which computers serve as a “tool,” as “media,” and as “social actors.” In their role as a “tool,” computers increase the capability to efficiently reach and persuade people. As “media” they serve up an experience. And as “social actors” they drive persuasion by creating relationships through rewards and positive feedback.

The augmentation effect on Captology makes it necessary to think about how to build guardrails that minimize the potential for unprecedented abusive persuasion (short of deception). One potential place to start is with the AI Life Cycle Core Principles (Principles). I should clarify that while these were designed with AI in mind, most of their content can also apply to quantum computing. In contrast to much of the current high-level discussion around legal and regulatory initiatives, the Principles offer practical, actionable guidance that can help minimize the potential for harm from augmentation.


January 4, 2024

Consider the following excerpt from Bruce Schneier’s essay on AI and trust and how it corresponds with Captology:

“Taxi driver used to be one of the country’s most dangerous professions. Uber changed that. I don’t know my Uber driver, but the rules and the technology lets us both be confident that neither of us will cheat or attack each other. We are both under constant surveillance and are competing for star rankings.”

Captology here is manifested by the ability of the computers that run Uber (I hesitate to say they are AI applications), and our reliance on those computers, to persuade us that it is in our best interest (all of us who use Uber, anyway) to abide by the terms of service that Uber sets out. As Schneier points out, this system is doing a good job.

But more can be gleaned from this. Looking a bit more broadly, an AI system’s ability to persuade us that we can trust it begins with how the developer portrays it in marketing materials. But that is the relatively “shallow” part. Quickly thereafter, trust is generated (or not) by the application’s actual performance. All is good if it is living up to the marketing promises; trust is established and can grow so long as there is no disruption. But once the application disappoints, trust erodes quickly and is difficult (arguably impossible) to re-establish. Tying this to the Uber scenario above, our confidence that we can trust each other (driver:passenger) is maintained only as long as we, both the driver and the passenger, are satisfied with the Uber application’s performance. From this we can appreciate that an AI application’s Captology is severely diluted, or doomed to irrelevance, once it crosses the pivot point that separates the end user’s trust from distrust. In other words, the weight that is, and should be, accorded to a given application’s Captology is directly proportional to the amount of trust the application generates and is capable of sustaining over its life cycle.
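The trust-to-persuasion relationship described above can be put in the form of a toy model. Everything in the sketch below — the update rule, the asymmetric rates, and the 0.3 pivot value — is my own illustrative assumption, not anything drawn from Schneier or from the text:

```python
# Toy sketch (illustrative assumptions only): an application's
# persuasive weight tracks the trust it sustains, and collapses once
# trust crosses the pivot point into distrust.

def update_trust(trust: float, performance: float) -> float:
    """Trust grows slowly on good performance and erodes quickly on
    bad performance (disappointment is weighted more heavily)."""
    delta = performance - 0.5            # performance scored in [0, 1]
    rate = 0.05 if delta > 0 else 0.25   # erosion outpaces growth
    return min(1.0, max(0.0, trust + rate * delta))

def persuasive_weight(trust: float, pivot: float = 0.3) -> float:
    """Below the trust/distrust pivot, persuasion is doomed to
    irrelevance; above it, weight scales directly with trust."""
    return 0.0 if trust < pivot else trust

trust = 0.8
for perf in [0.9, 0.9] + [0.1] * 6:      # two good rides, then failures
    trust = update_trust(trust, perf)

# After repeated disappointments trust falls below the pivot and the
# application's persuasive weight collapses to zero.
```

Note the asymmetry baked into `update_trust`: a single bad experience undoes several good ones, mirroring the observation that trust, once lost, is difficult to re-establish.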

July 18, 2023

Think about persuasion as an output variable. Generally, the more computationally powerful the application producing the content, the more likely it is that the content is persuasive. This raises the question of whether and to what extent AI regulation should be based on computational power rather than on the purpose for which the AI is used. In a computational-power-centric legal regime, the strictest controls would be imposed on the most powerful AI systems, while relatively more lenient controls would apply to less powerful AI. (For a discussion of AI power, see, for example, the discussion around Class C or D applications.) But this is not how lawmakers are currently looking at this. To examine this more closely, consider one of the most influential bodies of law drafted specifically to regulate AI, the EU AI Act (AI Act). Looking at the AI Act’s risk spectrum, we can see that it is focused on the intended utilization of the AI application; what is driving the application — its computational power — is not considered. In the AI Act, the higher the risk in utilization, the stricter the rules. On one end of the risk spectrum sit the “unacceptable risk” AI systems, such as those used for real-time biometric identification or for manipulative or exploitative ends; these are never allowed. On the opposite end of the spectrum are AI systems that create low or minimal risk and merely require informing the end user of their existence. Sitting in the middle are the “high risk” AI systems — used for employment screening, credit, and critical infrastructure — which might otherwise put the life and health of citizens at risk; anyone deploying these systems must satisfy strict requirements, including notifying end users that the systems are present. So the AI Act really does not focus on computational power; it is computational-power agnostic. And this approach has self-imposed limitations.
To illustrate, suppose there is an AI system used to generate deep fakes, but they aren’t very good. They are not believable: they have timing issues, the lips move out of sync with the words, and there are other flaws. This shoddy performance is not surprising, as it turns out the application is powered by a sloppy algorithm running on scant computing resources. Yes, it is generating deep fakes, but so what? It is so bad that it is not really misleading anyone; there is no harm. As this example shows, ignoring the computational-power variable opens the door for the AI Act to be applied to operationally irrelevant applications. This limitation is self-imposed and could be remedied by considering the augmentation factor of powerful AI.
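One way to see what such a remedy might look like is a toy classifier that starts from AI Act-style, use-based tiers and then escalates the tier when computational power crosses a threshold. The tier names, the use-to-tier mapping, the FLOPs proxy, and the threshold below are all illustrative assumptions of mine, not anything drawn from the Act itself:

```python
# Hypothetical sketch: classify AI systems by intended use (AI Act-style
# tiers), then escalate the tier when computational power is high.
# All categories, values, and thresholds here are illustrative.

from dataclasses import dataclass

USE_RISK = {
    "biometric_id": "unacceptable",
    "employment_screening": "high",
    "credit_scoring": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

@dataclass
class AISystem:
    intended_use: str
    training_flops: float  # rough proxy for computational power

def classify(system: AISystem, power_threshold: float = 1e25) -> str:
    """Return a risk tier from intended use, escalated one level if the
    system's computational power crosses the (hypothetical) threshold."""
    tiers = ["minimal", "limited", "high", "unacceptable"]
    tier = USE_RISK.get(system.intended_use, "limited")
    if system.training_flops >= power_threshold and tier != "unacceptable":
        tier = tiers[tiers.index(tier) + 1]  # escalate one level
    return tier
```

Under this sketch, a low-powered chatbot stays in the limited tier, while the very same use case backed by massive compute is treated as high risk — capturing the augmentation factor that a purely use-based regime ignores.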

June 21, 2023

The augmentation effect also has an inverse quality: it can degrade the capability to exercise the requisite level of professional judgment. We can see this at play in, for example, bad lawyering. Here, AI’s allure — its hypnotic-like capability to persuade an end user that it can deliver accurate output — got one lawyer in a whole lot of trouble. When Roberto Mata sued Avianca airlines, his lawyer, Steven A. Schwartz, used ChatGPT to counter Avianca’s motion to dismiss. Schwartz’s brief cited case law that on its face seemed relevant and persuasive. But that is as far as it went; it only looked that way. The problem was that it was all fabricated, and Schwartz had no idea; he never even bothered to check. While Schwartz waits to see how his legal career will be hit by this, we can point a finger at ChatGPT and talk about the need to regulate it (and other similar applications) so this does not recur. Not to dismiss the importance of regulation, but doing so here is little more than a knee-jerk reaction; regulating ChatGPT will not protect us from bad lawyering. More importantly, and this should come as no surprise to lawyers, there is no need for any new regulations because, as you may have guessed, we already have the tools we need to curb and deal with bad lawyering. ABA Model Rule of Professional Conduct 1.1, for instance, states that “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Additionally, Comment [8] to Rule 1.1 provides that “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education obligations to which the lawyer is subject.” To sum this up, existing rules already tell us that if we are going to use ChatGPT and other generative AI applications in practice, we need to take the time to learn what they can and cannot do. It is possible to counter the degradation effect of powerful technology, but doing so requires compliance with existing professional rules of conduct.