The Power of Persuasion (“Captology”) in the Age of AI and Quantum Computing
We are in the initial stages of entering a new era, one where the power of technology to persuade us is uncharted. Why so? Because the type of technology we are now facing (and soon will be) comes with an unprecedented "augmentation" factor, fueled by AI and quantum computing (more on that below). Befitting that power, we can expect these technologies, individually or in combination, to present some interesting challenges and surprises when it comes to persuading us: some we might find useful, some even pleasant, and certainly some that are neither.

So what does this mean for the legal and regulatory framework? As I see it, the current picture is not all that great. We already have some early-stage laws, regulations, and standards, but most of these fall short of the mark. Many are riddled with shallow, vague, broad, unrealistic, and merely aspirational (even naïve to the point of being Pollyannaish) language. This setting is fertile ground for creating a lot of buzz, but that is just about it. Very little attention is paid to the looming "augmentation" effects on persuasion. Now, to be clear, none of this is surprising. The law, the regulations, and all of the standards and best practices that feed into them virtually always run quite a few steps behind rapid technological development and its exponential pace of adoption.
At a high level, "augmentation" initially referred to the relationship between the power and capability of AI and/or quantum technology and the unpredictable outcomes it can generate from the data on which it is employed. It recognized that the greater the power and capability of a given AI and/or quantum computing application, the more likely it is that the latent value of the data on which it is employed will increase. I found that this tends to create a destabilizing transactional effect. Data that its owner initially did not see as all that valuable, and allowed to be used with little or no restriction, suddenly receives a massive facelift when exposed to augmentation by AI and quantum computing. The destabilizing transactional effect arises when the data owner, newly aware of that value, seeks to impose use restrictions, but by then it may be too late.
Let's now turn to the question of persuasion and how it fits with augmentation. The power of technology to persuade people was the title and subject matter of a book by Stanford Professor B.J. Fogg: Persuasive Technology: Using Computers to Change What We Think and Do. In this book, Professor Fogg coined the term "Captology" (derived from computers as persuasive technologies) to describe an ecosystem in which interactive computer systems are designed to change people's attitudes and behaviors. Fogg's Captology is represented by a "functional triad" in which computers serve as a "tool," as "media," and as "social actors." In their role as a "tool," computers increase our capability to efficiently reach and persuade people. As "media," they serve up an experience. And as "social actors," they drive persuasion by creating relationships through rewards and positive feedback. A fitness app, for example, can play all three roles: a tool when it tracks workouts, a medium when it simulates progress toward a goal, and a social actor when it praises the user for showing up.
The augmentation effect on Captology makes it necessary to think about how to build guardrails that minimize the potential for unprecedented abusive persuasion (short of outright deception). One place to start is with the AI Life Cycle Core Principles (the "Principles"). I should clarify that while these were designed with AI in mind, most of their content can also apply to quantum computing. In contrast to much of the current high-level discussion around legal and regulatory initiatives, the Principles offer practical, actionable guidance that can help minimize the potential for harm arising from augmentation.