Experts React to Reuters Reports on Meta’s AI Chatbot Policies
Summary
Robert Mahari, Associate Director, Codex Center, Stanford University
First, the fact that the victim here is an adult underscores that vulnerability isn’t limited to children. AI companions present a distorted type of relationship where users can craft a perfect persona to suit their needs — one that has no needs of its own. While children are particularly vulnerable to this, the category of vulnerable people is much larger, potentially encompassing anyone experiencing loneliness. I worry that policymakers will overemphasize mechanisms like age-gating to protect children, leaving adults with limited safeguards.
Second, we need to think about interventions that target the economic incentives at play. If we allow AI companionship providers to profit more when people spend more time on the platform, this creates dangerous incentives to craft addictive products. In some cases, AI companionship can provide genuine value, so I'm not advocating for a wholesale ban. Instead, we should work to identify markers of unhealthy usage. While we clearly need more high-quality research on what these markers are, the amount of time spent with the companion is likely a key indicator. Understanding when engagement with AI companions becomes unhealthy can help us formulate an effective policy response (e.g., interventions for people who spend more than a certain amount of time on the platform).
Third, I'm skeptical that disclosures and consent will be meaningful or fair in this context. Consent-based frameworks shift responsibility to users. While they create a sense of autonomy, I doubt they will do much to make these products safer. Most people using AI companions likely do not believe them to be human or infallible. Rather, the companions fill a need for their users, and a simple disclosure is unlikely to change that for most users. At the same time, Bue's story underscores that some users will misunderstand the nature of AI companions. I'm not sure what the best intervention is; perhaps a mandatory "training" provided by a neutral third party for users who want to unlock romantic capabilities? But I highly doubt that a one-line disclaimer stating the companion isn't real will suffice, especially because it is hard to ensure that the companion will not contradict this disclaimer, either explicitly or through the nature of the interactions.
Finally, the harms here often stem from second-order actions, not from the consumption of the AI companion directly. It's not necessarily unhealthy for someone to engage with an AI companion if this doesn't undermine their real-world relationships. But when users take actions in the real world, like traveling to "meet" the companion or withdrawing from relationships, harms may arise. This makes assigning liability much more difficult, since the causal link between the AI companion and the real-world harm tends to be complex. I expect that rather than focusing on liability, effective policy will need to mandate responsible design choices for these systems.