Facebook has come under increased scrutiny in recent months, with the social media giant’s efforts to protect its users’ data called into question. Now, it has come to light that Cambridge Analytica, a data analytics company credited with playing a role in the Leave campaign in Britain’s EU membership referendum and in the digital operations of Donald Trump’s election campaign, was given access to the personal information of millions of Facebook users through an independent app developer. How is the data collected by Facebook and its app developers used? Is it protected sufficiently? In the discussion that follows, Daphne Keller, Director of Intermediary Liability at the Stanford Center for Internet and Society, discusses these issues.
Can you help us to understand the role of Global Science Research (GSR), a development company that allegedly harvested tens of millions of Facebook profiles and provided the data to Cambridge Analytica?
The reporting on this keeps evolving, but here is what I think we know as of now. GSR built a Facebook quiz app, and some 270,000 Facebook (FB) users installed it. Some were paid to do so. Like most FB apps, it collected information from people who installed it. And, like many FB apps, it collected far more information than would seem to be necessary for the app’s purpose or utility to the user. This included not only information about the user who installed it, but also information about his or her FB friends (FB has since limited apps’ ability to collect that info). It was this info about friends that reportedly brought the number of affected people up to 50 million.
So how did Facebook user data get to Cambridge Analytica (CA)?
What happened here was a breach of the developer’s agreement with FB — not some kind of security breach or hacking. GSR did more with the data than the terms of service (TOS) permitted — both in keeping it around and in sharing it with CA. We have no way of knowing whether other developers did the same thing. FB presumably doesn’t know either, but they do (per reporting) have audit rights in their developer agreements, so they, more than anyone, could have identified the problem sooner. And the overall privacy design of FB apps has been an open invitation for developments like this from the beginning. This is a story about an ecosystem full of privacy risk, and the inevitable abuse that resulted. It’s not about a security breach.
Is this a widespread problem among app developers?
Before we rush to easy answers, there is a big picture here that will take a long time to sort through. The whole app economy, including Android and iPhone apps, depends on data sharing. That’s what makes many apps work—from constellation mapping apps that use your location, to chat apps that need your friends’ contact information. Ideally app developers will collect only the data they actually need—they should not get a data firehose. Platforms should have policies to this effect and should give users granular controls over data sharing.
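The data-minimization principle described above — grant apps only the data they actually need, never a firehose — can be sketched in a few lines. This is a hypothetical illustration, not any platform’s real API; the scope names loosely echo OAuth-style permission strings, and `minimize_scopes` is an invented helper.

```python
# Hypothetical sketch of platform-side data minimization: an app's requested
# permission scopes are intersected with the scopes it demonstrably needs,
# and the excess is reported rather than silently granted.

NEEDED_SCOPES = {"user_location"}  # e.g., a constellation-mapping app needs only location
REQUESTED_SCOPES = {"user_location", "user_likes", "friends_likes"}  # a data firehose

def minimize_scopes(requested, needed):
    """Grant only the intersection of what the app asks for and what it needs;
    return the denied excess so a reviewer (or the user, through granular
    controls) can see exactly what was trimmed."""
    granted = requested & needed
    denied = requested - needed
    return granted, denied

granted, denied = minimize_scopes(REQUESTED_SCOPES, NEEDED_SCOPES)
print("granted:", sorted(granted))
print("denied:", sorted(denied))
```

The design choice here mirrors the policy point: the decision about excess scopes is surfaced (the `denied` set) rather than hidden, which is what makes granular user controls possible in the first place.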
User control is important in part because platform control can have real downsides. Different platforms take more or less aggressive stances in controlling apps. The more controlling a platform is, the more it acts as a chokepoint, preventing users from finding or using particular apps. That has competitive consequences (what if Android’s store didn’t offer non-Google maps apps?). It also has consequences for information access and censorship, as we have seen with Apple removing the NYT app and VPN apps from the app store in China.
Given my personal policy preferences, and probably most people’s, we would have wanted FB to be much more controlling in denying access to these broad swathes of information. At the same time, the rule can’t be that platforms can’t support apps or share data unless the platform takes full legal responsibility for what the app does. Then we’d have few apps, and incumbent powerful platforms would hold even more power. So there is a long, complicated policy discussion to be had here. It’s frustrating that we didn’t start it years ago when these apps launched, but hopefully at least we will have it now.
What does Cambridge Analytica do with the data it collects?
My understanding is that CA used this data to refine its ad targeting in the 2016 election campaign. But presumably the data has plenty of other uses, not limited to the US or any one CA client. The British data protection authority today applied for a warrant to examine CA’s servers and learn the answers to these questions. Presumably one concern for them is how these developments may have played out in the Brexit referendum.
I think most Facebook and Google users know, at least anecdotally, that their online movements are tracked and used in various ways, as is apparent when they search for, say, red shoes and five ads for red shoes magically appear in their Facebook feed. Can you talk about Cambridge Analytica’s use of personal data and how it is different?
As far as I can tell now, the two issues are unrelated. The ad tracking involves data that FB itself collects and then uses to target ads — whether for shoes or for political candidates. It uses that data on advertisers’ behalf to target individuals, but does not, as far as we know, give advertisers raw data identifying people. (Accidentally disclosing unique user ID numbers to advertisers was one of the things FB got in trouble with the FTC for a few years ago, and agreed to prevent as part of its consent decree.)
The data that flowed to GSR would be different — reflecting users’ activity on the FB platform itself: their likes, social graph, etc.
Sandy Parakilas, platform operations manager at Facebook between 2011 and 2012, told the Guardian that he warned senior executives at the company that its lax approach to data protection risked a major breach. He has also charged that Facebook did not use its enforcement mechanisms, including audits of external developers, to ensure data was not being misused. Can Facebook be held responsible for the way in which the data it collects has been misused?
Very good question, and one the tech legal world is asking itself. One reason FB has been so adamant that this was not a “breach” is because that word, in the sense of “security breach,” has legal consequences. If you have read about or experienced high-profile breaches like the Target or Equifax breaches, you will remember the scramble to properly notify users, often coupled with paid access to credit reporting, etc. When those breaches happen, companies have time-sensitive, cumbersome, and expensive obligations. FB wants to be clear that it does not think it’s in that situation.
Given that this is more a product of a known, longstanding, public product design, coupled with bad faith by a developer, what are the options? I am sure we will see private lawsuits raising privacy, unfair competition, or similar claims. Regulatory investigations in the US and EU are also surely pending. In the US, FTC lawyers are presumably scrutinizing Facebook’s existing consent decree with the agency, to see if this violates commitments made there or if new charges are appropriate. EU Data Protection law will provide the more muscular source of enforcement authority, since their legal model for privacy is not as “quasi-contractual” as ours is. In other words, here a defendant can sometimes say “this is fine, because the user agreed to it,” while in Europe it is more often the case that improper data collection, processing, or sharing is forbidden regardless of user notification or consent.
What, if any, legal liabilities are there for Global Science Research and Cambridge Analytica?
CA will likely be in a lot of trouble with European privacy regulators, for starters. And Facebook could sue them for a number of things. One is breach of their contract. Other claims might be things like intentional interference with contractual relations (those between FB and its users).
And one set of legal nerds is wondering if FB will do the same thing it did in a suit against Power Ventures, a company that used users’ login credentials – with permission from those users – to aggregate social media feeds. Facebook objected to this and sued based on, among other things, the Computer Fraud and Abuse Act (CFAA) and its state equivalents.
The CFAA was intended as an anti-hacking statute. But there is a long-running dispute, and varying case law, on whether a defendant can be sued under the CFAA for breaching a legal agreement, such as a TOS, even if she never hacked through some kind of technical barrier in the more classic “security breach” sense. This matters in part because the CFAA has both civil and criminal provisions, and they use the same language. If breaching a TOS is a civil CFAA violation, it is also a crime. That brings an absurd range of ordinary behavior within the scope of criminal law. Suppose I gave Facebook a fake birthday, for example. That would likely violate the TOS. Should I be at risk of jail for it? Should we rely on prosecutorial discretion to avoid bad outcomes? The serious problems with that approach were highlighted by the tragic suicide of activist Aaron Swartz while facing CFAA prosecution. This is a long-standing structural problem in the statute, and one in dire need of reform. I would not want to see FB make more bad law under the CFAA just to show that it is serious about going after CA.
Could the Global Science misuse of personal data gathered by Facebook be just the tip of the iceberg?
Sure. Who knows what other app developers might have done over the years—or what else CA has done with this data set.
How can policymakers regulate Facebook and other social media companies so that this doesn’t happen again?
Europe’s new General Data Protection Regulation is one attempt to do so. It puts far more detailed constraints in place, backed by threats of incredibly high penalties — up to 4% of global annual turnover. But, per my first response in this Q&A, we have a snarl of related questions we need to sort out — and policy and normative preferences we need to prioritize — to find the right answer.
Daphne Keller is the Director of Intermediary Liability at the Stanford Center for Internet and Society. Before coming to Stanford Law School, she was Associate General Counsel for Intermediary Liability and Free Speech issues at Google.