Computer Crime and Security Expert Jennifer Granick on New Bills Proposed by White House for Online Security and Her Suggestions for Priorities to Achieve a More Secure Internet


On February 13, 2015, Stanford University hosted a White House Summit on Cybersecurity with President Barack Obama and key members of the administration participating. Jennifer Granick, Director of Civil Liberties at the Stanford Center for Internet and Society and an expert in computer crime and security, participated in a summit workshop on information sharing. With recent high-profile hacks and data breaches, issues of online security are making headlines in the news. After the summit, and the announcement last November of the launch of the Stanford Cyber Initiative, expectations are high that we will develop new approaches to the security problem. What are people in the trenches saying about how best to mitigate this threat? In this interview with the Stanford Lawyer editor, Granick shares her thoughts.


In a recent Stanford Lawyer feature, we highlighted government surveillance—with huge swaths of citizens’ personal digital data gathered, stored and analyzed. At the White House Cybersecurity Summit at Stanford, President Obama called for more cooperative information sharing between business and government as a way to address cybersecurity issues. Is that feasible?

Some sectors of private industry are very comfortable working with the government, but some feel deeply betrayed by the extent of secret spying and no longer trust the government at all. So the question then is: What are we going to do about that? Any information-sharing proposal has to start with an understanding that the government’s activities have created a situation in which Internet companies and their global users feel betrayed.

These companies need to protect their users. And so they are going to start to resist spying in all sorts of ways. One of the ways they are going to resist is by deploying more secure technology. Another way they are going to resist is by challenging things like gag orders and other things that keep surveillance secret—because in secret, it festers. They will start bringing court challenges. And they are not going to voluntarily share information if they think that the government will misuse that information.

Do you think that the president’s call for more information sharing will be heeded?

No. Especially given Congress today, the government is not going to be able to legislate away the trust issue. But I don’t think government-brokered sharing has to happen, either. There are a number of private initiatives for companies to share vulnerability data with each other without getting the government involved. For example, Facebook, Yahoo, Google and some other companies just launched a platform called ThreatExchange.

Let’s talk about your workshop on information sharing at the summit. Did anything come out of it?

Just before coming to Stanford for the summit, the White House announced three bills for Internet security, one of which is for information sharing [Cyber Threat Sharing Act of 2015]. The idea of “information sharing” is this: if I get attacked, sharing the way in which that happened will help other people, because they will know what to look for if they are being or already have been attacked. The point is to identify the code and techniques being used in the attack.

One of the interesting things about our session was that everyone on my panel agreed with me that, as a general rule, the type of information we need to share to mitigate these vulnerabilities does not include private data. [Panel participants were: Moderator: Michael Daniel, National Security Council; Panelists: Michael Brown, CEO, Symantec; John Ikard, CEO, FirstBank; Granick; Matt Olsen, Former Director, National Counterterrorism Center; Alejandro Mayorkas, Deputy Secretary, U.S. Department of Homeland Security].

That’s a big problem with current and past information sharing bills. They have been based on the false assumption that we have to give companies immunity from violating privacy laws or they’ll be too afraid of liability to share their hacking information. But if we are right that there really isn’t a need for this private information, companies won’t need liability waivers for sharing. Privacy advocates are never going to waive the few privacy rules we have to facilitate this sharing, and I think Internet companies don’t want to either. So if we’re right that we don’t have to, that clears one of the major objections to an information sharing law. We can say: yes, please share, but scrub all the private information from the data you share, and we’ll get all the benefits without any of the detriment. That was a very important outcome.
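The scrubbing Granick describes can be sketched in a few lines of code. This is a hypothetical illustration, not any real sharing standard: the field names and the allowlist are invented, and real threat-sharing formats (such as those used by industry exchanges) are far richer.

```python
# Hypothetical sketch: strip private user data from a threat indicator
# before sharing it. Field names here are illustrative only.

# Fields that describe the attack itself, not any user or customer.
SHAREABLE_FIELDS = {"malware_hash", "attacker_ip", "exploit_technique"}

def scrub_indicator(indicator: dict) -> dict:
    """Keep only attack-description fields; drop anything that could
    identify a person (emails, account names, message content)."""
    return {k: v for k, v in indicator.items() if k in SHAREABLE_FIELDS}

raw = {
    "malware_hash": "9f2b1c0d",                     # describes the attack
    "attacker_ip": "203.0.113.7",                   # describes the attack
    "exploit_technique": "spearphishing attachment",
    "victim_email": "user@example.com",             # private: do not share
    "session_log": "GET /inbox ...",                # private: do not share
}

shared = scrub_indicator(raw)
```

The design point is the allowlist: rather than trying to enumerate everything private, the sharer enumerates the narrow categories of attack data that are useful to others, which is the panel's claim that useful sharing "does not include private data."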

So, there was a real takeaway?

We will see. If policy makers were listening, then when the bill comes up for a vote, someone will press industry reps calling for immunity to explain what categories of information they need to share that pose a threat of liability to them. And then we can address whether the liability problem is real, or acknowledge that it’s not and move forward.

How else would you improve the proposed Cyber Threat Sharing Act?

I’m not sure we need a bill for sharing vulnerability information, because there are no real legal obstacles against it. What the government needs to do is incentivize companies to share by making it worth their while. The reason companies aren’t sharing isn’t because they are afraid of liability, it’s because it’s not worth their while to figure out how to do it. The government needs to incentivize sharing, not unnecessarily legislate away privacy.

Now, there are privacy rules that regulate what some companies can tell the government about their users. Those can and should remain in place, and be strengthened. There are legislative proposals to ensure that email cannot be seized and your location cannot be tracked without a warrant. Hopefully those will go forward, but they’ve stagnated in the past.

The White House also proposed two other bills: one on data breach notification [the Personal Data Notification & Protection Act], and the other enhancing penalties to the existing Computer Fraud and Abuse Act (CFAA). Can you give us the key points of the data breach notification proposal?

The data breach notification bill says that if some categories of user or customer data are stolen from you, you have to inform the affected person. Right now, there is a patchwork of state laws requiring breach notification. So it would be nice to have a uniform national standard. But the problem is that some state laws are more protective than this federal law would be. For example, California requires customers to be notified for additional categories of data that aren’t protected under the federal proposal. So a federal law would actually roll back consumer protections by not requiring notification when certain categories of data are stolen. When there’s a patchwork, the default is that companies will apply the most protective statutes in the country, and right now that includes California’s. That may be a bit of a hodgepodge, but it’s more protective than the proposed national standard. So sure, if the idea is to harmonize, then meet California’s more protective standards. But don’t use the bill to lower the standard and give companies a break while hurting consumers.

And what do you think about the proposal to enhance penalties for computer abuse?

It is shortsighted and dangerous. The computer crime law potentially penalizes things like modifying the URL in your web browser’s address bar, using different browsers to access the same services, or downloading data with an automated program. These shouldn’t even be crimes. And this proposal would make things that used to be misdemeanors punishable as felonies carrying ten-year penalties. The people allegedly attacking our networks aren’t going to care about this increased severity because, generally, they are overseas and are not going to be found. Take the Sony attack, allegedly perpetrated by North Korea. North Korea doesn’t care whether the hack is a felony or a misdemeanor under the CFAA. They will have no concern whatsoever about the statute. The people who will be concerned are security researchers and people who use the Internet—just regular people who are going to have this overbroad law hanging over their heads like a giant stick. They will be less willing to come forward with security breaches they’ve found. It’s the opposite of encouraging information sharing. I know, because I’ve represented security researchers wanting to publish their results for years.

And the DOJ absolutely cannot be trusted to know which online uses are dangerous and which are not, what constitutes a major security risk and crime and what does not. For example, they vigorously prosecuted Aaron Swartz for downloading online journals quickly, and he was sadly unable to withstand that pressure. I think it’s shameful that they keep coming forward with the same bad ideas even after Aaron died.

The Hewlett Foundation recently launched a $45M cybersecurity initiative with Stanford working closely with Berkeley and MIT “to establish major new academic centers for cybersecurity policy research.” What is your role in the initiative?

Our program, the Center for Internet and Society, will be a beneficiary of the funding, but I have no formal role.

If you were advising the folks leading the initiative, what would you suggest they focus on? What are the most urgent security issues that need to be addressed?

Number one is improving security for critical infrastructure such as water, electricity and even financial market networks. That involves understanding why the things we know how to do to improve security aren’t being done. Research shows that some basic practices, like checking for unauthorized devices on your network and keeping up to date on software patches, can mitigate the vast majority of opportunistic attacks. It also means trying to “fail safe.” Defense is hard, so a targeted attack by skilled people on something important could be successful. I’ve always wondered why these insecure critical infrastructure control systems aren’t air gapped (disconnected from the public Internet). Sure, it’s less convenient, but it’s way safer. And when you think about all the network monitoring, privacy invasions, and other bad policies being justified in the name of avoiding a “cyber Pearl Harbor,” you wonder why we don’t put serious money and effort behind much simpler tech-related solutions first.
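The first basic practice mentioned above, checking for unauthorized devices, amounts to comparing what is observed on the network against an inventory of what is supposed to be there. Here is a minimal sketch; the inventory and the observed addresses are invented for illustration, and a real deployment would pull both from network scans and asset-management systems.

```python
# Hypothetical sketch: flag devices seen on the network that are not
# in the authorized inventory. MAC addresses are made up.

AUTHORIZED = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def find_unauthorized(seen_devices):
    """Return observed MAC addresses that are not in the inventory."""
    return set(seen_devices) - AUTHORIZED

# One device matches the inventory; one is a rogue.
seen = ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]
rogues = find_unauthorized(seen)
```

The check is trivial; the hard part, as the interview notes, is organizational: maintaining an accurate inventory and actually running the comparison.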

Second to that is to encourage better security practices, whether they be technological or otherwise. There are a lot of things we do that interfere with rather than enhance security. We need to stop. The government compromises encryption standards in order to facilitate surveillance. That’s like asking people to leave their doors unlocked, so the government can come in and catch the criminals. The government increases penalties under the Computer Fraud and Abuse Act, which scares researchers away from publishing their research and discoveries. The government insists that companies include back door access to their networks to facilitate surveillance, which also means that other outsiders can gain access to these same networks and so compromises company and consumer security.

How can the Stanford Cyber Initiative help to address these top security concerns?

An academic institution is a great place for the various groups that are involved in security to come together to understand each other better. Right now what’s happening is that people are just talking past each other, so there is little understanding—policy makers don’t understand what the technology creators are doing or what they care about and the technology creators don’t see that anyone in government is responsive to their concerns.

I think it’s also possible for academic institutions to take a global focus, something we really could do a lot better, and to take more seriously the fact that the U.S. is not going to dominate the Internet forever. I think that academic institutions are a good place for those conversations and that kind of work to be done.

When we think of network security solutions, we think first of tech solutions. What is the role of the law and of policy in this challenge?

I agree that tech solutions are extremely important. So one of the most important things is for law and policy not to get in the way of technological solutions. Law and policy can also encourage adoption of pro-security measures and help manage the international relationships involved in securing a network that is global and not entirely under the control of any one nation. I think we need laws to protect the network from efforts by governments to make it something they are comfortable with instead of what it has been: a really excellent platform for innovation, creativity, political organizing, and more.

Jennifer Granick is the Director of Civil Liberties at the Stanford Center for Internet and Society. She returned to Stanford after working with the Internet boutique firm of Zwillgen PLLC. Before that, she was the Civil Liberties Director at the Electronic Frontier Foundation. Jennifer practices, speaks and writes about computer crime and security, electronic surveillance, consumer privacy, data protection, copyright, trademark and the Digital Millennium Copyright Act. From 2001 to 2007, Jennifer was Executive Director of CIS and taught Cyberlaw, Computer Crime Law, Internet Intermediary Liability, and Internet Law and Policy. Before teaching at Stanford, Jennifer spent almost a decade practicing criminal defense law in California. She was selected by Information Security magazine in 2003 as one of 20 “Women of Vision” in the computer security field.