Government Hacking Raises New Security Concerns

News of governments such as Russia and North Korea deploying their tech teams to hack into companies for political reasons has made headlines (think Sony after the release of the movie The Interview). But what about when the U.S. government “hacks” to get around security measures designed to protect consumers? Can those hacks backfire and put us all at risk? Riana Pfefferkorn, Cryptography Fellow at Stanford Law School’s Center for Internet and Society, examines these issues in a new paper, Security Risks of Government Hacking. Here, she discusses her findings.

Your paper explores the security risks posed by government hacking. Can you explain government hacking?

“Government hacking” refers to when government investigators use vulnerabilities (bugs) in software and hardware products to, first, gain remote access to computers that have information the investigators want, and then remotely search the computer, monitor user activity on it, or even interfere with its operation. These hacking operations can be conducted by intelligence agencies or law enforcement agencies, in furtherance of criminal, national security, or terrorism investigations.

Does the U.S. government have the technical expertise for that? Are they typically government employees?

Riana Pfefferkorn of Stanford’s CIS

The U.S. government, particularly its intelligence agencies, likely has more technical expertise than most if not all other countries in this area. And law enforcement agencies like the Federal Bureau of Investigation request funding from Congress every year to develop their capabilities even further.

Sometimes the people developing government hacking techniques are government employees, and other times not. As the paper explains, the U.S. government may discover vulnerabilities itself and build “exploits” that make use of those vulnerabilities. But there is also a market where third-party entities (that are not governments themselves) sell software and services to governments to conduct their hacking operations, and the U.S. government buys from that market too. For example, in the “Apple vs. FBI” case, the government bought an exploit from an unnamed third party in order to break into the San Bernardino shooter’s iPhone.

These third-party vendors might be very upstanding and conscientious about who their customers are, but they also might sell to oppressive regimes or organized crime. So one of the things the paper discusses is what it means for the U.S. government to participate in a market that also enables the persecution of journalists, human rights activists, and so on.

How widespread is government hacking? And what agencies do it?

We don’t know just how widespread it is, because when it happens on the intelligence side, it’s classified, and when it happens on the law enforcement side, it’s in the context of criminal investigations that will remain secret while they’re ongoing. One of my research areas at Stanford besides cybersecurity is court transparency, and trying to figure out how often the courts (at least federal courts) authorize government hacking is on my to-do list.

From the criminal cases we do know about, it’s clear that government hacking has been used in criminal investigations in the U.S. since at least the start of the 21st century, if not earlier. Both state and federal law enforcement agencies engage in government hacking. We know which federal agencies are or might be doing so thanks to something called the “Vulnerabilities Equities Process,” a federal government process for deciding whether to keep a newly discovered vulnerability secret for offensive purposes or instead disclose it to the maker of the flawed hardware or software product so that the vendor can fix the flaw, thereby improving computer security. As revised in 2017, the “VEP” lists ten high-level departments, offices, and agencies that participate in the process, and many of those have sub-agencies participating too.

The VEP’s agency list includes the ones you’d expect, like the Department of Justice (which includes components such as the FBI), the Department of Defense (which includes the National Security Agency), the Department of Homeland Security, and the Central Intelligence Agency. But there are also some you might not expect, like the Department of Commerce and the Office of Management and Budget. Agencies like those are probably not conducting hacking operations themselves — they are probably there to weigh in on other factors such as cost and procurement considerations.

If the government typically hacks into “targeted” computers, how do innocent people get caught up in this?

This can happen if the way the government gets to targeted computers is by serving its malware from websites those computers visit. I discuss an instance of this in the paper. About five years ago, the FBI took control of a web hosting service’s servers, which included sites serving child pornography as well as sites with legal content. The sites hosted on the servers were only reachable using a browser called Tor, which is supposed to obscure the user’s true IP address. In order to identify and track down the visitors to the illegal sites, the FBI used malware that exploited a flaw in the Tor browser to reveal a user’s real IP address. But when the FBI deployed this malware, it didn’t just do so from the illegal websites — it did it for every site hosted on those seized servers.

That means the FBI’s malware wound up infecting the browsers of people who were visiting other sites, not trying to view any illegal content. Their true IP addresses were still disclosed to the FBI. Those other sites included an anonymous webmail service used by journalists, activists, and dissidents — people who have very good security reasons for trying to keep their online activities from revealing their true identities and locations, especially to governments.

As far as we know, the FBI didn’t notify anybody that they’d been served with malware unless they got indicted. The FBI seems to have tightened up its practices since then, but that operation is a good illustration of how innocent people can get caught up in a hacking operation — and maybe never even know it.

You highlight six ways that government hacking raises broader computer security risks. One concern is about the dilemma that government hackers face: whether to share information about the vulnerabilities they discover or to protect their own hacking capabilities. Can you offer an example that illustrates how this can play out, and how it can be problematic?

One word: “WannaCry.” The National Security Agency had an exploit called EternalBlue that made use of a flaw in Microsoft software. It tried to keep that flaw to itself, because if it notified Microsoft, Microsoft would patch the flaw and the NSA wouldn’t be able to make use of it anymore. But that exploit became public, along with other NSA tools, after a group called the “Shadow Brokers” apparently obtained the tools from an NSA server which they had hacked in 2013. The Shadow Brokers released those tools online in April 2017. EternalBlue was soon repurposed into WannaCry, a virulent piece of ransomware that infected hundreds of thousands of computer systems worldwide starting in May 2017, including critical systems such as hospitals and banks.

The NSA figured out it had lost control of its tools, and eventually it notified Microsoft of the flaw used in EternalBlue, so Microsoft was able to issue a software patch about a month before the Shadow Brokers’ public release. But because the NSA waited so long, and because not all Windows systems had been patched yet, WannaCry was still able to wreak havoc and exact a huge economic toll. Even now, there are still around a million Windows computers and networks that haven’t been patched, so WannaCry is still infecting computers today.

How do you think the risks you raise are best addressed?

The paper doesn’t really go into that. What I’m trying to do with the paper is enumerate the six main security risks that I see with government hacking. But the paper doesn’t make any normative recommendations — it doesn’t try to guide how policymakers should weigh those risks. The law and policy issues around government hacking — including whether it should be allowed at all — are a very contentious topic of debate, both here and in other countries that engage in government hacking, such as Germany. I want this paper to be a resource for people no matter where they fall in that debate. We really don’t understand the security risks of government hacking all that well, but it’s happening already regardless. So whatever policies or regulations might eventually be put in place, they need to account for these risks. My hope is that policymakers and technologists alike will take this paper as a basis for future work.