Encryption: if this is the best his opponents can do, maybe Jim Comey has a point

  • “We share EPA’s commitment to ending pollution,” said a group of utility executives. “But before the government makes us stop burning coal, it needs to put forward detailed plans for a power plant that is better for the environment and just as cheap as today’s plants. We don’t think it can be done, but we’re happy to consider the government’s design – if it can come up with one.”
  • “We take no issue here with law enforcement’s desire to execute lawful surveillance orders when they meet the requirements of human rights and the rule of law,” said a group of private-sector encryption experts. “Our strong recommendation is that anyone proposing regulations should first present concrete technical requirements, which industry, academics, and the public can analyze for technical weaknesses and for hidden costs.”
  • “Building an airbag that doesn’t explode on occasion is practically impossible,” declared a panel of safety researchers who work for industry. “We have no quarrel with the regulators’ goal of 100% safety. But if the government thinks that goal is achievable, it needs to present a concrete technical design for us to review. Until then, we urge that industry stick with its current, proven design.”

Which of these anti-regulation arguments is being put forward with a straight face today? Right. It’s the middle one. Troubled by the likely social costs of ubiquitous strong encryption, the FBI and other law enforcement agencies are asking industry to ensure access to communications and data when the government has a warrant. And their opponents are making arguments that would be dismissed out of hand if they were offered by any other industry facing regulation.

Behind the opponents’ demand for “concrete technical requirements” is the argument that any method of guaranteeing government access to encrypted communications should be treated as a security flaw that inevitably puts everyone’s data at risk. In principle, of course, adding a mechanism for government access introduces a risk that the mechanism will not work as intended. But it’s also true that adding a thousand lines of code to a program will greatly increase the risk of adding at least one security flaw to the program. Yet security experts do not demand that companies stop adding code to their programs. The cost to industry of freezing innovation is deemed so great that the introduction of new security flaws must be tolerated and managed with tactics such as internal code reviews, red-team testing, and bug bounties.

That same calculus should apply to the FBI’s plea for access. There are certainly social and economic costs to giving perfect communications and storage security to everyone – from the best to the worst in society. Whether those costs are so great that we should accept and manage the risks that come with government access is a legitimate topic for debate.

Unfortunately, if you want to know how great those risks are, you can’t really rely on mainstream media, which is quietly sympathetic to opponents of the FBI, or on the internet press, which doesn’t even pretend to be evenhanded on this issue. A good example is the media’s distorted history of NSA’s 1994 Clipper chip. That chip embodied the Clinton administration’s proposal for strong encryption that “escrowed” the encryption keys to allow government access with a warrant.

(Full disclosure: the Clipper chip helped to spur the Crypto War of the 1990s, in which I was a combatant on the government side. Now, like a veteran of the Great War, I am bemused and a little disconcerted to find that the outbreak of a second conflict has demoted mine to “Crypto War I.”)

The Clipper chip and its key escrow mechanism were heavily scrutinized by hostile technologists, and one of them, Matthew Blaze, discovered that it was possible, with considerable effort, to use the encryption offered by the chip while bypassing the mechanism that escrowed the key and thus guaranteed government access. Whether this flaw was a serious one can be debated. (Bypassing escrow certainly took more effort than simply downloading and using an unescrowed strong encryption program like PGP, so the flaw may have been more theoretical than real.) In any event, nothing in Matt Blaze’s paper questioned the security offered by the chip, as the paper candidly acknowledged: “None of the methods given here permit an attacker to discover the contents of encrypted traffic or compromise the integrity of signed messages. Nothing here affects the strength of the system from the point of view of the communicating parties.” In other words, he may have found a flaw in the Clipper chip, but not in the security it provided to users.
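
To see why the bypass took “considerable effort,” it helps to recall a detail from Blaze’s paper: the LEAF (Law Enforcement Access Field) that carried the escrowed key material was protected only by a 16-bit checksum, so a rogue sender could generate random LEAF candidates until one happened to validate. The Python sketch below is a toy illustration of that brute-force arithmetic, not the real protocol; the LEAF format, the acceptance check, and all the names here are mocked up for illustration.

```python
import secrets

# Toy model of the brute-force idea behind Blaze's LEAF-bypass attack.
# The real Clipper LEAF format and checksum algorithm are NOT reproduced
# here; this mock only shows why a 16-bit integrity check makes
# trial-and-error forgery feasible.

CHECKSUM_BITS = 16  # the LEAF's integrity check was only 16 bits wide


def mock_leaf_accepted(candidate: bytes) -> bool:
    """Stand-in for the receiving chip's LEAF validity check.

    A random candidate passes with probability 1 / 2**CHECKSUM_BITS,
    mimicking a 16-bit checksum that a forger can satisfy by luck.
    """
    return int.from_bytes(candidate[:2], "big") == 0  # toy acceptance rule


def forge_leaf() -> tuple[bytes, int]:
    """Generate random 128-bit LEAF candidates until one is accepted."""
    tries = 0
    while True:
        tries += 1
        candidate = secrets.token_bytes(16)  # the LEAF was 128 bits long
        if mock_leaf_accepted(candidate):
            return candidate, tries


leaf, tries = forge_leaf()
print(f"forged an accepted LEAF after {tries} tries "
      f"(expected about {2**CHECKSUM_BITS})")
```

The point of the sketch is the arithmetic: a 16-bit check means roughly 65,000 tries on average, which is tedious but entirely practical. That is how escrow could be defeated without weakening the underlying encryption at all.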

The press has largely ignored Blaze’s caveat. It doesn’t fit the anti-FBI narrative, which is that government access always creates new security holes. I don’t think it’s an accident that no one talks these days about what Matt Blaze actually found except to say that he discovered “security flaws” in Clipper. This formulation allows the reader to (falsely) assume that Blaze’s research shows that government access always undermines security.

The success of this tactic is shown by the many journalists who have fallen prey to this false assumption. Among the reporters fooled by this line is Craig Timberg of the Washington Post: “The [effort] eventually failed amid political opposition but not before Blaze … discovered that the ‘Clipper Chip’ produced by the NSA had crucial security flaws. It turned out to be a back door that a skilled hacker could easily break through.” Also taken in was Nicole Perlroth of the New York Times: “The final blow [to Clipper] was the discovery by Matt Blaze … of a flaw in the system that would have allowed anyone with technical expertise to gain access to the key to Clipper-encrypted communications.”

To her credit, after a three-way Twitter exchange among me, her, and Matt Blaze, Nicole Perlroth tells me that the New York Times will issue a correction. But the fact that the error has also cropped up in the Washington Post suggests a larger problem: Reporters are so sympathetic to one side of this debate that we simply cannot rely on them for a straight story on the security risks of government access.