Extra-PRG Meeting on the Technical Implications of the NSA and GCHQ Revelations

On the 27th of September, we organized an extra Privacy Research Group (PRG) meeting on the technical implications of the NSA and GCHQ surveillance programs as revealed by Edward Snowden and The Guardian. Specifically, given what we know from media reports and discussions among the security community, the meeting provided us with an opportunity to explore answers to the following three questions:


  1. What are the technical surveillance capabilities of the NSA and GCHQ?
  2. What are some implications of these surveillance capabilities for technical communities (e.g., cryptographers, technical standards makers, and developers), their practices, and the tools that they develop and deploy?
  3. What are some necessary and desirable technical and policy measures in response to the global, intrusive and secretive mass-surveillance programs of the NSA and GCHQ?


At this meeting, in addition to the regular PRG members, we were lucky to welcome our guest Arvind Narayanan (http://randomwalker.info), currently an Assistant Professor in Computer Science and at CITP at Princeton University. Arvind helped us kick off the meeting with an impromptu lecture on symmetric, asymmetric, and elliptic curve cryptography, as well as an introduction to Public Key Infrastructures (PKIs) based on Certification Authorities. He also explained the role of these cryptographic building blocks and infrastructures in helping computers perform authentication and initial cryptographic handshakes on the Internet – both important steps in establishing secure communications.


In the discussion that followed, we turned to what exactly we should imagine when these intelligence agencies are said to implement “backdoors”. This led to the following taxonomy of backdoors, with examples:

–  crypto backdoors: e.g., attacks on elliptic curve cryptography that are developed by researchers working for the NSA and concealed from the rest of the world.

–  software (and crypto implementation) backdoors: e.g., Man in The Middle (MITM) attacks using implementation weaknesses in the Secure Sockets Layer (SSL).

–  hardware backdoors: e.g., embedding into consumer devices processors that have weak(ened) pseudo random number generators, which are used in deriving cryptographic keys. Note that the example is a mix of hardware and crypto backdoors.

–  infrastructure backdoors: e.g., obtaining rogue certificates from Certification Authorities (CAs). This may or may not be combined with a legal backdoor.

–  organizational backdoors: e.g., embedding NSA personnel in companies, or vice versa.

–  legal backdoors: e.g., asking companies to hand over cryptographic keys and putting the company employees under a gag order.

–  user backdoors: e.g., cracking passwords or running black operations to steal keys or hijack operating systems.

– standards backdoors: e.g., using influence in technical standards bodies to recommend weak(ened) cryptographic building blocks and protocols, or sabotaging the progress of standards that would constrain NSA surveillance activities.
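The hardware-backdoor example above – a weak(ened) pseudo random number generator feeding key derivation – can be sketched in code. The following is a toy illustration (all names and numbers are hypothetical, and `random.Random` stands in for a deliberately weakened generator): if the effective seed space is only 16 bits, an attacker who knows this can recover any derived key by exhaustive search.

```python
import random

def derive_key(seed: int) -> bytes:
    """Toy key derivation: seed a PRNG and read off 16 bytes as the 'key'.
    A real system would use a CSPRNG seeded with far more entropy."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

# Suppose the 'backdoored' generator only ever draws from a 16-bit seed space.
secret_seed = 31337
key = derive_key(secret_seed)

# An attacker who knows the reduced seed space recovers the key by exhaustion:
# 2**16 candidates is trivial, whereas a full 128-bit space would not be.
recovered = next(derive_key(s) for s in range(2**16) if derive_key(s) == key)
assert recovered == key
```

The point of the sketch is that the cipher itself can be flawless; weakening only the entropy source that feeds key generation is enough to make every derived key recoverable.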

Next, we turned our focus to the reactions of various communities to the revelations about the use of backdoors in the NSA/GCHQ surveillance programs. For example, in response to crypto backdoors, cryptographers have been intensively re-evaluating cryptographic primitives and protocols to identify those that resist such backdoors and may provide better protection against mass surveillance. We had all heard claims that, given the knowns and unknowns about the NSA's cryptanalytic capabilities, symmetric crypto is assumed to be more secure than asymmetric crypto. This is surprising given the differences in how the two kinds of primitives are constructed. In a nutshell, symmetric cryptography is based on an elaborate design that scrambles clear text into ciphertext such that the design cannot be attacked in any way other than brute force (i.e., trying out all possible secret keys one by one), which is too costly to succeed in a reasonable amount of time. Asymmetric crypto, on the other hand, relies on fundamental mathematical principles, i.e., number theory and the assumed hardness of certain computations. But how is it that an approach that “scrambles” text, as in symmetric cryptography, is seen as more reliable than an approach that relies on mathematical principles, as in asymmetric crypto?
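To make the brute-force point concrete, here is a toy sketch – not a real cipher, and every name and constant in it is illustrative – using a deliberately tiny 16-bit key so that exhaustive search actually terminates. Against a well-designed cipher with a 128-bit key, the same loop would need up to 2**128 trials, which is what "too costly to succeed in a reasonable amount of time" means.

```python
import hashlib

def toy_encrypt(key: int, data: bytes) -> bytes:
    """Toy stream 'cipher': derive a keystream from the key via a hash
    and XOR it into the data. A stand-in for a real symmetric cipher."""
    stream = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

plaintext = b"attack at dawn"
ciphertext = toy_encrypt(0x5EED, plaintext)

# Brute force with known plaintext: try every 16-bit key until the
# plaintext reappears. 2**16 trials finish instantly; 2**128 would not.
found = next(k for k in range(2**16)
             if toy_encrypt(k, ciphertext) == plaintext)
assert found == 0x5EED
```

The security claim for symmetric designs is exactly that no shortcut beats this loop, so doubling the key length squares the attacker's work.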


The logic of this unintuitive reasoning builds on the assumptions that underlie these cryptographic primitives. Asymmetric cryptographic algorithms depend on the fact that some functions are easy to compute given the inputs, but hard to invert given only the output – such functions are known as one-way functions. For example, it is easy to pick two large prime numbers and take their product, but it is difficult to recover those primes given only the product. This property makes it possible to announce the product of the primes to the world as the public key. The public key can then be used to encrypt messages, and the person who knows the prime factors, that is, the secret key, is the only one who can decrypt them. This setup of public and private key pairs works if the person picks primes large enough that it would take impractically long for somebody else to calculate the factors, given what is currently known about number theory. The catch is in that last bit: it is not known whether NSA mathematicians know more than the general public about number theory, and specifically about prime factorization. If so, mathematicians at the NSA might be able to factor larger numbers than is currently assumed feasible, and hence decrypt communications that rely on smaller keys. Given historical evidence that NSA researchers were at times years ahead of their colleagues in the civilian world, e.g., in the development of elliptic curve cryptography, it has become commonplace in discussions about the NSA revelations to extrapolate about the NSA's current capabilities.
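The factoring asymmetry described above can be demonstrated with a toy example. The primes here are tiny and purely illustrative – real RSA-style moduli are thousands of bits, which is precisely why trial division (and, as far as the public knows, everything else) becomes hopeless in the hard direction.

```python
# Easy direction: multiplying two primes.
p, q = 2003, 3011          # the secret factors
n = p * q                  # the public modulus: one multiplication

def factor(n: int):
    """Hard direction: recover the factors from n alone.
    Trial division is fine for this toy n, hopeless for 2048-bit moduli."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

assert factor(n) == (2003, 3011)
```

The whole bet of this kind of asymmetric crypto is that the gap between the two directions stays as wide as the public state of number theory suggests – which is exactly the assumption the NSA's opacity calls into question.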


In our discussions, the opacity of what researchers at the NSA may know led to some remarks about mathematics and how it is currently practiced. There is an imbalance between the “open” science culture in which most mathematicians and cryptographers are avid participants, and the closed scientific culture that the NSA cultivates. The parallel “closed” world that NSA researchers inhabit has access to the “open” research results, but the reverse does not hold. While the NSA may regard this opacity as “necessary” to stay ahead in the national security game, it creates divides among mathematicians and cryptographers. The distrust this divide creates may have negative consequences for keeping alive the open research culture most of these researchers adhere to, a culture that relies on the ideals of “open” participation, collegial respect and collective knowledge creation, with the objective of guaranteeing secure communications for everyone.


One of our participants went a step further and put it as follows: “It is probably the case that you can trust the math, but you should not trust the math”. This remark pointed to the need to take with a grain of salt some of the claims of mathematicians and NSA people, especially given that, at times, mathematics can also function as a communal belief system, and some of these beliefs may change with time.


Our discussion also took a short detour through a possible meta-story: that the NSA is “managing” the revelations to strategically debunk popular belief in cryptography, break up the crypto community, or dismiss aspirations to use technology to circumvent government surveillance. We agreed that it would be important for the communities most affected by the conspiracies surrounding the revelations to take measures to address some of these matters and to avoid greater damage to the community through conspiracy thinking.


Another interesting line of inquiry was the comparison of the different backdoors and their advantages and disadvantages for the NSA as well as for society at large. Members of the information security and cryptography communities have repeatedly spoken out against weakening security for the sake of surveillance, as this would provide backdoors not only to the NSA, but also to other parties with sufficient incentives. While one PRG participant argued that, for example, some of the cryptographic backdoors that were revealed would only make communications susceptible to NSA surveillance and not to others, this was seen to rely on the assumption that the NSA's backdoors would remain secret, hard to discover, and hence secure. However, past cases indicate that this might not always hold true. In the case of DigiNotar, the Certificate Authority based in the Netherlands, it was speculated that the hackers had perhaps been exploiting a pre-existing NSA backdoor. The question was then whether, given the risks of cryptographic, software and hardware backdoors being hijacked by unintended others, it would be “less risky” for society in general if the NSA predominantly used legal backdoors, e.g., asking for data followed by gag orders, as its modus operandi. Even if the latter were preferable from a security point of view, most of us agreed that the current legal and organizational setup provides the NSA with disproportionate powers. The accumulation of such powers in the hands of the NSA is unacceptable given its negative consequences for society in general, be it in the US or elsewhere. We also observed that the feasibility of designing and deploying technology that provides reasonable protection from mass surveillance programs and guarantees secure communications to society in general can be jeopardized even if the NSA and GCHQ mainly relied on intrusive use of legal backdoors.


We covered many more topics, ranging from the role of standards organizations like NIST and the manipulation and sabotage of standard-setting procedures, to the lack of transparency and accountability in the functioning of the FISA courts. One interesting topic was the relationship between the FBI's Going Dark program and the NSA's surveillance programs.

The Going Dark program is an initiative to increase the FBI's wiretapping authority in response to problems the FBI says it faces in implementing wiretaps in the context of new technologies. Juxtaposed with the Snowden revelations, we briefly discussed whether the Going Dark initiative was a public-facing project to legalize the already existing surveillance programs of the NSA.


In terms of moving forward, we briefly considered the development of technologies based on encryption and on principles of technical and organizational decentralization, i.e., avoiding large information collections like those held by Google, Facebook or Microsoft. Some people in the room were confident that, if we were to deploy such technologies and design principles, we would be able to achieve greater protection against surveillance programs like those of the NSA and GCHQ. Others voiced skepticism towards such long-standing proposals, which have rarely materialized successfully, require a dedicated community to keep secure, and often do not scale to the masses. However, this is a larger subject worthy of another session; for the curious who want to go deeper in the meantime, below are some links to articles on the topic by Arvind Narayanan and some of the PRG members.


We thank all participants of the meeting and look forward to the next round of NSA revelations.



– A Critical Look at Decentralized Personal Data Architectures

– What Happened to the Crypto Dream?

– Unlikely Outcomes?


About seda
MCC and ILI Fellow and Member of the ISTC on Social Computing.