• 2018

  • Anomalous keys in Tor relays [PDF] [Abstract, BibTeX]
    In its more than ten years of existence, the Tor network has seen hundreds of thousands of relays come and go. Each relay maintains several RSA keys, amounting to millions of keys, all archived by The Tor Project. In this paper, we analyze 3.7 million RSA public keys of Tor relays. We (i) check if any relays share prime factors or moduli, (ii) identify relays that use non-standard exponents, (iii) characterize malicious relays that we discovered in the first two steps, and (iv) develop a tool that can determine what onion services fell prey to said malicious relays. Our experiments revealed that ten relays shared moduli and 3,557 relays -- almost all part of a research project -- shared prime factors, allowing adversaries to reconstruct private keys. We further discovered 122 relays that used non-standard RSA exponents, presumably in an attempt to attack onion services. By simulating how onion services are positioned in Tor's distributed hash table, we identified four onion services that were targeted by these malicious relays. Our work provides both The Tor Project and onion service operators with tools to identify misconfigured and malicious Tor relays to stop attacks before they pose a threat to Tor users.
        author = {George Kadianakis and Claudia V. Roberts and Laura M. Roberts and Philipp Winter},
        title = {Anomalous keys in {Tor} relays},
        booktitle = {Financial Cryptography and Data Security},
        publisher = {Springer},
        year = {2018},
        url = {},
    George Kadianakis, Claudia V. Roberts, Laura M. Roberts, and Philipp Winter
    Financial Cryptography and Data Security (FC'18) Curaçao, February 2018
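The shared-factor check in step (i) above reduces to computing greatest common divisors between pairs of moduli: two RSA moduli that share a prime have a nontrivial GCD, which immediately factors both. A minimal Python sketch of the idea (pairwise GCD over toy primes as an illustration; at the scale of millions of keys one would use a batch-GCD algorithm instead, and this is not the paper's actual tooling):

```python
import math
from itertools import combinations

def shared_factors(moduli):
    """Return (i, j, p) for every pair of moduli sharing a prime factor p.

    Pairwise GCD is O(n^2) in the number of keys; fine for a demo, but
    millions of keys call for the batch-GCD technique.
    """
    hits = []
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = math.gcd(n1, n2)
        if 1 < g < min(n1, n2):  # nontrivial shared factor
            hits.append((i, j, g))
    return hits

# Toy "RSA" moduli built from small primes; the first two share p.
p, q, r = 10007, 10009, 10037
print(shared_factors([p * q, p * r, 10039 * 10061]))  # [(0, 1, 10007)]
```

Any adversary with the public key archive can run the same computation, which is why shared primes are immediately fatal to the affected relays.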
  • 2017

  • TorPolice: Towards Enforcing Service-Defined Access Policies for Anonymous Communication in the Tor Network [PDF] [Abstract, BibTeX]
    Tor is the most widely used anonymity network, currently serving millions of users each day. However, there is no access control in place for all these users, leaving the network vulnerable to botnet abuse and attacks. For example, criminals frequently use exit relays as stepping stones for attacks, causing service providers to serve CAPTCHAs to exit relay IP addresses or blacklisting them altogether, which leads to severe usability issues for legitimate Tor users. To address this problem, we propose TorPolice, the first privacy-preserving access control framework for Tor. TorPolice enables abuse-plagued service providers such as Yelp to enforce access rules to police and throttle malicious requests coming from Tor while still providing service to legitimate Tor users. Further, TorPolice equips Tor with global access control for relays, enhancing Tor's resilience to botnet abuse. We show that TorPolice preserves the privacy of Tor users, implement a prototype of TorPolice, and perform extensive evaluations to validate our design goals.
        author = {Zhuotao Liu and Yushan Liu and Philipp Winter and Prateek Mittal and Yih-Chun Hu},
        title = {{TorPolice}: Towards Enforcing Service-Defined Access Policies for Anonymous Communication in the {Tor} Network},
        booktitle = {International Conference on Network Protocols},
        publisher = {IEEE},
        year = {2017},
        url = {},
    Zhuotao Liu, Yushan Liu, Philipp Winter, Prateek Mittal, and Yih-Chun Hu
    International Conference on Network Protocols (ICNP'17) Toronto, Canada, October 2017
  • The Effect of DNS on Tor's Anonymity [PDF] [Abstract, BibTeX]
    Previous attacks that link the sender and receiver of traffic in the Tor network ("correlation attacks") have generally relied on analyzing traffic from TCP connections. The TCP connections of a typical client application, however, are often accompanied by DNS requests and responses. This additional traffic presents more opportunities for correlation attacks. This paper quantifies how DNS traffic can make Tor users more vulnerable to correlation attacks. We investigate how incorporating DNS traffic can make existing correlation attacks more powerful and how DNS lookups can leak information to third parties about anonymous communication. We (i) develop a method to identify the DNS resolvers of Tor exit relays; (ii) develop a new set of correlation attacks (DefecTor attacks) that incorporate DNS traffic to improve precision; (iii) analyze the Internet-scale effects of these new attacks on Tor users; and (iv) develop improved methods to evaluate correlation attacks. First, we find that there exist adversaries that can mount DefecTor attacks: for example, Google's DNS resolver observes almost 40% of all DNS requests exiting the Tor network. We also find that DNS requests often traverse ASes that the corresponding TCP connections do not transit, enabling additional ASes to gain information about Tor users' traffic. We then show that an adversary that can mount a DefecTor attack can often determine the website that a Tor user is visiting with perfect precision, particularly for less popular websites where the set of DNS names associated with that website may be unique to the site. We also use the Tor Path Simulator (TorPS) in combination with traceroute data from vantage points co-located with Tor exit relays to estimate the power of AS-level adversaries that might mount DefecTor attacks in practice.
        author = {Benjamin Greschbach and Tobias Pulls and Laura M. Roberts and Philipp Winter and Nick Feamster},
        title = {The Effect of {DNS} on {Tor}'s Anonymity},
        booktitle = {Network and Distributed System Security Symposium},
        publisher = {The Internet Society},
        year = {2017},
        url = {},
    Benjamin Greschbach, Tobias Pulls, Laura M. Roberts, Philipp Winter, and Nick Feamster
    Network and Distributed System Security Symposium (NDSS'17) San Diego, CA, February 2017
  • 2016

  • Online Tracking: A 1-million-site Measurement and Analysis [PDF] [Abstract, BibTeX]
    We present the largest and most detailed measurement of online tracking conducted to date, based on a crawl of the top 1 million websites. We make 15 types of measurements on each site, including stateful (cookie-based) and stateless (fingerprinting-based) tracking, the effect of browser privacy tools, and the exchange of tracking data between different sites ("cookie syncing"). Our findings include multiple sophisticated fingerprinting techniques never before measured in the wild.

    This measurement is made possible by our open-source web privacy measurement tool, OpenWPM, which uses an automated version of a full-fledged consumer browser. It supports parallelism for speed and scale, automatic recovery from failures of the underlying browser, and comprehensive browser instrumentation. We demonstrate our platform's strength in enabling researchers to rapidly detect, quantify, and characterize emerging online tracking behaviors.
        author    = "Steven Englehardt and Arvind Narayanan",
        title     = "{Online tracking: A 1-million-site measurement and analysis}",
        booktitle = {Proceedings of ACM CCS 2016},
        year      = "2016",
    Steven Englehardt and Arvind Narayanan
    23rd ACM Conference on Computer and Communications Security (CCS 2016) Vienna, Austria, October 2016. [Data and Results]
  • Identifying and characterizing Sybils in the Tor network [PDF] [Abstract, BibTeX]
    Being a volunteer-run, distributed anonymity network, Tor is vulnerable to Sybil attacks. Little is known about real-world Sybils in the Tor network, and we lack practical tools and methods to expose Sybil attacks. In this work, we develop sybilhunter, a system for detecting Sybil relays based on their appearance, such as configuration; and behavior, such as uptime sequences. We used sybilhunter's diverse analysis techniques to analyze nine years of archived Tor network data, providing us with new insights into the operation of real-world attackers. Our findings include diverse Sybils, ranging from botnets, to academic research, and relays that hijacked Bitcoin transactions. Our work shows that existing Sybil defenses do not apply to Tor, it delivers insights into real-world attacks, and provides practical tools to uncover and characterize Sybils, making the network safer for its users.
        author = {Philipp Winter and Roya Ensafi and Karsten Loesing and Nick Feamster},
        title = {Identifying and characterizing {S}ybils in the {T}or network},
        booktitle = {USENIX Security},
        publisher = {USENIX},
        year = {2016},
        url = {},
    Philipp Winter, Roya Ensafi, Karsten Loesing, and Nick Feamster
    25th USENIX Security Symposium (Sec'16) Austin, TX, August 2016
  • 2015

  • Can Censorship Measurements Be Safe(r)? [PDF] [Abstract, BibTeX]
    Understanding censorship requires performing widespread, continuous measurements "on the ground". Yet, measuring censorship is potentially dangerous, due to the threat of retaliation against citizens who perform measurements. We must balance measurement accuracy, reliability, and scalability with user safety, which leads us to the question: Can we design censorship measurements that mitigate risk to the users who consent to perform them? Although it is almost certainly impossible to eliminate risk (or even determine if we have succeeded in doing so), we posit that we may be able to reduce risk with measurement techniques that are difficult to observe or distinguish from innocuous network activity. We observe that surveillance and censorship systems have different goals, and thus certain types of measurement techniques may be able to characterize a censorship system without triggering a surveillance system. We design and implement several techniques for measuring censorship that controlled tests suggest might be less risky than existing methods; we also highlight potential pitfalls, limitations, and avenues for future work.
        author = "Jones, Ben and Feamster, Nick",
        title = "Can Censorship Measurements Be Safe(r)?",
        booktitle = "Proceedings of the 14th ACM Workshop on Hot Topics in Networks",
        year = 2015,
        location = "Philadelphia, PA, USA",
    Ben Jones and Nick Feamster
    ACM HotNets 2015 (HotNets '15) Philadelphia, PA, August 2015
  • Examining How the Great Firewall Discovers Hidden Circumvention Servers [PDF] [Abstract, BibTeX]
    Recently, the operators of the national censorship infrastructure of China began to employ "active probing" to detect and block the use of privacy tools. This probing works by passively monitoring the network for suspicious traffic, then actively probing the corresponding servers, and blocking any that are determined to run circumvention servers such as Tor. We draw upon multiple forms of measurements, some spanning years, to illuminate the nature of this probing. We identify the different types of probing, develop fingerprinting techniques to infer the physical structure of the system, localize the sensors that trigger probing—showing that they differ from the "Great Firewall" infrastructure—and assess probing's efficacy in blocking different versions of Tor. We conclude with a discussion of the implications for designing circumvention servers that resist such probing mechanisms.
        title = "Examining How the Great Firewall Discovers Hidden Circumvention Servers",
        author = "Roya Ensafi and David Fifield and Philipp Winter and Nick Feamster and Nicholas Weaver and Vern Paxson",
        booktitle = "Internet Measurement Conference",
        publisher = "ACM",
        year = 2015,
    Roya Ensafi, David Fifield, Philipp Winter, Nick Feamster, Nicholas Weaver, and Vern Paxson
    Proc. Internet Measurement Conference (IMC'15) Tokyo, Japan, October 2015
  • Ethical Concerns for Censorship Measurement [PDF] [Abstract, BibTeX]
    Based on our experiences in measuring censorship in several projects, we frame various ethical questions and challenges that we have encountered. We offer this short document to highlight open questions that we view as important to consider when establishing ethical norms for censorship measurement.
        title = "Ethical Concerns for Censorship Measurement",
        author = "Jones, Ben and Ensafi, Roya and Feamster, Nick and Paxson, Vern and Weaver, Nick",
        booktitle = "Proceedings of the 2015 {ACM} {SIGCOMM} Workshop on Ethics in Networked {S}ystems Research",
        year = 2015,
    Ben Jones, Roya Ensafi, Nick Feamster, Vern Paxson, and Nick Weaver
    SIGCOMM NS Ethics 2015 (NS Ethics '15) London, UK August 2015
  • Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests [PDF] [Abstract, BibTeX]
    Despite the pervasiveness of Internet censorship, we have scant data on its extent, mechanisms, and evolution. Measuring censorship is challenging: it requires continual measurement of reachability to many target sites from diverse vantage points. Amassing suitable vantage points for longitudinal measurement is difficult; existing systems have achieved only small, short-lived deployments. We observe, however, that most Internet users access content via Web browsers, and the very nature of Web site design allows browsers to make requests to domains with different origins than the main Web page. We present Encore, a system that harnesses cross-origin requests to measure Web filtering from a diverse set of vantage points without requiring users to install custom software, enabling longitudinal measurements from many vantage points. We explain how Encore induces Web clients to perform cross-origin requests that measure Web filtering, design a distributed platform for scheduling and collecting these measurements, show the feasibility of a global-scale deployment with a pilot study and an analysis of potentially censored Web content, identify several cases of filtering in six months of measurements, and discuss ethical concerns that would arise with widespread deployment.
         author = {Burnett, Sam and Feamster, Nick},
         title = {Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests},
         booktitle = {Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication},
         series = {SIGCOMM '15},
         year = {2015},
         isbn = {978-1-4503-3542-3},
         location = {London, United Kingdom},
         pages = {653--667},
         numpages = {15},
         url = {},
         doi = {10.1145/2785956.2787485},
         acmid = {2787485},
         publisher = {ACM},
         address = {New York, NY, USA},
         keywords = {network measurement, web censorship, web security},
    Sam Burnett and Nick Feamster
    ACM SIGCOMM (ACM SIGCOMM '15) London, UK August 2015
  • ASwatch: An AS Reputation System to Expose Bulletproof Hosting ASes [PDF] [Abstract, BibTeX]
    Bulletproof hosting Autonomous Systems (ASes)—malicious ASes fully dedicated to supporting cybercrime—provide freedom and resources for a cyber-criminal to operate. Their services include hosting a wide range of illegal content, botnet C&C servers, and other malicious resources. Thousands of new ASes are registered every year, many of which are often used exclusively to facilitate cybercrime. A natural approach to squelching bulletproof hosting ASes is to develop a reputation system that can identify them for takedown by law enforcement and as input to other attack detection systems (e.g., spam filters, botnet detection systems). Unfortunately, current AS reputation systems rely primarily on data-plane monitoring of malicious activity from IP addresses (and thus can only detect malicious ASes after attacks are underway), and are not able to distinguish between malicious and legitimate but abused ASes. As a complement to these systems, in this paper, we explore a fundamentally different approach to establishing AS reputation. We present ASwatch, a system that identifies malicious ASes using exclusively the control-plane (i.e., routing) behavior of ASes. ASwatch's design is based on the intuition that, in an attempt to evade possible detection and remediation efforts, malicious ASes exhibit "agile" control plane behavior (e.g., short-lived routes, aggressive re-wiring). We evaluate our system on known malicious ASes; our results show that ASwatch detects up to 93% of malicious ASes with a 5% false positive rate, which is reasonable to effectively complement existing defense systems.
          author = {Konte, Maria and Perdisci, Roberto and Feamster, Nick},
          title = {ASwatch: An AS Reputation System to Expose Bulletproof Hosting ASes},
          journal = {SIGCOMM Comput. Commun. Rev.},
          issue_date = {October 2015},
          volume = {45},
          number = {5},
          month = aug,
          year = {2015},
          issn = {0146-4833},
          pages = {625--638},
          numpages = {14},
          url = {},
          doi = {10.1145/2829988.2787494},
          acmid = {2787494},
          publisher = {ACM},
          address = {New York, NY, USA},
          keywords = {AS reputation, bulletproof hosting, malicious networks},
    Maria Konte, Roberto Perdisci, and Nick Feamster
    ACM SIGCOMM (ACM SIGCOMM '15) London, UK August 2015
  • CONIKS: Bringing Key Transparency to End Users [PDF] [Abstract, BibTeX]
    We present CONIKS, an end-user key verification service capable of integration in end-to-end encrypted communication systems. CONIKS builds on transparency log proposals for web server certificates but solves several new challenges specific to key verification for end users. CONIKS obviates the need for global third-party monitors and enables users to efficiently monitor their own key bindings for consistency, downloading less than 20 kB per day to do so even for a provider with billions of users. CONIKS users and providers can collectively audit providers for non-equivocation, and this requires downloading a constant 2.5 kB per provider per day. Additionally, CONIKS preserves the level of privacy offered by today's major communication services, hiding the list of usernames present and even allowing providers to conceal the total number of users in the system.
        title = "CONIKS: Bringing Key Transparency to End Users",
        author = "Melara, Marcela S. and Blankstein, Aaron and Bonneau, Joseph and Felten, Edward W. and Freedman, Michael J.",
        booktitle = "Proc.\ 24th {USENIX} {S}ecurity {S}ymposium",
        location = "Washington, D.C.",
        month = Aug,
        year = 2015,
    Marcela S. Melara, Aaron Blankstein, Joseph Bonneau, Edward W. Felten, and Michael J. Freedman
    Proc. 24th USENIX Security Symposium (Sec '15) Washington, D.C., August 2015
  • An Analysis of China's Great Cannon [PDF] [Abstract, BibTeX]
    On March 16th, 2015, the Chinese censorship apparatus employed a new tool, the "Great Cannon", to engineer a denial-of-service attack on GreatFire.org, an organization dedicated to resisting China's censorship. We present a technical analysis of the attack and what it reveals about the Great Cannon's workings, underscoring that in essence it constitutes a selective nation-state Man-in-the-Middle attack tool. Although sharing some code similarities and network locations with the Great Firewall, the Great Cannon is a distinct tool, designed to compromise foreign visitors to Chinese sites. We identify the Great Cannon's operational behavior, localize it in the network topology, verify its distinctive side-channel, and attribute the system as likely operated by the Chinese government. We also discuss the substantial policy implications raised by its use, including the potential imposition on any user whose browser might visit (even inadvertently) a Chinese web site.
    @inproceedings{Marczak2015b,
        title = "An Analysis of China's Great Cannon",
        author = "Bill Marczak and Nicholas Weaver and Jakub Dalek and Roya Ensafi and David Fifield and Sarah McKune and Arn Rey and John Scott-Railton and Ron Deibert and Vern Paxson",
        booktitle = "Free and Open Communications on the Internet (FOCI)",
        publisher = "USENIX",
        year = 2015,
    }
    Bill Marczak, Nicholas Weaver, Jakub Dalek, Roya Ensafi, David Fifield, Sarah McKune, Arn Rey, John Scott-Railton, Ron Deibert, and Vern Paxson
    Proc. USENIX Workshop on Free and Open Communications on the Internet (FOCI '15)
  • An empirical study of Namecoin and lessons for decentralized namespace design [PDF] [Abstract, BibTeX]
    Secure decentralized namespaces have recently become possible due to cryptocurrency technology. They enable a censorship-resistant domain name system outside the control of any single entity, among other applications. Namecoin, a fork of Bitcoin, is the most prominent example.
    @article{kalodner2015empirical,
        title = {An empirical study of Namecoin and lessons for decentralized namespace design},
        author = {Kalodner, Harry and Carlsten, Miles and Ellenbogen, Paul and Bonneau, Joseph and Narayanan, Arvind},
        year = {2015},
    }
    Harry Kalodner, Miles Carlsten, Paul Ellenbogen, Joseph Bonneau, and Arvind Narayanan
    14th Workshop on the Economics of Information Security (WEIS 2015) Delft, Netherlands, June 2015
  • Cookies That Give You Away: The Surveillance Implications of Web Tracking [PDF] [Abstract, BibTeX]
    We study the ability of a passive eavesdropper to leverage "third-party" HTTP tracking cookies for mass surveillance. If two web pages embed the same tracker which tags the browser with a unique cookie, then the adversary can link visits to those pages from the same user (i.e., browser instance) even if the user's IP address varies. Further, many popular websites leak a logged-in user's identity to an eavesdropper in unencrypted traffic.

    To evaluate the effectiveness of our attack, we introduce a methodology that combines web measurement and network measurement. Using OpenWPM, our web privacy measurement platform, we simulate users browsing the web and find that the adversary can reconstruct 62-73% of a typical user's browsing history. We then analyze the effect of the physical location of the wiretap as well as legal restrictions such as the NSA's "one-end foreign" rule. Using measurement units in various locations - Asia, Europe, and the United States - we show that foreign users are highly vulnerable to the NSA's dragnet surveillance due to the concentration of third-party trackers in the U.S. Finally, we find that some browser-based privacy tools mitigate the attack while others are largely ineffective.
    @inproceedings{englehardt2015cookies,
        title = "{{Cookies That Give You Away: The Surveillance Implications of Web Tracking}}",
        booktitle = "World Wide Web Conference 2015",
        author = "Steven Englehardt and Dillon Reisman and Christian Eubank and Peter Zimmerman and Jonathan Mayer and Arvind Narayanan and Edward W. Felten",
        location = "Florence, Italy",
        year = 2015,
    Steven Englehardt, Dillon Reisman, Christian Eubank, Peter Zimmerman, Jonathan Mayer, Arvind Narayanan, and Edward W. Felten
    24th International World Wide Web Conference (WWW 2015) Florence, Italy, May 2015
  • SoK: Bitcoin and Second-Generation Cryptocurrencies [To Appear] [Abstract, BibTeX]
    Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value, even while the body of published research and security analysis justifying the system's design was negligible. In the ensuing years, a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. This interest has been complemented by a large and vibrant community of open-source developers who steward the system, while proposing and deploying numerous modifications and extensions.

    We provide the first systematic exposition of the second generation of cryptocurrencies, including Bitcoin and the many alternatives that have been implemented as alternate protocols or "altcoins." Drawing from a scattered body of knowledge, we put forward three key components of Bitcoin's design that can be decoupled, enabling a more insightful analysis of Bitcoin's properties and its proposed modifications and extensions. We contextualize the literature into five central properties capturing blockchain stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We focus on anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of proposals for enhancing unlinkability. Finally, we provide new insights on what we term disintermediation protocols, which remove the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparative cost analysis.
      title="{{SoK}: Bitcoin and second-generation cryptocurrencies}",
      booktitle="IEEE Security and Privacy 2015",
      author="Joseph Bonneau and Andrew Miller and Jeremy Clark and Arvind Narayanan and Joshua A. Kroll and Edward W. Felten",
      location="San Jose, CA, USA",
    Joseph Bonneau, Andrew Miller, Jeremy Clark, Arvind Narayanan, Joshua A. Kroll, and Edward W. Felten
    36th IEEE Symposium on Security and Privacy (Oakland 2015) San Jose, CA, May 2015
  • A Precautionary Approach to Big Data Privacy [PDF] [Abstract, BibTeX]
    Once released to the public, data cannot be taken back. As time passes, data analytic techniques improve and additional datasets become public that can reveal information about the original data. It follows that released data will get increasingly vulnerable to reidentification — unless methods with provable privacy properties are used for the data release. Due to the ad hoc deidentification methods applied to currently released datasets, the chances of reidentification depend highly on the progress of reidentification tools and the auxiliary datasets available to an adversary. The probability of a privacy violation in the future is essentially unknowable. In general, a precautionary approach deals with uncertain risk by placing the burden of proof that an action is not harmful on the person taking the action. Here, we argue for a weak version of the precautionary approach, in which the idea that the burden of proof falls on data releasers guides policies that incentivize them not to default to full, public releases of datasets using ad hoc deidentification methods.
          title={A Precautionary Approach to Big Data Privacy},
          author={Narayanan, Arvind and Huey, Joanna and Felten, Edward W.},
  • Upgrading HTTPS in Mid-Air: An Empirical Study of Strict Transport Security and Key Pinning [PDF] [Abstract, BibTeX]
    We have conducted the first in-depth empirical study of two important new web security features: strict transport security (HSTS) and public-key pinning. Both have been added to the web platform to harden HTTPS, the prevailing standard for secure web browsing. While HSTS is further along, both features still have very limited deployment at a few large websites and a long tail of small, security-conscious sites. We find evidence that many developers do not completely understand these features, with a substantial portion using them in invalid or illogical ways. The majority of sites we observed trying to set an HSTS header did so with basic errors that significantly undermine the security this feature is meant to provide. We also identify several subtle but important new pitfalls in deploying these features in practice. For example, the majority of pinned domains undermined the security benefits by loading non-pinned resources with the ability to hijack the page. A substantial portion of HSTS domains and nearly all pinned domains leaked cookie values, including login cookies, due to the poorly-understood interaction between HTTP cookies and the same-origin policy. Our findings highlight that the web platform, as well as modern web sites, are large and complicated enough to make even conceptually simple security upgrades challenging to deploy in practice.
      title="{Upgrading HTTPS in Mid-Air: An Empirical Study of Strict Transport Security and Key Pinning}",
      booktitle="NDSS '15: The 2015 Network and Distributed System Security Symposium",
      author="Michael Kranch and Joseph Bonneau",
      location="San Diego, CA, USA",
    Michael Kranch and Joseph Bonneau
    Network and Distributed System Security Symposium 2015 (NDSS 2015) San Diego, CA, February 2015
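The "basic errors" in HSTS deployment that the paper above describes can be illustrated with a small header checker. The sketch below is illustrative only: it flags a few common mistakes (missing or malformed max-age, a value too short to be useful) rather than implementing the full RFC 6797 grammar, and the 30-day threshold is an assumption for the example, not the paper's methodology:

```python
def check_hsts(header):
    """Flag common errors in a Strict-Transport-Security header value."""
    problems = []
    directives = [d.strip().lower() for d in header.split(";") if d.strip()]
    max_age = None
    for d in directives:
        if d.startswith("max-age"):
            try:
                max_age = int(d.split("=", 1)[1].strip('" '))
            except (IndexError, ValueError):
                problems.append("malformed max-age")
    if max_age is None:
        problems.append("missing max-age (header has no effect)")
    elif max_age == 0:
        problems.append("max-age=0 disables HSTS")
    elif max_age < 86400 * 30:
        problems.append("very short max-age weakens protection")
    return problems

print(check_hsts("max-age=31536000; includeSubDomains"))  # []
print(check_hsts("includeSubDomains"))  # missing max-age
```

A header that omits max-age entirely is silently ignored by browsers, which is one reason misdeployments of this kind can go unnoticed by site operators.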
  • 2014

  • The Web Never Forgets: Persistent tracking mechanisms in the wild [PDF] [Abstract, BibTeX]
    We present the first large-scale studies of three advanced web tracking mechanisms - canvas fingerprinting, evercookies and use of "cookie syncing" in conjunction with evercookies. Canvas fingerprinting, a recently developed form of browser fingerprinting, has not previously been reported in the wild; our results show that over 5% of the top 100,000 websites employ it. We then present the first automated study of evercookies and respawning and the discovery of a new evercookie vector, IndexedDB. Turning to cookie syncing, we present novel techniques for detection and analysing ID flows and we quantify the amplification of privacy-intrusive tracking practices due to cookie syncing. Our evaluation of the defensive techniques used by privacy-aware users finds that there exist subtle pitfalls - such as failing to clear state on multiple browsers at once - in which a single lapse in judgement can shatter privacy defenses. This suggests that even sophisticated users face great difficulties in evading tracking techniques.
        title = "The Web never forgets: Persistent tracking mechanisms in the wild",
        author = "Acar, Gunes and Eubank, Christian and Englehardt, Steven and Juarez, Marc and Narayanan, Arvind and Diaz, Claudia",
        booktitle = "21st ACM Conference on Computer and Communications Security",
        location = "Scottsdale, Arizona",
        month = Nov,
        year = 2014,
    Gunes Acar, Christian Eubank, Steven Englehardt, Marc Juarez, Arvind Narayanan, and Claudia Diaz
    21st ACM Conference on Computer and Communications Security (ACM CCS 2014) Scottsdale, Arizona, November 2014
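Cookie syncing, as studied in the paper above, leaves a detectable trace: the same identifier value appearing in requests sent to different tracking domains. A crude Python sketch of that detection idea (the example hostnames, length cutoff, and character-class heuristic are illustrative assumptions, not the paper's actual pipeline):

```python
import re
from urllib.parse import urlparse, parse_qsl

def find_synced_ids(requests, min_len=8):
    """Map ID-looking query values to the set of hostnames that received them.

    A long token appearing in requests to two different trackers suggests
    an identifier being shared; real analyses also inspect cookie values,
    URL paths, and various encodings of the ID.
    """
    seen = {}
    for url in requests:
        parsed = urlparse(url)
        for _, value in parse_qsl(parsed.query):
            if len(value) >= min_len and re.fullmatch(r"[A-Za-z0-9_-]+", value):
                seen.setdefault(value, set()).add(parsed.hostname)
    # Keep only tokens observed at more than one domain.
    return {v: d for v, d in seen.items() if len(d) > 1}

reqs = [
    "http://tracker-a.example/pixel?uid=Zx9kQ2mT41",
    "http://tracker-b.example/sync?partner_id=Zx9kQ2mT41",
    "http://tracker-a.example/pixel?uid=other00001",
]
print(find_synced_ids(reqs))  # the shared token maps to both tracker domains
```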
  • Efficiently Auditing Multi-Level Elections [PDF] [Abstract, BibTeX]
    In a multi-level election, voters are divided into groups, an election is held within each group, and some deterministic procedure is used to combine the group results to determine the overall election result. Examples of multi-level elections include U.S. presidential elections and some parliamentary elections (such as those with regional groupings of voters). The results of such an election can hinge on a few votes in one group, while being insensitive to large shifts within other groups. These disparities create opportunities to focus election integrity efforts in the places where they have the highest leverage. We consider how to improve the efficiency of post-election audits, such as those that compare paper ballots to corresponding electronic records, in multi-level elections. We evaluate our proposed solutions using data from past elections.
        title = "{E}fficiently {A}uditing {M}ulti-{L}evel {E}lections",
        author = "Joshua A. Kroll and Edward W. Felten and J. Alex Halderman",
        booktitle = "Sixth International Conference on Electronic Voting",
        location = "Lochau, Austria",
        month = Oct,
        year = 2014,
    Joshua A. Kroll, Edward W. Felten, and J. Alex Halderman
    6th International Conference on Electronic Voting (EVOTE 2014) Lochau, Austria, October 2014
  • Cognitive Disconnect: Understanding Facebook Connect Login Permissions [PDF] [Abstract, BibTeX]
    We study Facebook Connect's permissions system using crawling, experimentation, and user surveys. We find several areas in which it works differently than many users and developers expect. More permissions can be granted than developers intend. In particular, permissions that allow a site to post to the user's profile are granted on an all-or-nothing basis. While users generally understand what data sites can read from their profile, they generally do not understand the full extent of what sites can post. In the case of write permissions, we show that user expectations are influenced by the identity of the requesting site although this has no impact on what is actually enforced. We also find that users generally do not understand the way Facebook Connect permissions interact with Facebook's privacy settings. Our results suggest that users understand detailed, granular messages better than those that are broad and vague.
        title = "{Cognitive Disconnect: Understanding Facebook Connect Login Permissions}",
        author = "Nicky Robinson and Joseph Bonneau",
        booktitle = "ACM Conference on Online Social Networks",
        location = "Dublin, Ireland",
        month = Oct,
        year = 2014,
    Nicky Robinson and Joseph Bonneau
    ACM Conference on Online Social Networks 2014 (COSN 2014) Dublin, Ireland, October 2014
  • Efficient Set Intersection with Simulation-Based Security [Publisher Link, PDF] [Abstract, BibTeX]
    We consider the problem of computing the intersection of private datasets of two parties, where the datasets contain lists of elements taken from a large domain. This problem has many applications for online collaboration. In this work, we present protocols based on the use of homomorphic encryption and different hashing schemes for both the semi-honest and malicious environments. The protocol for the semi-honest environment is secure in the standard model, while the protocol for the malicious environment is secure in the random oracle model. Our protocols obtain linear communication and computation overhead. We further implement different variants of our semi-honest protocol. Our experiments show that the asymptotic overhead of the protocol is affected by different constants. (In particular, the degree of the polynomials evaluated by the protocol matters less than the number of polynomials that are evaluated.) As a result, the protocol variant with the best asymptotic overhead is not necessarily preferable for inputs of reasonable size.
        title = "Efficient Set Intersection with Simulation-Based Security",
        author = "Freedman, Michael J. and Hazay, Carmit and Nissim, Kobbi and Pinkas, Benny",
        journal = "Journal of Cryptology",
        year =  2014,
        month = Oct,
        pages = "1-41",
        issn = "0933-2790",
        doi = "10.1007/s00145-014-9190-0",
        url = "",
    Michael J. Freedman, Carmit Hazay, Kobbi Nissim, and Benny Pinkas
    IACR Journal of Cryptology, October 2014
  • Towards reliable storage of 56-bit secrets in human memory [PDF] [Abstract, BibTeX]
    Challenging the conventional wisdom that users cannot remember cryptographically-strong secrets, we test the hypothesis that users can learn randomly-assigned 56-bit codes (encoded as either 6 words or 12 characters) through spaced repetition. We asked remote research participants to perform a distractor task that required logging into a website 90 times, over up to two weeks, with a password of their choosing. After they entered their chosen password correctly we displayed a short code (4 letters or 2 words, 18.8 bits) that we required them to type. For subsequent logins we added an increasing delay prior to displaying the code, which participants could avoid by typing the code from memory. As participants learned, we added two more codes to comprise a 56.4-bit secret. Overall, 94% of participants eventually typed their entire secret from memory, learning it after a median of 36 logins. The learning component of our system added a median delay of just 6.9 s per login and a total of less than 12 minutes over an average of ten days. 88% were able to recall their codes exactly when asked at least three days later, with only 21% reporting having written their secret down. As one participant wrote with surprise, "the words are branded into my brain."
        title = "{Towards reliable storage of 56-bit secrets in human memory}",
        author = "Joseph Bonneau and Stuart Schechter",
        booktitle = "USENIX Security 2014",
        location = "San Diego, USA",
        month = Aug,
        year = 2014,
    Joseph Bonneau and Stuart Schechter
    USENIX Security 2014 (USENIX 2014) San Diego, USA, August 2014
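The entropy figures in the abstract above can be checked with a short sketch (the 676-word list size is an illustrative assumption chosen to match the 4-letter encoding, not necessarily the paper's actual wordlist):

```python
import math

def bits(alternatives: int, count: int) -> float:
    # Entropy of `count` independent uniform draws from `alternatives` choices.
    return count * math.log2(alternatives)

letters = bits(26, 4)   # one 4-letter code: 26^4 possibilities, ~18.8 bits
words = bits(676, 2)    # a 2-word code from a 676-word list carries the same entropy
total = 3 * letters     # three such codes comprise the full ~56.4-bit secret
```

Each code contributes about 18.8 bits, so three of them reach the 56.4 bits reported in the abstract.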
  • Security Audit of Safeplug "Tor in a Box" [PDF] [Abstract, BibTeX]
    We present the first public third-party security audit of Pogoplug's Safeplug device, which markets "complete security and anonymity online" by using Tor technology to protect users' IP addresses. We examine the hardware, software, and network behavior of the Safeplug device, as well as the user experience in comparison to other forms of web browsing. Although the Safeplug appears to use Tor as advertised, users may still be identified in ways they may not expect. Furthermore, an engineering vulnerability in how the Safeplug accepts settings changes would allow an adversary internal or external to a user's home network to silently disable Tor or modify other Safeplug settings, which completely invalidates the security claims of the device. Beyond this problem, the user experience challenges of this type of device make it inferior to the existing gold standard for anonymous browsing: the Tor Browser Bundle.
        title={Security Audit of Safeplug "Tor in a Box"},
        author={Edmundson, Anne and Kornfeld Simpson, Anna and Kroll, Joshua A. and Felten, Edward W.},
        booktitle={4th USENIX Workshop on Free and Open Communications on the Internet (FOCI 14)},
        organization={USENIX Association},
    Anne Edmundson, Anna Kornfeld Simpson, Joshua A. Kroll, and Edward W. Felten
    4th USENIX Workshop on Free and Open Communications on the Internet (FOCI 2014) San Diego, August 2014
  • Portable Software Fault Isolation [PDF] [Abstract, BibTeX]
    We present a new technique for architecture portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert and leveraging CompCert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.
        title = "{P}ortable {S}oftware {F}ault {I}solation",
        author = "Joshua A. Kroll and Gordon Stewart and Andrew W. Appel",
        booktitle = "{C}omputer {S}ecurity {F}oundations {S}ymposium",
        location = "Vienna, Austria",
        month = Jul,
        year = 2014,
    Joshua A. Kroll, Gordon Stewart, and Andrew W. Appel
    Computer Security Foundations Symposium 2014 (CSF 2014) Vienna, Austria, July 2014
  • Privacy concerns of implicit secondary factors for web authentication [PDF] [Abstract, BibTeX]
    Passwords remain the dominant mechanism for web authentication despite their well-known weaknesses. Economics makes them very difficult to replace [2], but they can be strengthened by analyzing a number of implicit secondary factors [5, 6]. Potential implicit factors include: 1) User input dynamics including keystrokes [7], touch-screen swipes [4] and mouse movement [9], 2) IP address and geolocation [1], 3) Time of login, 4) Browser information such as user-agent and fingerprint. These implicit factors can help transform authentication from a binary decision problem (based on passwords alone) into a classification problem with a spectrum of possible decisions. For example, unusual values for implicit factors can be used as an indicator to detect merely suspicious logins for which additional explicit authentication actions (such as sending an SMS code) can be taken. Alternately, known values for implicit factors can be used as an indicator that it is safe to relax normal rate-limiting constraints and avoid frustrating users by locking them out due to typos (or worse, requiring password resets).
        title="{Privacy concerns of implicit secondary factors for web authentication}",
        booktitle="WAY 2014: Who are you?! Adventures in Authentication Workshop",
        author="Joseph Bonneau and Ed Felten and Prateek Mittal and Arvind Narayanan",
        location="Menlo Park, CA, USA",
    Joseph Bonneau, Ed Felten, Prateek Mittal, and Arvind Narayanan
    Who are you?! Adventures in Authentication Workshop (WAY 2014) Menlo Park, CA, July 2014
  • No silver bullet: De-identification still doesn't work [PDF] [Abstract, BibTeX]
    Paul Ohm's 2009 article "Broken Promises of Privacy" spurred a debate in legal and policy circles on the appropriate response to computer science research on reidentification. In this debate, the empirical research has often been misunderstood or misrepresented. A new report by Ann Cavoukian and Daniel Castro is full of such inaccuracies, despite its claims of "setting the record straight."

    We point out eight of our most serious points of disagreement with Cavoukian and Castro. The thrust of our arguments is that (i) there is no evidence that deidentification works either in theory or in practice and (ii) attempts to quantify its efficacy are unscientific and promote a false sense of security by assuming unrealistic, artificially constrained models of what an adversary might do.
      title = "No silver bullet: De-identification still doesn’t work",
      author = "Narayanan, Arvind and Felten, Ed",
      booktitle = "Manuscript, 2014",
      month = Jul,
      year = 2014,
    Arvind Narayanan and Ed Felten
    Manuscript, July 2014
  • On Decentralizing Prediction Markets and Order Books [PDF] [Abstract, BibTeX]
    We propose techniques for decentralizing prediction markets and order books, utilizing Bitcoin's security model and consensus mechanism. Decentralization of prediction markets offers several key advantages over a centralized market: no single entity governs over the market, all transactions are transparent in the block chain, and anybody can participate pseudonymously to either open a new market or place bets in an existing one. We provide trust agility: each market has its own specified arbiter and users can choose to interact in markets that rely on the arbiters they trust. We also provide a transparent, decentralized order book that enables order execution on the block chain in the presence of potentially malicious miners.
        title = "{O}n {D}ecentralizing {P}rediction {M}arkets and {O}rder {B}ooks",
        author = "Jeremy Clark and Joseph Bonneau and Edward W. Felten and Joshua A. Kroll and Andrew Miller and Arvind Narayanan",
        booktitle = "{W}orkshop on the {E}conomics of {I}nformation {S}ecurity",
        location = "State College, PA",
        month = Jun,
        year = 2014,
    Jeremy Clark, Joseph Bonneau, Edward W. Felten, Joshua A. Kroll, Andrew Miller, and Arvind Narayanan
    Workshop on the Economics of Information Security 2014 (WEIS 2014) State College, PA, June 2014
  • Automating Isolation and Least Privilege in Web Services [PDF] [Abstract, BibTeX]
    In many client-facing applications, a vulnerability in any part can compromise the entire application. This paper describes the design and implementation of Passe, a system that protects a data store from unintended data leaks and unauthorized writes even in the face of application compromise. Passe automatically splits (previously shared-memory-space) applications into sandboxed processes. Passe limits communication between those components and the types of accesses each component can make to shared storage, such as a backend database. In order to limit components to their least privilege, Passe uses dynamic analysis on developer-supplied end-to-end test cases to learn data and control-flow relationships between database queries and previous query results, and it then strongly enforces those relationships. Our prototype of Passe acts as a drop-in replacement for the Django web framework. By running eleven unmodified, off-the-shelf applications in Passe, we demonstrate its ability to provide strong security guarantees--Passe correctly enforced 96% of the applications' policies--with little additional overhead. Additionally, in the web-specific setting of the prototype, we also mitigate the cross-component effects of cross-site scripting (XSS) attacks by combining browser HTML5 sandboxing techniques with our automatic component separation.
        title = "Automating Isolation and Least Privilege in Web Services",
        author = "Aaron Blankstein and Michael J. Freedman",
        booktitle = "35th IEEE Symposium on Security and Privacy",
        location = "San Jose, California",
        month = May,
        year = 2014,
    Aaron Blankstein and Michael J. Freedman
    35th IEEE Symposium on Security and Privacy (IEEE Security and Privacy 2014) San Jose, California, May 2014
  • Routes for breaching and protecting genetic privacy [PDF] [Abstract, BibTeX]
    We are entering an era of ubiquitous genetic information for research, clinical care and personal curiosity. Sharing these data sets is vital for progress in biomedical research. However, a growing concern is the ability to protect the genetic privacy of the data originators. Here, we present an overview of genetic privacy breaching strategies. We outline the principles of each technique, indicate the underlying assumptions, and assess their technological complexity and maturation. We then review potential mitigation methods for privacy-preserving dissemination of sensitive data and highlight different cases that are relevant to genetic applications.
      title = "Routes for breaching and protecting genetic privacy",
      author = "Erlich, Yaniv and Narayanan, Arvind",
      booktitle = "Nature Reviews Genetics, 2014",
      month = May,
      year = 2014,
    Yaniv Erlich and Arvind Narayanan
    Nature Reviews Genetics, May 2014
  • Secure Protocols for Accountable Warrant Execution [PDF] [Abstract, BibTeX]
    We describe cryptographic protocols for secure execution of warrants or legal orders authorizing access to data held by private parties. Using cryptography enables a better combination of security, privacy, and accountability properties than would otherwise be possible. We describe a series of protocols, based on different assumptions about trust and technical sophistication of the parties, and making use of well-studied cryptographic tools. We report benchmark results from our prototype implementation of the tools involved in one such protocol, and show that the protocol's entire computational cost is easily feasible even for very large data sets, such as "cloud" software service or telecommunications databases comprising billions of records.
      title={Secure protocols for accountable warrant execution},
      author={Kroll, Joshua A and Felten, Edward W and Boneh, Dan},
    Joshua A. Kroll, Edward W. Felten, and Dan Boneh
    Preprint, Freedom to Tinker, April 2014
  • Fawkescoin: A cryptocurrency without public key cryptography [PDF] [Abstract, BibTeX]
    We present Fawkescoin, a simple cryptocurrency using no public-key cryptography. Our proposal utilizes the distributed consensus mechanism of Bitcoin but replaces Bitcoin's ECDSA transaction signatures with hash-based Guy Fawkes signatures. While this introduces a number of complexities, it demonstrates that a distributed cryptocurrency is in fact possible using only symmetric operations, with no dramatic loss of efficiency overall and several efficiency gains.
      title="{Fawkescoin: A cryptocurrency without public-key cryptography}",
      booktitle="22nd International Workshop on Security Protocols",
      author="Joseph Bonneau and Andrew Miller",
      location="Cambridge, UK",
    Joseph Bonneau and Andrew Miller
    22nd International Workshop on Security Protocols (SPW 2014) Cambridge, UK, March 2014
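A minimal sketch of the Guy Fawkes signature idea the abstract builds on, using only a hash function. This is a simplification for illustration: the actual protocol chains one-time keys and relies on the block chain to enforce that the commitment is buried before the reveal.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # Hash a sequence of byte strings together.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# Key setup: the signer's "public key" is the hash of a secret k.
k = b"long-random-secret"
pub = H(k)

# Signing: first timestamp a commitment binding the message to k,
# then, once the commitment is buried in the ledger, reveal (msg, k).
msg = b"pay 5 coins to Bob"
commitment = H(msg, k)

def verify(pub: bytes, commitment: bytes, msg: bytes, k: bytes) -> bool:
    # Ordering (commitment published before the reveal) must be
    # enforced externally by the ledger; only the hashes are checked here.
    return H(k) == pub and H(msg, k) == commitment
```

No public-key operations are involved; security rests entirely on the hash function and on the ledger's ordering of the commitment and the reveal.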
  • Mixcoin: Anonymity for Bitcoin with accountable mixes [PDF] [Abstract, BibTeX]
    We propose Mixcoin, a protocol to facilitate anonymous payments using the Bitcoin currency system. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft by mixes. Unlike other proposals to improve anonymity with Bitcoin, our scheme can be deployed immediately with no changes to Bitcoin itself. We demonstrate that economic incentives of mixes and clients can be aligned to ensure that rational mixes will not steal from clients. We compare mixing for financial anonymity to the much better-studied problem of mixing for anonymous communication, demonstrating important and subtle new attacks.
        title = "{M}ixcoin: {A}nonymity for {B}itcoin with accountable mixes",
        author = "Joseph Bonneau and Arvind Narayanan and Andrew Miller and Jeremy Clark and Joshua A. Kroll and Edward W. Felten",
        booktitle = "{F}inancial {C}ryptography",
        location = "Barbados",
        month = Mar,
        year = 2014,
    Joseph Bonneau, Arvind Narayanan, Andrew Miller, Jeremy Clark, Joshua A. Kroll, and Edward W. Felten
    Financial Cryptography 2014 (FC 2014) Barbados, March 2014
  • Why software engineering courses should include coverage of ethics [PDF] [Abstract, BibTeX]
    Software developers create the architectures that govern our online and increasingly our offline lives - from software-controlled cars and medical systems to digital content consumption and behavioral advertising. In fact, software helps shape, not just reflect, our societal values. Are the creators of code aware of this power and the responsibilities that go with it? How, and to what extent, are they trained in the ethics of their discipline?
        title = "Why software engineering courses should include coverage of ethics",
        author = "Arvind Narayanan and Shannon Vallor",
        booktitle = "Communications of the ACM",
        month = Mar,
        year = 2014,
    Arvind Narayanan and Shannon Vallor
    Communications of the ACM (CACM 2014) March 2014
  • The Tangled Web of Password Reuse [PDF] [Abstract, BibTeX]
    Today's Internet services rely heavily on text-based passwords for user authentication. The pervasiveness of these services coupled with the difficulty of remembering large numbers of secure passwords tempts users to reuse passwords at multiple sites. In this paper, we investigate for the first time how an attacker can leverage a known password from one site to more easily guess that user's password at other sites. We study several hundred thousand leaked passwords from eleven web sites and conduct a user survey on password reuse; we estimate that 43-51% of users reuse the same password across multiple sites. We further identify a few simple tricks users often employ to transform a basic password between sites which can be used by an attacker to make password guessing vastly easier. We develop the first cross-site password-guessing algorithm, which is able to guess 30% of transformed passwords within 100 attempts compared to just 14% for a standard password-guessing algorithm without cross-site password knowledge.
        title = "{T}he {T}angled {W}eb of {P}assword {R}euse",
        author = "Anupam Das and Joseph Bonneau and Matthew Caesar and Nikita Borisov and XiaoFeng Wang",
        booktitle = "{N}etwork and {D}istributed {S}ystem {S}ecurity {S}ymposium",
        location = "San Diego, CA",
        month = Feb,
        year = 2014,
    Anupam Das, Joseph Bonneau, Matthew Caesar, Nikita Borisov, and XiaoFeng Wang
    Network and Distributed System Security Symposium 2014 (NDSS 2014) San Diego, CA, February 2014
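The "simple tricks" mentioned above can be illustrated as candidate-generation rules. These example rules are hypothetical, for illustration only; the paper's cross-site guessing algorithm is trained on leaked password data.

```python
def candidate_guesses(known: str) -> list[str]:
    # A few illustrative transformation rules: users commonly tweak
    # capitalization, trailing digits/symbols, and leet substitutions
    # when reusing a password across sites.
    guesses = [known, known.capitalize(), known.lower(), known.upper()]
    if known and known[-1].isdigit():
        # bump or drop a trailing digit
        stem, last = known[:-1], int(known[-1])
        guesses += [stem + str((last + 1) % 10), stem]
    guesses += [known + s for s in ("1", "123", "!")]
    guesses.append(known.replace("a", "@").replace("s", "$").replace("o", "0"))
    # deduplicate, preserving order
    seen, out = set(), []
    for g in guesses:
        if g not in seen:
            seen.add(g)
            out.append(g)
    return out
```

An attacker who knows one password only needs a handful of such candidates per account, which is why the paper's cross-site guesser succeeds within 100 attempts far more often than a generic one.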
  • 2013

  • Routes for breaching and protecting genetic privacy [PDF] [Abstract, BibTeX]
    We are entering the era of ubiquitous genetic information for research, clinical care, and personal curiosity. Sharing these datasets is vital for rapid progress in understanding the genetic basis of human diseases. However, one growing concern is the ability to protect the genetic privacy of the data originators. Here, we technically map threats to genetic privacy and discuss potential mitigation strategies for privacy-preserving dissemination of genetic data.
        title = "{R}outes for breaching and protecting genetic privacy",
        author = "Yaniv Erlich and Arvind Narayanan",
        howpublished = \url{},
        month = Oct,
        year = 2013,
    Yaniv Erlich and Arvind Narayanan
    Online Manuscript, October 11, 2013
  • Privacy Substitutes [HTML, PDF] [Abstract, BibTeX]
    Debates over information privacy are often framed as an inescapable conflict between competing interests: a lucrative or beneficial technology, as against privacy risks to consumers. Policy remedies traditionally take the rigid form of either a complete ban, no regulation, or an intermediate zone of modest notice and choice mechanisms. We believe these approaches are unnecessarily constrained. There is often a spectrum of technology alternatives that trade off functionality and profit for consumer privacy. We term these alternatives "privacy substitutes," and in this Essay we argue that public policy on information privacy issues can and should be a careful exercise in both selecting among, and providing incentives for, privacy substitutes.
        title = "{P}rivacy {S}ubstitutes",
        author = "Jonathan Mayer and Arvind Narayanan",
        booktitle = "{S}tanford {L}aw {R}eview",
        volume = 66,
        issue = 89,
        month = Sep,
        year = 2013,
    Jonathan Mayer and Arvind Narayanan
    Stanford Law Review Online, Special Issue on Privacy and Big Data, 2013 (Stanford Law Review) September 2013
  • Privacy technologies: An annotated syllabus [PDF] [Abstract, BibTeX]
    The teaching of graduate courses is symbiotic with research: it helps systematize knowledge, often creates new knowledge, and influences the thinking of the next generation of scholars, indirectly shaping the future of the discipline. I therefore believe that this forum is well-suited to discussing the teaching of privacy technologies. As a starting point, I offer my annotated syllabus and other reflections on the graduate seminar I taught at Princeton during Fall 2012.

    In developing the curriculum, I aimed to provide technical depth while integrating perspectives from economics, law, policy and other schools of thought on privacy. These worldviews, in my opinion, sometimes conflict with and often add much-needed nuance to the narrative on privacy technologies within computer science.

    As a first-year faculty member, I'm keenly aware that my experience as an educator is relatively limited; my intent here is to provoke discussion rather than to be authoritative.
        title = "Privacy technologies: An annotated syllabus",
        author = "Arvind Narayanan",
        booktitle = "{W}orkshop on {H}ot {T}opics in {P}rivacy {E}nhancing {T}echnologies",
        location = "Bloomington, IN",
        month = July,
        year = 2013,
    Arvind Narayanan
    The 6th Workshop on Hot Topics in Privacy Enhancing Technologies (HotPETS 2013) Bloomington, IN, July 2013
  • The Economics of Bitcoin Mining, or Bitcoin in the Presence of Adversaries [PDF] [Abstract, BibTeX]
    The Bitcoin digital currency depends for its correctness and stability on a combination of cryptography, distributed algorithms, and incentive-driven behavior. We examine Bitcoin as a consensus game and determine that it relies on separate consensus about the rules and about game state. An important aspect of Bitcoin's design is the mining mechanism, in which participants expend resources on solving computational puzzles in order to collect rewards. This mechanism purportedly protects Bitcoin against certain technical problems such as inconsistencies in the system's distributed log data structure. We consider the economics of Bitcoin mining, and whether the Bitcoin protocol can survive attacks, assuming that participants behave according to their incentives. We show that there is a Nash equilibrium in which all players behave consistently with Bitcoin's reference implementation, along with infinitely many equilibria in which they behave otherwise. We also show how a motivated adversary might be able to disrupt the Bitcoin system and "crash" the currency. Finally, we argue that Bitcoin will require the emergence of governance structures, contrary to the commonly held view in the Bitcoin community that the currency is ungovernable.
        title = "The Economics of {B}itcoin Mining, or {B}itcoin in the Presence of Adversaries",
        author = "Joshua A. Kroll and Ian C. Davey and Edward W. Felten",
        booktitle = "{W}orkshop on the {E}conomics of {I}nformation {S}ecurity",
        location = "Washington, DC",
        month = Jun,
        year = 2013,
    Joshua A. Kroll, Ian C. Davey, and Edward W. Felten
    The Twelfth Workshop on the Economics of Information Security (WEIS 2013) Washington, DC, June 2013
  • Shining the Floodlights on Mobile Web Tracking - A Privacy Survey [PDF] [Abstract, BibTeX]
    We present the first published large-scale study of mobile web tracking. We compare tracking across five physical and emulated mobile devices with one desktop device as a benchmark. Our crawler is based on FourthParty; however, our architecture avoids clearing state which has the benefit of continual observation of (and by) third-parties. We confirm many intuitive predictions and report a few surprises.

    The lists of top third-party domains across different categories of devices are substantially similar; we found surprisingly few mobile-specific ad networks. The use of JavaScript by tracking domains increases gradually as we consider more powerful devices. We also analyze cookie longevity by device. Finally, we analyze a curious phenomenon of cookies that are used to store information about the user's browsing history on the client.

    Mobile tracking appears to be an under-researched area, and this paper is only a first step. We have made our code and data available for others to build on.
        title = "Shining the Floodlights on Mobile Web Tracking - A Privacy Survey",
        author = "Christian Eubank and Marcela Melara and Diego Perez-Botero and Arvind Narayanan",
        booktitle = "{W}eb 2.0 {S}ecurity and {P}rivacy",
        location = "San Francisco, CA",
        month = May,
        year = 2013,
    Christian Eubank, Marcela Melara, Diego Perez-Botero, and Arvind Narayanan
    Web 2.0 Security & Privacy 2013 (W2SP 2013) San Francisco, CA, May 2013
  • A Scanner Darkly: Protecting User Privacy from Perceptual Applications [PDF] [Abstract, BibTeX]
    Perceptual, "context-aware" applications that observe their environment and interact with users via cameras and other sensors are becoming ubiquitous on personal computers, mobile phones, gaming platforms, household robots, and augmented-reality devices. This raises new privacy risks.

    We describe the design and implementation of Darkly, a practical privacy protection system for the increasingly common scenario where an untrusted, third-party perceptual application is running on a trusted device. Darkly is integrated with OpenCV, a popular computer vision library used by such applications to access visual inputs. It deploys multiple privacy protection mechanisms, including access control, algorithmic privacy transforms, and user audit.

    We evaluate Darkly on 20 perceptual applications that perform diverse tasks such as image recognition, object tracking, security surveillance, and face detection. These applications run on Darkly unmodified or with very few modifications and minimal performance overheads vs. native OpenCV. In most cases, privacy enforcement does not reduce the applications' functionality or accuracy. For the rest, we quantify the tradeoff between privacy and utility and demonstrate that utility remains acceptable even with strong privacy protection.
        title = "A Scanner Darkly: Protecting User Privacy from Perceptual Applications",
        author = "Suman Jana and Arvind Narayanan and Vitaly Shmatikov",
        booktitle = "IEEE Symposium on Security and Privacy",
        location = "San Francisco, CA",
        month = May,
        year = 2013,
    Suman Jana, Arvind Narayanan, and Vitaly Shmatikov
    IEEE Symposium on Security and Privacy (Oakland 2013) San Francisco, CA, May 2013
  • Unlikely Outcomes? A Distributed Discussion on the Prospects and Promise of Decentralized Personal Data Architectures [PDF]
    Solon Barocas, Seda Gürses, Arvind Narayanan, and Vincent Toubiana
    Institute of Network Cultures (Unlike Us Reader 2013)
  • What Happened to the Crypto Dream? [Part 1, Part 2] [BibTeX]
        title = "What Happened to the Crypto Dream?",
        author = "Arvind Narayanan",
        journal = "IEEE Security and Privacy Magazine",
        volume = 11,
        number = "2 \& 3",
        year = 2013,
    Arvind Narayanan
    IEEE Security and Privacy Magazine (IEEE SP 2013)
  • Final Report of the FCC Communications Security, Reliability and Interoperability Council III Working Group 6: Secure BGP Deployment [PDF] [Abstract]
    The Border Gateway Protocol (BGP) is the glue that holds the disparate parts of the Internet together, by allowing independently-administered networks (called Autonomous Systems, or ASes) to announce reachability to IP address blocks (called prefixes) to neighboring networks. Like many of the protocols underlying the Internet, BGP is vulnerable to accidental misconfigurations, malicious attacks, and software bugs, which can cause the spread of "bogus" routing information throughout the Internet. When ASes inadvertently select bogus routes, they may direct data traffic into "blackholes" (that drop the traffic) or detour the traffic over circuitous paths that traverse unexpected ASes that may snoop on the data. Bogus routes arise relatively often, with high-profile incidents affecting many Internet services happening several times a year.

    The fundamental notion in all BGP security solutions is distinguishing "legitimate BGP routes" from "bogus BGP routes". Loosely speaking, a BGP route announcement is legitimate if the IP prefix and BGP route attributes it contains are "consistent with the intent of the network address holder and transited network providers". Intent can be expressed as the routing-policy specifications of the ASes along the path, and otherwise is open to interpretation. This working group compared a range of security solutions designed to prevent the propagation of bogus routes. The comparison spanned a variety of dimensions, including technical maturity, implementation cost, trust models and governance, security benefits and residual threats, new attack surfaces, complexity and design trade-offs, the feasibility of incremental deployment, and the impact on the autonomy of network operators.

    The phrase "to secure BGP routing" used in this report means "to add to BGP, or to the operation of BGP, mechanisms preventing or limiting propagation of routes with bogus attributes". Over the years, many solutions have been proposed for BGP security, and we expect more will be explored in the future. In our work, we do not address previously well-studied aspects of BGP security involving session security, router access security, and techniques for defending internal networks, in part because CSRIC Working Group 4 addressed some of these issues. We focus on solutions that are becoming available in the relatively near term, rather than solutions still in the preliminary stages of investigation or standardization. As such, we focus primarily on the Resource Public Key Infrastructure (RPKI) and the use of RPKI data for "origin validation," since this proposed solution has some initial traction in standards bodies and deployment by Regional Internet Registries (RIRs). We also briefly discuss other solutions under active investigation and experimentation in the body of the report.

    Since RPKI is still in the early stages of deployment, network operators have not had much time to experiment and learn from early implementations. As such, our report focuses on providing high-level guidance concerning participation in RPKI, and a high-level analysis of risks of the RPKI. In particular, the Working Group has the following four recommendations, which are described in greater depth, and with corresponding background information, in the main sections of this document:
    1) Accurate records about Internet number resource holders: All techniques for improving the security of inter-domain routing rely on authoritative, accurate, and timely information about which ASes are authorized to originate routes for each IP address block. To improve the accuracy of records in commonly used Internet Routing Registries (IRRs), AS operators should ensure their IRR records are accurate, complete, and up-to-date. Yet, IRRs have inherent limitations. In addition to maintaining their IRR records, AS operators should begin using systems such as Resource Public Key Infrastructure (RPKI) to generate certificates and Route Origin Authorizations (ROAs). To prevent missing or conflicting information in the RPKI, we encourage the establishment of a single, global "root of trust" for the RPKI. At this stage, experimentation and research with alternative origin authorization techniques are not discouraged.
    2. Cautious, staged deployment of RPKI origin validation: The RPKI provides information that AS operators can use to detect and prevent the spread of bogus origin AS information in BGP. Yet, ASes should retain autonomy in setting their policies for selecting and disseminating routes, including a decision whether and how to use the RPKI data. AS operators should follow a cautious and staged deployment of RPKI, starting by using RPKI data in an out-of-band fashion as one of several ingredients in detecting suspicious routes and constructing their route filters. Any future fully automated use of RPKI data in filtering “invalid” routes should come only after AS operators are highly confident in the reliability and timeliness of the RPKI data.
    3. Mitigating risks inherent in the RPKI: Any new technology raises certain risks, including the possibility of misconfiguration or manipulation. To aid in diagnosing and fixing operational problems, we recommend that every RPKI repository maintain accurate and effective human contact records, and that corresponding addressing authorities keep their allocation data up to date and synchronized with corresponding RPKI data. That is, it should be possible to easily correlate RPKI data with existing resource allocation data including the identities of resource holders that are decoupled from RPKI records. We also recommend that organizations responsible for operating RPKI databases and managing certificates make available tools to detect possible configuration errors and expiring certificates, as well as flag suspicious changes that may stem from abusive manipulation of the data. The policies of RPKI operators should allow open access to the RPKI record databases and permit general dissemination of the data and derived results (e.g., lists of validated origins), without formal approval.
    4. Improving BGP security metrics and measurements: Quantitative analysis of BGP measurement data is crucial for informing decisions about which security solutions to deploy. The BGP security community should evaluate the security metrics used in previous measurement studies, and extend them where necessary. The community should perform continuous monitoring and analysis of BGP, to quantify the importance of the problem and identify any changes in the frequency, severity or likelihood of security incidents. As RPKI deployments proceed, the community should track participation in the RPKI and gauge its effectiveness in preventing the spread of erroneous routing information.
    While unanimity in recommendations was an objective from the outset, the conclusions of the report are not necessarily shared by all group members. Although Level 3 Communications participated in the CSRIC III WG 6 Task Force, it does not agree with the balance of recommendations expressed in this paper.

    We also note that we intentionally stop short of attaching concrete time frames to our recommendations. Evolving threats and a better understanding of the operational challenges of RPKI would clearly have a major influence on the appropriate times to adopt the new technology, and preserving the autonomy of individual Network Providers and Autonomous System operators is critically important.
    FCC CSRIC III Working Group 6 (Andy Ogielski and Jennifer Rexford, co-chairs)
    FCC CSRIC III, Working Group 6 (Secure BGP Deployment) Final Report (FCC CSRIC III) March 2013
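    As an illustrative aside (not part of the report itself), the “origin validation” the recommendations describe amounts to checking each BGP announcement's (prefix, origin AS) pair against the set of ROAs, with the three-valued outcome standardized in RFC 6811. The ROA data and helper below are hypothetical:

```python
# Sketch of RPKI route origin validation (RFC 6811 semantics).
# The ROA entries below are made-up examples, not real authorizations.
from ipaddress import ip_network

# Each ROA authorizes one origin AS for a prefix, up to a maximum length.
ROAS = [
    {"prefix": ip_network("192.0.2.0/24"), "max_length": 24, "asn": 64500},
    {"prefix": ip_network("198.51.100.0/22"), "max_length": 24, "asn": 64501},
]

def validate_origin(prefix_str, origin_asn):
    """Classify a BGP announcement as 'valid', 'invalid', or 'not-found'."""
    prefix = ip_network(prefix_str)
    covered = False
    for roa in ROAS:
        # A ROA covers the announcement if the announced prefix falls inside it.
        if prefix.subnet_of(roa["prefix"]):
            covered = True
            if prefix.prefixlen <= roa["max_length"] and origin_asn == roa["asn"]:
                return "valid"
    # Covered by some ROA, but no matching (asn, max_length): invalid.
    return "invalid" if covered else "not-found"
```

    The staged deployment the report recommends corresponds to first logging or de-preferring "invalid" results rather than filtering them outright.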
  • 2012

  • Privacy and Integrity are possible in the Untrusted Cloud [PDF] [Abstract, BibTeX]
    From word processing to online social networking, user-facing applications are increasingly being deployed in the cloud. These cloud services are attractive because they offer high scalability, availability, and reliability. But adopting them has so far forced users to cede control of their data to cloud providers, leaving the data vulnerable to misuse by the providers or theft by attackers. Thus, users have had to choose between trusting providers or forgoing cloud deployment's benefits entirely.

    In this article, we show that it is possible to overcome this trade-off for many applications. We describe two of our recent systems, SPORC and Frientegrity, that enable users to benefit from cloud deployment without having to trust providers for confidentiality or integrity. In both systems, the provider only observes encrypted data and cannot deviate from correct execution without detection. Moreover, for cases when the provider does misbehave, SPORC introduces a mechanism, also applicable to Frientegrity, that enables users to recover.

    SPORC is a framework that enables a wide variety of collaborative applications such as collaborative text editors and shared calendars with an untrusted provider. It allows concurrent, low-latency editing of shared state, permits disconnected operation, and supports dynamic access control even in the presence of concurrency. Frientegrity extends SPORC's model to online social networking. It introduces novel mechanisms for verifying the provider's correctness and access control that scale to hundreds of friends and tens of thousands of posts while still providing the same security guarantees as SPORC. By effectively returning control of users' data to the users themselves, these systems do much to mitigate the risks of cloud deployment.
      title={Privacy and Integrity are Possible in the Untrusted Cloud.},
      author={Feldman, Ariel J and Blankstein, Aaron and Freedman, Michael J and Felten, Edward W},
      journal={IEEE Data Eng. Bull.},
    Ariel J. Feldman, Aaron Blankstein, Michael J. Freedman, and Edward W. Felten
    Bulletin of the IEEE Computer Society Technical Committee on Data Engineering (Data Engineering, 35:4) December 2012
  • A Formally-Verified Migration Protocol For Mobile, Multi-Homed Hosts [PDF, SLIDES-PDF, SLIDES-PPTX] [Abstract, BibTeX]
    Modern consumer devices, like smartphones and tablets, have multiple interfaces (e.g., WiFi and 4G) that attach to new access points as users move. These mobile, multi-homed computers are a poor match with an Internet architecture that binds connections to fixed endpoints with topology-dependent addresses. As a result, hosts typically cannot spread a connection over multiple interfaces or paths, or change locations without breaking existing connections.

    In this paper, we create an end-to-end connection control protocol (ECCP) that allows hosts to communicate over multiple interfaces with dynamically-changing IP addresses and works with multiple data-delivery protocols (i.e., reliable or unreliable transport). Each ECCP connection consists of one or more flows, each associated with an interface or path. Through end-to-end signaling, a host can move an existing flow from one interface to another, or change its IP address, without any support from the underlying network. We develop formal models to verify that ECCP works correctly in the presence of packet loss, out-of-order delivery, and frequent mobility, and to identify bugs and design limitations in earlier mobility protocols.
      title={A formally-verified migration protocol for mobile, multi-homed hosts},
      author={Arye, Matvey and Nordstrom, Erik and Kiefer, Robert and Rexford, Jennifer and Freedman, Michael J},
      booktitle={Network Protocols (ICNP), 2012 20th IEEE International Conference on},
    Matvey Arye, Erik Nordstrom, Robert Kiefer, Jennifer Rexford, and Michael J. Freedman
    International Conference on Network Protocols (ICNP '12) Austin, TX, October 2012
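    To make the end-to-end signaling concrete, here is a toy sketch of migrating a flow to a new address. The RSYN/RSYN-ACK message names and the two-message exchange are our own simplification for illustration, not the formally verified ECCP model:

```python
# Toy model of ECCP-style flow migration via end-to-end signaling.
# Message names and state machine are simplified assumptions.
class Flow:
    def __init__(self, flow_id, local_addr, remote_addr):
        self.flow_id = flow_id
        self.local_addr = local_addr
        self.remote_addr = remote_addr
        self.state = "ESTABLISHED"

    def start_migration(self, new_local_addr):
        # The moving host rebinds locally and signals its peer directly;
        # no support from the underlying network is required.
        self.local_addr = new_local_addr
        self.state = "MIGRATING"
        return {"type": "RSYN", "flow": self.flow_id, "addr": new_local_addr}

    def handle(self, msg):
        if msg["type"] == "RSYN":
            # Peer moved: update where we send, then acknowledge.
            self.remote_addr = msg["addr"]
            return {"type": "RSYN-ACK", "flow": self.flow_id}
        if msg["type"] == "RSYN-ACK" and self.state == "MIGRATING":
            self.state = "ESTABLISHED"  # migration complete
            return None
```

    A host with, say, WiFi and 4G interfaces would keep one flow per interface and migrate each independently as attachment points change.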
  • A New Approach to Interdomain Routing Based on Secure Multi-Party Computation [PDF] [Abstract, BibTeX]
    Interdomain routing involves coordination among mutually distrustful parties, leading to the requirements that BGP provide policy autonomy, flexibility, and privacy. BGP provides these properties via the distributed execution of policy-based decisions during the iterative route computation process. This approach has poor convergence properties, makes planning and failover difficult, and is extremely difficult to change. To rectify these and other problems, we propose a radically different approach to interdomain-route computation, based on secure multi-party computation (SMPC). Our approach provides stronger privacy guarantees than BGP and enables the deployment of new policy paradigms. We report on an initial exploration of this idea and outline future directions for research.
      title={A new approach to interdomain routing based on secure multi-party computation},
      author={Gupta, Debayan and Segal, Aaron and Panda, Aurojit and Segev, Gil and Schapira, Michael and Feigenbaum, Joan and Rexford, Jennifer and Shenker, Scott},
      booktitle={Proceedings of the 11th ACM Workshop on Hot Topics in Networks},
    Debayan Gupta, Aaron Segal, Aurojit Panda, Gil Segev, Joan Feigenbaum, Jennifer Rexford, and Scott Shenker
    11th ACM SIGCOMM Workshop on Hot Topics in Networks (HotNets 2012) Redmond, WA, October 2012
  • Social Networking with Frientegrity: Privacy and Integrity with an Untrusted Provider [PDF] [Abstract, BibTeX]
    Today's social networking services require users to trust the service provider with the confidentiality and integrity of their data. But with their history of data leaks and privacy controversies, these services are not always deserving of this trust. Indeed, a malicious provider could not only violate users' privacy, it could equivocate and show different users divergent views of the system's state. Such misbehavior can lead to numerous harms including surreptitious censorship.

    In light of these threats, this paper presents Frientegrity, a framework for social networking applications that can be realized with an untrusted service provider. In Frientegrity, a provider observes only encrypted data and cannot deviate from correct execution without being detected. Prior secure social networking systems have either been decentralized, sacrificing the availability and convenience of a centralized provider, or have focused almost entirely on users' privacy while ignoring the threat of equivocation. On the other hand, existing systems that are robust to equivocation do not scale to the needs of social networking applications in which users may have hundreds of friends, and in which users are mainly interested in the latest updates, not in the thousands that may have come before.

    To address these challenges, we present a novel method for detecting provider equivocation in which clients collaborate to verify correctness. In addition, we introduce an access control mechanism that offers efficient revocation and scales logarithmically with the number of friends. We present a prototype implementation demonstrating that Frientegrity provides latency and throughput that meet the needs of a realistic workload.
        title = "Social Networking with Frientegrity: Privacy and Integrity with an Untrusted Provider",
        author = "Ariel J. Feldman and Aaron Blankstein and Michael J. Freedman and Edward W. Felten",
        booktitle = "Proc.\ 21st {USENIX} {S}ecurity {S}ymposium",
        location = "Bellevue, WA",
        month = Aug,
        year = 2012,
    Ariel J. Feldman, Aaron Blankstein, Michael J. Freedman, and Edward W. Felten
    Proc. 21st USENIX Security Symposium (Sec '12) Bellevue, WA, August 2012
  • Optimizing the Placement of Implicit Proxies [PDF] [Abstract, BibTeX]
    Traffic filters block clients from communicating with certain Internet destinations. To prevent clients from evading the filtering policies, traffic filters may also block access to well-known anonymizing proxies. In response, researchers have designed more sophisticated techniques that rely on implicit proxies lying along the path to unfiltered destinations. An implicit proxy transparently deflects traffic directed to an unfiltered destination toward the filtered destination. However, the effectiveness of implicit proxies highly depends on their presence in paths between clients and unfiltered destinations. In this paper, we formulate and solve the problem of proxy placement, and evaluate our algorithms on snapshots of the Internet topology for a variety of client and destination sets. We also consider smart filtering techniques that select alternate routes to avoid implicit proxies, as well as the effects of asymmetric Internet routing. Our results show that a relatively small number of proxies can satisfy a large group of clients across a range of geographic locations.
        title = "{O}ptimizing the {P}lacement of {I}mplicit {P}roxies",
        author = "Jacopo Cesareo and Josh Karlin and Michael Schapira and Jennifer Rexford",
        institution = "Department of Computer Science, Princeton University",
        month = Jun,
        year = 2012,
    Jacopo Cesareo, Josh Karlin, Michael Schapira, and Jennifer Rexford
    Online Tech Report, Princeton, NJ, June 2012
  • Virtual Switching Without a Hypervisor for a More Secure Cloud [PDF, SLIDES] [Abstract, BibTeX]
    Cloud computing leverages virtualization to offer resources on demand to multiple "tenants". However, sharing the server and network infrastructure creates new vulnerabilities, where one tenant can attack another by compromising the underlying hypervisor. We design a system that supports virtualized networking using software switches without a hypervisor. In our architecture, the software switch runs in a Switch Domain (DomS) that is separate from the control VM. Both the guest VMs and DomS run directly on the server hardware, with processing and memory resources allocated in advance. Each guest VM interacts with the software switch through a shared memory region using periodic polling to detect network packets. The communication does not involve the hypervisor or the control VM. In addition, any software bugs that crash the software switch do not crash the rest of the system, and a crashed switch can be easily rebooted. Experiments with our initial prototype, built using Xen and Open vSwitch, show that the combination of shared pages and polling offers reasonable performance compared to conventional hypervisor-based solutions.
      title={Virtual switching without a hypervisor for a more secure cloud},
      author={Jin, Xin and Keller, Eric and Rexford, Jennifer},
      booktitle={Proceedings of the 2nd USENIX conference on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services},
      organization={USENIX Association}
    Xin Jin, Eric Keller, and Jennifer Rexford
    2nd USENIX Workshop on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (HOT-ICE 2012) San Jose, CA, April 2012
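    The shared-memory channel the abstract describes can be sketched as a fixed-size packet ring that the guest writes and the switch domain drains on each poll tick, so no per-packet hypervisor call is needed. Sizes and the polling discipline below are illustrative assumptions, not the Xen/Open vSwitch prototype:

```python
# Sketch of a polled shared-memory packet ring between a guest VM
# (producer) and the switch domain (consumer).
class PacketRing:
    def __init__(self, size=8):
        self.slots = [None] * size
        self.head = 0   # next slot the producer (guest) writes
        self.tail = 0   # next slot the consumer (switch) reads

    def send(self, pkt):
        nxt = (self.head + 1) % len(self.slots)
        if nxt == self.tail:
            return False            # ring full: guest must retry later
        self.slots[self.head] = pkt
        self.head = nxt
        return True

    def poll(self):
        """Drain all pending packets, as the switch does on each poll tick."""
        out = []
        while self.tail != self.head:
            out.append(self.slots[self.tail])
            self.tail = (self.tail + 1) % len(self.slots)
        return out
```

    Because the ring is pre-allocated and lock-free between a single producer and single consumer, a crash of the switch domain leaves guest memory intact, matching the isolation argument in the abstract.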
  • A Certificate Infrastructure for Machine-Checked Proofs of Conditional Information Flow [PDF] [Abstract, BibTeX]
    In previous work, we have proposed a compositional framework for stating and automatically verifying complex conditional information flow policies using a relational Hoare logic. The framework allows developers and verifiers to work directly with the source code using source-level code contracts. In this work, we extend that approach so that the algorithm for verifying code compliance to an information flow contract emits formal certificates of correctness that are checked in the Coq proof assistant. This framework is implemented in the context of SPARK - a subset of Ada that has been used in a number of industrial contexts for implementing certified safety and security critical systems.
      title={A certificate infrastructure for machine-checked proofs of conditional information flow},
      author={Amtoft, Torben and Dodds, Josiah and Zhang, Zhi and Appel, Andrew and
       Beringer, Lennart and Hatcliff, John and Ou, Xinming and Cousino, Andrew},
      booktitle={Principles of Security and Trust},
    Torben Amtoft, Josiah Dodds, Zhi Zhang, Andrew W. Appel, Lennart Beringer, John Hatcliff, Xinming Ou, and Andrew Cousino
    First Conference on Principles of Security and Trust (POST '12), Tallinn, Estonia, March 2012
  • Commensal Cuckoo: Secure Group Partitioning for Large-Scale Services [PDF] [Abstract, BibTeX]
    We present commensal cuckoo, a secure group partitioning scheme for large-scale systems that maintains the correctness of many small groups, despite a Byzantine adversary that controls a constant (global) fraction of all nodes. In particular, the adversary is allowed to repeatedly rejoin faulty nodes to the system in an arbitrary adaptive manner, e.g. to collocate them in the same group. Commensal cuckoo addresses serious practical limitations of the state-of-the-art scheme, the cuckoo rule of Awerbuch and Scheideler, tolerating over 35x more faulty nodes with groups as small as 64 nodes (as compared to the hundreds required by the cuckoo rule). Secure group partitioning is a key component of highly-scalable, reliable systems such as Byzantine fault-tolerant distributed hash tables (DHTs).
      title={Commensal cuckoo: Secure group partitioning for large-scale services},
      author={Sen, Siddhartha and Freedman, Michael J},
      journal={ACM SIGOPS Operating Systems Review},
    Siddhartha Sen and Michael J. Freedman
    ACM SIGOPS Operating Systems Review, Volume 46, Issue 1 (SIGOPS OSR) January 2012
  • 2011

  • Eliminating the Hypervisor Attack Surface for a More Secure Cloud [PDF, SLIDES-PPTX] [Abstract, BibTeX]
    Cloud computing is quickly becoming the platform of choice for many web services. Virtualization is the key underlying technology enabling cloud providers to host services for a large number of customers. Unfortunately, virtualization software is large, complex, and has a considerable attack surface. As such, it is prone to bugs and vulnerabilities that a malicious virtual machine (VM) can exploit to attack or obstruct other VMs - a major concern for organizations wishing to move “to the cloud.” In contrast to previous work on hardening or minimizing the virtualization software, we eliminate the hypervisor attack surface by enabling the guest VMs to run natively on the underlying hardware while maintaining the ability to run multiple VMs concurrently. Our NoHype system embodies four key ideas: (i) pre-allocation of processor cores and memory resources, (ii) use of virtualized I/O devices, (iii) minor modifications to the guest OS to perform all system discovery during bootup, and (iv) avoiding indirection by bringing the guest virtual machine in more direct contact with the underlying hardware. Hence, no hypervisor is needed to allocate resources dynamically, emulate I/O devices, support system discovery after bootup, or map interrupts and other identifiers. NoHype capitalizes on the unique use model in cloud computing, where customers specify resource requirements ahead of time and providers offer a suite of guest OS kernels. Our system supports multiple tenants and capabilities commonly found in hosted cloud infrastructures. Our prototype utilizes Xen 4.0 to prepare the environment for guest VMs, and a slightly modified version of Linux 2.6 for the guest OS. Our evaluation with both SPEC and Apache benchmarks shows a roughly 1% performance gain when running applications on NoHype compared to running them on top of Xen 4.0. Our security analysis shows that, while there are some minor limitations with current commodity hardware, NoHype is a significant advance in the security of cloud computing.
      title={Eliminating the hypervisor attack surface for a more secure cloud},
      author={Szefer, Jakub and Keller, Eric and Lee, Ruby B and Rexford, Jennifer},
      booktitle={Proceedings of the 18th ACM conference on Computer and communications security},
    Jakub Szefer, Eric Keller, Ruby B. Lee, and Jennifer Rexford
    18th ACM Conference on Computer and Communications Security (CCS 2011) Chicago, IL, October 2011
  • Commensal Cuckoo: Secure Group Partitioning for Large-Scale Services [PDF] [Abstract, BibTeX]
    We present commensal cuckoo, a secure group partitioning scheme for large-scale systems that maintains the correctness of many small groups, despite a Byzantine adversary that controls a constant (global) fraction of all nodes. In particular, the adversary is allowed to repeatedly rejoin faulty nodes to the system in an arbitrary adaptive manner, e.g. to collocate them in the same group. Commensal cuckoo addresses serious practical limitations of the state-of-the-art scheme, the cuckoo rule of Awerbuch and Scheideler, tolerating over 35x more faulty nodes with groups as small as 64 nodes (as compared to the hundreds required by the cuckoo rule). Secure group partitioning is a key component of highly-scalable, reliable systems such as Byzantine fault-tolerant distributed hash tables (DHTs).
      title={Commensal cuckoo: Secure group partitioning for large-scale services},
      author={Sen, Siddhartha and Freedman, Michael J},
      booktitle={Proceedings of the 5th Workshop on Large Scale Distributed Systems and Middleware},
    Siddhartha Sen and Michael J. Freedman
    The 5th Workshop on Large Scale Distributed Systems and Middleware (LADIS 2011) Seattle, WA, September 2011
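    As a toy illustration of the cuckoo-style placement this line of work builds on: a joining node lands in a random group and evicts some incumbents, who themselves re-join at random, which keeps an adversary from concentrating faulty nodes in one group through repeated rejoins. The parameters and eviction policy below are simplifying assumptions, not the paper's analyzed rule:

```python
# Toy cuckoo-style group placement: a joiner evicts k random incumbents,
# who are scattered to other random groups.
import random

def join(groups, node, k=2, rng=random):
    g = rng.randrange(len(groups))      # joiner placed in a random group
    evicted = groups[g][:k]             # "cuckoo": evict k incumbents
    groups[g] = groups[g][k:] + [node]
    for n in evicted:                   # evicted nodes re-join at random
        groups[rng.randrange(len(groups))].append(n)
    return g
```

    The security argument in the paper concerns keeping the faulty fraction in every group below a threshold despite adaptive rejoins; this sketch only shows the churn mechanism itself.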
  • Security Seals on Voting Machines: A Case Study [PDF, (Prepublication)] [Abstract, BibTeX]
    Tamper-evident seals are used by many states' election officials on voting machines and ballot boxes, either to protect the computer and software from fraudulent modification or to protect paper ballots from fraudulent substitution or stuffing. Physical tamper-indicating seals can usually be easily defeated, given the way they are typically made and used; and the effectiveness of seals depends on the protocol for their application and inspection. The legitimacy of our elections may therefore depend on whether a particular state's use of seals is effective to prevent, deter, or detect election fraud. This paper is a case study of the use of seals on voting machines by the State of New Jersey. I conclude that New Jersey's protocols for the use of tamper-evident seals have been not at all effective. I conclude with a discussion of the more general problem of seals in democratic elections.
     author = {Appel, Andrew W.},
     title = {Security Seals on Voting Machines: A Case Study},
     journal = {ACM Trans. Inf. Syst. Secur.},
     issue_date = {September 2011},
     volume = {14},
     number = {2},
     month = sep,
     year = {2011},
     pages = {18:1--18:29},
     url = {},
     doi = {10.1145/2019599.2019603},
     publisher = {ACM},
     address = {New York, NY, USA},
    Andrew W. Appel
    ACM Transactions on Information and System Security (TISSEC) September 2011
  • Hiding Amongst the Clouds: A Proposal for Cloud-based Onion Routing [PDF] [Abstract, BibTeX]
    Internet censorship and surveillance have made anonymity tools increasingly critical for free and open Internet access. Tor, and its associated ecosystem of volunteer traffic relays, provides one of the most secure and widely-available means for achieving Internet anonymity today. Unfortunately, Tor has limitations, including poor performance, inadequate capacity, and a susceptibility to wholesale blocking. Rather than utilizing a large number of volunteers (as Tor does), we propose moving onion-routing services to the "cloud" to leverage the large capacities, robust connectivity, and economies of scale inherent to commercial datacenters. This paper describes Cloud-based Onion Routing (COR), which builds onion-routed tunnels over multiple anonymity service providers and through multiple cloud hosting providers, dividing trust while forcing censors to incur large collateral damage. We discuss the new security policies and mechanisms needed for such a provider-based ecosystem, and present some preliminary benchmarks. At today's prices, a user could gain fast, anonymous network access through COR for only pennies per day.
        title = "{H}iding {A}mongst the {C}louds: {A} {P}roposal for {C}loud-based {O}nion {R}outing",
        author = "Nicholas Jones and Matvey Arye and Jacopo Cesareo and Michael Freedman",
        booktitle = "{F}ree and {O}pen {C}ommunications on the {I}nternet",
        location = "San Francisco, CA",
        month = Aug,
        year = 2011,
    Nicholas Jones, Matvey Arye, Jacopo Cesareo, and Michael J. Freedman
    Workshop on Free and Open Communications on the Internet 2011 (FOCI 2011) San Francisco, CA, August 2011
  • Bubble Trouble: Off-Line De-Anonymization of Bubble Forms [PDF] [Abstract, BibTeX]
    Fill-in-the-bubble forms are widely used for surveys, election ballots, and standardized tests. In these and other scenarios, use of the forms comes with an implicit assumption that individuals' bubble markings themselves are not identifying. This work challenges this assumption, demonstrating that fill-in-the-bubble forms could convey a respondent's identity even in the absence of explicit identifying information. We develop methods to capture the unique features of a marked bubble and use machine learning to isolate characteristics indicative of its creator. Using surveys from more than ninety individuals, we apply these techniques and successfully re-identify individuals from markings alone with over 50% accuracy.

    This bubble-based analysis can have either positive or negative implications depending on the application. Potential applications range from detection of cheating on standardized tests to attacks on the secrecy of election ballots. To protect against negative consequences, we discuss mitigation techniques to remove a bubble's identifying characteristics. We suggest additional tests using longitudinal data and larger datasets to further explore the potential of our approach in real-world applications.
    @inproceedings{ bubbletrouble:sec11,
        title = "Bubble Trouble: Off-Line De-Anonymization of Bubble Forms",
        author = "Joseph A. Calandrino and William Clarkson and Edward W. Felten",
        booktitle = "Proc.\ 20th {USENIX} {S}ecurity {S}ymposium",
        location = "San Francisco, CA",
        month = Aug,
        year = 2011,
    Joseph A. Calandrino, William Clarkson, and Edward W. Felten
    Proc. 20th USENIX Security Symposium (Sec '11) San Francisco, CA, August 2011
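    The re-identification pipeline can be caricatured as: extract numeric features from each marked bubble, then match a new marking to the respondent whose training markings are closest. The paper uses learned characteristics; the features, data, and nearest-centroid matcher below are hypothetical stand-ins:

```python
# Sketch of bubble-marking re-identification via nearest-centroid matching.
# Feature vectors (e.g., fill density, stroke eccentricity) are made up.
import math

def centroid(vectors):
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def identify(training, marking):
    """training: {respondent: [feature vectors]} -> best-matching respondent."""
    return min(training, key=lambda r: math.dist(centroid(training[r]), marking))
```

    A real system would rank candidates rather than return a single match, which is how the paper reports over-50% top-match accuracy across ninety-plus respondents.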
  • Modular Protections against Non-control Data Attacks [PDF] [Abstract, BibTeX]
    This paper introduces YARRA, a conservative extension to C to protect applications from non-control data attacks. YARRA programmers specify their data integrity requirements by declaring critical data types and ascribing these critical types to important data structures. YARRA guarantees that such critical data is only written through pointers with the given static type. Any attempt to write to critical data through a pointer with an invalid type (perhaps because of a buffer overrun) is detected dynamically. We formalize YARRA's semantics and prove the soundness of a program logic designed for use with the language. A key contribution is to show that YARRA's semantics are strong enough to support sound local reasoning and the use of a frame rule, even across calls to unknown, unverified code. We evaluate a prototype implementation of a compiler and runtime system for YARRA by using it to harden four common server applications against known non-control data vulnerabilities. We show that YARRA defends against these attacks with only a negligible impact on their end-to-end performance.
        title={Modular Protections against Non-control Data Attacks},
        author={Cole Schlesinger and Karthik Pattabiraman and Nikhil Swamy and David Walker and Ben Zorn},
        booktitle={24th IEEE Computer Security Foundations Symposium},
        location={Domaine de l'Abbaye des Vaux de Cernay, France},
    Cole Schlesinger, Karthik Pattabiraman, Nikhil Swamy, David Walker, and Ben Zorn
    24th IEEE Computer Security Foundations Symposium (CSF 2011) Domaine de l'Abbaye des Vaux de Cernay, France, June 2011
  • You Might Also Like: Privacy Risks of Collaborative Filtering [PDF] [Abstract, BibTeX]
    Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last.fm, LibraryThing, and Amazon.
        title = "You Might Also Like: Privacy Risks of Collaborative Filtering",
        author = "Joseph A. Calandrino and Ann Kilzer and Arvind Narayanan and Edward W. Felten and Vitaly Shmatikov",
        booktitle = "Proc.\ 20th {IEEE} {S}ymposium on {S}ecurity and {P}rivacy",
        location = "Oakland, CA",
        month = May,
        year = 2011,
    Joseph A. Calandrino, Ann Kilzer, Arvind Narayanan, Edward W. Felten, and Vitaly Shmatikov
    Proc. 32nd IEEE Symposium on Security and Privacy (Oakland '11) Oakland, CA, May 2011
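    The passive inference the abstract outlines can be sketched as follows: if an item newly enters the public related-items lists of several items the target is known to have, infer it as a likely new transaction. The scoring rule and threshold below are simplifying assumptions, not the paper's algorithm:

```python
# Sketch of inferring a customer's transactions from temporal changes
# in public related-items lists, given auxiliary items known to be theirs.
from collections import Counter

def infer_new_items(aux_items, lists_before, lists_after, threshold=2):
    scores = Counter()
    for aux in aux_items:
        before = set(lists_before.get(aux, []))
        for item in lists_after.get(aux, []):
            if item not in before:      # item newly related to an aux item
                scores[item] += 1
    return [item for item, s in scores.items() if s >= threshold]
```

    The attack is passive because both snapshots come from pages any Internet user can fetch; only the auxiliary knowledge about the target is private input.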
  • Verified Software Toolchain [PDF] [Abstract, BibTeX]
    The software toolchain includes static analyzers to check assertions about programs; optimizing compilers to translate programs to machine language; operating systems and libraries to supply context for programs. Our Verified Software Toolchain verifies with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context, on a weakly-consistent-shared-memory machine.

    Our verification approach is modular, in that proofs about operating systems or concurrency libraries are oblivious of the programming language or machine language, proofs about compilers are oblivious of the program logic used to verify static analyzers, and so on. The approach is scalable, in that each component is verified in the semantic idiom most natural for that component.

    Finally, the verification is foundational: the trusted base for proofs of observable properties of the machine-language program includes only the operational semantics of the machine language, not the source language, the compiler, the program logic, or any other part of the toolchain--even when these proofs are carried out by source-level static analyzers.

    In this paper I explain some semantic techniques for building a verified toolchain.
     author = {Appel, Andrew W.},
     title = {Verified software toolchain},
     booktitle = {Proceedings of the 20th European Symposium on Programming},
     series = {ESOP'11/ETAPS'11},
     year = {2011},
     location = {Saarbr{\"u}cken, Germany},
     pages = {1--17},
     url = {},
    Andrew W. Appel
    Proceedings of the 20th European Symposium on Programming (ESOP '11), Saarbrücken, Germany, March 2011
  • Can It Really Work? Problems with Extending EINSTEIN 3 to Critical Infrastructure [PDF] [Abstract, BibTeX]
    In an effort to protect its computer systems from malevolent actors, the U.S. government has developed a series of intrusion-detection and intrusion-prevention systems aimed at monitoring and screening traffic between the internet and government systems. With EINSTEIN 3, the government now may seek to do the same for private critical infrastructure networks. This Article examines practical considerations suggesting that EINSTEIN 3 is unlikely to be effective in that setting. Considering differences in scale, the inability to dictate hardware and software choices to private parties, and the different regulatory framework for government action in the private sector, this Article discusses why the government may be unable to effectively implement EINSTEIN 3 across the private networks serving critical infrastructure. Looking at what EINSTEIN aims to protect, what it is capable of protecting, and how privacy considerations affect possible solutions, this Article suggests how to amend the EINSTEIN program to better protect critical infrastructure.
      title={Can It Really Work? Problems with Extending {EINSTEIN} 3 to Critical Infrastructure},
      author={Bellovin, Steven M and Bradner, Scott O and Diffie, Whitfield and Landau, Susan and Rexford, Jennifer},
      journal={Harvard National Security Journal},
    Steven M. Bellovin, Scott O. Bradner, Whitfield Diffie, Susan Landau, and Jennifer Rexford
    Harvard National Security Journal (Harvard NSJ, Volume 3, No. 1) January 2011
  • 2010

  • Putting BGP on the Right Path: A Case for Next-Hop Routing [PDF] [Abstract, BibTeX]
    BGP is plagued by many serious problems, ranging from protocol divergence and software bugs to misconfigurations and attacks. Rather than continuing to add mechanisms to an already complex protocol, or redesigning interdomain routing from scratch, we propose making BGP simpler. We argue that the AS-PATH, which lists the sequence of ASes that propagated the route, is the root of many of BGP's problems. We propose a transition from today's path-based routing to a solution where ASes select and export routes based only on neighboring ASes. We discuss the merits and limitations of next-hop routing. We argue that next-hop routing is sufficiently expressive to realize network operators' goals while side-stepping major problems with today's BGP. Specifically, we show that next-hop routing simplifies router implementation and configuration, reduces BGP's attack surface, makes it easier to support multipath routing, and provably achieves faster convergence and incentive compatibility. Our simulations show that next-hop routing significantly reduces the number of update messages and routing changes and is especially effective at preventing the most serious convergence problems.
      title={Putting BGP on the right path: A case for next-hop routing},
      author={Schapira, Michael and Zhu, Yaping and Rexford, Jennifer},
      booktitle={Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks},
    Michael Schapira, Yaping Zhu, and Jennifer Rexford
    Ninth ACM Workshop on Hot Topics in Networks (HotNets 2010) Monterey, CA, October 2010
  • SPORC: Group Collaboration Using Untrusted Cloud Resources [PDF] [Abstract, BibTeX]
    Cloud-based services are an attractive deployment model for user-facing applications like word processing and calendaring. Unlike desktop applications, cloud services allow multiple users to edit shared state concurrently and in real-time, while being scalable, highly available, and globally accessible. Unfortunately, these benefits come at the cost of fully trusting cloud providers with potentially sensitive and important data.

    To overcome this strict tradeoff, we present SPORC, a generic framework for building a wide variety of collaborative applications with untrusted servers. In SPORC, a server observes only encrypted data and cannot deviate from correct execution without being detected. SPORC allows concurrent, low-latency editing of shared state, permits disconnected operation, and supports dynamic access control even in the presence of concurrency. We demonstrate SPORC's flexibility through two prototype applications: a causally-consistent key-value store and a browser-based collaborative text editor.

    Conceptually, SPORC illustrates the complementary benefits of operational transformation (OT) and fork* consistency. The former allows SPORC clients to execute concurrent operations without locking and to resolve any resulting conflicts automatically. The latter prevents a misbehaving server from equivocating about the order of operations unless it is willing to fork clients into disjoint sets. Notably, unlike previous systems, SPORC can automatically recover from such malicious forks by leveraging OT's conflict resolution mechanism.
        title = "SPORC: Group Collaboration Using Untrusted Cloud Resources",
        author = "Ariel J. Feldman and William P. Zeller and Michael J. Freedman and Edward W. Felten",
        booktitle = "Proc.\ 9th {USENIX} {S}ymposium on {O}perating {S}ystems {D}esign and {I}mplementation",
        location = "Vancouver, BC",
        month = Oct,
        year = 2010,
    Ariel J. Feldman, William P. Zeller, Michael J. Freedman, and Edward W. Felten
    Proc. 9th Symposium on Operating Systems Design and Implementation (OSDI '10) Vancouver, BC, Canada, October 2010
  • How Secure are Secure Interdomain Routing Protocols? [PDF] [Abstract, BibTeX]
    In response to high-profile Internet outages, BGP security variants have been proposed to prevent the propagation of bogus routing information. To inform discussions of which variant should be deployed in the Internet, we quantify the ability of the main protocols (origin authentication, soBGP, S-BGP, and data-plane verification) to blunt traffic-attraction attacks; i.e., an attacker that deliberately attracts traffic to drop, tamper, or eavesdrop on packets.

    Intuition suggests that an attacker can maximize the traffic he attracts by widely announcing a short path that is not flagged as bogus by the secure protocol. Through simulations on an empirically-determined AS-level topology, we show that this strategy is surprisingly effective, even when the network uses an advanced security solution like S-BGP or data-plane verification. Worse yet, we show that these results underestimate the severity of attacks. We prove that finding the most damaging strategy is NP-hard, and show how counterintuitive strategies, like announcing longer paths, announcing to fewer neighbors, or triggering BGP loop-detection, can be used to attract even more traffic than the strategy above. These counterintuitive examples are not merely hypothetical; we searched the empirical AS topology to identify specific ASes that can launch them. Finally, we find that a clever export policy can often attract almost as much traffic as a bogus path announcement. Thus, our work implies that mechanisms that police export policies (e.g., defensive filtering) are crucial, even if S-BGP is fully deployed.
      title={How secure are secure interdomain routing protocols?},
      author={Goldberg, Sharon and Schapira, Michael and Hummon, Peter and Rexford, Jennifer},
      booktitle={ACM SIGCOMM Computer Communication Review},
    Sharon Goldberg, Michael Schapira, Peter Hummon, and Jennifer Rexford
    2010 Annual Conference of the ACM Special Interest Group on Data Communication (SIGCOMM 2010) New Delhi, India, August 2010
  • SVC: Selector-Based View Composition for Web Frameworks [PDF] [Abstract, BibTeX]
    We present Selector-based View Composition (SVC), a new programming style for web application development. Using SVC, a developer defines a web page as a series of transformations on an initial state. Each transformation consists of a selector (used to select parts of the page) and an action (used to modify content matched by the selector). SVC applies these transformations on either the client or the server to generate the complete web page.

    Developers gain two advantages by programming in this style. First, SVC can automatically add Ajax support to sites, allowing a developer to write interactive web applications without writing any JavaScript. Second, the developer can reason about the structure of the page and write code to exploit that structure, increasing the power and reducing the complexity of code that manipulates the page's content.

    We introduce SVC as a stateless, framework-agnostic development style. We also describe the design, implementation and evaluation of a prototype, consisting of a PHP extension using the WebKit browser engine and a plugin to the popular PHP MVC framework Code Igniter. To illustrate the general usefulness of SVC, we describe the implementation of three example applications consisting of common Ajax patterns. Finally, we describe the implementation of three post-processing filters and compare them to currently existing solutions.
    @inproceedings{ zeller2010svc,
        title = "SVC: Selector-Based View Composition for Web Frameworks",
        author = "William P. Zeller and Edward W. Felten",
        booktitle = "Proc. {USENIX} {C}onference on {W}eb {A}pplication {D}evelopment",
        location = "Boston, MA",
        month = Jun,
        year = 2010,
    William P. Zeller and Edward W. Felten
    Proc. USENIX Conference on Web Application Development, 2010 (WebApps '10) Boston, MA, June 2010
  • Prophecy: Using History for High-Throughput Fault Tolerance
    Siddhartha Sen, Wyatt Lloyd, and Michael J. Freedman
    Proc. 7th USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI '10) San Jose, CA, April 2010. To appear.
  • Formal Verification of Coalescing Graph-Coloring Register Allocation [PDF] [Abstract]
    Iterated Register Coalescing (IRC) is a widely used heuristic for performing register allocation via graph coloring. Many implementations in existing compilers follow (more or less faithfully) the imperative algorithm published in 1996. Several mistakes have been found in some of these implementations.

    In this paper, we present a formal verification (in Coq) of the whole IRC algorithm. We detail a specification that can be used as a reference for IRC. We also define the theory of register-interference graphs; we implement a purely functional version of the IRC algorithm, and we prove the total correctness of our implementation. The automatic extraction of our IRC algorithm into Caml yields a program with competitive performance. This work has been integrated into the CompCert verified compiler.
    Sandrine Blazy, Benoit Robillard, and Andrew W. Appel
    In ESOP 2010: 19th European Symposium on Programming, March 2010
  • Faulty Logic: Reasoning about Fault Tolerant Programs [PDF] [Abstract]
    Transient faults are single-shot hardware errors caused by high energy particles from space, manufacturing defects, overheating, and other sources. Such faults can be devastating for security- and safety-critical systems. In order to mitigate these problems, software developers can add redundancy in various ways to their software systems. However, such redundancy is hard to reason about and corner cases are easy to miss, leaving these systems vulnerable. To solve this problem, we have developed a logic, based on Separation Logic, for reasoning about faults as resources. We show how to use this logic as a language of assertions and incorporate it into a Hoare Logic for verifying imperative programs. This Hoare Logic is parameterized by a formal fault model and it can be used to prove imperative programs correct with respect to that model. In addition to developing this basic verification platform, we have designed a modal operator that abstracts away the effects of individual faults, enabling modularization of proofs and greatly simplifying the reasoning involved. The logic is proved sound and studied through a number of examples, including a simplified version of the RSA Sign/Verify algorithm.

    Matthew L. Meola and David Walker
    In ESOP 2010: 19th European Symposium on Programming, March 2010
  • Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs [PDF] [Abstract, BibTeX]
    Researchers at the University of Washington recently proposed Vanish, a system for creating messages that automatically "self-destruct" after a period of time. Vanish works by encrypting each message with a random key and storing shares of the key in a large, public distributed hash table (DHT). Normally, DHTs expunge data older than a certain age. After they expire, the key is permanently lost, and the encrypted data is permanently unreadable. Vanish is an interesting approach to an important privacy problem, but, in its current form, it is insecure. In this paper, we defeat the deployed Vanish implementation, explain how the original paper's security analysis is flawed, and draw lessons for future system designs. We present two Sybil attacks against the current Vanish implementation, which stores its encryption keys in the million-node Vuze BitTorrent DHT. These attacks work by continuously crawling the DHT and saving each stored value before it ages out. They can efficiently recover keys for more than 99% of Vanish messages. We show that the dominant cost of these attacks is network data transfer, not memory usage as the Vanish authors expected, and that the total cost is two orders of magnitude less than they estimated. While we consider potential defenses, we conclude that public DHTs like Vuze probably cannot provide strong security for Vanish.
        title = "Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs",
        author = "Scott Wolchok and Owen S. Hofmann and Nadia Heninger and Edward W. Felten and J. Alex Halderman and Christopher J. Rossbach and Brent Waters and Emmett Witchel",
        booktitle = "Proc.\ 17th {N}etwork and {D}istributed {S}ystem {S}ecurity {S}ymposium",
        location = "San Diego, CA",
        month = Feb,
        year = 2010,
    Scott Wolchok, Owen S. Hofmann, Nadia Heninger, Edward W. Felten, J. Alex Halderman, Christopher J. Rossbach, Brent Waters, and Emmett Witchel
    Proc. 17th Network and Distributed System Security Symposium (NDSS '10), San Diego, CA, February 2010
  • A Survey of BGP Security Issues and Solutions [PDF] [Abstract, BibTeX]
    As the Internet's de facto interdomain routing protocol, the Border Gateway Protocol (BGP) is the glue that holds the disparate parts of the Internet together. A major limitation of BGP is its failure to adequately address security. Recent high-profile outages and security analyses clearly indicate that the Internet routing infrastructure is highly vulnerable. Moreover, the design of BGP and the ubiquity of its deployment have frustrated past efforts at securing interdomain routing. This paper considers the current vulnerabilities of the interdomain routing system and surveys both research and standardization efforts relating to BGP security. We explore the limitations and advantages of proposed security extensions to BGP, and explain why no solution has yet struck an adequate balance between comprehensive security and deployment cost.
      title={A survey of BGP security issues and solutions},
      author={Butler, Kevin and Farley, Toni R and McDaniel, Patrick and Rexford, Jennifer},
      journal={Proceedings of the IEEE},
    Kevin Butler, Toni R. Farley, Patrick McDaniel, and Jennifer Rexford
    Proceedings of the IEEE (Proc. IEEE, Volume 98, No. 1.) January 2010
  • A Theory of Indirection via Approximation [PDF] [Abstract]
    Building semantic models that account for various kinds of indirect reference has traditionally been a difficult problem. Indirect reference can appear in many guises, such as heap pointers, higher-order functions, object references, and shared-memory mutexes.

    We give a general method to construct models containing indirect reference by presenting a "theory of indirection". Our method can be applied in a wide variety of settings and uses only simple, elementary mathematics. In addition to various forms of indirect reference, the resulting models support powerful features such as impredicative quantification and equirecursion; moreover they are compatible with the kind of powerful substructural accounting required to model (higher-order) separation logic. In contrast to previous work, our model is easy to apply to new settings and has a simple axiomatization, which is complete in the sense that all models of it are isomorphic. Our proofs are machine-checked in Coq.
    Aquinas Hobor, Robert Dockins, and Andrew W. Appel
    In POPL 2010: The 37th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2010
  • 2009

  • A Fresh Look at Separation Algebras and Share Accounting [PDF] [Abstract]
    Separation Algebras serve as models of Separation Logics; Share Accounting allows reasoning about concurrent-read/exclusive-write resources in Separation Logic. In designing a Concurrent Separation Logic and in mechanizing proofs of its soundness, we found previous axiomatizations of separation algebras and previous systems of share accounting to be useful but flawed. We adjust the axioms of separation algebras; we demonstrate an operator calculus for constructing new separation algebras; we present a more powerful system of share accounting with a new, simple model; and we provide a reusable Coq development.
    Robert Dockins, Aquinas Hobor, and Andrew W. Appel
    To appear in Seventh Asian Symposium on Programming Languages and Systems (APLAS 2009), December 2009
  • Collaborative, Privacy-Preserving Data Aggregation at Scale [PDF] [Abstract]
    Combining and analyzing data collected at multiple administrative locations is critical for a wide variety of applications, such as detecting malicious attacks or computing an accurate estimate of the popularity of Web sites. However, legitimate concerns about privacy often inhibit participation in collaborative data aggregation. In this paper, we design, implement, and evaluate a practical solution for privacy-preserving data aggregation (PDA) among a large number of participants. Scalability and efficiency is achieved through a "semi-centralized" architecture that divides responsibility between a proxy that obliviously blinds the client inputs and a database that aggregates values by (blinded) keywords and identifies those keywords whose values satisfy some evaluation function. Our solution leverages a novel cryptographic protocol that provably protects the privacy of both the participants and the keywords, provided that proxy and database do not collude, even if both parties may be individually malicious.

    We implemented a prototype of our design, including an amortized oblivious transfer protocol that substantially improves the efficiency of client-proxy interactions. Our experiments show that the performance of our system scales linearly with computing resources, making it easy to improve performance by adding more cores or machines. For collaborative diagnosis of denialof- service attacks, our system can handle millions of suspect IP addresses per hour when the proxy and the database each run on two quad-core machines.

    Benny Applebaum, Michael Freedman, Haakon Ringberg, Matthew Caesar, and Jennifer Rexford
    In submission, November 2009
  • The New Jersey Voting-machine Lawsuit and the AVC Advantage DRE Voting Machine [PDF] [Abstract]
    As a result of a public-interest lawsuit, by Court order we were able to study, for one month, the hardware and source code of the Sequoia AVC Advantage direct-recording electronic voting machine, which is used throughout New Jersey (and Louisiana), and the Court has permitted us to publicly describe almost everything that we were able to learn. In short, these machines are vulnerable to a wide variety of attacks on the voting process. It would not be in the slightest difficult for a moderately determined group or individual to mount a vote-stealing attack that would be successful and undetectable.
    Andrew W. Appel, Maia Ginsburg, Harri Hursti, Brian W. Kernighan, Christopher D. Richards, Gang Tan, and Penny Venetis
    EVT/WOTE'09, Electronic Voting Technology Workshop / Workshop on Trustworthy Elections, August 2009
  • Some Consequences of Paper Fingerprinting for Elections [PDF] [Abstract, BibTeX]
    Recent research has demonstrated that individual pieces of paper can be fingerprinted and re-identified later at low cost, using commodity scanners. We consider the consequences of this fact for electronic voting. The most obvious consequence is negative: the ability to fingerprint paper endangers the secrecy of ballots in any system that keeps paper records of individual ballots, including standard optical scan and DRE-VVPAT systems. We characterize the resulting risks and discuss when and how they can be mitigated. Less obviously, the ability to fingerprint paper can also have positive consequences, by enabling certain new kinds of post-election audit procedures, both to compare electronic records to the corresponding paper records and to detect the use of forged ballot stock. Paper re-identification presents new challenges for election officials, but careful consideration of its implications now may allow us to preserve ballot secrecy and strengthen election integrity.
        author = {Joseph A. Calandrino and William Clarkson and Edward W. Felten},
        title = {Some Consequences of Paper Fingerprinting for Elections},
        year = {2009},
        booktitle = {Proceedings of EVT/WOTE 2009},
        editor = {David Jefferson and Joseph Lorenzo Hall and Tal Moran},
        organization = {USENIX/ACCURATE/IAVoSS},
        month = aug
    Joe Calandrino, William Clarkson, and Ed Felten
    EVT/WOTE '09, Electronic Voting Technology Workshop / Workshop on Trustworthy Elections (EVT/WOTE 2009) Montreal, Canada, August 2009
  • Rethinking Enterprise Network Control [PDF, IEEE] [Abstract, BibTeX]
    This paper presents Ethane, a new network architecture for the enterprise. Ethane allows managers to define a single network-wide fine-grain policy and then enforces it directly. Ethane couples extremely simple flow-based Ethernet switches with a centralized controller that manages the admittance and routing of flows. While radical, this design is backwards-compatible with existing hosts and switches. We have implemented Ethane in both hardware and software, supporting both wired and wireless hosts. We also show that it is compatible with existing high-fanout switches by porting it to popular commodity switching chipsets. We have deployed and managed two operational Ethane networks, one in the Stanford University Computer Science Department supporting over 300 hosts, and another within a small business of 30 hosts. Our deployment experiences have significantly affected Ethane's design.
        title={Rethinking Enterprise Network Control},
        author={Casado, M. and Freedman, M.J. and Pettit, J. and Jianying
                Luo and Gude, N. and McKeown, N. and Shenker, S.},
        journal={Networking, IEEE/ACM Transactions on},
        month={Aug. },
        keywords={local area networks, switches, telecommunication control, 
                  telecommunication network management, telecommunication 
                  network reliability, telecommunication network 
                  routing, Ethane, centralized controller, commodity switching
                   chipsets, enterprise network control, fine-grain policy, 
                   flow admittance, flow routing, flow-based Ethernet 
                   switches, high-fanout switches, network architecture, 
                   network management, network security, single 
                   network-wide policy},
        ISSN={1063-6692}, }
    Martin Casado, Michael J. Freedman, Justin Pettit, Jianying Luo, Natasha Gude, Nick McKeown, and Scott Shenker
    IEEE/ACM Transactions on Networking Vol 17, Num 4, August 2009. Pages 1270--1283.
  • On Subliminal Channels in Encrypt-on-Cast Voting Systems [PDF] [Abstract, BibTeX]
    Ballot secrecy, while essential, is difficult to achieve with any voting system, cryptographic or otherwise. Moreover, the majority of cryptographic voting systems introduce new ballot secrecy problems. In encrypt-on-cast voting systems, like that of Benaloh [1, 2], a malicious voting machine can use the encrypted votes that it posts to the public bulletin board as a subliminal channel to convey information about voters' choices to a coercer. Although it was known that a machine could manipulate the randomness used to encrypt the votes to leak information [14], we show that this threat is more severe than previously recognized and that existing mitigations may be ineffective. A compromised machine may only need to leak a few bits and modify only a handful of ballots in order to coerce most of the voters in a polling place. In light of this threat, we propose an extension to the Benaloh scheme that allows anyone to verify that every ciphertext on the bulletin board uses the right randomness. Finally, we show that even without manipulating the randomness, a machine can still use the ciphertexts to leak a small, but potentially dangerous, number of bits by strategically flipping a few votes. Overall, we show that while subliminal channels in encrypt-on-cast voting systems can be partially mitigated, they cannot yet be eliminated completely.

      author =       {Ariel J. Feldman and Josh Benaloh},
      title =        {On Subliminal Channels in Encrypt-on-Cast Voting Systems},
      booktitle =    {Proceedings of EVT/WOTE 2009},
      year =         2009,
      editor =       {David Jefferson and Joseph Lorenzo Hall and Tal Moran},
      month =        aug,
      organization = {USENIX/ACCURATE/IAVoSS}
    Ariel J. Feldman and Josh Benaloh
    Proceedings of the USENIX/ACCURATE/IAVoSS Electronic Voting Technology Workshop, August 2009
  • Can DREs Provide Long-Lasting Security? The Case of Return-Oriented Programming and the AVC Advantage [PDF] [Abstract, BibTeX]
    A secure voting machine design must withstand new attacks devised throughout its multi-decade service lifetime. In this paper, we give a case study of the longterm security of a voting machine, the Sequoia AVC Advantage, whose design dates back to the early 80s. The AVC Advantage was designed with promising security features: its software is stored entirely in read-only memory and the hardware refuses to execute instructions fetched from RAM. Nevertheless, we demonstrate that an attacker can induce the AVC Advantage to misbehave in arbitrary ways—including changing the outcome of an election—by means of a memory cartridge containing a specially-formatted payload. Our attack makes essential use of a recently-invented exploitation technique called return-oriented programming, adapted here to the Z80 processor. In return-oriented programming, short snippets of benign code already present in the system are combined to yield malicious behavior. Our results demonstrate the relevance of recent ideas from systems security to voting machine research, and vice versa. We had no access either to source code or documentation beyond that available on Sequoia's web site. We have created a complete vote-stealing demonstration exploit and verified that it works correctly on the actual hardware.

      author =       {Stephen Checkoway and Ariel J. Feldman and Brian
                      Kantor and J. Alex Halderman and Edward W. Felten
                      and Hovav Shacham},
      title =        {Can {DREs} Provide Long-Lasting Security?  {The} Case
                      of Return-Oriented Programming and the {AVC}
                      Advantage},
      booktitle =    {Proceedings of EVT/WOTE 2009},
      year =         2009,
      editor =       {David Jefferson and Joseph Lorenzo Hall and Tal Moran},
      month =        aug,
      organization = {USENIX/ACCURATE/IAVoSS}
    Stephen Checkoway, Ariel J. Feldman, Brian Kantor, J. Alex Halderman, Edward W. Felten, and Hovav Shacham
    Proceedings of the USENIX/ACCURATE/IAVoSS Electronic Voting Technology Workshop, August 2009
  • Fingerprinting Blank Paper Using Commodity Scanners [PDF] [Abstract, BibTeX]
    This paper presents a novel technique for authenticating physical documents based on random, naturally occurring imperfections in paper texture. We introduce a new method for measuring the three-dimensional surface of a page using only a commodity scanner and without modifying the document in any way. From this physical feature, we generate a concise fingerprint that uniquely identifies the document. Our technique is secure against counterfeiting and robust to harsh handling; it can be used even before any content is printed on a page. It has a wide range of applications, including detecting forged currency and tickets, authenticating passports, and halting counterfeit goods. Document identification could also be applied maliciously to de-anonymize printed surveys and to compromise the secrecy of paper ballots.
        author = {Clarkson, William and Weyrich, Tim and Finkelstein, 
                  Adam and Heninger, Nadia and Halderman, J. Alex and
                  Felten, Edward W.},
        title = {Fingerprinting Blank Paper Using Commodity Scanners},
        booktitle = {SP '09: Proceedings of the 2009 30th IEEE Symposium
                     on Security and Privacy},
        year = {2009},
        isbn = {978-0-7695-3633-0},
        pages = {301--314},
        doi = {},
        publisher = {IEEE Computer Society},
        address = {Washington, DC, USA},
    William Clarkson, Tim Weyrich, Adam Finkelstein, Nadia Heninger, J. Alex Halderman, and Edward W. Felten
    Proc. 30th IEEE Symposium on Security and Privacy (Oakland '09), Oakland, CA, May 2009.
  • Composing Expressive Run-time Security Policies
    Lujo Bauer, Jay Ligatti, and David Walker
    ACM Transactions on Software Engineering and Methodology, Volume 18, Issue 3, pp 9:1-9:43, May 2009
  • NetReview: Detecting when interdomain routing goes wrong [PDF, Andreas Haeberlen's slides] [Abstract]
    Despite many attempts to fix it, the Internet's interdomain routing system remains vulnerable to configuration errors, buggy software, flaky equipment, protocol oscillation, and intentional attacks. Unlike most existing solutions that prevent specific routing problems, our approach is to detect problems automatically and to identify the offending party. Fault detection is effective for a larger class of faults than fault prevention and is easier to deploy incrementally.

    To show that fault detection is useful and practical, we present NetReview, a fault detection system for the Border Gateway Protocol (BGP). NetReview records BGP routing messages in a tamper-evident log, and it enables ISPs to check each other's logs against a high-level description of the expected behavior, such as a peering agreement or a set of best practices. At the same time, NetReview respects the ISPs' privacy and allows them to protect sensitive information. We have implemented and evaluated a prototype of NetReview; our results show that NetReview catches common Internet routing problems, and that its resource requirements are modest.
    Andreas Haeberlen, Ioannis Avramopoulos, Jennifer Rexford, and Peter Druschel
    Proc. Networked Systems Design and Implementation, April 2009
  • Collaborative, Privacy-Preserving Data Aggregation at Scale [PDF, ePrint] [Abstract, BibTeX]
    Combining and analyzing data collected at multiple locations is critical for a wide variety of applications, such as detecting and diagnosing malicious attacks or computing an accurate estimate of the popularity of Web sites. However, legitimate concerns about privacy often inhibit participation in collaborative data-analysis systems. In this paper, we design, implement, and evaluate a practical solution for privacy-preserving collaboration among a large number of participants. Scalability is achieved through a "semi-centralized" architecture that divides responsibility between a proxy that obliviously blinds the client inputs and a database that identifies the (blinded) keywords that have values satisfying some evaluation function.

    Our solution leverages a novel cryptographic protocol that provably protects the privacy of both the participants and the keywords. For example, if web servers collaborate to detect source IP addresses responsible for denial-of-service attacks, our protocol would not reveal the traffic mix of the Web servers or the identity of the "good" IP addresses. We implemented a prototype of our design, including an amortized oblivious transfer protocol that substantially improves the efficiency of client-proxy interactions. Our experiments show that the performance of our system scales linearly with computing resources, making it easy to improve performance by adding more cores or machines. For collaborative diagnosis of denial-of-service attacks, our system can handle millions of suspect IP addresses per hour when the proxy and the database each run on two quad-core machines.
        author = {Haakon Ringberg and Benny Applebaum and 
                  Michael J. Freedman and Matthew Caesar and 
                  Jennifer Rexford},
        title = {Collaborative, Privacy-Preserving Data Aggregation at Scale},
        howpublished = {Cryptology ePrint Archive, Report 2009/180},
        year = {2009},
        note = {\url{}},
    Haakon Ringberg, Benny Applebaum, Michael J. Freedman, Matthew Caesar, and Jennifer Rexford
    Cryptology ePrint Archive: Report 2009/180, April, 2009
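    The semi-centralized architecture described in the abstract can be illustrated with a short sketch: a proxy obliviously blinds each client keyword with a keyed PRF, and the database counts the blinded tokens and flags those meeting a threshold. This is a simplified stand-in, not the paper's protocol (HMAC replaces the cryptographic blinding, and the function names are invented for illustration):

```python
import hashlib
import hmac
from collections import Counter

def blind(proxy_key: bytes, keyword: str) -> str:
    """Proxy side: a keyed PRF maps each keyword to an opaque token,
    so the database can count equal keywords without learning them."""
    return hmac.new(proxy_key, keyword.encode(), hashlib.sha256).hexdigest()

def heavy_hitters(blinded_reports: list[str], threshold: int) -> set[str]:
    """Database side: flag blinded keywords whose report count satisfies
    the evaluation function (here, a simple threshold)."""
    counts = Counter(blinded_reports)
    return {token for token, c in counts.items() if c >= threshold}
```

    For collaborative DoS diagnosis, each participant would report the blinded suspect IP addresses it sees; only tokens crossing the threshold are revealed, and a participant can recognize a flagged token by re-blinding its own inputs.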
  • Bringing P2P to the Web: Security and Privacy in the Firecoral Network [PDF] [Abstract]
    Peer-to-peer systems have been a disruptive technology for enabling large-scale Internet content distribution. Yet web browsers, today's dominant application platform, seem inherently based on the client/server communication model. This paper presents the design of Firecoral, a browser-based extension platform that enables the peer-to-peer exchange of web content in a secure, flexible manner. Firecoral provides a highly-configurable interface through which users can enforce privacy preferences by carefully specifying which content they will share, and a security model that guarantees content integrity even in the face of untrusted peers. The Firecoral protocol is backwards compatible with today's web standards, integrates easily with existing web servers, and is designed not to interfere with a typical browsing experience and publishing ecosystem.
    Jeff Terrace, Harold Laidlaw, Hao Eric Liu, Sean Stern, and Michael J. Freedman
    Proc. 8th International Workshop on Peer-to-Peer Systems (IPTPS '09) Boston, MA, April 2009.
  • Incrementally-Deployable Security for Interdomain Routing [PDF] [Abstract]
    The Internet's interdomain-routing system is extremely vulnerable to accidental failure, configuration errors, and malicious attack. Any successful approach to improving interdomain-routing security must satisfy two requirements for incremental deployability: backwards compatibility with the existing routing protocol and installed base of routers and incentive compatibility with the desire of each domain to improve its part of the routing system even if other domains have not taken similar steps. We propose an incrementally deployable approach based on a Routing Control Platform (RCP) that makes routing decisions on behalf of the routers in a domain, without requiring changes to the routers or protocols. The RCP runs anomaly-detection algorithms that identify, and avoid, suspicious routes, allowing a domain (or a small group of cooperating domains) to significantly improve interdomain routing security.
    Jennifer Rexford and Joan Feigenbaum
    Extended abstract, Proc. Cybersecurity Applications and Technologies for Homeland Security, March 2009
  • The Uniform Hardcore Lemma via Approximate Bregman Projections [PDF]
    Boaz Barak, Moritz Hardt, and Satyen Kale
    Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2009.
  • Strong Parallel Repetition Theorem for Free Projection Games [PDF]
    Boaz Barak, Anup Rao, Ran Raz, Ricky Rosen, and Ronen Shaltiel
    Proc. RANDOM 2009.
  • Run-time Enforcement of Nonsafety Policies
    Jay Ligatti, Lujo Bauer, and David Walker
    ACM Transactions on Information and System Security, Volume 12, Issue 3, pp 1-41, January 2009
  • Merkle Puzzles are Optimal - an O(n^2) attack on key exchange from a random oracle [PDF]
    Boaz Barak and Mohammad Mahmoody
    Proc. CRYPTO '09.
  • Bounded Key-Dependent Message Security [PDF]
    Boaz Barak, Iftach Haitner, Dennis Hofheinz, and Yuval Ishai
    Submitted, 2009.
  • 2008

  • From Optimization to Regret Minimization and Back Again [HTML] [Abstract]
    Internet routing is mostly based on static information--its dynamicity is limited to reacting to changes in topology. Adaptive performance-based routing decisions would not only improve the performance itself of the Internet but also its security and availability. However, previous approaches for making Internet routing adaptive based on optimizing network-wide objectives are not suited for an environment in which autonomous and possibly malicious entities interact.

    In this paper, we propose a different framework for adaptive routing decisions based on regret-minimizing online learning algorithms. These algorithms, as applied to routing, are appealing because adopters can independently improve their own performance while being robust to adversarial behavior. However, in contrast to approaches based on optimization theory that provide guarantees from the outset about network-wide behavior, the network-wide behavior if online learning algorithms were to interact with each other is less understood. In this paper, we study this interaction in a realistic Internet environment, and find that the outcome is a stable state and that the optimality gap with respect to the network-wide optimum is small. Our findings suggest that online learning may be a suitable framework for adaptive routing decisions in the Internet.
    Ioannis Avramopoulos, Jennifer Rexford, and Robert Schapire
    Proc. Workshop on Tackling Computer Systems Problems with Machine Learning Techniques, December, 2008
  • Insecurities and Inaccuracies of the Sequoia AVC Advantage 9.00H DRE Voting Machine [PDF]
    Andrew W. Appel, Maia Ginsburg, Harri Hursti, Brian W. Kernighan, Christopher D. Richards, and Gang Tan
    October 2008
  • Autonomous Security for Autonomous Systems [PDF]
    Josh Karlin, Stephanie Forrest, and Jennifer Rexford
    Computer Networks, special issue on Complex Computer and Communications Networks, October, 2008
  • Semantic foundations for typed assembly languages [PDF]
    A. Ahmed, A. W. Appel, C. D. Richards, K. Swadi, G. Tan, and D. C. Wang
    ACM Transactions on Programming Languages and Systems, September 2008. This PDF file doesn't display well in the Acrobat web-browser plug-in but works if you explicitly download it first.
  • Foundational High-level Static Analysis [PDF]
  • Lest We Remember: Cold Boot Attacks on Encryption Keys [PDF, Website, Communications of the ACM] [Abstract, BibTeX]
    Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them.
    @inproceedings{ coldboot:sec08,
        title = "Lest We Remember: {C}old Boot Attacks on Encryption Keys",
        author = "J. Alex Halderman and Seth D. Schoen and Nadia 
                  Heninger and William Clarkson and William Paul and 
                  Joseph A. Calandrino and Ariel J. Feldman and Jacob 
                  Appelbaum and Edward W. Felten",
        booktitle = "Proc.\ 17th {USENIX} {S}ecurity {S}ymposium",
        location = "San Jose, CA",
        month = Jul,
        year = 2008,
        pages = {45--60}
    J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul, Joseph A. Calandrino, Ariel J. Feldman, Jacob Appelbaum, and Edward W. Felten
    Proc. 17th USENIX Security Symposium (Sec '08), San Jose, CA, July 2008 Best Student Paper
    Reprinted in Communications of the ACM, May 2009
  • Path-Quality Monitoring in the Presence of Adversaries [PDF, Full Version, Sharon's slides] [Abstract, BibTeX]
    Edge networks connected to the Internet need effective monitoring techniques to drive routing decisions and detect violations of Service Level Agreements (SLAs). However, existing measurement tools, like ping, traceroute, and trajectory sampling, are vulnerable to attacks that can make a path look better than it really is. In this paper, we design and analyze path-quality monitoring protocols that reliably raise an alarm when the packet-loss rate and delay exceed a threshold, even when an adversary tries to bias monitoring results by selectively delaying, dropping, modifying, injecting, or preferentially treating packets.

    Despite the strong threat model we consider in this paper, our protocols are efficient enough to run at line rate on high-speed routers. We present a secure sketching protocol for identifying when packet loss and delay degrade beyond a threshold. This protocol is extremely lightweight, requiring only 250–600 bytes of storage and periodic transmission of a comparably sized IP packet to monitor billions of packets. We also present secure sampling protocols that provide faster feedback and accurate round-trip delay estimates, at the expense of somewhat higher storage and communication costs. We prove that all our protocols satisfy a precise definition of secure path-quality monitoring and derive analytic expressions for the trade-off between statistical accuracy and system overhead. We also compare how our protocols perform in the client-server setting, when paths are asymmetric, and when packet marking is not permitted.
      TITLE = {Path-Quality Monitoring in the Presence of Adversaries},
      AUTHOR = {Sharon Goldberg and David Xiao and Eran Tromer and Boaz Barak and Jennifer Rexford},
      BOOKTITLE = {Proceedings of SIGMETRICS 2008},
      YEAR = {2008},
      PDF = {}
    Sharon Goldberg, David Xiao, Eran Tromer, Boaz Barak, and Jennifer Rexford
    Proc. ACM SIGMETRICS, June 2008
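    The secure sketching idea in the abstract can be illustrated with a keyed second-moment (AMS-style) sketch: sender and receiver each fold the packets they observe into a small counter vector, and the squared norm of the difference vector estimates how many packets went missing. Keying the sign function with a shared secret keeps an adversary from injecting packets that cancel out the damage. This is a minimal illustration under simplifying assumptions, not the paper's exact protocol (the class and parameter names are invented):

```python
import hashlib
import hmac
import statistics

class PathSketch:
    """Keyed second-moment sketch over a packet stream (illustrative)."""

    def __init__(self, key: bytes, rows: int = 128):
        self.key = key
        self.counters = [0] * rows

    def _sign(self, row: int, packet: bytes) -> int:
        # Secret-keyed +/-1 hash: unpredictable to anyone without the key.
        mac = hmac.new(self.key, row.to_bytes(4, "big") + packet, hashlib.sha256)
        return 1 if mac.digest()[0] & 1 else -1

    def add(self, packet: bytes) -> None:
        for i in range(len(self.counters)):
            self.counters[i] += self._sign(i, packet)

    def estimate_loss(self, other: "PathSketch") -> float:
        # Mean squared difference across rows estimates the number of
        # (distinct) packets present in one stream but not the other.
        diffs = [a - b for a, b in zip(self.counters, other.counters)]
        return statistics.mean(d * d for d in diffs)
```

    Exchanging and comparing the two small counter vectors periodically is what keeps the storage and communication overhead to a few hundred bytes per monitored path.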
  • Multimodal Separation Logic for Reasoning About Operational Semantics [PDF]
    Robert Dockins, Andrew W. Appel, and Aquinas Hobor
    In Twenty-fourth Conference on the Mathematical Foundations of Programming Semantics, May 2008
    Electronic Notes in Theoretical Computer Science, vol. 218, 22 October 2008, pp. 5-20
  • In Defense of Pseudorandom Sample Selection [PDF] [Abstract, BibTeX]
    Generation of random numbers is a critical component of existing post-election auditing techniques. Recent work has largely discouraged the use of all pseudo-random number generators, including cryptographically secure pseudorandom number generators (CSPRNGs), for this purpose, instead recommending the sole use of observable physical techniques. In particular, simple dice rolling has received a great deal of positive attention [4, 6, 9]. The typical justification for this recommendation is that those less comfortable with mathematics prefer a simple, observable technique. This paper takes a contrary view. Simple, observable techniques like dice rolling are not necessarily robust against sleight of hand and other forms of fraud, and attempts to harden them against fraud can dramatically increase their complexity. With simple dice rolling, we know of no techniques that provide citizens with a reasonable means of verifying that fraud did not occur during the roll process. CSPRNGs, used properly, can be simple, robust, and verifiable, and they allow for the use of auditing techniques that might otherwise be impractical. While we understand initial skepticism towards this option, we argue that appropriate use of CSPRNGs would strengthen audit security.
        author = {Calandrino, Joseph A. and Halderman, J. Alex and Felten, Edward W.},
        title = {In defense of pseudorandom sample selection},
        booktitle = {EVT'08: Proceedings of the conference on Electronic voting technology},
        year = {2008},
        pages = {1--7},
        location = {San Jose, CA},
        publisher = {USENIX Association},
        address = {Berkeley, CA, USA},
    Joseph A. Calandrino, J. Alex Halderman, and Edward W. Felten
    Proceedings of the Conference on Electronic Voting Technology (EVT'08), May 2008
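    The verifiability argument in the abstract rests on a simple property: a sample drawn from a CSPRNG seeded with a public value can be re-derived by any observer. A minimal sketch of such a selection procedure (illustrative only, using SHA-256 in counter mode as the CSPRNG; the function name and parameters are invented, not the authors' construction):

```python
import hashlib

def audit_sample(seed: bytes, num_ballots: int, sample_size: int) -> list[int]:
    """Deterministically select distinct ballots to audit from a public seed.

    Anyone who knows the seed can re-run this and confirm which ballots
    were chosen -- the verifiability property the paper argues for.
    """
    chosen: list[int] = []
    remaining = set(range(num_ballots))
    counter = 0
    while len(chosen) < sample_size:
        digest = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
        # Modulo bias is negligible for a 256-bit digest.
        idx = int.from_bytes(digest, "big") % num_ballots
        if idx in remaining:
            remaining.remove(idx)
            chosen.append(idx)
    return sorted(chosen)
```

    In practice the seed itself would be generated by a public ceremony (e.g., a single observed dice roll), after which the expansion into an arbitrarily large sample is purely deterministic and auditable.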
  • Oracle Semantics for Concurrent Separation Logic [PDF]
    Aquinas Hobor, Andrew W. Appel, and Francesco Zappa Nardelli
    European Symposium on Programming (ESOP), April 2008 (extended version with appendix)
  • Public Key Cryptography from Different Assumptions [PDF]
    Benny Applebaum, Boaz Barak, and Avi Wigderson
  • Protocols and Lower Bounds for Failure Localization in the Internet [PDF] [BibTeX]
      AUTHOR = {Boaz Barak and Sharon Goldberg and David Xiao},
      TITLE = {Protocols and Lower Bounds for Failure Localization in the Internet},
      BOOKTITLE = {Proceedings of Eurocrypt 2008},
      YEAR = {2008},
      PDF = {}
    Boaz Barak, Sharon Goldberg, and David Xiao
    Proc. Eurocrypt 2008.
  • On Basing Lower-Bounds for Learning on Worst-Case Assumptions [PDF]
    Benny Applebaum, Boaz Barak, and David Xiao
    Proc. 49th FOCS, 2008.
  • 2007

  • How Small Groups Can Secure Interdomain Routing [PDF] [Abstract, BibTeX]
    Although the Internet's routing system has serious security vulnerabilities, none of the existing proposals for a secure variant of BGP has been successfully deployed in practice. This is not surprising since deploying protocols that require the cooperation of tens of thousands of independently-operated networks is problematic. Instead, we argue that small groups should be the basis for securing BGP and we offer an alternative design in which interdomain routing is secured by a few (e.g., 5--10) participating ASes. We conduct extensive simulations on a realistic Internet topology to identify conditions for small groups to be effective. Even though the non-participants outnumber the group members by several orders of magnitude, the participants can achieve remarkable security gains by filtering compromised interdomain routes, cooperating to expose additional path diversity, inducing non-participants to select valid routes, and enlisting a few large ISPs to participate. We also propose two novel mechanisms that the group members can employ to achieve these goals, namely secure overlay routing and the cooperative announcement of each other's address space. Our experiments show that combining these two techniques allows small groups to secure interdomain routing.
      title={How small groups can secure interdomain routing},
      author={Avramopoulos, Ioannis and Suchara, Martin and Rexford, Jennifer},
      institution={Princeton University Computer Science Department},
    Ioannis Avramopoulos, Martin Suchara, and Jennifer Rexford
    Princeton University Computer Science Technical Report (TR-808-07) Princeton, NJ, December 2007
  • Risking communications security: Potential hazards of the "Protect America Act" [PDF] [Abstract]
    A new US law allows warrantless wiretapping whenever one end of the communication is believed to be outside national borders. This creates serious security risks: danger of exploitation of the system by unauthorized users, danger of criminal misuse by trusted insiders, and danger of misuse by government agents.
    Steven M. Bellovin, Matt Blaze, Whitfield Diffie, Susan Landau, Peter G. Neumann, and Jennifer Rexford
    IEEE Security and Privacy, pp. 24-33, January/February 2008. A shorter version appeared as "Internal surveillance, external risks," Communications of the ACM, December 2007.
  • Separation Logic for Small-step Cminor [PDF]
  • Machine-Assisted Election Auditing [PDF, Extended Version PDF] [Abstract, BibTeX]
    Election audit procedures usually rely on precinct-based recounts, in which workers manually review all paper ballots from selected polling places, but these recounts can be expensive due to the labor required. This paper proposes an alternative audit strategy that allows machines to perform most of the work. Precincts are recounted using recounting machines, and their output is manually audited using efficient ballot sampling techniques. This strategy can achieve equal or greater confidence than precinct-based auditing at a significantly lower cost while protecting voter privacy better than previous ballot-based auditing methods. We show how to determine which ballots to audit against the recounting machines' records and compare this new approach to precinct-based audits in the context of Virginia's November 2006 election. Far fewer ballots need to be audited by hand using our approach. We also explore extensions to these techniques, such as varying individual ballots' audit probabilities based on the votes they contain, that promise further efficiency gains.
        author = {Calandrino, Joseph A. and Halderman, J. Alex and Felten, Edward W.},
        title = {Machine-assisted election auditing},
        booktitle = {EVT'07: Proceedings of the USENIX Workshop on Accurate Electronic Voting Technology},
        year = {2007},
        pages = {9--9},
        location = {Boston, MA},
        publisher = {USENIX Association},
        address = {Berkeley, CA, USA},
    Joseph A. Calandrino, J. Alex Halderman, and Edward W. Felten
    Proceedings of the USENIX Workshop on Accurate Electronic Voting Technology, September 2007
  • Effective Audit Policy for Voter-Verified Paper Ballots [PDF]
    Presented at the 2007 Annual Meeting of the American Political Science Association, Chicago, September 2007
    Earlier version:
    Effective Audit Policy for Voter-Verified Paper Ballots in New Jersey,
    March 9, 2007.

  • Counteracting Discrimination Against Network Traffic [PDF] [Abstract, BibTeX]
    End users, edge networks, content providers, and service providers alike all need effective ways to counteract traffic discrimination---the selective (mis)treatment of packets as they flow through the Internet. However, preventing discrimination, or even detecting ongoing discrimination, is difficult in practice. Instead, our solution (which we call surelinks) forces the discriminator to introduce more easily detectable loss and delay, and then moves traffic away from offending paths. Surelinks combine three techniques: encryption of aggregate traffic between edge nodes, multipath routing to circumvent performance problems, and stealth probing for accurate measurements in an adversarial setting. Experiments with our prototype system, implemented in the Click router, demonstrate that surelinks have practical overhead comparable to destination-based forwarding.
      title={Counteracting discrimination against network traffic},
      author={Avramopoulos, Ioannis and Rexford, J and Syrivelis, D and Lalis, S},
      institution={Princeton University Computer Science Department}
    Ioannis Avramopoulos, Jennifer Rexford, Dimitris Syrivelis, and Spyros Lalis
    Princeton University Computer Science Technical Report (TR-794-07) Princeton, NJ, August 2007
  • A Pluralist Approach to Interdomain Communication Security [PDF, Yannis' slides]
    Ioannis Avramopoulos and Jennifer Rexford
    Proc. NetEcon Workshop, June 2007
  • Privacy, accuracy, and consistency too: a holistic solution to contingency table release [PDF] [BibTeX]
      TITLE = {Privacy, accuracy, and consistency too: a holistic
               solution to contingency table release},
      AUTHOR = {Boaz Barak and Kamalika Chaudhuri and Cynthia Dwork
                and Satyen Kale and Frank McSherry and Kunal Talwar},
      BOOKTITLE = {Proceedings of ACM PODS},
      PUBLISHER = {ACM},
      YEAR = {2007},
      EDITOR = {Leonid Libkin},
      ISBN = {978-1-59593-685-1},
      PAGES = {273--282},
      PDF = {}
    Boaz Barak, Kamalika Chaudhuri, Cynthia Dwork, Satyen Kale, Frank McSherry, and Kunal Talwar
    Proc. ACM PODS, 2007.
  • Lower bounds on signatures from symmetric primitives [PDF] [BibTeX]
      TITLE = {Lower bounds on signatures from symmetric primitives},
      AUTHOR = {Boaz Barak and Mohammad Mahmoody-Ghidary},
      CROSSREF = {focs07},
      YEAR = {2007},
      PDF = {}
    Boaz Barak and Mohammad Mahmoody
    Proc. 48th Foundations of Computer Science (FOCS), 2007.
  • A Very Modal Model of a Modern, Major, General Type System [PDF]
    Andrew W. Appel, Paul-Andre Mellies, Christopher D. Richards, and Jerome Vouillon
    POPL 2007: The 34th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, January 2007
  • 2006

  • Pretty Good BGP: Improving BGP by Cautiously Adopting Routes [PDF] [Abstract, BibTeX]
    The Internet's interdomain routing protocol, BGP, is vulnerable to a number of damaging attacks, which often arise from operator misconfiguration. Proposed solutions with strong guarantees require a public-key infrastructure, accurate routing registries, and changes to BGP. However, BGP routers can avoid selecting and propagating these routes if they are cautious about adopting new reachability information. We describe a protocol-preserving enhancement to BGP, Pretty Good BGP (PGBGP), that slows the dissemination of bogus routes, providing network operators time to respond before problems escalate into large- scale Internet attacks. Simulation results show that realistic deployments of PGBGP could provide 99% of Autonomous Systems with 24 hours to investigate and repair bogus routes without affecting prefix reachability. We also show that without PGBGP, 40% of ASs cannot avoid selecting bogus routes; with PGBGP, this number drops to less than 1%. Finally, we show that PGBGP is incrementally deployable and offers significant security benefits to early adopters and their customers.
      title={Pretty good {BGP}: Improving BGP by cautiously adopting routes},
      author={Karlin, Josh and Forrest, Stephanie and Rexford, Jennifer},
      booktitle={Network Protocols, 2006. ICNP'06. Proceedings of the 2006 14th IEEE International Conference on},
    Josh Karlin, Stephanie Forrest, and Jennifer Rexford
    14th IEEE International Conference on Network Protocols (ICNP'06) Santa Barbara, CA, November 2006
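    The cautious-adoption rule the abstract describes can be modeled in a few lines: origins recently seen announcing a prefix are trusted, while a new (prefix, origin) pair is depreferred for 24 hours before being adopted. A toy sketch under simplifying assumptions (the class and method names are invented; real PGBGP also tracks sub-prefix hijacks and history aging):

```python
SUSPICIOUS_PERIOD = 24 * 3600  # the 24-hour response window from the paper

class PGBGPMonitor:
    """Toy model of PGBGP's cautious route adoption (illustrative only)."""

    def __init__(self) -> None:
        self.history: dict[str, set[int]] = {}          # prefix -> trusted origin ASes
        self.quarantine: dict[tuple[str, int], float] = {}  # (prefix, origin) -> first seen

    def on_announcement(self, prefix: str, origin_as: int, now: float) -> str:
        trusted = self.history.setdefault(prefix, set())
        if origin_as in trusted:
            return "adopt"
        first_seen = self.quarantine.setdefault((prefix, origin_as), now)
        if now - first_seen >= SUSPICIOUS_PERIOD:
            # The route survived its quarantine without being withdrawn
            # or repaired, so cautiously promote it to the trusted set.
            trusted.add(origin_as)
            del self.quarantine[(prefix, origin_as)]
            return "adopt"
        return "deprefer"   # use an alternative route if one exists
```

    Because depreferred routes are only avoided when an alternative exists, prefix reachability is preserved, which is why the scheme is incrementally deployable.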
  • Don't Secure Routing Protocols, Secure Data Delivery [PDF, Dan's slides]
    Dan Wendlandt, Ioannis Avramopoulos, David G. Andersen, and Jennifer Rexford
    Proc. HotNets, November 2006
  • Security Analysis of the Diebold AccuVote-TS Voting Machine [PDF] [Abstract, BibTeX]
    This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities — a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine's hardware and software and the adoption of more rigorous election procedures.
        author = {Feldman, Ariel J. and Halderman, J. Alex and Felten, Edward W.},
        title = {Security analysis of the {Diebold} AccuVote-TS voting machine},
        booktitle = {EVT'07: Proceedings of the USENIX Workshop on Accurate Electronic Voting Technology},
        year = {2007},
        pages = {2--2},
        location = {Boston, MA},
        publisher = {USENIX Association},
        address = {Berkeley, CA, USA},
    Ariel J. Feldman, J. Alex Halderman, and Edward W. Felten
    Proceedings of the USENIX/ACCURATE Electronic Voting Technology Workshop, August 2007
  • Lessons from the Sony CD DRM Episode [PDF] [Abstract, BibTeX]
    In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community
        author = {Halderman, J. Alex and Felten, Edward W.},
        title = {Lessons from the Sony CD DRM episode},
        booktitle = {USENIX-SS'06: Proceedings of the 15th conference on USENIX Security Symposium},
        year = {2006},
        location = {Vancouver, B.C., Canada},
        publisher = {USENIX Association},
        address = {Berkeley, CA, USA},
    J. Alex Halderman and Edward W. Felten
    Proceedings of the 15th USENIX Security Symposium, August 2006
  • Ceci n'est pas une urne: On the Internet vote for the Assemblée des Français de l'Etranger [PDF]
    June 2006
  • Stealth probing: Efficient data-plane security for IP routing [PDF] [Abstract]
    IP routing is notoriously vulnerable to accidental misconfiguration and malicious attack. Although secure routing protocols are an important defense, the data plane must be part of any complete solution. Existing proposals for secure (link-level) forwarding are heavy-weight, requiring cryptographic operations at each hop in a path. Instead, we propose a light-weight data-plane mechanism (called stealth probing) that monitors the availability of paths in a secure fashion, while enabling the management plane to home in on the location of adversaries by combining the results of probes from different vantage points (called Byzantine tomography). We illustrate how stealth probing and Byzantine tomography can be applied in today's routing architecture, without requiring support from end hosts or internal routers.
    Ioannis Avramopoulos and Jennifer Rexford
    USENIX Annual Technical Conference, May/June 2006
  • A list-machine benchmark for mechanized metatheory [PDF]
    Andrew W. Appel and Xavier Leroy
    INRIA Research Report RR-5914, May 2006
    A shorter version appeared in LFMTP'06: International Workshop on Logical Frameworks and Meta-Languages: Theory and Practice, August 2006.
  • Safe Java Native Interface [PDF]
    Gang Tan, Andrew W. Appel, Srimat Chakradhar, Anand Raghunathan, Srivaths Ravi, and Daniel Wang
    International Symposium on Secure Software Engineering, March 2006
  • Windows Access Control Demystified [PDF]
    Sudhakar Govindavajhala and Andrew W. Appel
    January 2006
  • Tactics for Separation Logic [PDF]
    January 2006
  • Extracting Randomness Using Few Independent Sources [PDF] [Abstract, BibTeX]
    In this work we give the first deterministic extractors from a constant number of weak sources whose entropy rate is less than 1/2. Specifically, for every δ > 0 we give an explicit construction for extracting randomness from a constant (depending polynomially on 1/δ) number of distributions over {0, 1}^n, each having min-entropy δn. These extractors output n bits, which are 2^-n-close to uniform. This construction uses several results from additive number theory, in particular recent ones by Bourgain, Katz, and Tao [BKT03] and by Konyagin [Kon03].

    We also consider the related problem of constructing randomness dispersers. For any constant output length m, our dispersers use a constant number of identical distributions, each with min-entropy Ω(log n), and output every possible m-bit string with positive probability. The main tool we use is a variant of the "stepping-up lemma" used in establishing lower bounds on Ramsey numbers for hypergraphs (Erdős and Hajnal, [GRS80]).
       AUTHOR = {Boaz Barak and Russell Impagliazzo and Avi Wigderson},
       COLLABORATION = {},
       TITLE = {Extracting Randomness Using Few Independent Sources},
       PUBLISHER = {SIAM},
       YEAR = {2006},
       JOURNAL = {SIAM Journal on Computing},
       VOLUME = {36},
       NUMBER = {4},
       PAGES = {1095-1118},
       PS = {},
       PDF = {},
       POWERPOINT = {},
       PRELIMYEAR = {2004},
       NOTE = {Preliminary version in FOCS' 04}
    Boaz Barak, Russell Impagliazzo, and Avi Wigderson
    SIAM Journal on Computing, 2006.
  • Digital Rights Management
    IEEE Security and Privacy, January 2006
  • Concurrent Non-Malleable Zero Knowledge [PDF] [BibTeX]
      AUTHOR = {Boaz Barak and Manoj Prabhakaran and Amit Sahai},
      TITLE = {Concurrent Non-Malleable Zero Knowledge},
      CROSSREF = {focs06},
      YEAR = {2006},
      PDF = {}
    Boaz Barak, Manoj Prabhakaran, and Amit Sahai
    Proc. 47th Foundations of Computer Science (FOCS), 2006.
  • A Compositional Logic for Control Flow [PDF]
    Gang Tan and Andrew W. Appel
    In 7th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI), January 2006
  • 2-source dispersers for sub-polynomial entropy and Ramsey graphs beating the Frankl-Wilson construction [PS] [BibTeX]
      AUTHOR = {Boaz Barak and Anup Rao and Ronen Shaltiel and Avi Wigderson},
      TITLE = {$2$-source dispersers for sub-polynomial entropy and
             {Ramsey} graphs beating the {Frankl--Wilson} construction},
      CROSSREF = {stoc06},
      PAGES = {671--680},
      YEAR = {2006},
      PS = {}
    Boaz Barak, Anup Rao, Ronen Shaltiel, and Avi Wigderson
    Proc. 38th Symposium on Theory of Computing (STOC), 2006.
  • 2005

  • A Distributed Reputation Approach to Cooperative Internet Routing Protection [PDF, Harlan's Slides] [Abstract]
    The security of the Internet's interdomain routing system hinges on whether autonomous systems (ASes) can trust the information they receive from each other via the Border Gateway Protocol (BGP). Frequently, this trust has been misguided, resulting in wide-spread outages and significant concerns about future attacks. Despite the seriousness of these problems, proposals for a more secure version of BGP have been stymied by serious impediments to practical deployment. Instead, we argue that the existing trust relationships between network operators (and the institutions they represent) are a powerful force for improving the security of BGP, without changing the underlying routing protocol. Our approach leverages ideas from online reputation systems to allow ASes to form a peer-to-peer overlay that integrates results from local network-management tools for detecting attacks and configuration errors. The proposed architecture is incrementally deployable, protects against shilling attacks, and deters malicious operator behavior.
    Proc. Workshop on Secure Network Protocols, November 2005
  • MulVAL: A Logic-based Network Security Analyzer [PDF]
    Xinming Ou, Sudhakar Govindavajhala, and Andrew W. Appel
    In 14th USENIX Security Symposium, August 2005
    Older version: Network Security Management with High-level Security Policies, September 2004

  • Inside RISKS: DRM and public policy [PDF] [Abstract, BibTeX]
    Digital rights management (DRM) systems try to erect technological barriers to unauthorized use of digital data. DRM elicits strong emotions on all sides of the digital copyright debate. Copyright owners tend to see DRM as the last defense against rampant infringement, and as an enabler of new business models. Opponents tend to see DRM as robbing users of their fair use rights, and as the first step toward a digital lockdown. Often the DRM debate creates more heat than light. I propose six principles that should underlie sensible public policy regarding DRM. They were influenced strongly by discussions within USACM (ACM's U.S. Public Policy Committee), which is working on a DRM policy statement. These principles may seem obvious to some readers; but current U.S. public policy is inconsistent, in various ways, with all of them.
     author = {Felten, Edward W.},
     title = {DRM and public policy},
     journal = {Commun. ACM},
     volume = {48},
     number = {7},
     year = {2005},
     issn = {0001-0782},
     pages = {112},
     doi = {},
     publisher = {ACM},
     address = {New York, NY, USA},
    Communications of the ACM, July 2005
  • A Convenient Method for Securely Managing Passwords [PDF] [Abstract, BibTeX]
    Computer users are asked to generate, keep secret, and recall an increasing number of passwords for uses including host accounts, email servers, e-commerce sites, and online financial services. Unfortunately, the password entropy that users can comfortably memorize seems insufficient to store unique, secure passwords for all these accounts, and it is likely to remain constant as the number of passwords (and the adversary's computational power) increases into the future. In this paper, we propose a technique that uses a strengthened cryptographic hash function to compute secure passwords for arbitrarily many accounts while requiring the user to memorize only a single short password. This mechanism functions entirely on the client; no server-side changes are needed. Unlike previous approaches, our design is both highly resistant to brute force attacks and nearly stateless, allowing users to retrieve their passwords from any location so long as they can execute our program and remember a short secret. This combination of security and convenience will, we believe, entice users to adopt our scheme. We discuss the construction of our algorithm in detail, compare its strengths and weaknesses to those of related approaches, and present Password Multiplier, an implementation in the form of an extension to the Mozilla Firefox web browser.
     author = {Halderman, J. Alex and Waters, Brent and Felten, Edward W.},
     title = {A convenient method for securely managing passwords},
     booktitle = {WWW '05: Proceedings of the 14th international conference on World Wide Web},
     year = {2005},
     isbn = {1-59593-046-9},
     pages = {471--479},
     location = {Chiba, Japan},
     doi = {},
     publisher = {ACM},
     address = {New York, NY, USA},
    Proceedings of the 14th international conference on World Wide Web, May 2005
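The derivation described in the abstract above can be sketched as follows. This is a minimal illustration of computing per-account passwords from one memorized secret via a strengthened (iterated) hash; the hash function, iteration count, and encoding here are my assumptions, not the parameters used by Password Multiplier.

```python
import base64
import hashlib

def site_password(master: str, username: str, site: str,
                  rounds: int = 100_000, length: int = 12) -> str:
    """Derive a per-site password from a single memorized master secret.

    Iterated hashing ("strengthening") raises the cost of offline
    brute-force attacks on the master password. Runs entirely on the
    client; any machine with this code reproduces the same passwords.
    """
    digest = f"{username}@{site}:{master}".encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    # Encode the final digest into a printable password.
    return base64.b64encode(digest).decode()[:length]
```

Because the derivation is deterministic and stateless, a user who remembers only the master secret can recover every account password from any location, which is the convenience property the abstract emphasizes.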
  • How to Play Almost Any Mental Game Over the Net - Concurrent Composition Using Super-Polynomial Simulation [PDF] [Abstract, BibTeX]
    We construct a secure protocol for any multi-party functionality that remains secure (under a relaxed definition of security) when executed concurrently with multiple copies of itself and other protocols. We stress that we do not use any assumptions on existence of trusted parties, common reference string, honest majority or synchronicity of the network. The relaxation of security, introduced by Prabhakaran and Sahai (STOC '04), is obtained by allowing the ideal-model simulator to run in quasi-polynomial (as opposed to polynomial) time. Quasi-polynomial simulation suffices to ensure security for most applications of multi-party computation. Furthermore, Lindell (FOCS '03, TCC '04) recently showed that such a protocol is impossible to obtain under the more standard definition of polynomial-time simulation by an ideal adversary. Our construction is the first such protocol under reasonably standard cryptographic assumptions: namely, the existence of a hash function collection that is collision resistant with respect to circuits of subexponential size, and the existence of trapdoor permutations that are secure with respect to circuits of quasi-polynomial size.

    We introduce a new technique: "protocol condensing". That is, taking a protocol that has strong security properties but requires super-polynomial communication and computation, and then transforming it into a protocol with polynomial communication and computation, that still inherits the strong security properties of the original protocol. Our result is obtained by combining this technique with previous techniques of Canetti, Lindell, Ostrovsky, and Sahai (STOC '02) and Pass (STOC '04).
      AUTHOR = {Boaz Barak and Amit Sahai},
      TITLE = {How to Play Almost Any Mental Game Over the Net -
      Concurrent Composition Using Super-Polynomial Simulation},
      YEAR = {2005},
      CROSSREF = {focs05},
      PS = {},
      PDF = {},
      POWERPOINT = {}
    Boaz Barak and Amit Sahai
    Proc. 46th FOCS, 2005.
  • An architecture for robust pseudo-random generation and Applications to /dev/random [PDF] [Abstract, BibTeX]
    We present a formal model and a simple architecture for robust pseudorandom generation that ensures resilience in the face of an observer with partial knowledge/control of the generator's entropy source. Our model and architecture have the following properties: Resilience. The generator's output looks random to an observer with no knowledge of the internal state. This holds even if that observer has complete control over data that is used to refresh the internal state.

    Forward security. Past output of the generator looks random to an observer, even if the observer learns the internal state at a later time.

    Backward security/Break-in recovery. Future output of the generator looks random, even to an observer with knowledge of the current state, provided that the generator is refreshed with data of sufficient entropy.

    Architectures such as the above have been suggested before. This work differs from previous attempts in that we present a formal model for robust pseudo-random generation, and provide a formal proof within this model for the security of our architecture. To our knowledge, this is the first attempt at a rigorous model for this problem.

    Our formal modeling advocates the separation of the entropy extraction phase from the output generation phase. We argue that the former is information-theoretic in nature, and could therefore rely on combinatorial and statistical tools rather than on cryptography. On the other hand, we show that the latter can be implemented using any standard (non-robust) cryptographic PRG.

    We also discuss the applicability of our architecture for applications such as /dev/(u)random in Linux and pseudo-random generation on smartcards.
      AUTHOR = {Boaz Barak and Shai Halevi},
      TITLE = {An architecture for robust pseudo-random generation
               and Applications to /dev/random},
      YEAR = {2005},
      BOOKTITLE = {Proc. Computing and Communication Security (CCS)},
      EDITOR = {ACM},
      PS = {},
      PDF = {}
    Boaz Barak and Shai Halevi
    Proc. Computing and Communication Security (CCS), 2005.
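The two-phase separation the abstract advocates, an entropy-extraction phase feeding a standard PRG output phase, can be sketched as below. The primitives (SHA-256 for folding entropy, HMAC for output and rekeying) are illustrative stand-ins chosen by me, not the paper's construction.

```python
import hashlib
import hmac

class RobustPRG:
    """Sketch of a robust PRG split into extraction and output phases.

    refresh() is the extraction phase: it folds new entropy into a
    fixed-size state (backward security / break-in recovery).
    next_block() is the output phase: it derives output from the state
    with a standard PRG-style construction, then rotates the key so
    past outputs stay unpredictable (forward security).
    """

    def __init__(self) -> None:
        self.key = b"\x00" * 32

    def refresh(self, entropy: bytes) -> None:
        # Fold fresh entropy into the internal state; after a refresh
        # with enough entropy, the new state is unpredictable even to
        # an observer who knew the old one.
        self.key = hashlib.sha256(self.key + entropy).digest()

    def next_block(self) -> bytes:
        # Derive one output block and a replacement key, then discard
        # the old key so it cannot be used to reconstruct past output.
        out = hmac.new(self.key, b"output", hashlib.sha256).digest()
        self.key = hmac.new(self.key, b"rekey", hashlib.sha256).digest()
        return out
```

The point of the split is that refresh() is information-theoretic in flavor (an extractor), while next_block() can be any standard cryptographic PRG.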
  • 2004

  • Privacy management for portable recording devices [PDF] [Abstract, BibTeX]
    The growing popularity of inexpensive, portable recording devices, such as cellular phone cameras and compact digital audio recorders, presents a significant new threat to privacy. We propose a set of technologies that can be integrated into recording devices to provide stronger, more accurately targeted privacy protections than other legal and technical measures now under consideration. Our design is based on an informed consent principle, which it supports by the use of novel devices and protocols that automate negotiations over consent and ensure appropriate safeguards on recorded data. We define the protocols needed for this purpose and establish their security. We also describe a working prototype implementation that safeguards audio recorded by laptop PCs in a wireless network.
        author = {Halderman, J. Alex and Waters, Brent and Felten, Edward W.},
        title = {Privacy management for portable recording devices},
        booktitle = {WPES '04: Proceedings of the 2004 ACM workshop on Privacy in the electronic society},
        year = {2004},
        isbn = {1-58113-968-3},
        pages = {16--24},
        location = {Washington DC, USA},
        doi = {},
        publisher = {ACM},
        address = {New York, NY, USA},
    Proceedings of the 2004 ACM workshop on Privacy in the electronic society, November 2004
  • New Client Puzzle Outsourcing Techniques for DoS Resistance [PDF] [Abstract, BibTeX]
    We explore new techniques for the use of cryptographic puzzles as a countermeasure to Denial-of-Service (DoS) attacks. We propose simple new techniques that permit the outsourcing of puzzles: their distribution via a robust external service that we call a bastion. Many servers can rely on puzzles distributed by a single bastion. We show how a bastion, somewhat surprisingly, need not know which servers rely on its services. Indeed, in one of our constructions, a bastion may consist merely of a publicly accessible random data source, rather than a special purpose server. Our outsourcing techniques help eliminate puzzle distribution as a point of compromise.

    Our design has three main advantages over prior approaches. First, it is more resistant to DoS attacks aimed at the puzzle mechanism itself, withstanding over 80% more attack traffic than previous methods in our experiments. Second, our scheme is cheap enough to apply at the IP level, though it also works at higher levels of the protocol stack. Third, our method allows clients to solve puzzles offline, reducing the need for users to wait while their computers solve puzzles. We present a prototype implementation of our approach, and we describe experiments that validate our performance claims.
        author = {Waters, Brent and Juels, Ari and Halderman, J. Alex and Felten, Edward W.},
        title = {New client puzzle outsourcing techniques for DoS resistance},
        booktitle = {CCS '04: Proceedings of the 11th ACM conference on Computer and communications security},
        year = {2004},
        isbn = {1-58113-961-6},
        pages = {246--256},
        location = {Washington DC, USA},
        doi = {},
        publisher = {ACM},
        address = {New York, NY, USA},
    Proceedings of the 11th ACM conference on Computer and communications security, November 2004
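The "bastion as a public random data source" idea above can be sketched as a hash-based puzzle keyed by a published beacon value: clients grind for a nonce offline once the beacon is known, and a server verifies with a single hash. The difficulty parameter and puzzle format are my illustrative assumptions, not the paper's construction.

```python
import hashlib
import itertools

DIFFICULTY_BITS = 12  # illustrative; tune to the desired client work factor

def _hash(beacon: bytes, channel: bytes, nonce: int) -> int:
    h = hashlib.sha256(beacon + channel + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big")

def solve(beacon: bytes, channel: bytes) -> int:
    # Client side: brute-force a nonce whose hash has DIFFICULTY_BITS
    # leading zero bits. This can happen offline, before the request,
    # as soon as the public beacon value is available.
    for nonce in itertools.count():
        if _hash(beacon, channel, nonce) >> (256 - DIFFICULTY_BITS) == 0:
            return nonce

def verify(beacon: bytes, channel: bytes, nonce: int) -> bool:
    # Server side: one hash to check, independent of the client's work.
    # The bastion never learns which servers use its beacon.
    return _hash(beacon, channel, nonce) >> (256 - DIFFICULTY_BITS) == 0
```

Binding the puzzle to a per-request `channel` value prevents one solution from being replayed across requests, while the shared beacon removes puzzle distribution as a per-server point of compromise.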
  • Polymorphic Lemmas and Definitions in Lambda Prolog and Twelf [PDF]
    Andrew W. Appel and Amy P. Felty
    Theory and Practice of Logic Programming 4 (1) 1-39, January 2004
  • Dependent Types Ensure Partial Correctness of Theorem Provers [PDF]
    Andrew W. Appel and Amy P. Felty
    Journal of Functional Programming 14(1):3-19, January 2004
  • Construction of a Semantic Model for a Typed Assembly Language [PDF]
    Gang Tan, Andrew W. Appel, Kedar N. Swadi, and Dinghao Wu
    In 5th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI '04), January 2004
  • 2003

  • Receiver Anonymity via Incomparable Public Keys [PDF] [Abstract, BibTeX]
    We describe a new method for protecting the anonymity of message receivers in an untrusted network. Surprisingly, existing methods fail to provide the required level of anonymity for receivers (although those methods do protect sender anonymity). Our method relies on the use of multicast, along with a novel cryptographic primitive that we call an Incomparable Public Key cryptosystem, which allows a receiver to efficiently create many anonymous "identities" for itself without divulging that these separate "identities" actually refer to the same receiver, and without increasing the receiver's workload as the number of identities increases. We describe the details of our method, along with a prototype implementation.
        author = {Waters, Brent R. and Felten, Edward W. and Sahai, Amit},
        title = {Receiver anonymity via incomparable public keys},
        booktitle = {CCS '03: Proceedings of the 10th ACM conference on Computer and communications security},
        year = {2003},
        isbn = {1-58113-738-9},
        pages = {112--121},
        location = {Washington D.C., USA},
        doi = {},
        publisher = {ACM},
        address = {New York, NY, USA},
    Proceedings of the 10th ACM conference on Computer and communications security, November 2003
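The core trick, one secret key with many unlinkable public "identities", can be illustrated with an ElGamal-style toy in which each identity is a fresh generator paired with that generator raised to the same secret exponent. This is my sketch of the idea over a deliberately tiny group, not the paper's cryptosystem; the parameters are far too small to be secure.

```python
import secrets

P = 1019  # toy safe prime: P = 2*509 + 1 (illustration only, not secure)
Q = 509   # order of the quadratic-residue subgroup

def new_identity(x: int) -> tuple[int, int]:
    """Mint a fresh public key (g, g^x) for the same secret exponent x.

    Different identities use independent generators, so they cannot be
    linked to one another without solving a hard problem in the group.
    """
    while True:
        g = pow(secrets.randbelow(P - 2) + 2, 2, P)  # random subgroup element
        if g != 1:
            return g, pow(g, x, P)

def encrypt(pub: tuple[int, int], m: int) -> tuple[int, int]:
    g, h = pub
    k = secrets.randbelow(Q - 1) + 1
    return pow(g, k, P), (m * pow(h, k, P)) % P

def decrypt(x: int, ct: tuple[int, int]) -> int:
    # The single secret x decrypts ciphertexts sent to ANY identity,
    # so the receiver's workload does not grow with identity count.
    c1, c2 = ct
    return (c2 * pow(pow(c1, x, P), -1, P)) % P

x = secrets.randbelow(Q - 1) + 1               # one secret key
id_a, id_b = new_identity(x), new_identity(x)  # two unlinkable identities
```

Minting an identity costs one exponentiation and no extra per-identity state, matching the abstract's claim that the receiver's workload stays constant as identities multiply.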
  • A Trustworthy Proof Checker [PDF]
    Andrew W. Appel, Neophytos G. Michael, Aaron Stump, and Roberto Virga
    Journal of Automated Reasoning 31:231-260, November 2003
    Previous version appeared in Verification Workshop - VERIFY 2002 and (jointly) in Foundations of Computer Security - FCS 2002, Copenhagen, Denmark, July 2002
  • Policy-Enforced Linking of Untrusted Components (Extended Abstract) [PDF]
    Eunyoung Lee and Andrew W. Appel
    European Software Engineering Conference and ACM SIGSOFT Symposium on the Foundations of Software Engineering, pp. 371-374, September 2003
  • Foundational Proof Checkers with Small Witnesses [PDF]
    Dinghao Wu, Andrew W. Appel, and Aaron Stump
    5th ACM-SIGPLAN International Conference on Principles and Practice of Declarative Programming, pp. 264-274, August 2003
  • A Provably Sound TAL for Back-end Optimization [PDF]
    Juan Chen, Dinghao Wu, Andrew W. Appel, and Hai Fang
    PLDI 2003: ACM SIGPLAN Conference on Programming Language Design and Implementation, pp. 208-219, June 2003
  • Using Memory Errors to Attack a Virtual Machine [PDF]
    Sudhakar Govindavajhala and Andrew W. Appel
    2003 IEEE Symposium on Security and Privacy, pp. 154-165, May 2003
  • Understanding Trusted Computing: Will its Benefits Outweigh its Drawbacks? [PDF] [BibTeX]
        author = {Felten, Edward W.},
        title = {Understanding Trusted Computing: Will Its Benefits Outweigh Its Drawbacks?},
        journal = {IEEE Security and Privacy},
        volume = {1},
        number = {3},
        year = {2003},
        issn = {1540-7993},
        pages = {60--62},
        doi = {},
        publisher = {IEEE Educational Activities Department},
        address = {Piscataway, NJ, USA},
    IEEE Security and Privacy, May 2003
  • A skeptical view of DRM and fair use [PDF] [BibTeX]
        author = {Felten, Edward W.},
        title = {A skeptical view of DRM and fair use},
        journal = {Commun. ACM},
        volume = {46},
        number = {4},
        year = {2003},
        issn = {0001-0782},
        pages = {56--59},
        doi = {},
        publisher = {ACM},
        address = {New York, NY, USA},
    Communications of the ACM, April 2003
  • Mechanisms for Secure Modular Programming in Java [PDF] [Abstract, BibTeX]
    We present a new module system for Java that improves upon many of the deficiencies of the Java package system and gives the programmer more control over dynamic linking. Our module system provides explicit interfaces, multiple views of modules based on hierarchical nesting and more flexible name-space management than the Java package system. Relationships between modules are explicitly specified in module description files. We provide more control over dynamic linking by allowing import statements in module description files to require that imported modules be annotated with certain properties, which we implement by digital signatures. Our module system is compatible enough with standard Java to be implemented as a source-to-source and bytecode-to-bytecode transformation wrapped around a standard Java compiler, using a standard Java virtual machine (JVM).
        author = {Bauer, Lujo and Appel, Andrew W. and Felten, Edward W.},
        title = {Mechanisms for secure modular programming in Java},
        journal = {Softw. Pract. Exper.},
        volume = {33},
        number = {5},
        year = {2003},
        issn = {0038-0644},
        pages = {461--480},
        doi = {},
        publisher = {John Wiley \& Sons, Inc.},
        address = {New York, NY, USA},
    Software—Practice & Experience, April 2003
  • An Indexed Model of Impredicative Polymorphism and Mutable References [PDF]
    Amal Ahmed, Andrew W. Appel, and Roberto Virga
    Princeton University, January 2003
  • 2002

  • Creating and Preserving Locality of Java Applications at Allocation and Garbage Collection Times [PDF]
    Yefim Shuf, Manish Gupta, Hubertus Franke, Andrew W. Appel, and Jaswinder Pal Singh
    In 17th Annual ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2002), SIGPLAN Notices 37(11) pp. 13-25, November 2002
  • Attacking an Obfuscated Cipher by Injecting Faults [PDF] [Abstract, BibTeX]
    We study the strength of certain obfuscation techniques used to protect software from reverse engineering and tampering. We show that some common obfuscation methods can be defeated using a fault injection attack, namely an attack where during program execution an attacker injects errors into the program environment. By observing how the program fails under certain errors the attacker can deduce the obfuscated information in the program code without having to unravel the obfuscation mechanism. We apply this technique to extract a secret key from a block cipher obfuscated using a commercial obfuscation tool and draw conclusions on preventing this weakness.
        author = {Matthias Jacob and Dan Boneh and Edward Felten},
        title = {Attacking an Obfuscated Cipher By Injecting Faults},
        booktitle = {In Proceedings of ACM CCS-9 Workshop DRM},
        year = {2002},
        pages = {16--31},
        publisher = {Springer-Verlag}
    Matthias Jacob, Dan Boneh, and Edward W. Felten
    In Proceedings of ACM CCS-9 Workshop DRM, November 2002
  • A Kind System for Typed Machine Language [PDF]
    Andrew W. Appel, Christopher D. Richards, and Kedar N. Swadi
    Princeton University, October 2002
  • Secure Linking: a Framework for Trusted Software Components [PDF]
    Eunyoung Lee and Andrew W. Appel
    Princeton University CS TR-662-02, September 2002
    Extended Version, CS TR-663-02, September 2002.
  • Deobfuscation is in NP [PDF]
    Princeton University, August 2002
  • A Stratified Semantics of General References Embeddable in Higher-Order Logic [PDF]
    Amal J. Ahmed, Andrew W. Appel, and Roberto Virga
    17th Annual IEEE Symposium on Logic in Computer Science (LICS 2002), pp. 75-86, June 2002
    (Preliminary version of July 2001).
  • JVM TCB: Measurements of the Trusted Computing Base of Java Virtual Machines [Tech Report]
    Andrew W. Appel and Daniel C. Wang
    Princeton University CS TR-647-02, April 2002
  • 2001

  • An Indexed Model of Recursive Types for Foundational Proof-Carrying Code [PDF]
    Andrew W. Appel and David McAllester
  • Reading Between the Lines: Lessons from the SDMI Challenge [PDF] [Abstract, BibTeX]
    The Secure Digital Music Initiative is a consortium of parties interested in preventing piracy of digital music, and to this end they are developing architectures for content protection on untrusted platforms. SDMI recently held a challenge to test the strength of four watermarking technologies and two other security technologies. No documentation explained the implementations of the technologies, and neither watermark embedding nor detecting software was directly accessible to challenge participants. We nevertheless accepted the challenge, and explored the inner workings of the technologies. We report on our results here.
        author = {Scott A. Craver and Min Wu and Bede Liu and Adam Stubblefield and Ben Swartzlander and Dan S. Wallach and Drew Dean and Edward W. Felten},
        title = {Reading Between the Lines: Lessons from the SDMI Challenge},
        booktitle = {In Information Hiding Workshop},
        year = {2001},
        pages = {13--17}
    Scott A. Craver, Min Wu, Bede Liu, Adam Stubblefield, Ben Swartzlander, Dan S. Wallach, Drew Dean, and Edward W. Felten
    In Information Hiding Workshop, 2001
  • Foundational Proof-Carrying Code [PDF]
    In 16th Annual IEEE Symposium on Logic in Computer Science (LICS '01), pp. 247-258, June 2001
  • Models for Security Policies in Proof-Carrying Code [Tech Report]
    Andrew W. Appel and Edward W. Felten
    Princeton University Computer Science TR-636-01, March 2001
  • Dictionary Passing for Polytypic Polymorphism [Tech Report]
    Juan Chen and Andrew W. Appel
    Princeton University Computer Science TR-635-01, March 2001
  • Type-Preserving Garbage Collectors [PDF]
    Daniel C. Wang and Andrew W. Appel
    POPL 2001: The 28th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 166-178, January 2001
    Extended Version (Technical Report)
  • 2000

  • SAFKASI: A Security Mechanism for Language-Based Systems [PDF]
    Dan S. Wallach, Andrew W. Appel, and Edward W. Felten
  • Technological Access Control Interferes with Noninfringing Scholarship [Tech Report]
    Andrew W. Appel and Edward W. Felten
    Communications of the ACM 43 (9) 21-23, September 2000
  • Machine Instruction Syntax and Semantics in Higher Order Logic [Tech Report]
    Neophytos G. Michael and Andrew W. Appel
    17th International Conference on Automated Deduction (CADE-17), pp. 7-24, Springer-Verlag (Lecture Notes in Artificial Intelligence), June 2000
  • Hints on Proving Theorems in Twelf [HTML]
    Princeton University, February 2000
  • A Semantic Model of Types and Machine Instructions for Proof-Carrying Code [PS]
  • 1999

  • Proof-Carrying Authentication [PDF] [Abstract, BibTeX]
    We have designed and implemented a general and powerful distributed authentication framework based on higher-order logic. Authentication frameworks — including Taos, SPKI, SDSI, and X.509 — have been explained using logic. We show that by starting with the logic, we can implement these frameworks, all in the same concise and efficient system. Because our logic has no decision procedure — although proof checking is simple — users of the framework must submit proofs with their requests.
      title={Proof-carrying authentication},
      author={Appel, Andrew W and Felten, Edward W},
      booktitle={Proceedings of the 6th ACM Conference on Computer and Communications Security},
    6th ACM Conference on Computer and Communications Security (CCS 1999) New York, NY, November 1999
  • Lightweight Lemmas in Lambda Prolog [PS] [Abstract, BibTeX]
    Lambda Prolog is known to be well-suited for expressing and implementing logics and inference systems. We show that lemmas and definitions in such logics can be implemented with a great economy of expression. We encode a polymorphic higher-order logic using the ML-style polymorphism of Lambda Prolog. The terms of the metalanguage (Lambda Prolog) can be used to express the statement of a lemma, and metalanguage type-checking can directly type-check the lemma. But to allow polymorphic lemmas requires either more general polymorphism at the meta-level or a less concise encoding of the object logic. We discuss both the Terzo and Teyjus implementations of Lambda Prolog as well as related systems such as Elf.
      title={Lightweight Lemmas in Lambda Prolog},
      author={Appel, Andrew W and Felty, Amy P},
      booktitle={Logic programming: proceedings of the 1999 International Conference on Logic Programming},
      organization={The MIT Press}
    Andrew W. Appel and Amy P. Felty
    16th International Conference on Logic Programming, 1999
  • Protection against untrusted code [HTML]
  • 1996

  • Java Security: From HotJava to Netscape and Beyond [PDF] [Abstract]
    The introduction of Java applets has taken the World Wide Web by storm. Information servers can customize the presentation of their content with server-supplied code which executes inside the Web browser. We examine the Java language and both the HotJava and Netscape browsers which support it, and find a significant number of flaws which compromise their security. These flaws arise for several reasons, including implementation errors, unintended interactions between browser features, differences between the Java language and bytecode semantics, and weaknesses in the design of the language and the bytecode format. On a deeper level, these flaws arise because of weaknesses in the design methodology used in creating Java and the browsers. In addition to the flaws, we discuss the underlying tension between the openness desired by Web application writers and the security needs of their users, and we suggest how both might be accommodated.
    Drew Dean, Edward W. Felten, and Dan S. Wallach
    IEEE Symposium on Security and Privacy, May 1996
  • Intensional Equality for Continuations [PDF]
    ACM SIGPLAN Notices 31 (2), pp. 55-57