Zero-day vulnerabilities are powerful cyber weapons: Use them or patch them?

The U.S. government faces a dilemma regarding zero-day vulnerabilities: it can either stockpile or disclose them. In a new article, Marcelo M. Leal and Paul Musgrave show that Americans overwhelmingly support the disclosure of information about zero-day vulnerabilities to vendors.

In 2017, the WannaCry and NotPetya malware exploited a vulnerability in the Windows operating system, causing widespread havoc. Ironically, the U.S. National Security Agency (NSA) had been aware of this vulnerability for about five years. Instead of disclosing the vulnerability to Microsoft, however, the NSA held on to the knowledge—until the vulnerability was leaked in public forums.

This case illustrates a dilemma that the United States government faces when it discovers zero-day vulnerabilities. Zero-day vulnerabilities are software and hardware flaws that are unknown to computer vendors. As a result, they are enormously valuable to attackers, since there is no defense against them. Intelligence agencies such as the NSA and CIA—as well as other governments and even some private firms—work hard to develop zero-day exploits because of the advantages they afford attackers.

Once a vulnerability is discovered, an agency can either disclose information about it to vendors so that it can be patched or withhold it and add the vulnerability to its stockpile of cyber weapons. By withholding information, U.S. national security agencies can exploit zero days to penetrate the computer systems of their adversaries—yet doing so also leaves U.S. and allied entities vulnerable should an adversary independently discover these flaws and use them against the United States. By disclosing information to vendors, the U.S. government allows vendors to fix the vulnerability and secure their systems in a timely manner—but it also denies U.S. national security agencies the use of such attack vectors against adversaries.

Debates about how and where to draw the line between disclosure and stockpiling are a staple of cyber policy discourse. This is less of a dilemma for the U.S. public. Results from a survey experiment we conducted in late 2021 show that respondents are squarely in favor of disclosing information to vendors, even when informed that withholding this information could save many American lives in a future conflict. Our results also demonstrate that the likelier it is that an adversary could use a given zero day, the more Americans favor disclosing the vulnerability to a vendor.

The Vulnerabilities Equities Process

Since 2010, the U.S. government has had a policy in place to address this dilemma. The vulnerabilities equities process, or VEP, guides executive branch officials in their decision to disclose or to retain publicly unknown vulnerabilities. Official documents released to the public in recent years suggest that officials take several factors into consideration when they discover a zero-day vulnerability, such as the consequences of adversary exploitation and how quickly the vulnerability could be patched. Nonetheless, analysts have singled out two factors that are critical for decisionmakers: how long a vendor will remain unaware of the flaw (the longevity of a zero day) and the likelihood that an adversary will independently discover it (its collision rate).
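To make these two factors concrete, the trade-off can be sketched as a toy decision rule. This is purely illustrative and not the actual VEP: the weighting, the threshold, and the function name are all invented for the example.

```python
# Illustrative only: a toy model of the two factors analysts emphasize
# in the VEP debate (collision rate and longevity), not the actual
# U.S. government process. The 0.3 weight is an arbitrary assumption.

def recommend_disclosure(collision_rate: float, longevity_years: float) -> bool:
    """Return True if disclosure looks preferable under this toy model.

    collision_rate: estimated probability (0-1) that an adversary
        independently discovers the same flaw within a year.
    longevity_years: expected years before the vendor finds and
        patches the flaw on its own.
    """
    # Expected cost of stockpiling grows with the chance of a collision
    # and with how long the flaw remains exploitable against U.S. systems.
    expected_exposure = collision_rate * longevity_years
    # Offensive value of retaining the flaw also grows with its longevity.
    offensive_value = 0.3 * longevity_years
    return expected_exposure > offensive_value

# A flaw likely to be rediscovered favors disclosure in this model...
print(recommend_disclosure(0.5, 3.0))  # True
# ...while a hard-to-rediscover flaw favors retention.
print(recommend_disclosure(0.1, 3.0))  # False
```

The sketch captures the intuition behind the survey findings discussed below: as the collision rate rises, the case for disclosure strengthens, whereas longevity raises both the exposure and the offensive value at once.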

Those who think that vulnerabilities need to be patched to protect American interests from adversaries think the VEP is too weak. Those who think that a strong cyber offense is more important think that it is (or could become) too strong. Understanding how this debate will resolve requires researching many topics, such as the influence of different agencies and interest groups, but it also requires investigating public opinion. Even though cybersecurity is a technical field, and even though zero-day vulnerabilities are among the U.S. government’s most prized secrets, the public’s views on the issue could shape how politicians and officials craft policy—particularly if there’s another major incident involving zero-day flaws known to the U.S. government.

To explore how the American public thinks, we conducted a survey experiment testing whether different levels of longevity and collision rates influence respondents’ support for disclosing or withholding zero-day vulnerabilities. Respondents read a scenario that pitted retaining a vulnerability for use in a potential attack against Iran (saving many American servicemembers’ lives) against the possibility that it could be independently discovered by an adversary and used against the United States. We manipulated the collision rate to specify a high, medium, or low chance that an adversary could acquire the zero day. Separately, we also manipulated whether the vulnerability would likely persist for a few months, a year, or several years.

The results were unequivocal. We found that the longevity of a vulnerability does not make respondents more or less likely to support disclosure. On the other hand, collision rates do influence respondents’ evaluations. As the likelihood that an adversary could independently discover a vulnerability goes up, support for informing the vendor about the vulnerability also increases.

Policy implications

There may be a substantial disconnect between the preferences of the public and those of the U.S. government regarding zero-day disclosure policy. Even though U.S. officials hint that disclosure is the default option in the vulnerabilities equities process, recent studies show that this might not be true in practice. Some research also suggests that the interests of the intelligence and law enforcement agencies are more represented in the VEP than those of the public and technology firms, suggesting a bias toward stockpiling zero days.

This public-government disconnect could lead to policy changes. Previous leaks showing that U.S. intelligence agencies failed to disclose zero-day vulnerabilities to vendors have already led the federal government to release information about the vulnerabilities equities process several times. Congress has also reacted to these cases. In recent years, lawmakers have introduced two bills that aimed to codify the VEP and could have made disclosure the default option by law. Neither, however, advanced through Congress. Given our findings, it is possible that agency policies may be subject to correction should policy windows open for a long enough period.

Marcelo M. Leal and Paul Musgrave are the authors of “Backwards from zero: How the U.S. public evaluates the use of zero-day vulnerabilities in cybersecurity”, Contemporary Security Policy, which is available here.

Private Sector Contribution to National Strategies of Cyber Deterrence

More often than not, the delegation of national security responsibilities to private actors has generated controversy. Notable cases include the United States’ reliance on private military contractors in the recent conflicts in Afghanistan and Iraq. Hence, it may come as a surprise that the current debate around private sector contribution to national strategies of cyber deterrence has been largely exempt from such controversies.

On the contrary, a steady consensus has grown around the idea that national strategies of cyber deterrence would benefit significantly from the direct participation of actors in the private sector. In particular, there have been repeated calls for tech companies, cyber-security firms, and owners and operators of critical infrastructure to bring their vast resources to the table in order to boost governments’ ability to fend off malicious cyber activity.

Without dismissing the opportunities originating from the contributions of the private sector, a new article written by Eugenio Lilli highlights how such private contributions could also pose significant security, legal, and moral challenges.

The first step in assessing the desirability of private sector contribution to national strategies of cyber deterrence is to define the concept of deterrence in cyber space. As is the case with many neologisms containing the prefix “cyber”, cyber deterrence lacks a universally agreed-upon definition. In the article, cyber deterrence is defined as the deterrence of malicious activity occurring within or through cyber space. It is also argued that deterrence in cyber space should be

  • Restrictive. It should seek to shape and limit the overall frequency and severity of malicious activity rather than aiming to dissuade all attacks from occurring at all times.
  • Comprehensive. It should encompass deterrence by denial, punishment, entanglement, and norms; it should rely on deterrent measures taken in the other operational domains of land, sea, air, and space; it should include the whole range of instruments of national power including diplomatic, information, military, economic, financial, intelligence, and law enforcement (aka DIMEFIL) instruments.
  • Dynamic. In response to rapid technological innovation, it should constantly monitor systems and networks, update defenses, improve intelligence sharing, patch vulnerabilities, and renew contingency plans; in response to change in cyber norms, it should implement measures aimed at actively shaping the evolution of norms in cyber space.
  • Complemental. It should not be expected to work best as a separate tool in an actor’s toolbox but rather as complemental to other forms of coercive and non-coercive strategic interaction.

By relying on the RCDC (Restrictive, Comprehensive, Dynamic, Complemental) conceptualization of cyber deterrence, the article identifies specific areas where private sector contribution can be especially beneficial to national strategies of cyber deterrence. For example, there is evidence to support the argument that private actors can be instrumental in hardening cyber defenses and enhancing resilience, sharing information, imposing costs on adversaries, attributing cyber incidents, creating strategic interdependencies, and advancing norms of appropriate behavior in cyber space.

Some important benefits of private sector contribution appear to be common to all areas. To begin with, the private sector can offer unique state-of-the-art technologies, highly skilled human capital, and critical funding to compensate for a national government’s limited resources. Moreover, while government authority is often geographically limited, private actors’ visibility and reach can extend beyond national borders. In addition, compared to the somewhat cumbersome policymaking processes characteristic of state bureaucracies, the private sector’s processes give these actors more flexibility and speed; key abilities given the fast-changing nature of threats in cyber space.

Given the above, it is not surprising that the number of people calling for more private participation in national cyber deterrence is steadily increasing. However, as is often the case, the devil is in the details. The opportunities originating from private sector contributions are apparent, yet these same contributions also have the potential to raise serious security, legal, and moral challenges that need to be thoroughly understood.

For example, contracting a private company to host classified military information can give fast-track access to the latest technologies, but it could also endanger national security if the private company is successfully breached by a hostile actor. Similarly, private companies, especially big tech companies, usually employ people from all over the world. Where would these employees’ loyalty lie in case of heightened international tensions or an open confrontation? With the country that contracted them or with their country of origin?

Moreover, legal considerations could limit the willingness of the private sector to contribute to activities of intelligence sharing and active cyber defense. In the context of the United States, both types of deterrence activities, while beneficial, may in some cases violate domestic law.

There are also instances of contributions that raise moral issues. For example, the private sector’s access to sensitive government information could lead to the abuse of such information for private gain. Private companies are ultimately responsible to shareholders rather than to the citizenry. How can they be held accountable to the nation’s interest? With regard to attribution of cyber incidents, commercial interests could make private actors somewhat biased in their public attributions. In particular, they could refrain from publicly attributing incidents to specific governments because they do not want to jeopardize their access to these countries’ profitable contracts and markets.

To conclude, these few examples show the need for a more nuanced debate on the nature and desirability of private sector contribution to national strategies of cyber deterrence, one that is not limited to highlighting the opportunities but also considers the related challenges.

Eugenio Lilli is a lecturer at University College Dublin. He is the author of “Redefining deterrence in cyberspace: Private sector contribution to national strategies of cyber deterrence”, Contemporary Security Policy, which is available here.

Contested public attributions of cyber incidents and the role of academia

In a recent article in Contemporary Security Policy, Florian J. Egloff reflects on the contested nature of public attributions of cyber incidents and what role academia could take up.

In the last five years, public attribution of cyber incidents has gone from an incredibly rare event to a more regular occurrence. For example, in October 2018 the UK’s National Cyber Security Centre publicized its assessment of cyber activities conducted by the Russian military intelligence service (also known by its old acronym, the GRU). Clearly, publicizing activities that other political actors like to keep secret is a political act – but what kind of political act is it and what happens when a government publicly attributes? 

For research on governmental public attribution, one can split the public attribution process into two phases: mechanisms that lead to public attribution and what happens after an incident is publicly attributed. Little research exists on either phase with regard to attribution of cyber incidents. This is problematic, as our understanding of contemporary security policy rests on understanding what drives threat narratives, how and why those particular ones are introduced publicly, and how contestation of threat narratives takes place in the public sphere. 

In a recent article for Contemporary Security Policy, I focus on this second phase of public attribution, namely, what happens after a government goes public about a cyber incident. Understanding this phase is important, as public attributions of cyber incidents are one of the main sources from which the public learns about who is attacking whom in cyberspace, thereby shaping the threat perception of the general public. Most attribution judgements are published by governments, the private sector, and a small number of civil society actors. To situate the knowledge space into which attribution claims are introduced, I reflect on this source of knowledge about cyber conflict by identifying how it structurally shapes our understanding of cyber conflict, in particular due to operational and (political, commercial, and legal) structural factors. In short, due to commercial incentives on the private sector side and political bias on the government side, the public data about cyber conflict structurally induces distrust in the representativeness of public attribution statements.

I then focus on the contestation of public attribution claims in democracies and the consequences such contestation brings. Contestation is fundamental to democratic politics. The open debate, the ability of everyone to freely voice opinions, and the emergence of truth through democratic discourse are foundational to the public sphere of democratic polities. Thus, the ability to contest is a sign of healthy democratic politics. However, as I show in the article, this openness to contestation, coupled with the information-poor environment, creates particular problems in the area of cybersecurity.

Attribution claims are introduced and contested, and even the possibility of doing attribution at all is put into question. Disinformation tactics are used to muddy specific attribution claims, leaving an electorate exposed to the coexistence of multiple “truths” and a fractured narrative of the past. Due to the secrecy surrounding governments’ attribution processes, particularly intelligence agencies’ concerns about sources and methods, governments are often reluctant to reveal the evidence underlying their attribution judgments. These are ideal enabling conditions for other actors to contest governmental claims.

In a series of empirical examples (Sony, DNC, NotPetya), I reflect on the drivers of contestation after an incident is publicly attributed and show how attackers and other constituencies with various political and economic motivations push particular narratives. The Sony incident highlights the difficulty a government can have in convincing an electorate of its claims when it has no track record of making attribution claims in public. The DNC intrusion shows how the attacker can take part in meaning-making activities, actively trying to dispel the notion that the government knows who is behind a cyber incident. Finally, the NotPetya incident illustrates how actors seem to have learned from the contested cases. In particular, the coordination of attribution claims across different countries and entities was specifically designed to bolster the legitimacy and credibility of the attribution claims at the international level.

Finally, I reflect on how academia could be a partial remedy to this situation. Academia, so far, has not been a strong participant in the discursive space around particular attributions. This is despite its commitment to transparency and independence, which theoretically makes it well placed to offer an independent, interdisciplinary perspective on the state of cyber conflict. Thus, I argue for an increasing need for academic interventions in the area of attribution. This includes interdisciplinary research on all aspects of attribution (not just in cybersecurity) and independent research on the state of cyber conflict, both historical and contemporary. The main implications of this research on the contestation of attribution claims are that democracies should be more transparent about how attribution is performed, enable other civilian actors to study cyber conflict, and thereby broaden the discourse on what is one of the main national security challenges of today.

Florian J. Egloff is a Senior Researcher in Cybersecurity at the Center for Security Studies at ETH Zürich. He is the author of “Contested public attributions of cyber incidents and the role of academia”, Contemporary Security Policy, Advance Online Publication, available here. A shorter policy analysis on the subject can be found here.

Cyber-noir: Popular cultural influences on cybersecurity experts

In a recent article in Contemporary Security Policy, James Shires draws on film noir to discuss portrayals of cyber in popular culture.

In his testimony to the House of Representatives sub-committee on cybersecurity in 2013, Kevin Mandia, a cybersecurity CEO and former U.S. government official, emphasized that “cyber remains the one area where if there is a dead body on the ground, there is no police you call who will run to you and do the forensics and all that”. This was of course a metaphor, as there was no literal dead body in the Chinese cyber-espionage cases his company was known for. Nonetheless, he portrayed his role exactly like the start of a film noir: an absent police presence, a violent act and a dead body, and a self-reliant private investigator. Was this just a figure of speech? Or is there something else going on–something more fundamental to cybersecurity itself?

A foundational problem in cybersecurity is drawing a clear dividing line between legitimate and malicious activity. This is difficult because cybersecurity is an environment swamped with data, where identical tools and tactics are used for different ends, and where social and economic structures linking offensive and defensive action compound technical similarities. These obstacles to distinguishing between legitimate and malicious cyber activity are well recognized by both practitioners and scholars.

In a recent article for Contemporary Security Policy, I highlight another factor that is rarely discussed but no less important: popular cultural influences on cybersecurity experts. Cybersecurity expert practices are infused with visual and textual influences from broader discourses of noir in popular culture, including dystopian science fiction, fantasy, and cyber-punk: a phenomenon I call “cyber-noir”. These influences produce cybersecurity identities that are liminal and transgressive, moving fluidly between legitimate and malicious activity. To paraphrase a neat description of film noir leads, cybersecurity experts see themselves as “seeming black and then seeming white, and being both all along”.

In the article, I examine two forms of popular cultural influences on expert practices: visual styles and naming conventions. I suggest that these influences create a morally ambiguous expert identity, which in turn perpetuates practices that blur the legitimate/malicious boundary.

First, due to its relative novelty and digital basis, many concepts and objects in cybersecurity have no obvious visual association. This gap means that, as Hall, Heath, and Coles-Kemp suggest, many techniques of cybersecurity visualization deserve further critical scrutiny. Through code images signifying illegibility and technical sophistication, and pseudo-geographic “attack maps” emphasizing constant threat, cybersecurity is portrayed as a dark and uncertain world where simulation slips easily into reality and reality into simulation. A range of threatening identities drawing on images of noir characters, various coloured hats, and hooded hackers adds to this atmosphere. These images and visual styles use noir aesthetics and palettes to convey transgression, danger, and moral ambiguity. Although light and dark shades are classically associated with good and evil, in cybersecurity–as in noir–both “good” and “bad” entities occupy the same place in the visual spectrum.

Second, naming conventions are infused with popular culture, through direct references and quotations and in their style, sound, and visual aspect. Many company names and analysis tools in cybersecurity evoke a popular culture crossover between noir, science fiction, fantasy, and cyber-punk. Vulnerabilities receive names that could be straight from dystopian fiction, like “Heartbleed,” “Spectre,” “Meltdown,” and “Rowhammer”, while others highlight a darker aesthetic, such as “Black Lambert” and “Eternal Blue”. Although these are clearly strategic decisions, they also shape the identity of the individuals who work in these organizations and the organizations themselves. Consequently, names with popular cultural influences and associations not only enliven the working day for cybersecurity experts, but constitute the moral orientation of their world.

In 2017, British youth Marcus Hutchins became well known among cybersecurity experts following his portrayal as the person who singlehandedly stopped the devastating WannaCry ransomware that affected the UK’s National Health Service. However, Hutchins’ fame enabled other cybersecurity experts and U.S. law enforcement to follow a trail of domain names, malware names, and handles on hacker forums, including “ghosthosting,” “hackblack,” “blackshades,” and “blackhole,” to the creation of an illegal banking virus named Kronos. Hutchins was arrested months after his public appearance and sentenced to time served in July 2019 for his role in distributing this virus. Hutchins’ case is an extreme example of the relationship between noir aesthetics and transgressive practices. As his story illustrates starkly, many cybersecurity expert identities are constituted in such a way that practices like hacking back, undercover intelligence collection, and participation in “grey” or “black” hacking forums seem to be a normal, even necessary, set of activities.

The article concludes that the fragile distinction between legitimate and malicious activity in cybersecurity expert discourses is not merely a question of technological similarities, exacerbated by particular economic and institutional structures. Instead, experts themselves perpetuate uncertainty over what is legitimate and malicious in cybersecurity. Their adoption of popular culture adds to the explicit obstacles confronting cybersecurity experts, suggesting that the task of separating legitimate and malicious is much more challenging than commonly thought. Consequently, the deepest difficulty in maintaining the legitimate/malicious binary–and therefore constructing a stable foundation for cybersecurity itself–is not the range of technological, social, and economic pressures explicitly recognized by cybersecurity experts, but their implicit embrace of cyber-noir.

James Shires is an Assistant Professor at the Institute for Security and Global Affairs, University of Leiden. He is the author of “Cyber-noir: Cybersecurity and popular culture”, Contemporary Security Policy, Advance Online Publication, available here.

How should we use cyber weapons?

In a recent article in Contemporary Security Policy, Forrest Hare argues that we should shift the cyber conflict debate from the “Can we?” question to the “How should we?” question.

The recent release of the United States Department of Defense’s 2018 Cyber Strategy, timed closely with National Security Advisor John Bolton’s declaration that the White House has authorized offensive cyber operations, suggests that the United States intends to take a much more aggressive approach to combatting perceived threats in the domain.

However, these developments generate as many questions as answers. For example, is the U.S. military prepared with the capabilities required to make good on the National Security Advisor’s declaration? How should the U.S. Department of Defense even structure its military capabilities to combat the threats it faces in the domain?

In my recent CSP article, I hope to spur the cyber conflict debate forward in a productive direction, away from a focus on strategic alarms, so we can get at answers to these questions. In other words, we must acknowledge that military conflicts have now expanded to cyberspace, and it is time to start ensuring that such conflict is conducted in a professional manner that addresses all the valid concerns and implications of conflict in the domain.

With this backdrop, I argue in my article that developing precision cyber weapon systems, to be used during a lawful conflict, can be an important part of a responsible national security strategy to reduce the amount of violence and physical destruction in conflicts. To make this argument, I first describe a precision cyber weapon system in a military context. I then present three compelling rationales for the development of precision cyber weapon systems based on ethical, operational, and financial considerations.

For many years now, we have been engaged in debates about the potential for acts of “mass disruption” in cyberspace and the possible legal, moral, and other implications of such strategic incidents. Many writers and popular media have raised the alarm about the dangers that will confront us as a result of an all-out cyber conflagration. Is it possible that this resistance to accepting the usefulness of cyber capabilities has actually led to more death and destruction in conflicts?

Detractors may not consider that an unintended consequence of their conflation of issues, and a singular focus on potential strategic effects, may be greater risk to the warfighter, civilian populations, and even the taxpayer. Arguments against the use of any cyber weapon capabilities may put militaries and civilians on both sides of a conflict at risk when kinetic weapons are preferred unnecessarily.

To be clear, I do not argue that precision cyber weapons will be a panacea. We should never expect cyber weapons to replace other weapons in conflict. There will always be a requirement for a spectrum of capabilities to defend a nation in all domains. However, I look forward to the day when military and academic professionals broadly acknowledge precision cyber weapons as an important force multiplier and component of a responsible national security strategy.

Forrest Hare is a retired Colonel in the United States Air Force, having served most recently at the Defense Intelligence Agency. His recent article “Precision cyber weapon systems: An important component of a responsible national security strategy?”, Contemporary Security Policy, Advance online publication, is available here.