Saving Face in the Cyberspace: Responses to Public Cyber Intrusions in the Gulf

How do states “save face” following a cyber intrusion directed at them? A new article identifies how Gulf states employ diverse rhetorical strategies—beyond attribution—to narrate cyber intrusions and keep cyber conflict contained.

In July 2017, nearly two months after Qatar had suffered a public cyber intrusion, the Qatari Ministry of Interior (MOI) revealed evidence concerning the intrusion. Rather than only providing technical details, the MOI broadcast a dramatic video with intense music, thrilling graphics, and a spy-style vibe to reveal the intrusion step by step. In addition to delegitimizing cyber intrusions as acts of terrorism, the video emphasized Qatar’s remarkable success in containing the intrusion and detecting its source despite the intrusion’s sophistication.

This Qatari response was not unique. When cyber intrusions become public, states engage not only in technical strategies to deal with the intrusion and identify the initiator but also in public relations management – they publish messages, hold press conferences, brief reporters, and rhetorically try to manage the crisis.

However, these performative and symbolic strategies often go unnoticed in existing research on cyber discourse. Many studies zoom in on the strategy of attribution, but as we show, much more is going on following a public cyber intrusion.

In this article, we explore the rhetorical strategies used by governments in the Gulf in response to a public cyber intrusion they suffered. We do so via an original discourse analysis of official statements and state-sponsored media reports in five cyber intrusions that differ in their targets and methods: Saudi Arabia’s response to cyber intrusion against its oil company Aramco (“Shamoon” 2012), Saudi Arabia’s response to a “hack-and-leak” intrusion (2015), Saudi Arabia’s response to intrusions using “Shamoon 2.0” malware (2017), Qatar’s response to a “hack-and-fake” intrusion (2017), and Bahrain’s response to multiple hacking operations (2019).

Responding to Public Cyber Intrusions

When a cyber intrusion becomes public knowledge, targeted states must find ways to address and explain the resulting social costs. The need to “save face” in these situations arises from the undesirable implications for the identity and image of the state in front of both domestic and international audiences. We suggest that states employ rhetorical strategies to “save face” – to protect their public image in front of domestic, regional, or international audiences.

To better understand these strategies, we propose a typology of “face-saving” strategies that can be categorized into three broad groups: diminishing strategies, self-complimenting strategies, and accusing strategies.

Diminishing strategies involve minimizing the effect of the intrusion, normalizing it, or debunking false information. Minimizing means that states try to reduce the magnitude of the intrusion. Normalizing means that states frame the intrusion as a common occurrence in global politics, highlighting that other countries also experience cyber intrusions. Debunking means that states try to dispute the authenticity of leaked information and provide evidence to counter false claims. These strategies serve to diminish the impact and prevent further dissemination of damaging information.

Self-complimenting strategies are used to enhance the positive perception of the targeted state. States employ bolstering rhetoric to emphasize their successes, international connections, and positive values. Reasserting control is a rhetorical move that showcases measures taken to ensure future protection, often involving investigations, new cyber institutions, and regulations. Correcting is a rhetoric that aims to replace leaked or fabricated information with a more positive narrative by providing an alternative and beneficial account.

Accusing strategies involve exposing the intrusion, condemning the perpetrators, and attributing the attack to specific actors. By adopting these strategies, states shift blame and position themselves as victims.

Findings

When thinking about the public response of states to public cyber intrusions, existing literature primarily discusses the risks of retaliation or escalation as well as attribution. However, as this article shows, states engage in multiple “face-saving” strategies to manage their image and legitimize their restraint. Attribution is only one rhetorical option out of many.

The results of our systematic discourse analysis suggest that different contextual factors shape the specific strategies used. In cyber intrusions that involved leaking or faking information, the unique strategies of debunking or correcting were used.

Regarding attribution, the cases involving Saudi Arabia – a regional power – did not include public attribution. In contrast, Bahrain and Qatar – smaller powers – did attribute the intrusions, but only after such attribution had been made by American media. These contextual factors could inform future research on the rhetoric of cyber responses in other regions.

Understanding these face-saving strategies is crucial for two reasons. First, it provides insights into the restraint and limited nature of cyber conflicts. Existing research focuses on operational aspects and restraint shown by targeted states, but the public narrative and strategic narration of these events are often overlooked. By adopting face-saving strategies, targeted states aim to reduce pressure to retaliate or escalate and justify why such actions are unnecessary. Second, this study contributes to constructivist scholarship by expanding the repertoire of strategies used by states to cope with embarrassment. By focusing on the Gulf countries, we highlight the agency of states in the Global South to interpret cyber intrusions in front of different audiences.

Yehonatan Abramson and Gil Baram are the authors of “Saving Face in the Cyberspace: Responses to Public Cyber Intrusions in the Gulf” in Contemporary Security Policy, which is available here.

The cyber-domain as a narrative battlefield

How do the main actors in cyberspace make sense of its fragmented governance, and how does that translate to their broader strategic narratives? André Barrinha and Rebecca Turner study strategic narratives in their new article in order to find out.

In an era of increasing digital connectivity, the governance of cyberspace has become a critical global concern. Multilateral efforts to navigate the complexities of cyber governance are well underway, with two cyber initiatives currently ongoing at the United Nations (UN).

At the forefront is the Open-ended Working Group (OEWG) on developments in the field of information and telecommunications in the context of international security, overseen by the UN General Assembly’s (UNGA) First Committee. The OEWG is responsible for negotiating norms related to international cybersecurity and responsible state behaviour in cyberspace. The second, more recent, group is the Ad Hoc Committee (AHC) on cybercrime, overseen by UNGA’s Third Committee. The AHC was created to draft a new treaty specifically addressing cybercrime. These two groups operate in parallel because of the assumption that international cybersecurity and cybercrime should be addressed separately, as two distinct cyber regimes under the same complex.

International cybersecurity is understood to be divided along three main groups: the liberals, also known as the gatekeepers of cyberspace and custodians of the internet’s core principles, including the UK, US, EU and other likeminded states; the sovereigntists, led by Russia and China, who are inspired by a much more state-centric and territorialised approach to cyberspace regulation; and finally, the non-aligned or swing states, including Brazil, India and South Africa, who oscillate between the former two groups depending on the policy issue.

In our article, we explore the narrative battlefields of the OEWG and AHC using strategic narratives as our starting point. By examining the approaches of key actors from each of the groups – the EU as a representative of the liberals, Russia as an advocate for the sovereigntists and India as a swing state – we aim to uncover their storytelling techniques and the associated implications for the multilateral governance of cyberspace.

As we conclude, the existence of two different forums does not seem to affect the consistency of each actor’s strategic narratives. Rather, there is strong continuity across the two forums, as described below.

The EU: a force for good

For the EU, both forums serve as opportunities to reinforce its position as a global force for good, committed to responsible leadership and democratic values. Central to the EU’s narrative is the defence of the rules-based order and the founding principles of the internet, which emphasise its global, open, free, stable, and secure nature. In championing these values, the EU establishes itself as an advocate for maintaining the status quo. The EU’s commitment to being a status quo actor is likely motivated by concerns about “Westlessness” – the perception that the world, and cyberspace, is gradually becoming less Western-centric and less aligned with liberal ideals. The EU’s force for good identity narrative and rules-based order system narrative directly facilitates its policy narratives around cooperation, development, and capacity-building.

Russia: the norm-entrepreneur

Russia’s strategic narratives in the OEWG and AHC revolved around four main themes: Russophobia, anti-Westernism, sovereignty and multilateralism. These narrative elements were consistently present in both forums, indicating that Russia’s establishment of the AHC was driven less by a belief in the institutional separation of crime and international cybersecurity as distinct cyber regimes, and more by a desire to counter existing legal and diplomatic structures that Moscow perceives as leaning towards liberal ideals. Through these strategic narratives, Russia aims to position itself at the forefront of cyber diplomacy as a norm-entrepreneur, shape future policy decisions to its advantage, and influence the global discourse on cyberspace governance.

India: the multi-aligner

India is still in the process of formulating a comprehensive strategic approach to cyberspace that aligns with its national interests and aspirations. This ongoing process helps to explain why India adopts a position of multi-alignment in the cyber domain, seeking to maintain connections with both ‘Liberals’ and ‘Sovereigntists’. Consequently, India’s strategic narratives in the cyber realm appear more ambiguous in comparison to the EU and Russia. India articulates narratives around sovereignty, technological autonomy, multilateralism, democracy, and its status as a developing nation. But, while these narratives are present in both the OEWG and AHC, they often lack coherence and occasionally conflict with one another. For instance, the struggle between upholding human rights and asserting stringent sovereign controls exemplifies India’s discursive frictions on fundamental cyber issues.

Narratives matter

Given the relatively nascent stage of the cyber domain and the conflicting views and priorities of the three groups under analysis, the way cyber issues are discursively approached offers intriguing insights into the state of cyber diplomacy. As the AHC moves towards a draft convention on cybercrime and the OEWG into the second year of its second iteration, the world remains significantly divided on what should and should not be allowed to happen in cyberspace. Understanding the narratives underpinning those divergences is crucial if we are to move towards a safe and stable cyberspace.

As we conclude in the article, for all the specificities and technicalities associated with cybercrime or with the potential application of international humanitarian law to cyberspace, there are over-arching narratives told by the active actors in this domain that need to be taken into consideration. Ultimately, the successful implementation of any agreement or norm will rely on the incorporation of those positive steps within those actors’ strategic narratives of cyberspace.

André Barrinha and Rebecca Turner are the authors of “Strategic narratives and the multilateral governance of cyberspace: The cases of European Union, Russia, and India” in Contemporary Security Policy, which is available here.

The Paradox and Perils of Authoritarian Support for Multilateral Cyber Governance

Support for international organizations remains a foreign policy mainstay for most democratic states. In a new article, Mark Raymond and Justin Sherman explain why the situation is more complicated with respect to cyber governance. They find that major authoritarian states are championing their own distinct variant of authoritarian multilateralism, while many democratic states have embraced a contemporary form of multilateralism that incorporates substantial elements of multistakeholder governance. The divergence on how to accomplish cyber governance is rooted in a difference over what multilateralism means and the appropriate way to practice it, with deep implications for the broader trajectory of rule-based global order. The widespread adoption of authoritarian multilateralism would amount to CRISPR gene editing the liberal DNA out of the post-1945 order, leaving the form but not the vital substance of liberal multilateralism.

Varieties of Multilateralism 

International Relations scholarship recognizes multilateralism as one of the pillars of the contemporary rule-based global order. Language invoking multilateralism as an idea, and as a practice instrumental to maintaining global security, also features prominently in leaders’ public foreign policy statements. President Biden’s preferred formulation, “rules-based order,” is a close cognate of multilateralism, at least to the ears of listeners in democratic states, who largely accept the notions that the rule of law entails the equal application of rules to actors regardless of power differentials, and that rules should be authored by those subject to them.

However, we think there are good reasons to suspect not only that authoritarian states have different views of how multilateralism should be practiced, but also that democratic states are experiencing ‘drift’ over time in their understandings of what multilateralism entails. We identify and describe two distinct variants of multilateralism: liberal and authoritarian.

The liberal variant is the familiar one, rooted in notions of equality before the law and representation in rule-making processes. In contrast, authoritarian multilateralism is rooted in notions of great power privilege, akin to hierarchical notions of great power management more commonly associated with nineteenth-century world politics. It also differs from liberal multilateralism in the underlying purpose it accords to global governance arrangements. Liberal multilateralism emphasizes transparency and participation, and the realization of human rights as a key goal of global governance arrangements more generally; authoritarian multilateralism is more opaque and statist, and privileges state sovereignty over the welfare of individuals.

Authoritarian Multilateralism in Cyber Governance 

There is broad agreement that China and Russia are the main players in a substantial international coalition seeking to nudge cyber governance arrangements toward multilateralism and away from private and multistakeholder governance modalities. Our analysis goes further by drawing attention to the specific means that they are using in service of this goal: (1) exploiting established procedures to subvert existing liberal multilateral governance arrangements; and (2) parallel order-building efforts that employ or create new governance arrangements that lack the distinctive hallmarks of liberal multilateralism.

Russia first sought a multilateral arms control treaty for cyberspace at the United Nations in 1998, leading to the Group of Governmental Experts (GGE) process that continued until 2021. Russia and China supplemented this flagship UN process with increasing involvement in private and multistakeholder Internet and cyber governance arrangements, especially for establishing technical standards.

Landmark GGE reports in 2013 and 2015 and deteriorating relations with the United States and other Western states led China and Russia to shift their UN strategy. They criticized the GGE as fundamentally undemocratic because it included only a select group of states, calling for the establishment of an Open-Ended Working Group (OEWG). Crucially, the OEWG expanded participation to tilt the composition in favor of authoritarian states, and it shifted the terms of reference to include negotiation of international agreements rather than the study-oriented GGE mandate. Although the first OEWG became more inclusive of non-state actors over time due to democracies’ efforts, the initial design was more akin to authoritarian rather than liberal multilateralism.

Outside the UN, China and Russia also seek to advance authoritarian multilateralism through increased engagement with technical standard-setting processes for digital technologies and in bilateral infrastructure diplomacy. However, the parallel order-building strategy is most evident in long-standing efforts by the Shanghai Cooperation Organization, which stands out as an explicitly illiberal international organization substantially less transparent than IGOs created with strong involvement from the world’s major democracies. Most recently, China has announced that it intends to transition its World Internet Conference into a new multilateral organization specifically for cyber governance. Such a step would substantially elevate parallel order-building efforts in the cyber regime complex.

Implications for Rule-Based Global Order 

Cyber governance is not only vitally important; it also presents an especially stark contrast between two different visions of what multilateralism means and how it should be practiced. The authoritarian variant illustrated here is opaque, insulated from participation by non-state actors, and aims at creating an international order that excises core aspects of the post-1945 order rooted in democracy and human rights as core values. The liberal variant, in contrast, has evolved over time to be more inclusive of non-state actors than its initial form, such that multilateralism as practiced by democratic states now incorporates elements of multistakeholder governance.

Which of these variants predominates in global governance is thus a consequential question for policymakers, and for the trajectory of the rule-based global order. It also poses foreign policy challenges for democratic states. If China moves ahead with a multilateral international organization for cyber governance, democracies will face a choice: should they join such an organization, hoping to influence its decisions? If so, they will need to operate in a fundamentally different procedural context than most major international organizations. If they stay out, it will provide greater freedom of action for Russia, China and other authoritarian states to shape the future of cyber governance in ways that may have significant global effects over time.

Mark Raymond and Justin Sherman are the authors of “Authoritarian Multilateralism in the Global Cyber Regime Complex” in Contemporary Security Policy, which is available here.


Zero-day vulnerabilities are powerful cyber weapons: Use them or patch them?

The U.S. government faces a dilemma regarding zero-day vulnerabilities: it can either stockpile or disclose them. In a new article, Marcelo M. Leal and Paul Musgrave show that Americans overwhelmingly support the disclosure of information about zero-day vulnerabilities to vendors.

In 2017, the WannaCry and NotPetya malware exploited a vulnerability in the Windows operating system, causing widespread havoc. Ironically, the U.S. National Security Agency (NSA) had been aware of this vulnerability for about five years. Instead of disclosing the vulnerability to Microsoft, however, the NSA held on to the knowledge—until the vulnerability was leaked in public forums.

This case illustrates a dilemma that the United States government faces when it discovers zero-day vulnerabilities. Zero-day vulnerabilities are software and hardware flaws that are unknown to computer vendors. As a result, they are enormously valuable to attackers since there is no defense against them. Intelligence agencies such as the NSA and CIA—as well as other governments and even some private firms—work hard to develop such zero-day exploits because of the advantages they afford attackers.

Once a vulnerability is discovered, an agency can either disclose information about it to vendors so that it can be patched or withhold it and add the vulnerability to its stockpile of cyber weapons. By withholding information, U.S. national security agencies can exploit zero days to penetrate the computer systems of their adversaries—yet doing so also leaves U.S. and allied entities vulnerable should an adversary independently discover these flaws and use them against the United States. By disclosing information to vendors, the U.S. government allows vendors to fix the vulnerability and secure their systems in a timely manner—but it also denies U.S. national security agencies the use of such attack vectors against adversaries.

Debates about how and where to draw the line between disclosure and stockpiling are a staple of cyber policy discourse. This is less of a dilemma for the U.S. public. Results from a survey experiment we conducted in late 2021 show that respondents are squarely in favor of disclosing information to vendors, even when informed that withholding this information could save many American lives in a future conflict. Our results also demonstrate that the likelier it is that an adversary could use a given zero day, the more Americans favor disclosing the vulnerability to a vendor.

The Vulnerabilities Equities Process

Since 2010, the U.S. government has had a policy in place to address this dilemma. The vulnerabilities equities process, or VEP, guides executive branch officials in their decision to disclose or to retain publicly unknown vulnerabilities. Official documents released to the public in recent years indicate that officials take several factors into consideration when they discover a zero-day vulnerability, such as the consequences of adversary exploitation and how quickly an exploit could be patched. Nonetheless, analysts have singled out two factors as critical for decisionmakers: how long a vendor will remain unaware of the flaw (the longevity of a zero day) and the likelihood that an adversary will independently discover it (its collision rate).

Those who think that vulnerabilities need to be patched to protect American interests from adversaries think the VEP is too weak. Those who think that a strong cyber offense is more important think that it is (or could become) too strong. Understanding how this debate will resolve requires researching many topics, such as the influence of different agencies and interest groups, but it also requires investigating public opinion. Even though cybersecurity is a technical field, and even though zero-day vulnerabilities are among the U.S. government’s most prized secrets, the public’s views on the issue could shape how politicians and officials craft policy—particularly if there’s another major incident involving zero-day flaws known to the U.S. government.

To explore how the American public thinks, we conducted a survey experiment testing whether different levels of longevity and collision rates influence respondents’ support for disclosing or withholding zero-day vulnerabilities. Respondents read a scenario that pitted retaining a vulnerability for use in a potential attack against Iran (saving many American servicemembers’ lives) against the possibility that it could be independently discovered by an adversary and used against the United States. We manipulated the collision rate to specify a high, medium, or low chance that an adversary could acquire the zero day. Separately, we manipulated whether the vulnerability would be likely to persist for a few months, a year, or several years.

The results were unequivocal. We found that the longevity of a vulnerability does not make respondents more or less likely to support disclosure. On the other hand, collision rates do influence respondents’ evaluations. As the likelihood that an adversary could independently discover a vulnerability goes up, support for informing the vendor about the vulnerability also increases.

Policy implications

There may be a substantial disconnect between the preferences of the public and those of the U.S. government regarding zero-day disclosure policy. Even though U.S. officials hint that disclosure is the default option in the vulnerabilities equities process, recent studies show that this might not be true in practice. Some research also suggests that the interests of the intelligence and law enforcement agencies are more represented in the VEP than those of the public and technology firms, suggesting a bias toward stockpiling zero days.

This public-government disconnect could lead to policy changes. Previous leaks showing that U.S. intelligence agencies failed to disclose zero-day vulnerabilities to vendors have already led the federal government to release information about the vulnerabilities equities process several times. Congress has also reacted to these cases. In recent years, lawmakers have introduced two bills that aimed to codify the VEP and could have made disclosure the default option by law. Neither, however, advanced in Congress. Given our findings, it is possible that agency policies may be subject to correction should policy windows open for a long enough period.

Marcelo M. Leal and Paul Musgrave are the authors of “Backwards from zero: How the U.S. public evaluates the use of zero-day vulnerabilities in cybersecurity” in Contemporary Security Policy, which is available here.

Private Sector Contribution to National Strategies of Cyber Deterrence

More often than not, the delegation of national security responsibilities to private actors has generated controversy. Notable cases include the United States’ reliance on private military contractors in the recent conflicts in Afghanistan and Iraq. Hence, it may come as a surprise that the current debate around private sector contribution to national strategies of cyber deterrence has been largely exempt from such controversies.

On the contrary, a steady consensus has grown around the idea that national strategies of cyber deterrence would benefit significantly from the direct participation of actors in the private sector. In particular, there have been repeated calls for tech companies, cyber-security firms, and owners and operators of critical infrastructure to bring their vast resources to the table in order to boost governments’ ability to fend off malicious cyber activity.

Without dismissing the opportunities originating from the contributions of the private sector, a new article written by Eugenio Lilli highlights how such private contributions could also pose significant security, legal, and moral challenges.

The first step in assessing the desirability of private sector contribution to national strategies of cyber deterrence is to define the concept of deterrence in cyber space. As is the case with many neologisms containing the prefix “cyber,” cyber deterrence lacks a universally agreed-upon definition. In the article, cyber deterrence is defined as the deterrence of malicious activity occurring within or through cyber space. It is also argued that deterrence in cyber space should be:

  • Restrictive. It should seek to shape and limit the overall frequency and severity of malicious activity rather than aim to dissuade all attacks at all times.
  • Comprehensive. It should encompass deterrence by denial, punishment, entanglement, and norms; it should rely on deterrent measures taken in the other operational domains of land, sea, air, and space; it should include the whole range of instruments of national power including diplomatic, information, military, economic, financial, intelligence, and law enforcement (aka DIMEFIL) instruments.
  • Dynamic. In response to rapid technological innovation, it should constantly monitor systems and networks, update defenses, improve intelligence sharing, patch vulnerabilities, and renew contingency plans; in response to change in cyber norms, it should implement measures aimed at actively shaping the evolution of norms in cyber space.
  • Complemental. It should not be expected to work best as a separate tool in an actor’s toolbox but rather as complemental to other forms of coercive and non-coercive strategic interaction.

By relying on the RCDC (Restrictive, Comprehensive, Dynamic, Complemental) conceptualization of cyber deterrence, the article identifies specific areas where private sector contribution can be especially beneficial to national strategies of cyber deterrence. For example, there is evidence to support the argument that private actors can be instrumental in hardening cyber defenses and enhancing resilience, sharing information, imposing costs on adversaries, attributing cyber incidents, creating strategic interdependencies, and advancing norms of appropriate behavior in cyber space.

Some important benefits of private sector contribution appear to be common to all areas. To begin with, the private sector can offer unique state-of-the-art technologies, highly skilled human capital, and critical funding to compensate for a national government’s limited resources. Moreover, while government authority is often geographically limited, private actors’ visibility and reach can extend beyond national borders. In addition, compared to the somewhat cumbersome policymaking processes characteristic of state bureaucracies, private actors enjoy more flexibility and speed – key abilities given the fast-changing nature of threats in cyber space.

Given the above, it is not surprising that the number of people calling for more private participation in national cyber deterrence is steadily increasing. However, as is often the case, the devil is in the details. The opportunities originating from private sector contributions are apparent, yet these same contributions also have the potential to raise serious security, legal, and moral challenges that need to be thoroughly understood.

For example, contracting a private company to host classified military information can give fast-track access to the latest technologies, but it could also endanger national security if the private company is successfully breached by a hostile actor. Similarly, private companies, especially big tech companies, usually employ people from all over the world. Where would these employees’ loyalty lie in the case of heightened international tensions or open confrontation? With the country that contracted them or with their country of origin?

Moreover, legal considerations could limit the willingness of the private sector to contribute to activities of intelligence sharing and active cyber defense. In the United States, both types of deterrence activities, while beneficial, may in some cases violate domestic law.

There are also instances of contributions that raise moral issues. For example, the private sector’s access to sensitive government information could lead to the abuse of such information for private gain. Private companies are ultimately responsible to shareholders rather than to the citizenry. How can they be held accountable to the nation’s interest? With regard to attribution of cyber incidents, commercial interests could make private actors somewhat biased in their public attributions. In particular, they could refrain from publicly attributing incidents to specific governments because they do not want to jeopardize their access to these countries’ profitable contracts and markets.

To conclude, these few examples show the need for a more nuanced debate on the nature and desirability of private sector contribution to national strategies of cyber deterrence, one that is not limited to highlighting the opportunities such contribution offers but also considers the related challenges.

Eugenio Lilli is a lecturer at University College Dublin. He is the author of “Redefining deterrence in cyberspace: Private sector contribution to national strategies of cyber deterrence”, Contemporary Security Policy, which is available here.

Contested public attributions of cyber incidents and the role of academia

In a recent article in Contemporary Security Policy, Florian J. Egloff reflects on the contested nature of public attributions of cyber incidents and what role academia could take up.

In the last five years, public attribution of cyber incidents has gone from an incredibly rare event to a more regular occurrence. For example, in October 2018 the UK’s National Cyber Security Centre publicized its assessment of cyber activities conducted by the Russian military intelligence service (also known by its old acronym, the GRU). Clearly, publicizing activities that other political actors like to keep secret is a political act – but what kind of political act is it and what happens when a government publicly attributes? 

For research on governmental public attribution, one can split the public attribution process into two phases: mechanisms that lead to public attribution and what happens after an incident is publicly attributed. Little research exists on either phase with regard to attribution of cyber incidents. This is problematic, as our understanding of contemporary security policy rests on understanding what drives threat narratives, how and why those particular ones are introduced publicly, and how contestation of threat narratives takes place in the public sphere. 

In a recent article for Contemporary Security Policy, I focus on this second phase of public attribution, namely, what happens after a government goes public about a cyber incident. Understanding this phase is important, as public attributions of cyber incidents are one of the main sources from which the public learns about who is attacking whom in cyberspace, thereby shaping the threat perception of the general public. Most attribution judgements are published by governments, the private sector, and a small number of civil society actors. To situate the knowledge space into which attribution claims are introduced, I reflect on this source of knowledge about cyber conflict by identifying how it structurally shapes our understanding of cyber conflict, in particular due to operational and structural (political, commercial, and legal) factors. In short, due to commercial incentives on the private sector side and political bias on the government side, the public data about cyber conflict structurally induces distrust in the representativeness of public attribution statements.

I then focus on the contestation of public attribution claims in democracies and the consequences such contestation brings. Contestation is fundamental to democratic politics. Open debate, the ability of everyone to freely voice opinions, and the emergence of truth through democratic discourse are foundational to the public sphere of democratic polities. Thus, the ability to contest is a sign of healthy democratic politics. However, as I show in the article, this openness to contestation, coupled with an information-poor environment, creates particular problems in the area of cybersecurity.

Attribution claims are introduced, contested, and even the possibility of performing attribution at all is put into question. Disinformation tactics are used to muddy specific attribution claims, leaving an electorate exposed to the coexistence of multiple “truths” and a fractured narrative of the past. Due to the secrecy surrounding governmental attribution processes, particularly intelligence agencies’ concerns about sources and methods, governments are often reluctant to reveal the evidence underlying their attribution judgments. These are ideal enabling conditions for other actors to contest governmental claims.

In a series of empirical examples (Sony, DNC, NotPetya), I reflect on the drivers of contestation after an incident is publicly attributed and show how attackers and other constituencies with various political and economic motivations promote particular narratives. The Sony incident highlights the difficulty a government can have in convincing an electorate of its claims when it has no track record of making attribution claims in public. The DNC intrusion shows how the attacker can take part in meaning-making activities, actively trying to dispel the notion that the government knows who is behind a cyber incident. Finally, the NotPetya incident illustrates how actors seem to have learned from the contested cases. In particular, the coordination of attribution claims across different countries and entities was specifically designed to bolster the legitimacy and credibility of the attribution claims at the international level.

Finally, I reflect on how academia could be a partial remedy to this situation. Academia, so far, has not been a strong participant in the discursive space around particular attributions. This is despite its commitment to transparency and independence, which in theory makes it well placed to offer an independent, interdisciplinary perspective on the state of cyber conflict. Thus, I argue for an increasing need for academic interventions in the area of attribution. This includes interdisciplinary research on all aspects of attribution (not just in cybersecurity) and independent research on the state of cyber conflict, both historical and contemporary. The main implications of this research on the contestation of attribution claims are that democracies should be more transparent about how attribution is performed, enable other civilian actors to study cyber conflict, and thereby broaden the discourse on what is one of the main national security challenges of today.

Florian J. Egloff is a Senior Researcher in Cybersecurity at the Center for Security Studies at ETH Zürich. He is the author of “Contested public attributions of cyber incidents and the role of academia”, Contemporary Security Policy, Advance Online Publication, available here. A shorter policy analysis on the subject can be found here.

Cyber-noir: Popular cultural influences on cybersecurity experts

In a recent article in Contemporary Security Policy, James Shires draws on film noir to discuss portrayals of cyber in popular culture.

In his testimony to the House of Representatives sub-committee on cybersecurity in 2013, Kevin Mandia, a cybersecurity CEO and former U.S. government official, emphasized that “cyber remains the one area where if there is a dead body on the ground, there is no police you call who will run to you and do the forensics and all that”. This was of course a metaphor, as there was no literal dead body in the Chinese cyber-espionage cases his company was known for. Nonetheless, he portrayed his role exactly like the start of a film noir: an absent police presence, a violent act and a dead body, and a self-reliant private investigator. Was this just a figure of speech? Or is there something else going on–something more fundamental to cybersecurity itself?

A foundational problem in cybersecurity is drawing a clear dividing line between legitimate and malicious activity. This is difficult because cybersecurity is an environment swamped with data, where identical tools and tactics are used for different ends, and where social and economic structures linking offensive and defensive action compound technical similarities. These obstacles to distinguishing between legitimate and malicious cyber activity are well recognized by both practitioners and scholars.

In a recent article for Contemporary Security Policy, I highlight another factor that is rarely discussed but no less important: popular cultural influences on cybersecurity experts. Cybersecurity expert practices are infused with visual and textual influences from broader discourses of noir in popular culture, including dystopian science fiction, fantasy, and cyber-punk: a phenomenon I call “cyber-noir”. These influences produce cybersecurity identities that are liminal and transgressive, moving fluidly between legitimate and malicious activity. To paraphrase a neat description of film noir leads, cybersecurity experts see themselves as “seeming black and then seeming white, and being both all along”.

In the article, I examine two forms of popular cultural influences on expert practices: visual styles and naming conventions. I suggest that these influences create a morally ambiguous expert identity, which in turn perpetuates practices that blur the legitimate/malicious boundary.

First, due to its relative novelty and digital basis, many concepts and objects in cybersecurity have no obvious visual association. This gap means that, as Hall, Heath, and Coles-Kemp suggest, many techniques of cybersecurity visualization deserve further critical scrutiny. Through code images signifying illegibility and technical sophistication, and pseudo-geographic “attack maps” emphasizing constant threat, cybersecurity is portrayed as a dark and uncertain world where simulation slips easily into reality and reality into simulation. A range of threatening identities using images of noir characters, various coloured hats, and hooded hackers adds to this atmosphere. These images and visual styles use noir aesthetics and palettes to convey transgression, danger, and moral ambiguity. Although light and dark shades are classically associated with good and evil, in cybersecurity–as in noir–both “good” and “bad” entities occupy the same place in the visual spectrum.

Second, naming conventions are infused with popular culture, through direct references and quotations and in their style, sound, and visual aspect. Many company names and analysis tools in cybersecurity evoke a popular culture crossover between noir, science fiction, fantasy, and cyber-punk. Vulnerabilities receive names that could be straight from dystopian fiction, like “Heartbleed,” “Spectre,” “Meltdown,” and “Rowhammer,” while others highlight a darker aesthetic, such as “Black Lambert” and “EternalBlue”. Although these are clearly strategic decisions, they also shape the identity of the individuals who work in these organizations and of the organizations themselves. Consequently, names with popular cultural influences and associations not only enliven the working day for cybersecurity experts, but constitute the moral orientation of their world.

In 2017, British youth Marcus Hutchins became well known among cybersecurity experts, following his portrayal as the person who singlehandedly stopped the devastating WannaCry ransomware that affected the UK’s National Health Service. However, Hutchins’ fame enabled other cybersecurity experts and U.S. law enforcement to follow a trail of domain names, malware names, and handles on hacker forums, including “ghosthosting,” “hackblack,” “blackshades,” and “blackhole,” to the creation of an illegal banking trojan named Kronos. Hutchins was arrested months after his public appearance and sentenced to time served in July 2019 for his role in distributing this malware. Hutchins’ case is an extreme example of the relationship between noir aesthetics and transgressive practices. As his story starkly illustrates, many cybersecurity expert identities are constituted in such a way that practices like hacking back, undercover intelligence collection, and participation in “grey” or “black” hacking forums seem a normal, even necessary, set of activities.

The article concludes that the fragile distinction between legitimate and malicious activity in cybersecurity expert discourses is not merely a question of technological similarities, exacerbated by particular economic and institutional structures. Instead, experts themselves perpetuate uncertainty over what is legitimate and malicious in cybersecurity. Their adoption of popular culture adds to the explicit obstacles confronting cybersecurity experts, suggesting that the task of separating legitimate and malicious is much more challenging than commonly thought. Consequently, the deepest difficulty in maintaining the legitimate/malicious binary–and therefore constructing a stable foundation for cybersecurity itself–is not the range of technological, social, and economic pressures explicitly recognized by cybersecurity experts, but their implicit embrace of cyber-noir.

James Shires is an Assistant Professor at the Institute for Security and Global Affairs, University of Leiden. He is the author of “Cyber-noir: Cybersecurity and popular culture”, Contemporary Security Policy, Advance Online Publication, available here.

How should we use cyber weapons?

In a recent article in Contemporary Security Policy, Forrest Hare argues that we should shift the cyber conflict debate from the “Can we?” question to the “How should we?” question.

The recent release of the United States Department of Defense’s 2018 Cyber Strategy, timed closely with National Security Advisor John Bolton’s declaration that the White House has authorized offensive cyber operations, suggests that the United States intends to take a much more aggressive approach to combatting perceived threats in the domain.

However, these developments generate as many questions as answers. For example, is the U.S. military prepared with the capabilities required to make good on the National Security Advisor’s declaration? How should the U.S. Department of Defense even structure its military capabilities to combat the threats it faces in the domain?

In my recent CSP article, I hope to spur the cyber conflict debate forward in a productive direction, away from a focus on strategic alarms, so we can get at answers to these questions. In other words, we must acknowledge that military conflicts have now expanded to cyberspace, and it is time to start ensuring that such conflicts are conducted in a professional manner that addresses all the valid concerns and implications of conflict in the domain.

With this backdrop, I argue in my article that developing precision cyber weapon systems, to be used during a lawful conflict, can be an important part of a responsible national security strategy to reduce the amount of violence and physical destruction in conflicts. To make this argument, I first describe a precision cyber weapon system in a military context. I then present three compelling rationales for the development of precision cyber weapon systems based on ethical, operational, and financial considerations.

For many years now, we have been engaged in debates about the potential for acts of “mass disruption” in cyberspace and the possible legal, moral, and other implications of such strategic incidents. Many writers and popular media have raised the alarm about the dangers that will confront us as a result of an all-out cyber conflagration. Is it possible that this resistance to accepting the usefulness of cyber capabilities has actually led to more death and destruction in conflicts?

Detractors may not consider that an unintended consequence of their conflation of issues, and their singular focus on potential strategic effects, may be greater risk to the warfighter, civilian populations, and even the taxpayer. Arguments against the use of any cyber weapon capabilities may put militaries and civilians on both sides of a conflict at unnecessary risk when kinetic weapons are unnecessarily chosen instead.

To be clear, I do not argue that precision cyber weapons will be a panacea. We should never expect cyber weapons to replace other weapons in conflict. There will always be a requirement for a spectrum of capabilities to defend a nation in all domains. However, I look forward to the day when there is a broad acknowledgement by military and academic professionals to consider precision cyber weapons an important force multiplier and component of a responsible national security strategy.

Forrest Hare is a retired Colonel in the United States Air Force, having served most recently at the Defense Intelligence Agency. His recent article “Precision cyber weapon systems: An important component of a responsible national security strategy?”, Contemporary Security Policy, Advance online publication, is available here.