Contested public attributions of cyber incidents and the role of academia

In a recent article in Contemporary Security Policy, Florian J. Egloff reflects on the contested nature of public attributions of cyber incidents and what role academia could take up.

In the last five years, public attribution of cyber incidents has gone from an incredibly rare event to a more regular occurrence. For example, in October 2018 the UK’s National Cyber Security Centre publicized its assessment of cyber activities conducted by the Russian military intelligence service (also known by its old acronym, the GRU). Clearly, publicizing activities that other political actors like to keep secret is a political act – but what kind of political act is it and what happens when a government publicly attributes? 

For research on governmental public attribution, one can split the public attribution process into two phases: the mechanisms that lead to public attribution, and what happens after an incident is publicly attributed. Little research exists on either phase with regard to the attribution of cyber incidents. This is problematic, as our understanding of contemporary security policy rests on understanding what drives threat narratives, how and why particular ones are introduced publicly, and how the contestation of threat narratives takes place in the public sphere.

In a recent article for Contemporary Security Policy, I focus on this second phase of public attribution, namely, what happens after a government goes public about a cyber incident. Understanding this phase is important, as public attributions of cyber incidents are one of the main sources from which the public learns about who is attacking whom in cyberspace, thereby shaping the threat perception of the general public. Most attribution judgements are published by governments, the private sector, and a small number of civil society actors. To situate the knowledge space into which attribution claims are introduced, I reflect on this source of knowledge about cyber conflict by identifying how it structurally shapes our understanding of cyber conflict, in particular due to operational and (political, commercial, and legal) structural factors. In short, due to the commercial incentives on the private sector side and the political bias on the government side, the public data about cyber conflict structurally induces distrust in the representativeness of public attribution statements.

I then focus on the contestation of public attribution claims in democracies and the consequences such contestation brings. Contestation is fundamental to democratic politics. Open debate, the ability of everyone to freely voice opinions, and the emergence of truth through democratic discourse are foundational to the public sphere of democratic polities. Thus, the ability to contest is a sign of healthy democratic politics. However, as I show in the article, this openness to contestation, coupled with an information-poor environment, creates particular problems in the area of cybersecurity.

Attribution claims are introduced, contested, and even the very possibility of doing attribution is put into question. Disinformation tactics are used to muddy specific attribution claims, leaving an electorate exposed to the coexistence of multiple “truths” and a fractured narrative of the past. Due to the secrecy surrounding governmental attribution processes, particularly intelligence agencies’ concerns about sources and methods, governments are often reluctant to reveal the evidence underlying their attribution judgments. These are ideal enabling conditions for other actors to contest governmental claims.

In a series of empirical examples (Sony, DNC, NotPetya), I reflect on the drivers of contestation after an incident is publicly attributed and show how attackers and other constituencies with various political and economic motivations promote particular narratives. The Sony incident highlights the difficulty a government can have in convincing an electorate of its claims when it has no track record of making attribution claims in public. The DNC intrusion shows how the attacker can take part in the meaning-making activities, actively trying to dispel the notion that the government knows who is behind a cyber incident. Finally, the NotPetya incident illustrates how actors seem to have learned from the contested cases. In particular, the coordination of attribution claims across different countries and entities was specifically designed to bolster the legitimacy and credibility of the attribution claims at the international level.

Finally, I reflect on how academia could be a partial remedy to this situation. Academia has so far not been a strong participant in the discursive space around particular attributions, despite its commitment to transparency and independence theoretically making it well placed to offer an independent, interdisciplinary perspective on the state of cyber conflict. Thus, I argue that there is an increasing need for academic interventions in the area of attribution. This includes interdisciplinary research on all aspects of attribution (not just in cybersecurity) and independent research on the state of cyber conflict, both historical and contemporary. One of the main implications of this research on the contestation of attribution claims is that democracies should be more transparent about how attribution is performed, enable other civilian actors to study cyber conflict, and thereby broaden the discourse on what is one of the main national security challenges of today.

Florian J. Egloff is a Senior Researcher in Cybersecurity at the Center for Security Studies at ETH Zürich. He is the author of “Contested public attributions of cyber incidents and the role of academia”, Contemporary Security Policy, Advance Online Publication, available here. A shorter policy analysis on the subject can be found here.

Cyber-noir: Popular cultural influences on cybersecurity experts

In a recent article in Contemporary Security Policy, James Shires draws on film noir to discuss portrayals of cyber in popular culture.

In his testimony to the House of Representatives sub-committee on cybersecurity in 2013, Kevin Mandia, a cybersecurity CEO and former U.S. government official, emphasized that “cyber remains the one area where if there is a dead body on the ground, there is no police you call who will run to you and do the forensics and all that”. This was of course a metaphor, as there was no literal dead body in the Chinese cyber-espionage cases his company was known for. Nonetheless, he portrayed his role exactly like the start of a film noir: an absent police presence, a violent act and a dead body, and a self-reliant private investigator. Was this just a figure of speech? Or is there something else going on–something more fundamental to cybersecurity itself?

A foundational problem in cybersecurity is drawing a clear dividing line between legitimate and malicious activity. This is difficult because cybersecurity is an environment swamped with data, where identical tools and tactics are used for different ends, and where social and economic structures linking offensive and defensive action compound technical similarities. These obstacles to distinguishing between legitimate and malicious cyber activity are well recognized by both practitioners and scholars.

In a recent article for Contemporary Security Policy, I highlight another factor that is rarely discussed but no less important: popular cultural influences on cybersecurity experts. Cybersecurity expert practices are infused with visual and textual influences from broader discourses of noir in popular culture, including dystopian science fiction, fantasy, and cyber-punk: a phenomenon I call “cyber-noir”. These influences produce cybersecurity identities that are liminal and transgressive, moving fluidly between legitimate and malicious activity. To paraphrase a neat description of film noir leads, cybersecurity experts see themselves as “seeming black and then seeming white, and being both all along”.

In the article, I examine two forms of popular cultural influences on expert practices: visual styles and naming conventions. I suggest that these influences create a morally ambiguous expert identity, which in turn perpetuates practices that blur the legitimate/malicious boundary.

First, due to its relative novelty and digital basis, many concepts and objects in cybersecurity have no obvious visual association. This gap means that, as Hall, Heath, and Coles-Kemp suggest, many techniques of cybersecurity visualization deserve further critical scrutiny. Through code images signifying illegibility and technical sophistication, and pseudo-geographic “attack maps” emphasizing constant threat, cybersecurity is portrayed as a dark and uncertain world where simulation slips easily into reality and reality into simulation. A range of threatening identities using images of noir characters, various coloured hats, and hooded hackers adds to this atmosphere. These images and visual styles use noir aesthetics and palettes to convey transgression, danger and moral ambiguity. Although light and dark shades are classically associated with good and evil, in cybersecurity–as in noir–both “good” and “bad” entities occupy the same place in the visual spectrum.

Second, naming conventions are infused with popular culture, through direct references and quotations and in their style, sound and visual aspect. Many company names and analysis tools in cybersecurity evoke a popular culture crossover between noir, science fiction, fantasy and cyber-punk. Vulnerabilities receive names that could be straight from dystopian fiction, like “Heartbleed,” “Spectre,” “Meltdown,” and “Rowhammer”, while others highlight a darker aesthetic, such as “Black Lambert” and “Eternal Blue”. Although these are clearly strategic decisions, they also shape the identity of the individuals who work in these organizations and the organizations themselves. Consequently, names with popular cultural influences and associations not only enliven the working day for cybersecurity experts, but constitute the moral orientation of their world.

In 2017, British youth Marcus Hutchins became well-known among cybersecurity experts, following his portrayal as the person who singlehandedly stopped the devastating WannaCry virus that affected the UK’s National Health Service. However, Hutchins’ fame enabled other cybersecurity experts and U.S. law enforcement to follow a trail of domain names, malware names, and handles on hacker forums, including “ghosthosting,” “hackblack,” “blackshades,” and “blackhole,” to the creation of an illegal banking virus named Kronos. Hutchins was arrested months after his public appearance and sentenced to time served in July 2019 for his role in distributing this virus. Hutchins’ case is an extreme example of the relationship between noir aesthetics and transgressive practices. As his story illustrates starkly, many cybersecurity expert identities are constituted in such a way that practices like hacking back, undercover intelligence collection and participation in “grey” or “black” hacking forums seem to be a normal, even necessary, set of activities.

The article concludes that the fragile distinction between legitimate and malicious activity in cybersecurity expert discourses is not merely a question of technological similarities, exacerbated by particular economic and institutional structures. Instead, experts themselves perpetuate uncertainty over what is legitimate and malicious in cybersecurity. Their adoption of popular culture adds to the explicit obstacles confronting cybersecurity experts, suggesting that the task of separating legitimate and malicious is much more challenging than commonly thought. Consequently, the deepest difficulty in maintaining the legitimate/malicious binary–and therefore constructing a stable foundation for cybersecurity itself–is not the range of technological, social, and economic pressures explicitly recognized by cybersecurity experts, but their implicit embrace of cyber-noir.

James Shires is an Assistant Professor at the Institute for Security and Global Affairs, University of Leiden. He is the author of “Cyber-noir: Cybersecurity and popular culture”, Contemporary Security Policy, Advance Online Publication, available here.

How should we use cyber weapons?

In a recent article in Contemporary Security Policy, Forrest Hare argues that we should shift the cyber conflict debate from the “Can we?” question to the “How should we?” question.

The recent release of the United States Department of Defense’s 2018 Cyber Strategy, timed closely with National Security Advisor John Bolton’s declaration that the White House has authorized offensive cyber operations, suggests that the United States intends to take a much more aggressive approach to combatting perceived threats in the domain.

However, these developments generate as many questions as answers. For example, is the U.S. military prepared with the capabilities required to make good on the National Security Advisor’s declaration? How should the US Department of Defense even structure its military capabilities to combat the threats it faces in the domain?

In my recent CSP article, I hope to spur the cyber conflict debate forward in a productive direction, away from a focus on strategic alarms, so that we can get at answers to these questions. In other words, we must acknowledge that military conflicts have now expanded to cyberspace, and it is time to start ensuring that conflict in the domain is conducted in a professional manner that addresses all its valid concerns and implications.

With this backdrop, I argue in my article that developing precision cyber weapon systems, to be used during a lawful conflict, can be an important part of a responsible national security strategy to reduce the amount of violence and physical destruction in conflicts. To make this argument, I first describe a precision cyber weapon system in a military context. I then present three compelling rationales for the development of precision cyber weapon systems based on ethical, operational, and financial considerations.

For many years now, we have been engaged in debates about the potential for acts of “mass disruption” in cyberspace and the possible legal, moral, and other implications of such strategic incidents. Many writers and popular media have raised the alarm about the dangers that will confront us as a result of an all-out cyber conflagration. Is it possible that this resistance to accepting the usefulness of cyber capabilities has actually led to more death and destruction in conflicts?

Detractors may not consider that their conflation of issues, and their singular focus on potential strategic effects, may have the unintended consequence of creating greater risk for the warfighter, civilian populations, and even the taxpayer. Arguments against the use of any cyber weapon capabilities may put militaries and civilians on both sides of a conflict at unnecessary risk when kinetic weapons are chosen instead.

To be clear, I do not argue that precision cyber weapons will be a panacea. We should never expect cyber weapons to replace other weapons in conflict. There will always be a requirement for a spectrum of capabilities to defend a nation in all domains. However, I look forward to the day when military and academic professionals broadly acknowledge precision cyber weapons as an important force multiplier and a component of a responsible national security strategy.

Forrest Hare is a retired Colonel in the United States Air Force, having served most recently at the Defense Intelligence Agency. His recent article “Precision cyber weapon systems: An important component of a responsible national security strategy?”, Contemporary Security Policy, Advance Online Publication, is available here.