Three generations of proxy war research

In a recent article, Vladimir Rauta evaluates the progress of the proxy wars debate. He identifies three generations of scholars: the founders, the framers, and the reformers. This conceptualization is helpful in thinking about how to take research on proxy wars forward.

In the first half of 2020, the Syrian civil war entered its tenth year, while the Libyan civil war became the Middle East’s most important proxy war. Iraq, still scarred by its civil war and the international campaign against ISIS, is once again turning into a battleground for foreign powers. At the same time, ISIS’s factions are quickly adapting to regional proxy games: the Islamic State in Yemen, for example, is transforming into an entity resembling a proxy, a tool in a broader conflict between regional players.

What is more, the renewed prospect of ethnic strife in Ethiopia comes only a year after the momentous awarding of the Nobel Peace Prize to its Prime Minister, Abiy Ahmed. The award was in recognition of Ahmed’s efforts to normalize Ethiopia’s relationship with neighbouring Eritrea, ending a decades-long cycle of proxy wars. That Ethiopia faces the prospect of proxy wars once more is testimony to the enduring appeal of war on the cheap and the frailty of agreements designed to end it. As such, proxy wars are neither new nor rare, and the same can be said of their study.

Over the last decade, proxy war research has matured in recognition of the multiple problems proxy wars pose to the international system. This presents an opportunity to take stock of the proxy war debate in order to understand its past, present, and future. Two questions are relevant here: First, how has the proxy war literature evolved? And, second, what has proxy war research added up to?

In answering the first question, we can think of the debate as evolving across three “generations”: (1) founders, (2) framers, and (3) reformers. The founders are the generation of scholarship that emerged during the Cold War and its immediate aftermath. Their pioneering work on proxy wars serves as a point of reference for theoretical and conceptual accounts rooted in the distinct socio-political context of that era.

The framers produced the scholarship that emerged in the aftermath of 9/11 and around the time of the Arab Spring. Not only did they register the absence of a debate on proxy wars, but they also set out the trajectory for their future study in a programmatic shift that drew on creativity, intuition, and intellectual vigor.

Finally, the reformers captured the rise of proxy wars as the Syrian, Yemeni, and Libyan civil wars collapsed under the external pressures of proxy dynamics. The Russian annexation of Crimea, the ensuing proxy war in south-eastern Ukraine, and the transformation of the so-called Obama Doctrine into a set of strategic responses through proxies added empirical weight.

Thinking about the debate through the lens of “generations” serves to show how much we actually know and how diverse research is (in terms of disciplines, sub-fields, methodologies, and theories), and helps set a benchmark for where research might go.

The second question invites us to reconsider the assumptions informing each generation’s innovative research. On the one hand, the three generations show that we have come to know a lot about proxy wars. On the other hand, this is undermined by the debate’s insistence that proxy wars are still “under-analyzed”, “under-conceptualized”, or “under-theorised”.

To assess the tension between framing the debate as “under-researched” and its actual advancements, we should consider, first, the enhancement and expansion of the historical basis of proxy wars research, and, second, the development of theoretically rich accounts of the strategic interactions behind proxy relationships.

In short, we should assess the role of both history and strategy for the future development of proxy war research. Because proxy wars invite a narrow reading of history which locates them at the centre of the Cold War superpower competition, future research should consider a historiography of the idea of “proxy war”.

What we need is a long-term perspective that rethinks proxy war beyond the confines of the Cold War and shows the trans-historical character of the considerations and constraints that shape decisions to wage war by proxy. A reappraisal of proxy war against a wider historical background has the potential to minimize myth-making and errors of analogy, and to provide insights that serve as more than sources of data.

Similarly, strategy helps us understand why proxy wars are now seen as, in General Sir Richard Barrons’s words, the most successful kind of political war being waged of our generation. The basic intellectual structure of strategy – ends, ways, means, and assumptions – serves well here, because proxy wars are a set of choices: over whom, by whom, against whom, to what end, and to what advantage to wage indirect war.

Strategy and strategic interaction offer a productive framework that allows policy and scholarly debate to move forward by shifting the focus to strategic bargaining between actors. Through this, we can appreciate the extent to which proxies are invested in warfighting, how other states might respond to a proxy’s strategic environment, and how to balance escalation against inaction or retreat.

Vladimir Rauta is a Lecturer in Politics and International Relations at the University of Reading in the United Kingdom. He is the author of “Framers, founders, and reformers: Three generations of proxy war research”, Contemporary Security Policy, which can be accessed here.

The concept of resilience and critiques of international intervention

International interventions are often criticized by scholars for not doing enough, including not ensuring enough local ownership. The newly emerging concept of resilience also falls victim to these critiques. In a new article, Pol Bargués-Pedreny warns that this may lead to a dangerous nihilism in processes of international intervention: the acceptance of a permanent failure.

International policymakers assisting disaster and conflict-affected societies often appear confused. While overwhelmed by implementation dilemmas, they continue working nevertheless. Their policies are riddled with inconsistencies that regularly fail or lead to unanticipated consequences, and yet they learn and try again.

A commentator captures this well when he writes: “policymaking has found its ways of living with affirmation. It has developed concepts of peace governance ambiguous enough to conceptually work even when failing in practice.” Why is it that policies to enhance resilience suffer from a chronic deficiency, which needs to be made good?

Nowhere is the perception of deficit clearer than in accounts that promote “local ownership.” Drawing on the poor track record of international interventions designed from the “top-down,” the underlying assumption is that interventions should be locally driven and context specific. Yet the policy of “local ownership” never seems to work out in practice. Sometimes local actors appear incapable of taking the lead; at other times the interests of different groups conflict with one another. Thus the preferences of international interveners end up imposing themselves.

Today, the key concern is how to bridge the gap between the discourse and practice of local ownership; how to fulfill the commitment to transferring competences satisfactorily, in every context and policy area, so that local actors are more than mere implementers of an external agenda. Critical scholars have insisted on the need to push the policy further. For them, as for policymakers, more resources and efforts are required to make local ownership real and meaningful. As one scholar notes, “[The international community must] rely more on the very people it is ostensibly trying to protect.”

The perception of deficit can also be seen when looking at the new technologies that enhance humanitarian interventions. The bond between technology and innovation for humanitarian purposes formed a decade ago and reached its climax in May 2016, at the first-ever World Humanitarian Summit. The Global Alliance for Humanitarian Innovation was created to bring the aid industry, governments, private sector partners, and hubs together to scale innovation. New technologies are at the core of innovation ventures and bring the promise of “finding needles in haystacks,” penetrating even the most isolated areas and leaving no one behind. Yet there are always biases, deficiencies, and glitches that need to be corrected.

For instance, digital maps collect large amounts of diverse information at unprecedented speeds, with the promise that information updated and verified by local actors could help practitioners understand, be attentive to, and respond to the everyday needs and concerns of people affected by conflict or disaster. However, the conclusion seems to be that the information provided by crisis-mapping projects is incomplete, distorted, and tinged with power biases and false representations of space. Thus, the demand is that more (and better) data should be gathered to meet expectations. Innovation always requires new and better maps and gadgets.

The third example where the perception of deficit energizes international intervention can be seen in accounts that demand longer missions and operations. International policymakers who seek immediate results provide reductionist and simplistic analyses, neglecting unforeseen effects. Prolonged engagements, by contrast, offer the opportunity to build contacts and strengthen alliances; to make insightful observations and conduct more accurate, context-specific, and in-depth analyses; and to increase the likelihood of witnessing the evolution of events and making serendipitous discoveries.

Although current international missions and operations are already committed to long-term conflict prevention (before conflict starts) and peace consolidation strategies (after conflict ends), “more” is always better. A generalised feeling is that implementation programs are still dominated by short-term concerns and are not open enough to unpredictability and change. They operate with too much haste. Therefore, long-term perspectives are always deemed necessary to foster resilience and adapt to uncertainty. There is never an end-state called resilience, where peace and harmony could settle: no mission, initiative or policy seems to get us close enough to finally achieve resilience.

What are the consequences of thinking that policies are always in the wrong? On the one hand, identifying a “lack” in processes of promoting resilience (something was absent, someone was omitted, something else could be done) may be liberating and useful, giving another chance of genuine improvement. On the other hand, however, it is important to relate this perception of deficit to contemporary forms of neoliberal governance that continually expand. A sense of “deficit” sustains “the economic logic of late-capitalism”: “the endless willingness to happily fail-forward into the future.” By “governing through failure and denial” and interpreting crises and past failures as new opportunities, measures for further monitoring and control are legitimized, while critiques and challenges to broader structures are deemed impossible.

The feeling that resilience is “always more” also leads to a dangerous nihilism in processes of international intervention: the acceptance of a permanent failure. As Jessica Schmidt puts it, resilience approaches today accept the problem of “never getting it right” and, in consequence, translate it into a virtue: “always having to adapt.” As practitioners are in awe of resilience, skepticism about programs and policies spreads. Results seem impossible to achieve and the end appears remote. The general direction is towards accepting idleness and despair.

Pol Bargués-Pedreny is a researcher at the Barcelona Centre for International Affairs (CIDOB) and the author of “Resilience is ‘always more’ than our practices: Limits, critiques, and skepticism about international intervention”, Contemporary Security Policy, 41(2), pp. 263-286. The article is available here.

Resilience and EU refugee policy: A smokescreen for political agendas?

“Resilience” enjoys widespread uptake across many and diverse domains – including security and crisis response. Shrouded in ambiguity and uncertainty, however, it may be just a buzzword, as we know little about the implications of resilience as a strategy for responding to insecurity and crisis. Exploring resilience in EU humanitarian and development policy and how it translates into practice in Jordan and Lebanon, we argue in a recent article that resilience-building may function as a smokescreen for buttressing “Fortress Europe” against migrants and refugees.

“Resilience” enjoys widespread uptake across many and diverse domains, from technology to business management, to urban planning and counselling. The word stems from the Latin “resilire” – to leap or jump back. It gained traction in the 1970s, when the Canadian ecologist Holling defined resilience as the ability of ecological systems to absorb change and disturbance. Borrowing from Holling, risk scholars like Wildavsky viewed resilience as “the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back”. Wildavsky argued that resilience was a more effective and cheaper strategy to deal with risks than anticipation and prevention. From the 1990s onwards, resilience became an integral component of disaster risk reduction (DRR) programmes, aimed at minimising the impact of natural disasters and enhancing recovery.

Policymakers have recently started to use resilience in the context of man-made disasters and crises. For example, resilience has been identified as a major leitmotif in the 2016 European Union (EU) Global Strategy for Foreign and Security Policy. Prior to the Global Strategy, resilience was already an important component of EU humanitarian and development policies, especially in the context of migration and forced displacement. The EU was not the first to use this buzzword: the UK placed resilience at the centre of its humanitarian and development aid in 2011. Shortly thereafter, USAID published policy and programme guidelines for “building resilience to recurrent crisis”. United Nations (UN) agencies and large international non-governmental organizations (NGOs) currently all have policies, guidelines and programmes aimed at building resilience.

Despite its widespread uptake, uncertainty remains about what resilience is, how it translates into practice, and what the implications of resilience-building as a response to insecurity and crisis are – qualifying resilience as a buzzword. The ambiguity surrounding buzzwords often leads scholars and practitioners to dismiss them as empty and meaningless. Yet buzzwords generally espouse strong (normative) ideas about what they are supposed to bring about. The assumptions and rationales underlying buzzwords, moreover, frequently remain unquestioned, making them interesting to study. In our recent article, we examine the EU’s turn to resilience by analysing key EU humanitarian and development policies. Subsequently, we delve into an empirical example of resilience-building in Jordan and Lebanon to explore how this buzzword translates into practice.

Our policy analysis yields two aspects that are key in EU resilience thinking. First, resilience-building requires humanitarian and development actors to be simultaneously involved in crisis response and to work closely together. The so-called “humanitarian-development nexus” resonates with older concepts aimed at bridging the ideological and institutional divide between humanitarian and development actors, such as Linking Relief, Rehabilitation and Development (LRRD). 

Second, resilience assigns significant importance to ‘the local’. This means, firstly, that the EU recognizes the importance of understanding context-specific vulnerabilities and their (root) causes, as well as what local capacities exist that humanitarian and development interventions could tap into, build upon, and strengthen. Next, resilience is strongly framed as the responsibility of national governments and local authorities. Finally, the EU constructs refugees in particular as an asset to host-country economies, their resilience dependent on access to host-countries’ formal labour markets. Refugees are turned into a development opportunity for refugee-hosting states – but at the same time constitute a threat to Europe.

How do these different aspects of resilience translate into practice in Jordan and Lebanon? Jordan and Lebanon host the largest numbers of Syrian refugees relative to the size of their populations. Government estimates indicate that Jordan hosts up to 1.3 million refugees and Lebanon 1.5 million – respectively 13% and 25% of their populations. In response to the challenges facing Syria’s neighbouring countries, a multi-agency response framework – the Regional Refugee and Resilience Plan (3RP) – was established in 2015.

In line with EU thinking, the 3RP combines a humanitarian response to protect Syrian refugees with a development response to build the resilience of national governments and affected host communities. Although the 3RP structure simultaneously engages humanitarian and development actors in the response, evidence shows that different funding modalities and tensions between (leading) UN agencies weaken rather than strengthen the humanitarian-development nexus in practice.

Second, whereas the 3RP country chapters are officially under the leadership of the Jordanian and Lebanese governments, significant challenges arise in practice. The involvement of the Lebanese authorities, especially, was limited at the start of the crisis, and their later statements and measures strained relations with the international community. Evidence indicating that Lebanon may strategically maintain the precariousness of Syrian refugees’ lives, moreover, points to the need for caution in insisting on national governments’ responsibility.

Finally, the same framing of refugees as a development opportunity underlies initiatives like the EU-Jordan Trade Agreement, which promises access to EU markets in exchange for refugee work permits. The nature of the Jordanian and Lebanese labour market – in combination with structural political, social and economic problems – makes refugees’ employment as a pathway to resilience an unlikely reality. It also constructs refugees as a commodity, to be exchanged for aid. 

In conclusion, the way in which resilience is understood, and the challenges that arise when translating it into practice, make us wonder whether this buzzword is not just a smokescreen for ulterior political motives. Building the resilience of “countries of origin and transit” may conveniently prevent migration while externalizing the control of migration and forced displacement to crisis-affected states. As Jordan and Lebanon continue to struggle with the impact of the crisis, the EU’s strategy of refugee containment may instead increase their vulnerability, ultimately threatening rather than safeguarding the security of Europe.

Rosanne Anholt and Giulia Sinatti work at the Vrije Universiteit Amsterdam. They are the authors of “Under the guise of resilience: The EU approach to migration and forced displacement in Jordan and Lebanon”, Contemporary Security Policy, which is available here.

The Afghanistan model for Somali peace negotiations

Mohamed Haji Ingiriis makes a case for peace talks with Al-Shabaab in Somalia on the model of the ongoing negotiations with the Taliban. This blog post builds on an earlier article published in Contemporary Security Policy.

The ongoing talks with the Taliban in Afghanistan held in Doha and Moscow have generated some enthusiasm in Somalia, as many Somalis demand similar talks with Harakaat Al-Shabaab Al-Mujaahiduun (Al-Shabaab).

Al-Shabaab, Somalia’s Taliban, has evidently heard the growing demands from the Somali public for the United States to hold direct negotiations with the insurgent movement, as it has with the Taliban. Thus far, however, the movement has issued no statement. Al-Shabaab’s silence can be interpreted either as acceptance or as rejection of any talks.

For many years, Al-Shabaab has insisted on not talking to the Western-backed ‘puppet’ regimes in Mogadishu. Yet some elements within the Al-Shabaab leadership have privately contacted the current president of the regime in Mogadishu and told him that they would be ready to talk.

Indeed, Al-Shabaab has negotiated successfully in the past with some African governments, such as South Africa in 2010, over safety and security issues around the World Cup (see Stig Jarle Hansen, Al-Shabaab in Somalia: The History and Ideology of a Militant Islamist Group, 2013). This is an indication that Al-Shabaab is open to negotiations, apparently when doing so benefits its politics.

Legitimacy and Strategy

Al-Shabaab’s strategy is to strike a bigger bargain with the United States or other international actors involved in Somalia. For the calculating Al-Shabaab leadership, talking to the big powers is much more beneficial than talking to a dysfunctional failed state in Somalia.

In many ways, Al-Shabaab is similar to the Taliban, which long refused to sit and negotiate with the regime in Kabul, accepting talks only with the United States, because – like the Al-Shabaab leadership – the Taliban leadership sees the Western-backed Afghan entity as a ‘puppet regime’.

Talks with insurgency groups like Al-Shabaab or the Taliban start with tit-for-tat questions of legitimacy: who should talk to whom, about what, when, and why. But the end goal is a political settlement to create peace in war-torn societies like Somalia and Afghanistan.

If one lesson can be drawn from the Taliban manoeuvres, it is that Al-Shabaab will eventually come to the table with the regime in Mogadishu. In the third round of the negotiations between the regime in Kabul and the Taliban, the Afghan leadership came to the table not as an official state government, but more or less as an observer entity.

A New American Approach?

In Somalia, the United States can certainly play the role Russia is currently playing in Afghanistan. Washington has a history of violent engagement with Somalia in the 1990s, not unlike Moscow’s violent engagement with Afghanistan in the 1980s.

The United States can now change course by taking another route. It needs to engage with all Somali stakeholders, including Al-Shabaab, regardless of their political or religious position. In this way, the United States can change the bad image held by many Somalis that Washington works against Somali interests, past and present.

Today, there are multiple (internal and external) conflicts in Somalia, but the main critical conflict is the one between Al-Shabaab and the international community’s forces in Somalia. The regime in Mogadishu acts in this war as a rubber stamp for the United States, legitimising its operations in the form of drone attacks on Al-Shabaab areas in southern Somalia.

The African Union forces, funded by Western countries, particularly the European Union, are seen by most Somalis as mercenary forces for the United States. The AMISOM forces are, in effect, doing a job for the United States by protecting the corrupt regime in Mogadishu from Al-Shabaab.

Through drone strikes, the United States continues to degrade Al-Shabaab’s capacity to carry out regular attacks outside Mogadishu, but Washington will hardly eliminate Al-Shabaab’s ability to conduct its usual attacks in Mogadishu and elsewhere in the East and Horn of Africa region.

Yet, during my fieldwork research in Somalia over the last four years, many Somali elders and intellectuals in the capital, Mogadishu, and elsewhere in southern Somalia would regularly express concern about the United States’s aggressive and uncompromising approach to Al-Shabaab even as it negotiates with the Taliban.

“Why is the United States not engaging positively with Al-Shabaab? Why is the United States constantly conducting airstrikes against Al-Shabaab in Somalia, but not against the Taliban in Afghanistan?” These were some of the questions posed by local Somalis on the streets or in Somali cafes.

Recent research into the Taliban in Afghanistan has similarly revealed that the Taliban is not uniquely cruel: compared with adherents of other 20th-century ideologies such as socialism and communism, they have killed fewer people and have rarely been charged with genocide. The same can be said of Al-Shabaab.

At a time when Somalia marks more than three decades without functional governance in the south, there is no better time to talk directly to armed insurgencies like Al-Shabaab that pose a threat to external attempts to impose a suitable type of entity on Mogadishu.

Mohamed Haji Ingiriis is pursuing a doctoral degree at the Faculty of History, University of Oxford, UK. He is the author of “Building peace from the margins in Somalia: The case for political settlement with Al-Shabaab”, Contemporary Security Policy, 39(4), 2018, 512-536, available here.

The Counterproductive Consequences of America’s Vicarious Wars

In seeking to confront various security threats while simultaneously evading the associated military and political costs, America has come to rely on the vicarious warfighting approaches of delegation, danger-proofing, and darkness. Thomas Waldman shows in a new CSP journal article that the results are not promising. Security is not a commodity that can be bought on the cheap.

Following the failed military campaigns of the 2000s, America has not shied away from military intervention but has instead settled upon a low-level, limited, and persistent mode of fighting which I term ‘vicarious warfare.’

The concept covers a diverse range of military approaches that come together in different combinations in different contexts. It is broadly characterised by the outsourcing of military missions to proxy actors, the use of force in ways that minimize the danger to American personnel and assets, and the conduct of covert and special operations in the shadows.

These methods are held together by decision-makers’ belief that wars can be fought economically, at arm’s length, and in discrete, limited and controllable ways, while at the same time evading various risks and restraints. In a recent article, I argue that the rationales underpinning the prosecution of vicarious warfare are deeply flawed. The attractions of such methods are clear, but the benefits are outweighed by longer-term harmful effects.

U.S.-led Operation Inherent Resolve has arguably been fought as an archetypal vicarious war and, by late 2017, had largely succeeded in removing Islamic State from its major strongholds in Iraq and Syria. Welcome news, of course, but at what cost for the future?

In Syria, American-backed groups find themselves in confrontation with regional powers, and new political realities make future ethnic strife between Kurds and Arabs likely. In Iraq, the way the operation to retake Mosul was conducted means “there is a real risk that this battle will form one more chapter in a seemingly endless cycle of devastating conflict.”

But how can we account for the emergence of vicarious warfare? Looking back to the early 2000s, influential voices such as General Sir Rupert Smith suggested that we had entered an age of “war amongst the people” – timeless irregular conflicts involving non-state actors and influenced by an ever-present mass media. Many American security elites thought it advisable to steer clear of such messy conflicts, especially following the bloody debacles in Iraq and Afghanistan.

Yet, contrary to informed and sober analysis, politicians continued to believe that America was assailed by various menacing threats and risks – such as those posed by radical Jihadists and other rogue actors – that had to be confronted with force. But how to do this without being dragged into yet more debilitating irregular wars?

Evolving methods appeared to offer a way to essentially flip Smith’s logic and fight “war without the people” – to prevent serious security incidents, while keeping the necessary measures economically affordable, socially acceptable, legally permissible, and politically viable. Responsibility could be delegated to those designed to take considerable risks (special forces), those about whom the public is little concerned (private contractors, proxies), or those with the ability to sweep risk under the carpet (CIA).

This is the essence of vicarious warfare, and I suggest that it can usefully be understood as comprising three “Ds”: delegation, danger-proofing and darkness. Briefly considering each in turn, it is possible to see how vicarious methods lead to consistently and cumulatively counterproductive outcomes.

Delegation

The notion that proxy actors might serve as effective force multipliers while concealing the true costs of war appears persuasive. However, the empirical record is less positive, and most rigorous studies are profoundly sceptical. Rushed programs to build state security forces, sacrificing quality and sustainability for immediate effect, have resulted in “hollow” forces plagued by corruption, divisions, and operational deficiencies. Support to irregular militias has been typified by short-term gains balanced by long-term harm: most groups have been associated with a lack of control, radicalization, and abuses. Similarly, incidents involving private contractors have generated baleful consequences, leading scholars to conclude that the benefits of outsourcing “are either specious or fleeting, and its costs are massive and manifest.”

Danger-proofing

Driven by increased political interference in decisions that are usually the responsibility of commanders, America fights so as to minimize harm to American personnel. Yet, there are reasons to believe that excessive protection undermines operations and even increases the risk of casualties. Airpower and stand-off weapons such as armed drones and cruise missiles – extreme forms of danger-proofing, offering protection through distance – have rained death on America’s enemies. Yet, insurgent organizations “exhibit a biological reconstitution capacity” because the underlying causes of their regeneration remain unaddressed. The costs of unremitting drone warfare outweigh whatever tactical gains they deliver.

Darkness

Covert action, special forces, and rapidly emerging offensive cyber warfare capabilities seemingly allow elites to attain objectives while evading difficult political questions. Yet, such approaches have contributed to major “blowback” and led to embarrassing political crises. Special forces have provided support to local forces, enabling impressive battlefield victories. Yet, focusing on “kinetic” operations has distracted attention from addressing critical underlying issues. Attempts to remove terrorist leaders through “decapitation” strikes have failed to defeat targeted groups, and may have contributed to their longer term lethality.

The three “Ds” are all adopted for their attractiveness as low-cost, tactically effective approaches to pressing challenges. Superficially, these approaches are not entirely without merit. Rather, it is the way they have come to drive policy that leads to counterproductive outcomes. They distract decision-makers from addressing vital political dynamics, encourage militarised approaches that exacerbate complex problems, and drag America into unintended commitments.

Perhaps more concerning is the deeper self-harm being inflicted on the American polity. The normalization of the persistent use of military force, the expansion of under-scrutinized executive authority, and the rise of xenophobic populism are perhaps just indications of worse things to come.

The record of the Trump administration’s first year in office suggests the central dimensions of vicarious warfare look set to persist. Trump’s loosening of rules governing the use of force by commanders and the marginalization of the State Department may usher in an era of unprecedented militarization, while the costs borne by civilians – directly through bombings, raids, and abuses, or indirectly through protracted conflict and psychological trauma – cumulatively foster discontent and continued resistance.

Thomas Waldman is a lecturer in security studies at Macquarie University. He has published widely on war, military strategy, and contemporary conflict. His Twitter handle is @tom_waldman. He is the author of “Vicarious Warfare: The Counterproductive Consequences of Contemporary American Military Practice”, available here.

Why September 11 and drones don’t tell the whole story about targeted killings


To understand the proliferation of targeted killing as a new method of warfare, we have to look beyond events like 9/11 or the emergence of new technology.

For centuries, assassination was an accepted instrument of foreign policy and considered a normal practice. During the early modern period, however, resorting to assassination gradually became a taboo, something modern states would not do because of their self-perception as modern. Today we observe a weakening of this taboo. Reframed as “targeted killing,” assassination seems to move towards normalization, as more states engage in the practice and, instead of denying it, openly justify targeted killing strategies. “The gloves are off,” a senior CIA official stated mere weeks after the attack on the World Trade Center, “[l]ethal operations that were unthinkable pre-September 11 are now underway.”

Scholarly attempts at making sense of this normative change sometimes seem to implicitly share this assessment. They tend to overemphasize the role of September 11, 2001 and the ensuing “War on Terror” as turning points. Similarly, scholars have argued that the anti-assassination norm has been eroding because of the development and availability of drone technology. Consequentially, the vast majority of studies concerned with such normative change only look at post-9/11 cases. In my article, I seek to shift the focus. Rather than concentrating on major events or technology, I highlight the pivotal importance of two meta-norms, sovereignty and liberal thought, in the transformation of assassination norms prior to the War on Terror.

It has often been argued that historical state-sponsored assassination and present-day targeted killing constitute two completely different subjects, since the targeted killing of terror suspects seems so different from the headline-grabbing assassinations of state leaders during the 19th and 20th centuries. Yet the two share a common normative realm. When the term “targeted killing” was coined in the late 1990s and early 2000s, it represented a deliberate attempt to render some forms of killing permissible precisely by uncoupling them from their restrictive historical assassination context. Indeed, today’s targeted killing programs largely rest on similar logics, namely the assumption that terrorist networks are centralized enough to allow attackers to degrade enemy functioning by killing their leadership.

It is beyond doubt that 9/11 marked a severe turning point in security practices, and my article does not seek to refute its general importance. However, the normative underpinnings of those shifts were subject to much slower change – not as rapid as cursory accounts of the history of assassination might suggest. This transformation started not only before 9/11 but also well before the end of the Cold War.

During the early modern period, state-sponsored assassination became increasingly rejected due to the emergence of sovereign statehood and liberal thought. Those are reflected in debates about assassination as a specific (and, from a liberal perspective, deplorable) form of killing, as well as debates about the special protection of certain persons from being targets of assassination due to their status as representatives of sovereign statehood. This distinguishes assassination from many other changing international norms.

Liberal norms and sovereignty norms have frequently collided, as the case of humanitarian intervention and the “responsibility to protect” exemplifies: here, a liberal responsibility collides with the sovereignty rights of nation states. The same is true for most norms rooted in human rights discourse, since the mere existence of such a norm means that it is universal enough to have some effect on the behavior of states, which by definition generates a tension with state sovereignty. It can be argued that the tension between the two meta-norms of sovereignty and liberal thought constitutes the core of most instances of norm contestation.

In this sense, assassination norms are peculiar. Rather than being in tension with one meta-norm and shielded by the other, they are rooted in both discourses. At the very core of the assassination/targeted killing normative realm lies an incentive to protect the long-term stability of sovereign states and a state-based order and a liberal impetus to avoid harm to human beings.

As I maintain in my article, this dual rootedness also helps to explain the weakening of the norm, as each meta-norm can be invoked by actors in order to reinterpret it. On a grand scale, the second half of the 20th century saw an overall strengthening of liberal values at the expense of state sovereignty. During the same period, however, actors began emphasizing assassination’s sovereignty implications at the expense of its connection to liberal meta-norms.

Over time, the condemnation of state-sponsored assassination had become a mere subset of sovereignty, no longer shielded by its original powerful liberal underpinnings. Hence, when states began to openly advocate targeted killing policies in the early 21st century, precisely on the ground of liberal values and in spite of sovereignty during the War on Terror, the normative ground had already been prepared.

Mathias Großklaus is a PhD candidate at the Graduate School of North American Studies, Freie Universität Berlin. He is the author of “Friction, not erosion: assassination norms at the fault line between sovereignty and liberal values”, Contemporary Security Policy, 38(2), 260-280. It is available here.

Caveats in coalition operations: Why do states restrict their military efforts?

Caveats refer to the reservations states impose on how their forces can operate when assigned to a military coalition operation. Many argue that caveats have been a particular problem for unity of effort in multinational military coalition operations in the post-Cold War period.

The practice of caveats rose to prominence in defense and policy circles with NATO’s ISAF campaign in Afghanistan, where they were often singled out as one of the most significant causes of NATO’s lack of operational effectiveness. While states sent troops to Afghanistan, the problem for NATO was that many nations set heavy restrictions on what their forces were permitted to do. Some could not operate at night. Others could not take part in offensive operations. The most commonly used restrictions were perhaps geographical limitations on where forces could operate in Afghanistan.

Caveats are often mentioned in the context of NATO’s operations in Afghanistan. Nonetheless, similar examples of national reservations are also well known from other coalition operations in the post-Cold War era.

While it is unusual for states to fully surrender their military forces to operate under other nations’ command, caveats represent a particularly puzzling type of reserved state behavior in military coalitions. Why would states provide a specific military capability and then prevent the coalition from using the full potential of those forces by applying caveats? If a state for some reason finds it necessary to adjust its support to the coalition, would it not be more meaningful to send a military capability that the coalition could use without reservations?

The use of caveats is further puzzling when we take into account how controversial the practice has become over the last decade. Operationally, caveats hamper coalition commanders’ flexibility and often require coalition forces to fight with one hand tied behind their back – reducing the coalition forces’ military progress. One U.S. general even referred to national caveats as “a cancer that eats away at the effective usability of troops”. Politically, caveats are contested because they can easily be seen as part of a buck-passing strategy, adding more burdens to those states that do not apply caveats – at the risk of breaking the cohesion among coalition partners.

So, what motivates states to apply caveats to their military forces in coalition operations when such reservations limit military progress and weaken the political cohesion of the coalition? In my recent study on the use of caveats by Denmark, Norway, and the Netherlands during the NATO operation in Libya, I found three possible causes that can lead to caveats.

First, confronted with the question of whether or not to join military coalition operations, many governments have found themselves between a rock and a hard place – between external pressure for supporting allies and domestic skepticism about what the external pressure demands and exactly how to respond to it. To gain sufficient domestic support for making a military contribution to a coalition, it might be necessary for a government to add caveats to address concerns among political parties that can block the decision to make a contribution. For a government eager to see their forces take part in coalition operations, it might be better to make a reserved contribution than to make no contribution at all.

Second, the use of caveats is rarely fully determined by the need to make a domestic political compromise. Domestic factors help to explain whether or not there will be caveats, while external pressure helps to explain the form that such caveats take. Clever national policy-makers will spot opportunities for how caveats can be implemented. By adjusting how caveats are practiced, more of the units’ military value to the coalition operation can be maintained. As such, decision-makers can strike a better balance between making a more relevant contribution to a coalition’s demand for military support, on the one hand, and maintaining domestic support for such a contribution, on the other.

Third, even with unanimous domestic support for a nation’s military participation in a coalition operation, there is another possible source of caveats. Ideally, for the purpose of utilizing the military resources at the coalition’s disposal, a coalition commander would like to have no national strings attached to the contributed units under his or her command. However, cutting off every national string to a national military unit to ease the challenges of coordinating military efforts might be counterproductive. In the absence of clear guidance from their national principals, military officers might themselves apply reservations for fear of reprimands or of causing a domestic political crisis by simply following coalition orders.

In military coalitions, operational effectiveness hinges on states’ ability to coordinate their military efforts. The phenomenon of caveats in post-Cold War coalition operations illustrates how national control has challenged states’ ability to coordinate their military efforts when operating together, and how this has affected coalition forces’ operational effectiveness. With different causes leading to caveats, there is no easy way to halt the growing practice of caveats in coalition operations. To overcome the practical problems that caveats create, policy-makers and military decision-makers should develop a better understanding of why caveats appear.

Per Marius Frost-Nielsen was a PhD candidate at the Department for Sociology and Political Science, Faculty of Social Sciences and Technology Management, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. He is the author of “Conditional commitments: Why states use caveats to reserve their efforts in military coalition operations”, Contemporary Security Policy, 38, forthcoming. It is available here.

Why democracies may support other democracies – but not autocracies – against rebellions

Democratic peace theory has been extensively tested in cases of interstate war. These insights should also be used to better understand intervention in civil wars. Our research shows that it matters whether the regime fighting against rebels is a democracy or an autocracy.

Democratic regimes are often seen as tolerant: they solve their domestic disputes by peaceful means, and they protect civil liberties and political rights, such as freedom of speech or the equal right to participate in fair and open elections. Autocratic regimes, on the other hand, are far less tolerant, more suppressive, and generally more violent in their domestic politics. And if the international reflects the national, it only makes sense that we will see more violence coming from authoritarian, rather than democratic, countries. The “democratic peace theory” turns this intuition into an academic pursuit; its most renowned (dyadic) version holds that democracies do not fight each other. But even if true – and quite a few dispute it – why is it so? Explanations have roughly diverged into two branches: structural-electoral causes and normative, liberal ones.

The structural rationale suggests, for example, that electoral considerations make leaders more cautious about waging war because the domestic cost might be high: voters may well vote against them in the next elections. Therefore, when both leaders come from democratic political systems, the probability of war is lower. The normative rationale suggests, on the other hand, that democratic leaders are socialized into the peaceful resolution of domestic conflicts, and externalize this liberal behavior to international politics. Therefore, when they face another democratic leader, they trust each other to favor peace over violence.

Scholars have extensively tested both arguments. The large majority of the literature on democratic peace theory has focused on militarized interstate disputes, say between France and Germany. Yet, the above claims have far more applications. For example, they can be applied to covert actions by democracies. One application, which we tested in this study, is that governments consider regime type when weighing intervention in civil wars.

The liberal hostility towards autocracies should drive democracies to avoid supporting embattled autocracies, perhaps even to support rebellions against them. Cases in point are the U.S. decisions to withdraw support from the oppressive Iranian Shah and Nicaragua’s Somoza, as well as U.S. aid to rebels against the regime of Assad. On the other hand, democracies have occasionally also helped bring autocrats to power and keep them there. Cases in point are France’s assistance to the Algerian regime, Israel’s backing of the Hashemite Kingdom of Jordan (against Palestinian Liberation Organization rebels), and U.S. support for various embattled dictators in developing countries (e.g., Indonesia’s Suharto throughout the anti-communist purge).

From a liberal normative perspective, a democratic government would be held as legitimate, being fairly elected by its constituency, while a violent organization that fights it would be seen as defying liberal norms, its violent conduct deemed illegitimate. Conversely, a non-democratic regime would not enjoy this liberal legitimacy – in the eyes of democracies – and its rival might be seen as more legitimate despite its violent acts. Indeed, autocratic leaders are seen as being in a permanent state of aggression against their own people. Thus, the more an embattled regime is seen by the potential intervening democracy as adhering to appropriate norms, the more likely is intervention on its behalf, against the rebellious organization.

Compared to the normative account, the structural account seems less pertinent. If democratic leaders avoid sending troops to fight for a foreign government, they can minimize potential audience costs. Moreover, since support for a foreign embattled regime is often veiled, the democratic leader need not navigate the formal systems of checks and balances, or face as fierce an organized opposition or open public debate. Nonetheless, if the foreign intervention receives wide media coverage, it may become a subject of public debate – an electoral factor particularly relevant for decision-makers in democracies. Also, the more democratic the regime, the greater the likelihood that the opposition will be able to mobilize protest and exact a higher political price from the government for supporting a non-democratic government. Moreover, when a military intervention abroad is overt, it increases the number of institutions that must approve the decision.

We may thus hypothesize that democratic leaders considering support for non-democratic governments in their intrastate wars would take into account the negative institutional and public opinion implications, and be less inclined to lend a hand to such embattled non-democratic regimes. In the statistical analysis we conducted, we found that autocracies almost never support democratic governments in intrastate wars. The results also support the prediction that democracies support embattled autocratic regimes much less than autocracies do (see figure). Put differently, the more democratic two states are, the higher the probability that one would support the embattled other.

In conclusion, these findings allow us to better understand democratic foreign policy and extend the empirical validity of democratic peace theory to more indirect and sometimes subtler forms of conduct. Democracies behave differently towards governments that face intrastate war, partly based on their regime type, and are more inclined to support governments that are more democratic.

Ogen S. Goldman is a Lecturer at Ashkelon Academic College. Uriel Abulof is a Senior Lecturer of Politics at Tel Aviv University and an LISD research fellow at Princeton University. They are the authors of “Democracy for the rescue—of dictators? The role of regime type in civil war interventions”, Contemporary Security Policy, 37, 341–368. It is available here.

Armies should be self-aware when using historical lessons

Military strategy is often informed by lessons from the past. Which lessons armies pick up and use, however, depends on organizational filters. Due to organizational layering, armies may collect contradictory lessons leading to incoherent policy.

The study of success and failure in past wars has been closely intertwined with the emergence of strategic thought. Prominent strategic thinkers, such as Machiavelli, Clausewitz, or Liddell Hart, have relied on the history of past campaigns to analyze and improve warfare in the present. And during the recent wars in Iraq and Afghanistan, there have been lengthy debates on which lessons from the past have been neglected, and which have been applied wrongly.

However, so far there have been no systematic attempts to theorize how armies learn from their historical experience. In my article “The Pitfalls of Learning from Historical Experience”, I propose a pioneering theoretical argument to explain why the British Army discussed historical lessons for the Afghanistan mission (ISAF) in a contradictory way.

Why contradictory? Using research papers written by staff officers as well as doctrinal pamphlets, I observe two strands of historical experience that dominated the internal debate on lessons for the Afghanistan mission: the Anglo-Afghan Wars of the 19th and early 20th centuries, and the colonial counterinsurgency campaigns conducted after 1945, including Malaya. However, this debate is characterized by a significant difficulty: due to fundamental differences in their nature, the two strands of lessons cannot be integrated into operational strategy without losing coherence.

The lessons from the Anglo-Afghan Wars rest on the assumption that Afghan society is so different that any military approach needs to be tailored to the local Afghan context. For instance, the Land Warfare Centre’s pamphlet presenting lessons from the Anglo-Afghan Wars states:

The Afghans are a proud and independent people who resent foreign interference and especially foreign militaries that they construe as occupation forces. They have always resisted external forces and attempts to change traditional ways; […] there is room for reintegration and possibly reconciliation but only when the use of force against insurgents has applied enough pressure on the insurgent/tribesman that his options are limited enough to make him want to move from one side to the other.

By contrast, the suggested lessons from the post-1945 campaigns assume that there is a set of universally applicable principles that, if implemented coherently, are a central condition of success. In the words of an officer,

not only are the principles contained in British COIN doctrine relevant to modern COIN operations they are also applicable to a wide range of conflict situations, from peacekeeping to general war.

This contradiction – between adaptation to the perceived specificity of the Afghan context and adherence to a universal set of principles – has also had implications for coherent operational decision-making. For the initial deployment of British forces to Helmand province in the summer of 2006, the British Army had prepared an integrated civil-military plan influenced by many of the principles that were formalized in the aftermath of the Malaya campaign. However, operational commanders decided to deviate from this plan only a few weeks after their arrival. This can be explained by a perception of a historically violent Afghan society, in which the Taliban insurgency could only be stopped through the determined use of force.

Why do these apparently rather incompatible sets of lessons coexist in British military thought and practice? It is important to understand the stages of internal evolution of military organizations, which determine what kind of lessons are selected and transmitted at specific points of time. I introduce the concept of ‘layered organizational culture’. This expression relates to the idea that existing sets of ideas and organizational routines will determine how a military organization processes new experiences.

As organizational culture changes, so will the ways in which experience is handled. However, earlier layers of organizational culture continue to interact with more recent ones. This can lead to the sub-optimal co-existence of inherently incompatible lessons. When contemporary military organizations study experience from different stages of their historical experience, it often results in recommendations that are perceived to be legitimate although they are taken from greatly diverging contexts of organizational needs and perceptions.

The history of the British Army’s efforts to learn from its colonial experience illustrates this argument well: during the Victorian era, although the bulk of the British Army was deployed in permanent garrisons all over the Empire, a systematic evaluation and transmission of lessons gleaned from colonial operations did not happen. Experience was compartmentalized within locally deployed regiments, and there were no attempts to build a universally applicable doctrine. Internal debates across the army dealt almost exclusively with strategy and tactics for interstate warfare on the European continent.

Perhaps the only ‘universal’ lesson transmitted from colonial operations was that every context was unique, and that local commanders had to show initiative in order to tailor strategy to local requirements. As a result, the defeat in the First Anglo-Afghan War was attributed to the specificity of the local context, that is, the xenophobic and warlike nature of Afghan society, and adaptation to this ‘alien’ context was the main lesson transmitted within organizational memory.

Organizational culture regarding the use of colonial experience changed after 1945. This was a result of changes in the Army’s force posture. The strategic reserve forces that were rapidly shipped from one colonial uprising to the next did not have the time to develop that sense of local awareness that was perceived to be necessary for success. Instead, from the Malaya campaign onwards, doctrinal thinkers started to look for principles that could be easily taught and applied across diverging contexts. But this new layer of organizational culture interacted with the one rooted in the Victorian Army. As a result, ground commanders continued to enjoy a tremendous amount of autonomy with regards to the interpretation and implementation of the ‘classical’ principles of British counterinsurgency doctrine.

What lessons can be gleaned from the use of lessons by the British Army in the context of the ISAF mission? It would be neither realistic nor helpful to abandon the study of historical experience altogether. But doctrinal thinkers should be more aware that experiences transmitted from the past are ‘filtered’ through the lens of the specific configuration of organizational culture that was dominant at the time the experience was gained. This would require working more closely with military historians and sociologists, who can help to answer why specific observations and recommendations have been recorded from past campaigns. History may indeed become a toolbox – but one that can stimulate increased organizational self-awareness and help to avoid the pitfalls of learning from the past.

Eric Sangar is a FNRS Research Fellow at the Tocqueville Chair in Security Policy of the University of Namur, Belgium. He is the author of “The Pitfalls of Learning from Historical Experience: The British Army’s Debate on Useful Lessons for the War in Afghanistan”, Contemporary Security Policy, forthcoming. It is available here. He is currently analyzing the influence of collective memory on uses of history in the realms of media discourses on armed conflict, foreign-policy making, and military strategy.

Something Must Be Done, But What? On Humanitarian Interventions

When confronted by shocking images of gross human rights violations, massacres and massive flows of refugees, many people may shout: ‘something must be done!’ Unfortunately, such tragic images are, on a daily basis, coming out of Syria and northern Iraq where the Islamic State reigns, and many other places all over the world. Moreover, thanks to the development of inexpensive communicative devices, such tragic images are spread worldwide at a historically unprecedented speed.

However, cries for ‘something must be done’ will soon be followed by the question: ‘but what?’. One key consideration is the legitimisation of intervention by the international community. Foreign intervention breaches the principles of sovereign integrity and non-use of force, both of which are stipulated in the Charter of the United Nations. Whilst action can be legitimised by UN Security Council authorisation, agreement in New York is often difficult to achieve.

But questions of legitimacy do not end with UN authorisation. Foreign military intervention may bring about casualties among local civilians and soldiers of intervening states, even if it was mandated to bring a conflict to a close. So we may see a situation in which proponents of intervention use lethal force while, at the same time, voices calling for troops to be withdrawn from battle grow louder.

This question has been repeatedly posed since the end of the Cold War. One of the first instances was the war in Bosnia (broadly speaking, the former Yugoslavia). In this case, international action was strongly urged, as captured in the headline “Shame in Our Time, in Bosnia” (The New York Times, 21 May 1992). Nevertheless, as the intervention continued, other voices were increasingly raised, warning about the dangers of becoming deeply involved. After all, Western governments were subjected to public criticism for failing to stop the war and, at the same time, for dragging their public into a foreign war.

Former British Foreign Secretary Malcolm Rifkind described this difficulty by stating that:

‘something must be done’ may not be sustained if involvement in a bitter conflict in a country in which no vital national interests are at stake results in casualties. The clamour for action can turn, almost overnight, into an equally vigorous clamour to ‘bring our boys home’.

Why do such ‘dilemmas’ appear, even when action is required genuinely for humanitarian reasons? Supposedly, this is because we are living in a world where information and normative concerns are globalised, but the political system remains unchanged. The traditional international system has been established with the rule of non-intervention and the principle of non-use of armed forces to make inter-state relations more stable. Meanwhile, information recognises no territorial borders and in domestic politics its unrestricted flow has created an agenda too inhumane to ignore. This gap between a geographically-constrained world and a globally spreading world generates dilemmas for state decision makers.

In my article I analyse this dilemma in the case of the Bosnian intervention and discuss the consequences it had for NATO. These questions remain, however, critically important today. The intervention of the international community in Libya in 2011, for example, was very much inspired by the idea that ‘something must be done’ to protect civilians against the Gaddafi regime. On the other hand, the international community has been reluctant to provide further support to Libya after the NATO mission ended.

Yuki Abe is an Associate Professor at Kumamoto University, Japan. He is the author of “Norm dilemmas and international organizational development: humanitarian intervention in the crisis of Bosnia and the reorganization of North Atlantic Treaty Organization”, Contemporary Security Policy, 37(1), 62–88. It is available here.