The Counterproductive Consequences of America’s Vicarious Wars

In seeking to confront various security threats while simultaneously evading associated military and political costs, America has come to rely on the vicarious warfighting approaches of delegation, danger-proofing and darkness. Thomas Waldman shows in a new CSP journal article that the results are not promising. Security is not a commodity that can be bought on the cheap.

Following the failed military campaigns of the 2000s, America has not shied away from military intervention but has instead settled upon a low-level, limited, and persistent mode of fighting which I term ‘vicarious warfare.’

The concept covers a diverse range of military approaches that come together in different combinations in different contexts. It is broadly characterised by the outsourcing of military missions to proxy actors, the use of force in ways that minimize the danger to American personnel and assets, and the conduct of covert and special operations in the shadows.

These methods are held together by decision-makers’ belief that wars can be fought economically, at arm’s length, and in discrete, limited and controllable ways, while at the same time evading various risks and restraints. In a recent article, I argue that the rationales underpinning the prosecution of vicarious warfare are deeply flawed. The attractions of such methods are clear, but the benefits are outweighed by longer-term harmful effects.

The U.S.-led Operation Inherent Resolve has arguably been fought as an archetypal vicarious war and, by late 2017, had largely succeeded in removing Islamic State from its major strongholds in Iraq and Syria. Welcome news, of course, but at what cost for the future?

In Syria, American-backed groups find themselves in confrontation with regional powers, and new political realities make future ethnic strife between Kurds and Arabs likely. In Iraq, the way the operation to retake Mosul was conducted means “there is a real risk that this battle will form one more chapter in a seemingly endless cycle of devastating conflict.”

But how can we account for the emergence of vicarious warfare? Looking back to the early 2000s, influential voices such as General Sir Rupert Smith suggested that we had entered into an age of “war amongst the people” – timeless irregular conflicts involving non-state actors and influenced by an ever-present mass media. Many American security elites thought it advisable to steer clear of such messy conflicts, especially following the bloody debacles in Iraq and Afghanistan.

Yet, contrary to informed and sober analysis, politicians continued to believe that America was assailed by various menacing threats and risks – such as those posed by radical Jihadists and other rogue actors – that had to be confronted with force. But how to do this without being dragged into yet more debilitating irregular wars?

Evolving methods appeared to offer a way to essentially flip Smith’s logic and fight “war without the people” – to prevent serious security incidents, while keeping the necessary measures economically affordable, socially acceptable, legally permissible, and politically viable. Responsibility could be delegated to those trained to take considerable risks (special forces), those about whom the public is little concerned (private contractors, proxies), or those with the ability to sweep risk under the carpet (the CIA).

This is the essence of vicarious warfare, and I suggest that it can usefully be understood as comprising three “Ds”: delegation, danger-proofing and darkness. Briefly considering each in turn, it is possible to see how vicarious methods lead to consistently and cumulatively counterproductive outcomes.


Delegation

The notion that proxy actors might serve as effective force multipliers while concealing the true costs of war appears persuasive. However, the empirical record is less positive, and most rigorous studies are profoundly sceptical. Rushed programs to build state security forces, sacrificing quality and sustainability for immediate effect, have resulted in “hollow” forces plagued by corruption, divisions and operational deficiencies. Support to irregular militias has been typified by short-term gains offset by long-term harm: most groups have been associated with a lack of control, radicalization, and abuses. Similarly, incidents involving private contractors have generated baleful consequences, leading scholars to conclude that the benefits of outsourcing “are either specious or fleeting, and its costs are massive and manifest.”


Danger-proofing

Driven by increased political interference in decisions that are usually the responsibility of commanders, America fights so as to minimize harm to American personnel. Yet there are reasons to believe that excessive protection undermines operations and even increases the risk of casualties. Airpower and stand-off weapons such as armed drones and cruise missiles – extreme forms of danger-proofing, offering protection through distance – have rained death on America’s enemies. Yet insurgent organizations “exhibit a biological reconstitution capacity” because the underlying causes of their regeneration remain unaddressed. The costs of unremitting drone warfare outweigh whatever tactical gains it delivers.


Darkness

Covert action, special forces, and rapidly emerging offensive cyber warfare capabilities seemingly allow elites to attain objectives while evading difficult political questions. Yet such approaches have contributed to major “blowback” and led to embarrassing political crises. Special forces have provided support to local forces, enabling impressive battlefield victories, but the focus on “kinetic” operations has distracted attention from critical underlying issues. Attempts to remove terrorist leaders through “decapitation” strikes have failed to defeat the targeted groups, and may have contributed to their longer-term lethality.

The three “Ds” are all adopted for their attraction as low-cost, tactically effective approaches to pressing challenges. Taken individually, these approaches are not entirely without merit. Rather, it is the way they have come to drive policy that leads to counterproductive outcomes. They distract decision-makers from addressing vital political dynamics, encourage militarised approaches which exacerbate complex problems, and drag America into unintended commitments.

Perhaps more concerning is the deeper self-harm being inflicted on the American polity. The normalization of the persistent use of military force, the expansion of under-scrutinized executive authority, and the rise of xenophobic populism are perhaps just indications of worse things to come.

The record of the Trump administration’s first year in office suggests the central dimensions of vicarious warfare look set to persist. Trump’s loosening of the rules governing the use of force by commanders and the marginalization of the State Department may usher in an era of unprecedented militarization, while the costs borne by civilians – directly through bombings, raids, and abuses, or indirectly through protracted conflict and psychological trauma – cumulatively foster discontent and continued resistance.

Thomas Waldman is a lecturer in security studies at Macquarie University. He has published widely on war, military strategy and contemporary conflict. His Twitter handle is @tom_waldman. He is the author of “Vicarious Warfare: The Counterproductive Consequences of Contemporary American Military Practice”, available here.

Why September 11 and drones don’t tell the whole story about targeted killings


To understand the proliferation of targeted killing as a new method of warfare, we have to look beyond events like 9/11 or the emergence of new technology.

For centuries, assassination was an accepted instrument of foreign policy and considered a normal practice. During the early modern period, however, resorting to assassination gradually became taboo, something modern states would not do because of their self-perception as modern. Today we observe a weakening of this taboo. Reframed as “targeted killing,” assassination seems to be moving towards normalization, as more states engage in the practice and, instead of denying it, openly justify targeted killing strategies. “The gloves are off,” a senior CIA official stated mere weeks after the attack on the World Trade Center, “[l]ethal operations that were unthinkable pre-September 11 are now underway.”

Scholarly attempts at making sense of this normative change sometimes seem to implicitly share this assessment. They tend to overemphasize the role of September 11, 2001 and the ensuing “War on Terror” as turning points. Similarly, scholars have argued that the anti-assassination norm has been eroding because of the development and availability of drone technology. Consequently, the vast majority of studies concerned with such normative change look only at post-9/11 cases. In my article, I seek to shift the focus. Rather than concentrating on major events or technology, I highlight the pivotal importance of two meta-norms, sovereignty and liberal thought, in the transformation of assassination norms prior to the War on Terror.

It has often been argued that historical state-sponsored assassination and present-day targeted killing constitute two completely different subjects, since the targeted killing of terror suspects seems so different from the headline-grabbing assassinations of state leaders during the 19th and 20th centuries. Yet the two share a common normative realm. When the term “targeted killing” was coined in the late 1990s and early 2000s, it represented a deliberate attempt to render some forms of killing permissible precisely by uncoupling them from their restrictive historical assassination context. Indeed, today’s targeted killing programs largely rest on similar logics: on the assumption that terrorist networks are centralized enough to allow attackers to degrade enemy functioning by killing their leadership.

It is beyond doubt that 9/11 marked a major turning point in security practices, and my article does not seek to refute its general importance. However, the normative underpinnings of those shifts were subject to much slower change, not as rapid as cursory accounts of the history of assassination might suggest. This transformation started not only before 9/11 but well before the end of the Cold War.

During the early modern period, state-sponsored assassination became increasingly rejected due to the emergence of sovereign statehood and liberal thought. Both are reflected in debates about assassination as a specific (and, from a liberal perspective, deplorable) form of killing, as well as in debates about the special protection of certain persons from assassination owing to their status as representatives of sovereign statehood. This distinguishes assassination from many other changing international norms.

Liberal norms and sovereignty norms have frequently collided, as the case of humanitarian intervention and the “responsibility to protect” exemplifies: here, a liberal responsibility collides with the sovereignty rights of nation states. The same is true for most norms rooted in human rights discourse, since the mere existence of such a norm means that it is universal enough to have some effect on the behavior of states, which by definition generates a tension with state sovereignty. It can be argued that the tension between the two meta-norms of sovereignty and liberal thought constitutes the core of most instances of norm contestation.

In this sense, assassination norms are peculiar. Rather than being in tension with one meta-norm and shielded by the other, they are rooted in both discourses. At the very core of the assassination/targeted killing normative realm lie both an incentive to protect the long-term stability of sovereign states and a state-based order, and a liberal impetus to avoid harm to human beings.

As I maintain in my article, this dual rootedness also helps us understand the weakening of the norm, as both meta-norms can be invoked by actors in order to reinterpret it. On a grand scale, the second half of the 20th century saw an overall strengthening of liberal values at the expense of state sovereignty. During the same period, however, actors began emphasizing assassination’s sovereignty implications at the expense of its connection to liberal meta-norms.

Over time, the condemnation of state-sponsored assassination became a mere subset of sovereignty, no longer shielded by its original, powerful liberal underpinnings. Hence, when states began to openly advocate targeted killing policies during the War on Terror, precisely on the grounds of liberal values and in spite of sovereignty, the normative ground had already been prepared.

Mathias Großklaus is a PhD candidate at the Graduate School of North American Studies, Freie Universität Berlin. He is the author of “Friction, not erosion: assassination norms at the fault line between sovereignty and liberal values”, Contemporary Security Policy, 38(2), 260-280. It is available here.

Caveats in coalition operations: Why do states restrict their military efforts?

Caveats refer to the reservations states impose on how their forces can operate when assigned to a military coalition operation. Many argue that caveats have been a particular problem for unity of effort in multinational military coalition operations in the post-Cold War period.

The practice of caveats rose to prominence in defense and policy circles with NATO’s ISAF campaign in Afghanistan, often emphasized as one of the most significant causes of NATO’s lack of operational effectiveness. While states sent troops to Afghanistan, the problem for NATO was that many nations set heavy restrictions on what their forces were permitted to do. Some could not operate at night. Others could not take part in offensive operations. The most commonly used restrictions were perhaps geographical limitations on where forces could operate in Afghanistan.

Caveats are often mentioned in the context of NATO’s operations in Afghanistan. Nonetheless, similar examples of national reservations are also well known from other coalition operations in the post-Cold War era.

While it is unusual for states to fully surrender their military forces to operate under other nations’ command, caveats represent a particularly puzzling type of reserved state behavior in military coalitions. Why would states provide a specific military capability and then prevent the coalition from using the full potential of those forces by applying caveats? If a state for some reason finds it necessary to adjust its support to the coalition, would it not be more meaningful to send a military capability that the coalition could use without reservations?

The use of caveats is all the more puzzling when we take into account how controversial the practice has become over the last decade. Operationally, caveats hamper coalition commanders’ flexibility and often require coalition forces to fight with one hand tied behind their back, reducing their military progress. One U.S. general even referred to national caveats as “a cancer that eats away at the effective usability of troops”. Politically, caveats are contested because they can easily be seen as part of a buck-passing strategy, adding more burdens to those states that do not apply caveats and risking the cohesion among coalition partners.

So, what motivates states to apply caveats to their military forces in coalition operations when such reservations limit military progress and weaken the political cohesion of the coalition? In my recent study of the use of caveats by Denmark, Norway and the Netherlands during the NATO operation in Libya, I found three possible causes that can lead to caveats.

First, confronted with the question of whether or not to join a military coalition operation, many governments have found themselves between a rock and a hard place – between external pressure to support allies and domestic skepticism about what that pressure demands and exactly how to respond to it. To gain sufficient domestic support for a military contribution to a coalition, a government may need to add caveats to address concerns among political parties that could block the decision to contribute. For a government eager to see its forces take part in coalition operations, it might be better to make a reserved contribution than no contribution at all.

Second, the use of caveats is rarely fully determined by the need for domestic political compromise. Domestic factors help to explain whether or not there will be caveats, while external pressure helps to explain the form such caveats take. Clever national policy-makers will spot opportunities in how caveats can be implemented. By adjusting how caveats are practiced, more of a unit’s military value to the coalition can be maintained. Decision-makers can thus strike a better balance between making a relevant contribution to the coalition’s demand for military support and maintaining domestic support for that contribution.

Third, even with unanimous domestic support for a nation’s military participation in a coalition operation, caveats remain possible. Ideally, to fully utilize the military resources at the coalition’s disposal, a coalition commander would like to have no national strings attached to the contributed units under his or her command. However, cutting off every national string to a military unit to ease the challenges of coordinating military effort might be counterproductive. Lacking clear guidance from their national principals, military officers might themselves apply reservations for fear of reprimands, or of causing a domestic political crisis by simply following coalition orders.

In military coalitions, operational effectiveness hinges on states’ ability to coordinate their military efforts. The phenomenon of caveats in post-Cold War coalition operations illustrates how national control has challenged states’ ability to coordinate their efforts when operating together, and how this has affected coalition forces’ operational effectiveness. With different causes leading to caveats, there is no easy way to halt the growing practice of caveats in coalition operations. To overcome the practical problems that caveats create, policy-makers and military decision-makers should develop a better understanding of why caveats appear.

Per Marius Frost-Nielsen was a PhD candidate at the Department of Sociology and Political Science, Faculty of Social Sciences and Technology Management, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. He is the author of “Conditional commitments: Why states use caveats to reserve their efforts in military coalition operations”, Contemporary Security Policy, 38, forthcoming. It is available here.

Why democracies may support other democracies – but not autocracies – against rebellions

Democratic peace theory has been extensively tested in cases of interstate war. These insights can also help us better understand intervention in civil wars. Our research shows that it matters whether the regime fighting against rebels is a democracy or an autocracy.

Democratic regimes are often seen as tolerant: they solve their domestic disputes by peaceful means, and they uphold civil liberties and political rights, such as freedom of speech and the equal right to participate in fair and open elections. Autocratic regimes, on the other hand, are far less tolerant, more repressive, and generally more violent in their domestic politics. And if the international reflects the national, it only makes sense that we will see more violence coming from authoritarian than from democratic countries. Democratic peace theory turns this intuition into an academic pursuit; its most renowned (dyadic) version holds that democracies do not fight each other. But even if true, and quite a few dispute it, why is it so? Explanations have roughly diverged into two branches: structural-electoral causes and normative, liberal ones.

The structural rationale suggests, for example, that electoral considerations make leaders more cautious about waging war because the domestic cost might be high: voters may well vote against them in the next elections. Therefore, when both leaders come from democratic political systems, the probability of war is lower. The normative rationale suggests, on the other hand, that democratic leaders are socialized into the peaceful resolution of domestic conflicts and externalize this liberal behavior to international politics. Therefore, when they face another democratic leader, they trust each other to favor peace over violence.

Scholars have extensively tested both arguments. The large majority of the literature on democratic peace theory has focused on militarized interstate disputes, say between France and Germany. Yet the above claims have far wider application. For example, they can be applied to covert actions by democracies. One application, which we tested in this study, is that governments consider regime type when weighing intervention in civil wars.

Liberal hostility towards autocracies should drive democracies to avoid supporting embattled autocracies, perhaps even to support rebellions against them. Cases in point are the U.S. decisions to withdraw support from the oppressive Iranian Shah and Nicaragua’s Somoza, as well as U.S. aid to rebels against the Assad regime. On the other hand, democracies have occasionally also helped bring autocrats to power and keep them there. Cases in point are France’s assistance to the Algerian regime, Israel’s backing of the Hashemite Kingdom of Jordan (against Palestinian Liberation Organization rebels), and U.S. support for various embattled dictators in developing countries (e.g. Indonesia’s Suharto throughout the anti-communist purge).

From a liberal normative perspective, a democratic government would be held as legitimate, being fairly elected by its constituency, while a violent organization fighting it would be seen as defying liberal norms, its violent conduct deemed illegitimate. Conversely, a non-democratic regime would not enjoy this liberal legitimacy (in the eyes of democracies), and its rival might be seen as more legitimate despite its violent acts. Indeed, autocratic leaders are seen as being in a permanent state of aggression against their own people. Thus, the more an embattled regime is seen by the potential intervening democracy as adhering to appropriate norms, the more likely is intervention on its behalf, against the rebellious organization.

Compared to the normative account, the structural account seems less pertinent. By avoiding sending troops to fight for a foreign government, democratic leaders can minimize potential audience costs. Moreover, since support for a foreign embattled regime is often veiled, the democratic leader need not navigate the formal systems of checks and balances, or face as fierce an organized opposition or open public debate. Nonetheless, if a foreign intervention receives wide media coverage, it may become a subject of public debate, an electoral factor particularly relevant for decision-makers in democracies. Also, the more democratic the regime, the greater the likelihood that the opposition will be able to mobilize protest and exact a higher political price from the government for supporting a non-democratic government. Moreover, when a military intervention abroad is overt, it increases the number of institutions that must approve the decision.

We may thus hypothesize that democratic leaders considering support for non-democratic governments in their intrastate wars will take into account the negative institutional and public opinion implications, and be less inclined to lend a hand to such embattled non-democratic regimes. In our statistical analysis, we found that autocracies almost never support democratic governments in intrastate wars. The results also support the prediction that democracies support embattled autocratic regimes much less than autocracies do (see figure). Put differently, the more democratic two states are, the higher the probability that one will support the embattled other.

In conclusion, these findings allow us to better understand democratic foreign policy and extend the empirical validity of democratic peace theory to more indirect and sometimes subtler forms of conduct. Democracies behave differently towards governments facing intrastate war, partly based on their regime type, and are inclined to support governments that are more democratic.

Ogen S. Goldman is a Lecturer at Ashkelon Academic College. Uriel Abulof is a Senior Lecturer of Politics at Tel Aviv University and an LISD research fellow at Princeton University. They are the authors of “Democracy for the rescue—of dictators? The role of regime type in civil war interventions”, Contemporary Security Policy, 37, 341–368. It is available here.

Armies should be self-aware when using historical lessons

Military strategy is often informed by lessons from the past. Which lessons armies pick up and use, however, depends on organizational filters. Due to organizational layering, armies may collect contradictory lessons, leading to incoherent policy.

The study of success and failure in past wars has been closely intertwined with the emergence of strategic thought. Prominent strategic thinkers, such as Machiavelli, Clausewitz, or Liddell Hart, have relied on the history of past campaigns to analyze and improve warfare in the present. And during the recent wars in Iraq and Afghanistan, there have been lengthy debates on which lessons from the past have been neglected, and which have been applied wrongly.

However, so far there have been no systematic attempts to theorize how armies learn from their historical experience. In my article “The Pitfalls of Learning from Historical Experience”, I propose a pioneering theoretical argument to explain why the British Army discussed historical lessons for the Afghanistan mission (ISAF) in a contradictory way.

Why contradictory? Using research papers written by staff officers as well as doctrinal pamphlets, I observe two strands of historical experience that dominated the internal debate on lessons for the Afghanistan mission: the Anglo-Afghan Wars of the 19th and early 20th centuries, and the colonial counterinsurgency campaigns conducted after 1945, including Malaya. However, this debate is characterized by a significant difficulty: due to fundamental differences in their nature, the two strands of lessons cannot be integrated into operational strategy without losing coherence.

The lessons from the Anglo-Afghan Wars rest on the assumption that Afghan society is so different that any military approach needs to be tailored to the local Afghan context. For instance, the Land Warfare Centre’s pamphlet presenting lessons from the Anglo-Afghan Wars states:

The Afghans are a proud and independent people who resent foreign interference and especially foreign militaries that they construe as occupation forces. They have always resisted external forces and attempts to change traditional ways; […] there is room for reintegration and possibly reconciliation but only when the use of force against insurgents has applied enough pressure on the insurgent/tribesman that his options are limited enough to make him want to move from one side to the other.

By contrast, the suggested lessons from the post-1945 campaigns assume that there is a set of universally applicable principles which, if implemented coherently, form a central condition of success. In the words of an officer,

not only are the principles contained in British COIN doctrine relevant to modern COIN operations they are also applicable to a wide range of conflict situations, from peacekeeping to general war.

This contradiction – between adaptation to the perceived specificity of the Afghan context and adherence to a universal set of principles – has also had implications for coherent operational decision-making. For the initial deployment of British forces to Helmand province in the summer of 2006, the British Army had prepared an integrated civil-military plan influenced by many of the principles formalized in the aftermath of the Malaya campaign. However, operational commanders decided to deviate from this plan only a few weeks after their arrival. This can be explained by the perception of a historically violent Afghan society, in which the Taliban insurgency could only be stopped through the determined use of force.

Why do these apparently incompatible sets of lessons coexist in British military thought and practice? It is important to understand the stages of internal evolution of military organizations, which determine what kind of lessons are selected and transmitted at specific points in time. I introduce the concept of ‘layered organizational culture’. This expression relates to the idea that existing sets of ideas and organizational routines determine how a military organization processes new experiences.

As organizational culture changes, so will the ways in which experience is handled. However, earlier layers of organizational culture continue to interact with more recent ones. This can lead to the sub-optimal co-existence of inherently incompatible lessons. When contemporary military organizations study experience from different stages of their history, the result is often recommendations that are perceived to be legitimate although they are drawn from greatly diverging contexts of organizational needs and perceptions.

The history of the British Army’s efforts to learn from its colonial experience illustrates this argument well: during the Victorian era, although the bulk of the British Army was deployed in permanent garrisons all over the Empire, a systematic evaluation and transmission of lessons gleaned from colonial operations did not happen. Experience was compartmentalized within locally deployed regiments, and there were no attempts to build a universally applicable doctrine. Internal debates across the army dealt almost exclusively with strategy and tactics for interstate warfare on the European continent.

Perhaps the only ‘universal’ lesson transmitted from colonial operations was that every context was unique, and that local commanders had to show initiative in order to tailor strategy to local requirements. As a result, the defeat in the First Anglo-Afghan War was attributed to the specificity of the local context, that is, the xenophobic and warlike nature of Afghan society, and adaptation to this ‘alien’ context was the main lesson transmitted within organizational memory.

Organizational culture regarding the use of colonial experience changed after 1945, as a result of changes in the Army’s force posture. The strategic reserve forces that were rapidly shipped from one colonial uprising to the next did not have the time to develop the sense of local awareness that was perceived to be necessary for success. Instead, from the Malaya campaign onwards, doctrinal thinkers started to look for principles that could be easily taught and applied across diverging contexts. But this new layer of organizational culture interacted with the one rooted in the Victorian Army. As a result, ground commanders continued to enjoy a tremendous amount of autonomy with regard to the interpretation and implementation of the ‘classical’ principles of British counterinsurgency doctrine.

What lessons can be gleaned from the British Army’s use of historical lessons in the context of the ISAF mission? It would be neither realistic nor helpful to abandon the study of historical experience altogether. But doctrinal thinkers should be more aware that experiences transmitted from the past are ‘filtered’ through the lens of the specific configuration of organizational culture that was dominant at the time the experience was acquired. This would require working more closely with military historians and sociologists, who can help to explain why specific observations and recommendations have been recorded from past campaigns. History may indeed become a toolbox – but one that can stimulate increased organizational self-awareness and help to avoid the pitfalls of learning from the past.

Eric Sangar is a FNRS Research Fellow at the Tocqueville Chair in Security Policy of the University of Namur, Belgium. He is the author of “The Pitfalls of Learning from Historical Experience: The British Army’s Debate on Useful Lessons for the War in Afghanistan”, Contemporary Security Policy, forthcoming. It is available here. He is currently analyzing the influence of collective memory on uses of history in the realms of media discourses on armed conflict, foreign-policy making, and military strategy.

Something Must Be Done, But What? On Humanitarian Interventions

When confronted by shocking images of gross human rights violations, massacres and massive flows of refugees, many people may shout: ‘something must be done!’ Unfortunately, such tragic images emerge daily from Syria and northern Iraq, where the Islamic State reigns, and from many other places all over the world. Moreover, thanks to the development of inexpensive communication devices, these images spread worldwide at a historically unprecedented speed.

However, cries that ‘something must be done’ will soon be followed by the question: ‘but what?’. One key consideration is the legitimisation of intervention by the international community. Foreign intervention breaches the principles of sovereign integrity and the non-use of force, both of which are stipulated in the Charter of the United Nations. Whilst action can be legitimised by UN Security Council authorisation, agreement in New York is often difficult to achieve.

But questions of legitimacy do not end with UN authorisation. Foreign military intervention may bring about casualties among local civilians and the soldiers of intervening states, even if it was mandated to bring a conflict to a close. So we may experience a situation in which proponents of intervention use lethal force while, at the same time, voices calling for troops to be withdrawn from battle grow louder.

This question has been posed repeatedly since the end of the Cold War. One of the first instances was the war in Bosnia (broadly speaking, the former Yugoslavia). In this case, international action was strongly urged, as one headline put it: “Shame in Our Time, in Bosnia” (The New York Times, 21 May 1992). As the intervention continued, however, other voices were increasingly raised, warning about the dangers of becoming deeply involved. In the end, Western governments were subjected to public criticism both for failing to stop the war and for dragging their publics into a foreign war.

Former British Foreign Secretary Malcolm Rifkind described this difficulty by stating that:

‘something must be done’ may not be sustained if involvement in a bitter conflict in a country in which no vital national interests are at stake results in casualties. The clamour for action can turn, almost overnight, into an equally vigorous clamour to ‘bring our boys home’.

Why do such ‘dilemmas’ appear, even when action is genuinely required for humanitarian reasons? Arguably, this is because we are living in a world where information and normative concerns are globalised, but the political system remains unchanged. The traditional international system was established on the rule of non-intervention and the principle of the non-use of armed force, in order to make inter-state relations more stable. Meanwhile, information recognises no territorial borders, and in domestic politics its unrestricted flow has created an agenda too inhumane to ignore. This gap between a geographically constrained political order and a globally connected flow of information generates dilemmas for state decision-makers.

In my article, I analyse this dilemma in the case of the Bosnian intervention and discuss the consequences it had for NATO. These questions remain critically important today. The international community’s intervention in Libya in 2011, for example, was very much inspired by the idea that ‘something must be done’ to protect civilians against the Gaddafi regime. On the other hand, the international community has been reluctant to provide further support to Libya since the NATO mission ended.

Yuki Abe is an Associate Professor at Kumamoto University, Japan. He is the author of “Norm dilemmas and international organizational development: humanitarian intervention in the crisis of Bosnia and the reorganization of North Atlantic Treaty Organization”, Contemporary Security Policy, Vol.37, No.1, pp.62-88. It is available here.