The Prevention Problem: Thinking about Rwanda 20 years later

Of my areas of interest, the two that stand out are violence prevention and technology. This year marks the 20th anniversary of the Rwanda genocide, and I've been keeping track of the media coverage, which has included the usual themes of 'never again' and calls to build the tools and capacity to prevent such events in the future. To really make this happen, though, we need to differentiate between patterns of smaller atrocities and genocide. This presents a challenge for localizing peacebuilding, especially for those of us who work in the technology space.

First, we have to differentiate atrocities from genocides. There are books upon books' worth of arguments about semantics (which matter from a legal standpoint!), but I want to focus broadly on differences in scale and intent. A militant group might commit a one-time atrocity to make a political statement, a riot could lead to a military crackdown that spins out of control, one ethnic group might target another over land rights, and so on. These can be atrocities, especially if there's a pattern of events. Genocide, what happened in Rwanda 20 years ago, is different in scale and intent. The scope of violence is an entire identity group, and the intent is the elimination of that group. Unlike an atrocity, this requires state-grade organization and capacity. These are admittedly blunt definitions that ignore a lot of semantic detail, but bear with me.

If our goal is the prevention of atrocities and genocide, and our preferred method is to empower local communities with the tools and skills to prevent violence before it starts, then scale and intent matter. Take the example of election violence in Kenya in 2007/08: many atrocities were committed, but the intent wasn't overtly genocidal. Since that election there have been efforts to reinforce peace keeping (not 'peacekeeping') capacity at the local level through training programs and innovative approaches to information sharing using mobile phones and social media. In this scenario the communities that would be affected by discrete events of violence could prevent the spark at the local level. Compare this to Rwanda in 1994, where the Hutu-led government provided the weapons and logistics to the militias that did the killing, and the aim was the elimination of the Tutsi ethnic group. There had been atrocities at the local level leading up to the genocide (particularly in the north, where the Tutsi-led RPF was fighting Rwandan government forces), but once the genocide started in earnest the violence was top-down and totalizing. Local-level violence prevention and peacebuilding methods weren't going to stop that level of organized killing.

So where does this leave us now? If the goal is violence prevention, then we have to recognize where local strategies work, and be willing to push for international intervention when necessary. Start by asking, "Is the violence extrinsically motivated and localized?" Are people fighting over a tangible thing (e.g. land, access to water, representation in government)? If so, there are going to be opportunities for local-level peacebuilding and violence prevention. Public information and discourse will play a major role in this kind of peacebuilding, and communication technology can have a significant positive multiplier effect. Is the violence extrinsic and national, for example election violence? This is where intervention from the international community probably needs to happen, but there's also a large place for localized peacebuilding. For example, peacekeepers might come to enforce stability, but local-level peacebuilding needs to happen if the gains from a ceasefire are going to hold up in communities. Communication technology can play a role in linking communities to each other, as well as providing a conduit for sharing needs and information with the national government and international intervenors.

What about intrinsically motivated, national-level violence? This is where local solutions start to lose impact, especially when the violence is being carried out by the state against a minority. At this point, communication technologies are unlikely to be of much use; either they'll amplify negative messages in an already politically volatile space, or they won't matter as violence becomes ubiquitous. Large-scale international intervention becomes necessary at this point to force the sides apart and impose stability while a peace process is undertaken.

Localized peacebuilding and technology are at their most effective before large-scale violence starts. Communication technology in particular can play a powerful role in connecting communities and breaking down the kinds of intrinsic, dehumanizing narratives that open the door to genocide. When we think about 'preventing genocide' we actually need to be thinking about how we prevent or intervene in the small atrocities that build up to a genocidal event, because once that event has started it's too late.

Learnings from ISA

Another March, another ISA conference. 2014 has been a good one, especially since the networking and socializing were matched by excellent feedback on what I presented. The highlights:

What I thought was a failed experiment in getting Twitter to love me actually teased out some interesting methodological challenges that other panelists on the Crowdsourcing Violence panel also faced. Basically, the problem is how to encourage participation in the crowd when there isn't an emergency. Whether it was crowdsourcing using Twitter or crowdseeding using trusted reporters, we all faced a challenge in getting participants to respond. This makes crowdsourcing and crowdseeding difficult to use as research methods. It will be interesting to see how we each approach this challenge in our different papers and projects, and whether incentives or networks can be tapped to get more consistent participation.

My paper on using crowdsourcing to support peacekeeping operations also got some good feedback. The paper was my attempt to think about technology in the context of peacekeeping operations, as opposed to peacekeeping being responsive to whatever technology is available (i.e. how do we avoid deploying a technology solution in search of a problem). I'm going to take this in an institutional analysis direction, focusing on interviews with peacekeeping staff and experts, since there is a paucity of documentation on the few crowdsourcing and crowdseeding projects that missions have undertaken.

This was an overall excellent week, with solid panels, fascinating topics and good conversation. If you have thoughts or feedback on my papers, feel free to share in the comments section, or shoot me an email!

Headed to Toronto soon…

I'll be at the International Studies Association annual convention from March 26-30 presenting two papers (never again will I submit two abstracts for papers that have to be written from scratch…), one on crowdsourcing methodology and one on technology in peacekeeping operations. Should be a lot of fun – feel free to give me feedback on the papers as I get them posted, and let me know if you'll be in Toronto. I'm always up for a coffee, beer or lunch!

Finding Big Data’s Place in Conflict Analysis

Daniel Solomon recently posted a piece on how we conceptualize (and often misconceptualize) the role of big data in conflict event prediction. His post got me thinking about what role big data plays in conflict analysis. This comes on the heels of Chris Neu’s post on the TechChange blog about the limits of using crowdsourcing to track violence in South Sudan.

This is one of my favorite parts of Daniel’s post: “Acts of violence don’t create data, but rather destroy them. Both local and global information economies suffer during conflict, as warring rumors proliferate and trickle into the exchange of information–knowledge, data–beyond a community’s borders. Observers create complex categories to simplify events, and to (barely) fathom violence as it scales and fragments and coheres and collapses.”

The key question for me becomes: is there a role for Big Data in conflict analysis? Is it something that will empower communities to prevent violence locally, as Letouze, Meier and Vinck propose? Will it be used by the international community for real-time information to speed responses to crises? Could it be leveraged into huge datasets and used to predict outbreaks of violence, so that we can be better prepared to prevent conflict? All of these scenarios are possible, but I've yet to see them come to fruition (not to say that they won't!). The first two are hampered by the practicalities of local access to information and the speed of bureaucratic decision making; for me, the interesting one is the third, since it deals directly with an analytic process, and that's what I'll focus on.

When we talk about prediction, we're talking about using observed information to inform what will happen in the future. In American political science, there has been a trend toward using econometric methods to develop models of conflict risk. There are other methods, such as time-series analysis, that can be used as well. But the efficacy of these methods hinges on the quality and attributes of the data itself. Daniel's post got me thinking about a key issue that has to be dealt with if big data is going to generate valid statistical results: the problem of endogeneity.

To start, what is endogeneity? Basically, it means that the data we're using to predict an event is part of the event we're trying to predict. As Daniel points out, the volume of data coming out of a region goes down as violence goes up; what we end up with is information that is shaped by the conflict itself. If we rely on that data as our predictor of conflict likelihood, we have a major logical problem – that data is endogenous to (part of) the conflict. Does data collected during conflict predict conflict? Of course it does, because the only time we see that stream of data appear is when there's already a conflict. Thus we don't achieve our end goal, which is predicting what causes conflict to break out. Big Data doesn't tell us anything useful if the underlying analytic logic behind the data collection is faulty.
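To make the circularity concrete, here is a minimal sketch in Python with entirely made-up numbers: reporting volume collapses while violence is ongoing, so the concurrent "signal" is really just the conflict measuring itself. None of the variable names or parameters reflect real data; they are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
months = 120

# An unobserved, slow-moving grievance index drives the "true" conflict process.
grievance = np.cumsum(rng.normal(0, 0.1, months))
conflict = (grievance > np.quantile(grievance, 0.8)).astype(float)

# Reporting collapses while violence is ongoing: conflict destroys data rather than creating it.
report_volume = rng.poisson(lam=np.where(conflict == 1, 5, 50)).astype(float)

# The concurrent report stream "predicts" conflict almost perfectly...
print("concurrent correlation:", np.corrcoef(report_volume, conflict)[0, 1])

# ...but only because it is endogenous to the conflict itself.  Shifting the
# predictor back six months is closer to the question we actually care about.
print("6-month lead correlation:", np.corrcoef(report_volume[:-6], conflict[6:])[0, 1])
```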

So what do we do? There's all kinds of dirty, painful math that can be used to address problems in data, such as instrumental variables, robustness checks, etc. But these are post hoc methods, things you do when you've got data that's not quite right. The first step to solving the problem of endogeneity is good first principles. We have to define what we are looking for, and state a falsifiable* hypothesis for how and when it happens. We're trying to determine what causes violence to break out (this is what we're looking for). We think that it breaks out because political tensions rise over concerns that public goods and revenues will not be democratically shared (I just made this up, but I think it's probably a good starting place). Now we know what we're looking for, and we have a hypothesis for what causes it and when.

If the violence has already started, real-time data probably won't help us figure out what caused it to break out, so we should look elsewhere in the timeline. This relates to another point Daniel made: don't think of a big event as a single event. Big events are the outcome of many sequential events over time. There was a time before the violence – that is a good place to look for data about what led to it.
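As a rough sketch of what "looking before the violence" might mean in practice, the snippet below builds lagged, pre-onset features so every predictor is measured before the outcome it is meant to explain. The column names (onset, food_price_index, protest_count) and the six-month window are hypothetical placeholders, not a recommendation.

```python
import pandas as pd

# Hypothetical monthly panel for one country: a single conflict onset in month 37,
# plus two placeholder covariates.  None of these numbers are real.
df = pd.DataFrame({
    "month": pd.period_range("2010-01", periods=48, freq="M"),
    "onset": [0] * 36 + [1] + [0] * 11,
    "food_price_index": range(100, 148),
    "protest_count": [2] * 24 + [5] * 24,
})

# Lag the covariates by six months so every predictor is measured strictly
# before the outcome it is being used to explain.
for col in ["food_price_index", "protest_count"]:
    df[f"{col}_lag6"] = df[col].shift(6)

# Drop rows where the lagged window is undefined; the resulting frame is what a
# pre-specified model of onset would actually be fit on.
model_frame = df.dropna(subset=["food_price_index_lag6", "protest_count_lag6"])
print(model_frame[["month", "onset", "food_price_index_lag6", "protest_count_lag6"]].head())
```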

Using good first principles and well thought out data collection methods, Big Data might yet make conflict analysis as much science as art.

*This is so important that it deserves a separate blog post. Fortunately, if you're feeling saucy and have some time on your hands, René Descartes does the topic far more justice than I could (just read the bit on Cartesian doubt). Basically, if someone says "I used big data and found this statistical relationship" but they didn't start from a falsifiable proposition, be very wary of the validity of their results.

New post on the TechChange blog!

I just had a new post go up on the TechChange blog – I haven’t written for them in a while, so it feels good to be writing for them again!

Here’s a brief intro, and you can read the rest here:

"In recent years, mobile phones have drawn tremendous interest from the conflict management community. Given the successful, high-profile uses of mobile phone-based violence prevention during voting in Kenya in 2010 and 2013, what can the global peacebuilding community learn from Kenya's application of mobile technology to promote peace in other conflict areas around the world? What are the social and political factors that explain why mobile phones can have a positive effect on conflict prevention efforts in general?…"

Nancy Ngo, one of the TechChange staff members, helped get it written, so a big thanks to her for getting it up!

Disaggregating Peacekeeping Data: A new dataset on peacekeeping contributions

Jacob Kathman at the University at Buffalo has an article in the current issue of Conflict Management and Peace Science about his new dataset on the numbers and nationalities of all peacekeeper contributions by month since 1990.  This is a pretty fantastic undertaking, since peacekeeping data is often difficult to find, and no small feat given how challenging it is not only to code a 100,000+ point dataset, but to do it in such a way that it complements other datasets like Correlates of War and Uppsala/PRIO.  I'm particularly excited about this dataset because it highlights something I've been interested in, and will continue to work on throughout my career: gathering and coding historical data on peacekeeping missions so that social scientists and economists can start producing quantitative research to complement the existing case study-oriented research on peacekeeping operations and practice.
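To give a feel for how a contributions dataset like this could sit alongside conflict data, here is a hedged sketch in Python/pandas.  The file names and column names are entirely hypothetical; the real datasets ship with their own identifiers and coding rules.

```python
import pandas as pd

# Hypothetical inputs: a contributor-level monthly troop file and a conflict-month panel.
troops = pd.read_csv("peacekeeping_contributions_monthly.csv")   # mission, country, contributor, year, month, troops
conflict = pd.read_csv("conflict_months.csv")                    # country, year, month, battle_deaths

# Roll contributor-level rows up to total deployed personnel per mission-month.
mission_months = (
    troops.groupby(["mission", "country", "year", "month"], as_index=False)["troops"].sum()
)

# Merge onto the conflict-month panel so deployment size sits next to violence
# measures in the same unit of analysis; months with no mission present get zero troops.
panel = conflict.merge(mission_months, on=["country", "year", "month"], how="left")
panel["troops"] = panel["troops"].fillna(0)
print(panel.head())
```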

As Kathman points out, there has usually been a focus on case study approaches to researching peacekeeping.  This makes sense: most of the research is geared toward identifying lessons learned from mission success and failure, and is meant to be easily integrated into operational behavior rather than addressing theoretical issues.  It also reflects the ad hoc nature of peacekeeping: a mission gets a mandate to deal with a specific issue, and missions tend to be short (with some exceptions), so the data tends to be mission- and context-specific, which lends itself to case study approaches.  As civil wars became the norm in the 1990s, though, missions expanded their roles to include war fighting, humanitarian aid delivery, medical provision, policing, and other aspects of civil society.  This meant that peacekeeping missions became part of the political, economic and social fabric of the post-ceasefire environment, and over the last ten years social scientists have started studying the effects of peacekeeping missions on ceasefire duration and economic development, among other things.

One of the things that has been lacking, and that Kathman's dataset helps with, is data about the missions themselves.  Studies such as Virginia Page Fortna's excellent book on the effect of peacekeeping missions on ceasefire durability tend to rely on conflict start-stop data to make inferences about the impact of peacekeeping.  Studies of peacekeeping and economics run into the same issue; researchers have used the baseline effect of peacekeeping missions on GDP, but this is a blunt-instrument approach and suffers from problems of endogeneity.  Caruso et al.'s analysis of the UN mission in South Sudan's positive effect on cereal production treats the UN mission as a single mass entity, but is unable to show comparative impacts on food production across missions since finer-grained mission data isn't readily available.

Given the need, I would suggest pushing forward with datasets that contain not only data on troop contributions, but also data on mission expenditures, since peacekeeping missions have effects on the local economy which could be positive.  The problem is that those positive effects might not be visible without finer-grained data on how missions spend their money in the country they're operating in.  Do investments in durable infrastructure make a difference to the durability of peace and to economic growth?  What about focusing on local provision of goods and services where available?  At the moment data on these things is hard to find, but it would be useful to conflict researchers.

Kathman’s paper is worth a read since he gives us a road map for how to develop further datasets on peacekeeping missions.  More datasets like this are important for the theorists who do research in the abstract, but can also help inform better processes for mission mandating, procurement and staffing.  If you want to download the datasets, Kathman has them in zip files on his website.

Peacekeeping, economic growth and technology

The economics of peacekeeping are difficult to unpack, but there are signs that when a mission has a strategy that includes long-range economic planning, it can have positive long-term effects on the host country's economy.  This could help us understand the strategic value of communication technology not just as a tool for good governance and transparency, but also as an economic stimulant in the aftermath of a conflict.

Carnahan, Durch and Gilmore (CD&G) have made the most comprehensive effort to fully address the ways that a peacekeeping mission can have a positive economic impact.  Like other authors, CD&G discuss the negative impacts of peacekeeping operations on local economies, but they also develop an argument for the ways that peacekeeping operations can provide stimulus for local and national economies.  The key areas include modifying the acquisition process to focus on acquiring goods and services locally, encouraging peacekeepers to spend their mission subsistence allowances in-country, and being aware of brain drain as host country nationals leave their civil services to work with the UN mission.  CD&G focus their recommendations on short-term issues like local procurement and managing wage disparities between local and international staff to prevent price spikes, but they also discuss longer-term issues such as analyzing infrastructure projects beyond the timeframe of the mission mandate, so that mission spending is designed to meet the strategic economic needs of the host country.

A recent article by Raul Caruso and Roberto Ricciuti dove deeper into the economics of peacekeeping by looking at the increase in cereal (grain) production in South Sudan over time and building a causal model of the UNMISS mission's positive impact on food production.  While the security that peacekeeping missions provide can help sectors like agriculture, the mission doesn't directly control cereal production.  We should be equally interested in highlighting the role that missions can play in investing in durable infrastructure, since this is an area over which missions and the UNDPKO have more direct control.

This brings us back to the long-term value of missions using civilian communication infrastructure as part of the mission strategy.  Communication infrastructure could be low-hanging fruit as a durable investment: it is useful tactically to the mission, and it is good in the long term for the host country's economy at large.  Because of this, ICTs could play a Keynesian role, stimulating the economy through immediate multilateral and mission spending on airtime and bandwidth, while also having a Solowian long-term effect as local populations make use of the mobile phones and internet that the peacekeeping mission initially paid for as the economy stabilizes.  Given what we know about the positive effects of ICT infrastructure on developing economies, pushing for an ICT strategy when a peacekeeping mission is deployed could support the mission's tactical needs while also investing in a sector that is good for the economy after the peacekeepers have left.
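As a toy illustration of those two channels (and nothing more than a toy), the sketch below contrasts a baseline economy with one that receives a short burst of mission-funded ICT capital.  The Cobb-Douglas form and every parameter value are assumptions invented for the example, not estimates.

```python
# Toy comparison: baseline economy vs. one receiving mission-funded ICT capital
# in its first three years.  All parameters are made up for illustration.

def output(capital, labor=1.0, tfp=1.0, alpha=0.3):
    """Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha)."""
    return tfp * (capital ** alpha) * (labor ** (1 - alpha))

savings_rate, depreciation = 0.2, 0.05
k_base, k_ict = 1.0, 1.0

for year in range(15):
    ict_investment = 0.1 if year < 3 else 0.0   # the short "Keynesian" spending burst
    y_base, y_ict = output(k_base), output(k_ict)
    print(f"year {year:2d}  baseline {y_base:.3f}  with ICT {y_ict:.3f}")
    # Standard capital accumulation; the extra ICT capital persists and keeps
    # raising output long after mission spending stops (the "Solowian" channel).
    k_base += savings_rate * y_base - depreciation * k_base
    k_ict += savings_rate * y_ict - depreciation * k_ict + ict_investment
```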

Syria Update

Yesterday I mentioned the need to be transparent with our intelligence on chemical weapons use in Syria if we wanted to take the moral high ground.  Today I read the release outlining the U.S. intelligence findings on the attack.  The Huffington Post linked to it, along with a quote from Secretary of State Kerry that "Its findings are as clear as they are compelling."  This is not exactly the kind of unimpeachable proof that galvanizes erstwhile allies while shaming Russia and China into a more accommodating position.  Statecraft: fail.

Getting traction in the United Nations on Syria

As I've been following the story of the chemical weapons attacks in Syria, and the resulting moves to prepare for military strikes, I've felt like the U.N. has been an under-utilized resource for dealing with the crisis.  A few friends have mentioned that President Obama's 'red line' could be defined as something other than a military strike, and I would posit that alternative 'red lines' could exist at the United Nations.  This would require a rethink of how the U.S. uses diplomacy at the U.N., though.


Unpacking P-values: Turning statistical significance into practical significance

I often get questions about the validity of using statistics to understand conflict and political behavior, especially when using predictive or confirmatory analytic methods.  The questions are well founded, since a recent article found that potentially up to 54% of statistical results in the medical field are spurious.  This should give social scientists pause, since medical researchers are working with a far more stable set of variables, in controlled experiments.  Conflict researchers, by comparison, are analyzing human behavior outside the lab, in highly stressful environments.  If medical researchers are potentially getting spurious results over half the time in highly controlled settings, what does that tell us about the results conflict researchers get?  Are the statistical models even useful?
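One way to build intuition for where spurious results come from is to simulate the multiple-testing problem: run enough tests of relationships that don't exist and a predictable share come back "significant" anyway.  The sketch below is purely illustrative and does not attempt to reproduce the figure cited above.

```python
import numpy as np
from scipy import stats

# Simulate many "studies" where the true relationship is zero, then count how
# many clear the conventional p < 0.05 bar purely by chance.
rng = np.random.default_rng(0)
n_studies, n_obs, alpha = 1000, 50, 0.05

false_positives = 0
for _ in range(n_studies):
    x = rng.normal(size=n_obs)
    y = rng.normal(size=n_obs)              # truly unrelated to x
    _, p_value = stats.pearsonr(x, y)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of null studies came out 'significant' at p < {alpha}")
```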
