Mobile Phones and Conflict Prevention: A recent interview

I was recently interviewed about my experience with, and research on, how mobile phones support conflict prevention, as part of the launch of George Washington University’s Media and Peacebuilding Project. Alongside my interview, they interviewed some really excellent people from across the research and practice spaces. I’m really excited to see what comes out of this program!


Going From Science March to Political Impact

I was at the Bonn/Köln iteration of the March for Science and it was a good time. But as I watched the marches around the world, especially in the U.S., my thoughts turned to how to create further action. Large turnout in cities populated predominantly by people who already value science and empirically-based policy making can only go so far. A few quick thoughts going forward…

  1. Get involved in local politics. This is a goal I’ve set for myself, at least as much as I can as a non-citizen in the country where I work. It’s amazing how much physical and social science training can be put to use in local settings. For example, everything I know about collective action problems is not only useful for studying participatory governance in developing countries, it’s also applicable to town and city government. If you’re in the Earth sciences, get on the planning board or another Earth-science-relevant part of local government.
  2. Once you have joined local government, *do not turn into a know-it-all*. I have expertise in collective action problems; my job is not to remind people how dumb we are as a species for falling into the usual collective action traps. My job is to use my understanding of those traps, and how they work in theory, to help the rest of my town hall compatriots manage them. This isn’t a lecture hall, it’s participation in governance.
  3. Think about how to broaden the base. Yes, a march is good and snarky signs are funny, but at some point people in districts in other parts of the country will elect representatives. As scientists, we have to make science valuable to people, so that they don’t elect charlatans who would take us back to the Middle Ages to line their pockets with a few bucks from lobbyists. One way to do this is to make the history of applied science, and the collective role of society and government in that history, central to any argument about the value of science in public policy. As scientists this starts with us – we have to be fluent and engaging in explaining not only the practice of our work, but the history (social, political, economic) of how our work fits into society. I read a great book on the history of urban planning in New York City a while back; there was a lot of detailed analysis of the engineering science, but it was written wonderfully. I had no idea how interesting concrete was, from both a social and a chemical standpoint. Making what seems banal to non-scientists interesting would go a long way.
  4. If you have tenure, run for office. It’s hard as hell getting an academic or research career started, so if you are established and have some clout and job security, take advantage of your privilege and get involved in politics. Younger researchers will model this behavior as they become established as well.

That’s all I’ve got at the moment (well, there’s lots more, but it’s only a blog post). Be excited about science, be excited about its role in society, and don’t forget: people often vote on emotion. Empirics are good, but an engaging story will go a long way, too.

Joining the Deutsches Institut für Entwicklungspolitik!

I’m excited to announce that I’ll be joining the Deutsches Institut für Entwicklungspolitik (German Institute for Development Policy) in Bonn, Germany! I’ll be working in their Governance, Statehood, and Security group, doing research and providing policy advice on forced displacement in fragile and conflict-affected countries.

I’m excited to have the opportunity to put my skills and knowledge to use working on this topic – I’ll be able to continue applying my knowledge on technology and development to this topic, while also working with experts on migration, geography and economics to produce policy-relevant scientific research.

The intersection of academia and public policy is the space I most enjoyed occupying during my PhD studies, so I’m thrilled to be in a place where my research can speak directly to critical policy issues in the development and peacebuilding fields!

The Blog Will be Fuller

After a lovely year in Sydney as a research fellow with IEP I’ll be headed back to the Northern Hemisphere to finish my dissertation. I should be defending it this summer – once it’s done, it’ll be on to new and exciting research!

This also means that I now have the freedom and time to start posting here again. While the year was fun it was also busy, which meant limited time to blog. I’m planning on some data-oriented posts, which will be fun.

The schedule for the next month or two includes presenting at the Midwest Political Science Association meeting, hopefully popping into the Tech4Dev conference on short notice, giving a paper with the inimitable Nicholas Bodanac at the European Political Science Association meeting in June, and hopefully circling all the way back to the Build Peace Conference in September. If you’re going to be at any of these, or anywhere in between, give me a shout!

Build Peace 2015

I was invited to be a speaker on the panel on behavior change and technology in peacebuilding at Build Peace 2015. The panel was a lot of fun, with some fascinating presentations! You can find them on the Build Peace YouTube page. Here’s mine:

This was a particularly fun conference, pulling together practitioners, activists and academics in a setting that breaks away from the usual paper/panel/questions format of most conferences. Looking forward to next year!

Upcoming events!

Unfortunately the last few months have been fairly low output in terms of blog posts. This can be credited to resettling after returning from Samoa, getting back to work with the tech community in D.C., and of course getting a dissertation written. I have had the chance to get myself on a few panels this month and next to discuss my research, though. I’ll be joined by some awesome people too, so hopefully if you’re in D.C. you can come out and join us!

October 15: Brownbag lunch panel at the OpenGovHub hosted by the Social Innovation Lab, FrontlineSMS, and Ushahidi.

November 5: Guest talk at Georgetown University’s School of Foreign Service about my research in Samoa, and larger issues of using ICTs for crisis response.

Later in November: Dissertation proposal defense at the School for Conflict Analysis and Resolution (exact date TBD). Open to the public!

Hopefully you can make it out to one or more of these – I think they’ll be really interesting!


Big News: The GDELT Global Dashboard

GDELT just released their new Global Visualization dashboard, and it’s pretty cool. It blinks and flashes, glows and pulses, and is really interesting to navigate. Naturally, as a social scientist who studies conflict, I have some thoughts.

1) This is really cool. The user interface is attractive, it’s easy to navigate, and it’s intuitive. I don’t need a raft of instructions on how to use it, and I don’t need to be a programmer or have any background in programming to make use of all its functionality. If the technology and data sectors are going to make inroads into the conflict analysis space, they should take note of how GDELT did this, since most conflict specialists don’t have programming backgrounds and will ignore tools that are too programming intensive. Basically, if it takes more than about 10 minutes for me to get a tool or data program working, I’m probably not going to use it, since I already have other analytic techniques at my disposal that can achieve the same outcome.

2) Beware the desire to forecast! As I dug through the data a bit, I realized something important. This is not a database that will be particularly useful for forecasting or predictive analysis – replicable predictive analysis, at least. You might be able to identify some trends, but since the data are news reports, there’s going to be a lot of variation in tone, lag between event and publication, and a whole host of other things that will make quasi-experiments difficult. The example I gave a friend was the challenge of predicting election results using Twitter: it worked when political scientists tried to predict the distribution of seats in the German Bundestag by party, but when they replicated the experiment in the 2010 U.S. midterm elections it didn’t work at all. Most of this stemmed from the socio-linguistics of political commentary in the two countries. Germans aren’t particularly snarky or sarcastic in their political tweeting (apparently), while Americans are. This caused a major problem for the algorithm that was tracking key words and phrases during the American campaign season. Consider: if we have trouble predicting relatively uniform events like elections using language-based data, how much harder will it be to predict something like violence, which is far more complex?
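To make that failure mode concrete, here’s a toy sketch of the naive keyword-counting approach (my illustration, not the methodology of the studies mentioned above; the tweets and parties are invented). Every mention of a party is counted as a unit of support, so a sarcastic tweet inflates a party’s predicted share just as much as a sincere endorsement.

```python
# Toy illustration only: naive keyword counting as a "forecast" of vote share.
from collections import Counter

tweets = [
    "CDU has a credible plan for the economy",
    "Voting SPD this time, their platform convinced me",
    "Oh sure, the CDU will TOTALLY fix everything...",  # sarcasm, still counted as support
]
parties = ["CDU", "SPD"]

mentions = Counter()
for tweet in tweets:
    for party in parties:
        if party.lower() in tweet.lower():
            mentions[party] += 1

total = sum(mentions.values())
predicted_share = {party: mentions[party] / total for party in parties}
print(predicted_share)  # CDU gets 2/3 of the "support" thanks to the sarcastic tweet
```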

3) Do look for qualitative details in the data! A friend of mine pointed out that the data contained on this map is a treasure trove of sentiment, perception and narrative about how the media at a very local level conceptualize violence. Understanding how media, especially local media, perceive things like risk or frame political issues is incredibly valuable for conflict analysts and peacebuilding professionals. I would argue that this is actually more valuable than forecasting or predictive modeling; if we’re honest with ourselves, I think we’d have to admit that ‘predicting’ conflict and then rushing to stop it before it starts has proven to be a pretty futile endeavor. But if we understand at a deeper level why people would turn to violence, and how their context helps distill their perception of risk into something hard enough to fight over, then interventions such as negotiation, mediation and political settlements are going to be better tailored to the specific conflict. This is where the GDELT dashboard really shines as an analytic tool.
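As a rough illustration of the kind of descriptive work I have in mind, here’s a sketch (mine, not GDELT’s) that summarizes the average tone of coverage around violent events by country from a daily event export. The file name is a placeholder, and the column positions follow the GDELT 1.0 event-file documentation, so check them against the current codebook before relying on them.

```python
import pandas as pd

# Placeholder file name for a GDELT 1.0 daily event export (tab-separated, no header).
# Column positions per the GDELT event-file documentation:
#   28 = EventRootCode, 34 = AvgTone, 51 = ActionGeo_CountryCode.
events = pd.read_csv("20141110.export.CSV", sep="\t", header=None, dtype=str)
events = events.rename(columns={28: "EventRootCode", 34: "AvgTone",
                                51: "ActionGeo_CountryCode"})
events["AvgTone"] = events["AvgTone"].astype(float)

# Keep violent-event categories (CAMEO root codes 18-20: assault, fight,
# unconventional mass violence).
violent = events[events["EventRootCode"].isin(["18", "19", "20"])]

# Average tone and volume of coverage per country: a starting point for asking
# how media frame violence in different places.
tone_by_country = (violent.groupby("ActionGeo_CountryCode")["AvgTone"]
                   .agg(["mean", "count"])
                   .sort_values("mean"))
print(tone_by_country.head(10))
```

Nothing here is predictive; it just organizes the reporting so a researcher can start asking where coverage of violence is most negative and then read the underlying stories.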

I’m excited to see how GDELT continues to make the dashboard better – there are already plans to provide more options for layering and filtering data, which will be helpful. Overall though, I’m excited to see what can be done with some creative qualitative research using this data, particularly for understanding sentiment and perception in the media during conflict.

Rigor Versus Reality: Balancing the field with the lab

I am finally able to respond (and add) to a post by Chris Moore about the problem of mathematization and formalization of political science, and social science more generally, as it relates to how the social sciences inform real policy issues. As I’m finishing a Fulbright fellowship in Samoa, where I worked specifically on research supporting policy making in the ICT sector, Chris’s analysis was particularly apropos. As I read his post I thought, “indeed, I’ve seen many an article in APSR that falls into the trap he describes” – articles with formal mathematics and econometrics that are logically infallible and use superbly defined instrumental variables, but have little explanatory value outside the ontological bubble of theoretical political science. Why do academics do this? How can they (we… I’m sort of one myself) make academic research useful to non-academics, or at least bring some real-world perspective to the development of theory?

Qian and Nunn’s 2012 article on food aid’s effect on conflict is a good example of how formal methods can drive the question, instead of the question driving the method. Food aid indeed has an effect on conflict, and vice versa. Teasing out a causal path from food aid to conflict, though, requires a logical chain that, while formally correct, adds a lot of complexity to the argument. The thing that sticks out to me is that they have to use an instrumental variable to make their argument. U.S. wheat production fits the requirements for the variable they use, but do we really think that bumper crops in wheat actually lead to an increased risk of conflict? If so, is the policy prescription for decreasing conflict risk not allowing bumper crops of wheat? In the end they do a fair amount of complex logical modeling, then conclude by saying that the data aren’t good enough, that we don’t really know the interactive effects of other aid on conflict, and that to really understand the relationship between food aid and conflict likelihood we need to explore the question in a different way.
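For readers who haven’t run into instrumental variables, here’s a minimal sketch of the two-stage least squares logic behind that kind of design, using simulated data. The variable names are hypothetical stand-ins for illustration, not Qian and Nunn’s actual data or code.

```python
# Minimal 2SLS sketch with simulated data: wheat production (the instrument)
# shifts food aid; food aid and an unobserved confounder both shift conflict.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

wheat_production = rng.normal(size=n)
confounder = rng.normal(size=n)                      # unobserved in a real study
food_aid = 0.8 * wheat_production + 0.5 * confounder + rng.normal(size=n)
conflict = 0.3 * food_aid + 0.7 * confounder + rng.normal(size=n)

# Stage 1: regress the endogenous regressor (food aid) on the instrument.
stage1 = sm.OLS(food_aid, sm.add_constant(wheat_production)).fit()
food_aid_hat = stage1.fittedvalues

# Stage 2: regress conflict on the fitted values from stage 1.
# (In practice you would use a dedicated IV estimator so the standard errors
# are computed correctly; this is just the intuition.)
stage2 = sm.OLS(conflict, sm.add_constant(food_aid_hat)).fit()
print(stage2.params)  # the slope should land near the true effect of 0.3
```

The whole design rests on the assumption that wheat production affects conflict only through food aid – and whether that exclusion restriction is believable is exactly the kind of judgment call that rarely survives translation into a policy brief.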

Is there value in this type of exercise? Perhaps, but it’s probably limited to the small group of academics who specialize in this type of intellectual exercise. Is this article useful to non-specialist readers or policy makers? Highly (99%) unlikely. Most policy makers don’t have the mathematical/statistical training to really understand the authors’ empirical strategy, and if they do, they probably don’t have time to really digest it. That’s a fundamental problem, but it’s compounded by the use of an instrumental variable, which is a pretty abstract thing in itself. It’s not that the analysis is wrong; it’s that when we step outside the methodological confines the authors are working in, it begins to lack inherent value. I don’t say this to shame or castigate Qian and Nunn; academics write for their peers since that’s who gives them job security.

So how do we derive value from this work if we want to inform policy? One way is for academic departments to encourage doctoral students to try policy work during the summers of the coursework phase. The summers after the first and second years are good times for this; they’re pre-dissertation, so a student isn’t in research mode yet, and the lessons learned during a summer in the field can feed into the writing of a dissertation. For faculty, departments can look for ways to reward writing for a general audience about one’s field of specialization. Making public intellectualism part of the tenure file would probably be welcomed by many of the academics I know, who have a passion for their fields and would happily share their insights with the public.

This has the added benefit of reducing groupthink or herd mentality, which academics are prone to like any other professional group – possibly more so, since academic work is internally referential (academics cite each other). It’s easy in such an environment to stop asking why we’re adding a variable to a statistical analysis, or what value it has in a practical sense. Stepping out of the academic intellectual bubble, whether as a summer intern or by writing an op-ed that has to be understood by a non-expert, is a chance to be in the field, either physically or intellectually, and re-assess why we’re analyzing particular variables and using particular methods.

At the very least it gives academics some raw material to take back to the lab, even if the ‘field’ is a disconcerting, statistically noisy place.