Dissertation Proposal Defense

No, I won’t be ‘Dr.’ tomorrow, but the proposal defense is a milestone nonetheless. For those who are interested in my dissertation research and can’t make it to my proposal defense tomorrow at 12:00 PM at the School for Conflict Analysis and Resolution, below is a sound file you can listen to. You can download my slideshow here and follow along that way as well!

Peacekeeping tech with Dr. Walter Dorn

I got to interview Dr. Walter Dorn of the Canadian Forces College about his work on technology and peacekeeping for my TechChange course on technology for conflict management and peacebuilding. It’s a good interview that lends some operational and political insight into using these tools in peacekeeping settings!

Upcoming events!

Unfortunately the last few months have been fairly low-output in terms of blog posts. That can be credited to resettling after returning from Samoa, getting back to work with the tech community in D.C., and of course getting a dissertation written. I have had the chance to get myself on a few panels this month and next to discuss my research, though. I’ll be joined by some awesome people too, so if you’re in D.C., hopefully you can come out and join us!

October 15: Brownbag lunch panel at the OpenGovHub hosted by the Social Innovation Lab, FrontlineSMS, and Ushahidi.

November 5: Guest talk at Georgetown University’s School of Foreign Service about my research in Samoa, and larger issues of using ICTs for crisis response.

Later in November: Dissertation proposal defense at the School for Conflict Analysis and Resolution (exact date TBD). Open to the public!

Hopefully you can make it out to one or more of these; I think they’ll be really interesting!


The talk I gave at USAID Sept. 4

For those who were curious about what I discussed with USAID’s Office of Conflict Management and Mitigation on September 4, wonder no more. TechChange’s video guru got me on camera to record the presentation. Hopefully it’s useful (or at least leads to some good arguments).

TC-109: Technology for Conflict Management and Peacebuilding

I’ll be teaching a course for TechChange on ICTs and peacebuilding next month. I’m excited to be facilitating it, and I was thrilled to see the final cut of the course introduction video we produced today:

Hopefully you’ll join us; it’s going to be a lot of fun, and some awesome guests will be joining to talk about their work in the peacebuilding and technology spaces!

Big News: The GDELT Global Dashboard

GDELT just released their new Global Visualization dashboard, and it’s pretty cool. It blinks and flashes, glows and pulses, and is really interesting to navigate. Naturally, as a social scientist who studies conflict, I have some thoughts.

1) This is really cool. The user interface is attractive, easy to navigate, and intuitive. I don’t need a raft of instructions on how to use it, and I don’t need any background in programming to make use of all its functionality. If the technology and data sectors are going to make inroads into the conflict analysis space, they should take note of how GDELT did this: most conflict specialists don’t have programming backgrounds and will ignore tools that are too programming-intensive. Basically, if it takes more than about 10 minutes for me to get a tool or data program functioning, I’m probably not going to use it, since I already have other analytic techniques at my disposal that can achieve the same outcome.

2) Beware the desire to forecast! As I dug through the data a bit, I realized something important. This is not a database that will be particularly useful for forecasting or predictive analysis (replicable predictive analysis, at least). You might be able to identify some trends, but since the data itself is news reports, there’s going to be a lot of variation in tone, lag between event and publication, and a whole host of other things that will make quasi-experiments difficult. The example I gave a friend when we discussed this was the challenge of predicting election results using Twitter: it worked when political scientists predicted the distribution of seats by party in the German Bundestag, but when they replicated the experiment for the 2010 U.S. midterm elections it didn’t work at all. Most of this stemmed from the socio-linguistics of political commentary in the two countries. Germans apparently aren’t particularly snarky or sarcastic in their political tweeting, while Americans are, which caused major problems for an algorithm built to track keywords and phrases during the American campaign season. Consider: if we have trouble predicting relatively uniform events like elections using language-based data, how much harder will it be to predict something as complex as violence?

3) Do look for qualitative details in the data! A friend of mine pointed out that the data contained in this map is a treasure trove of sentiment, perception, and narrative about how media at a very local level conceptualize violence. Understanding how media, especially local media, perceive risk or frame political issues is incredibly valuable for conflict analysts and peacebuilding professionals. I would argue this is actually more valuable than forecasting or predictive modeling; if we’re honest with ourselves, I think we’d have to admit that ‘predicting’ conflict and then rushing to stop it before it starts has proven to be a pretty futile endeavor. But if we understand at a deeper level why people would turn to violence, and how their context helps distill their perception of risk into something hard enough to fight over, then interventions such as negotiation, mediation, and political settlements can be better tailored to the specific conflict. This is where the GDELT dashboard really shines as an analytic tool.
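For readers who want to try this kind of tone-based triage themselves, here’s a minimal sketch in Python with pandas. The file name is hypothetical, and the column names (AvgTone, ActionGeo_CountryCode, SOURCEURL) follow my reading of the GDELT event codebook, so verify them against the official documentation before relying on this:

```python
import pandas as pd

# Assume a pre-trimmed CSV of GDELT events with just the fields we care
# about (the raw daily export has dozens of tab-delimited columns; see
# GDELT's event codebook for the full header). File name is hypothetical.
events = pd.read_csv("events_trimmed.csv")

# Narrow to one country (GDELT uses FIPS country codes; "SO" should be
# Somalia) and pull the most negatively toned coverage. AvgTone is GDELT's
# average sentiment score for the articles behind each event, so the
# extreme low end is where fear and risk framing likely concentrate.
local = events[events["ActionGeo_CountryCode"] == "SO"]
grim = local[local["AvgTone"] < local["AvgTone"].quantile(0.05)]

# The payoff for a qualitative researcher is SOURCEURL: a ready-made
# reading list of the underlying articles, which can then be close-read
# for how local media frame risk and violence.
for url in grim.sort_values("AvgTone")["SOURCEURL"].head(20):
    print(url)
```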

I’m excited to see how GDELT continues to make the dashboard better; there are already plans to provide more options for layering and filtering data, which will be helpful. More than anything, though, I’m eager to see what can be done with some creative qualitative research using this data, particularly for understanding sentiment and perception in the media during conflict.

Putting the ‘political’ back in political economy

I stumbled across an article in the New York Times a few days ago by Tyler Cowen, a George Mason University economist and regular contributor to the blog Marginal Revolution. Entitled “Income Inequality Is Not Rising Globally. It’s Falling.”, it argues that while country-level income inequality is increasing, the overall effects of globalization are reducing aggregate income inequality globally, and that this is a good thing. I always enjoy reading Cowen’s stuff even when I don’t agree with him, and in this case I have a few contentions, as a political scientist, with his argument.

These contentions developed after seeing a friend’s comment on Facebook about the article. He noted that the key problem isn’t income inequality, but wealth inequality: the way income and growth are structured in the modern world, the higher your starting wealth and asset ownership, the more you benefit from the structure of the global economy. If you rely on a biweekly paycheck, though, you face nothing but downward pressure on your economic position, unless you work in the information, research, governance, or financial sectors (which all happen to play key roles in globalization). Cowen, though, says that while this country-level trend is unfortunate, we shouldn’t miss the point that income inequality has dropped globally. This is where I have my biggest contentions with the argument, since economics is about politics, and, as Tip O’Neill said, all politics is local.

To make his argument, Cowen has to invert the relationship between people, politics, and economic systems. In effect, he argues that while the economy at the local (or national) level might be a mess, we should be happy that income inequality is decreasing at the level of the global system. For this to hold up, we have to assume that systems, in this case the global economic system, are what people are responsive to: things people can’t, or shouldn’t be motivated to, change. While Cowen is more humane than many of his libertarian counterparts, believing that safety nets should still exist for the workers who lose out as national wealth inequality grows, he still makes what I think is a problematically common mistake in economics. Implicit in Cowen’s argument is that economic systems exist in parallel to, or outside the impact of, politics. Instead of discussing the tangible problem of increasing wealth and income inequality at the national level as something that can be changed through policy and intervention, he finds an abstract way to claim the system is working. This is a huge problem from a public policy perspective.

At a fundamental level, Cowen’s argument subverts the notion of representative democracy. Models of the economy have become ends in themselves, things to which politicians and policy makers apply normative value, and around which they try to shape laws and policy. This is where the democracy problem comes in. In the United States, we ostensibly elect officials to create policies that support the public interest. When those representatives make economic policy based on a set of models that actually leads to massive inequality and economic hardship, they are no longer representing their constituents; they are representing the abstract notion of market economics. If my congressional representative’s response to a total failure of the economy in my district is to say “there may be no jobs and wages might be way too low, but at least on a global scale income inequality is down,” then they are not representing the needs of their constituents.

This is the inherent problem with Cowen’s argument, and it has knock-on effects, since policy makers listen to him and others from his school of thought. Essentially he is arguing that a system that has failed at the level where it matters (the citizen level), due to particular aspects of the socio-political nature of finance-driven markets, shouldn’t be changed at the local level because it seems, depending on how you cook the numbers, to be working at an abstract global level. It dehumanizes economics, which is an inherently human enterprise. In case we forget our history, such things as the Reign of Terror, the Communist revolutions, and Jesus’s life and teachings were responses to fundamentally broken and/or exploitative economic systems. If we tally the score in those three cases, it would be: System Maintenance 0 : 3 Revolutionary Uprising (and Violence).

Politicians and public intellectuals who focus on abstract and contorted ways to justify the maintenance of an economic system that tangibly fails the public would do well to heed the lessons of history. Abstract arguments about the way the global system is working won’t mean much when the pitchforks come out at the local level.

Quick thoughts from the #Tech4PP Twitter chat

I followed (and even participated in!) NDI’s Twitter chat today on using technology to increase political party and electoral participation. If you’re interested, you can find the thread by searching the hashtag ‘#Tech4PP’. There were a lot of good examples of tech being used to increase participation, make processes more transparent, and boost inclusion in the political process. Below are a few quick thoughts that exceed the character limit:

1) I thought it was interesting that the chat tended to center on software and hardware, of which there were many compelling examples, while I saw less about the human or legal components of the process. It’s going to get really interesting to do experimental and empirical research on changes in political participation as social media and mobile-based tools become increasingly available. ProTip for my academic friends who study political participation: look at this thread, since it has a ton of examples you’d want to see.

2) I saw a theme in the chat asking how we transition from digital outreach to human participation. I thought the framing was interesting, since it set up technology as the causal mechanism of participation. I’m not sure I buy that directionality in a generalizable way; perhaps there are examples of it, but on average across cases I’d be inclined to think the technology/participation relationship hinges more on the intervening variable of pre-existing political interest and knowledge of the issues within the community. I see a use for regression analysis here (see the sketch after these thoughts).

3) I threw a comment into the mix about the need to understand the regulatory and legal environment in any country where digital political participation software is being used. I’ll admit I’m surprised I didn’t see more on this topic, since it’s a pretty fraught space. Issues around data ownership, regulatory effects on access to technology, and the cost of broadband could play a significant role in the overall impact of technology on political participation.
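As for what that regression might look like, here’s a minimal sketch. Everything in it is hypothetical: the data is synthetic and the variable names are stand-ins, so this is the shape of the test rather than a real analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data built so that tech access alone does nothing:
# it only boosts participation for respondents with prior political interest.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "tech_access": rng.integers(0, 2, n),         # has digital outreach tools?
    "prior_interest": rng.uniform(0, 10, n),      # pre-existing interest, 0-10
})
df["participated"] = (
    0.3 * df["prior_interest"]
    + 0.5 * df["tech_access"] * df["prior_interest"]   # the interaction effect
    + rng.normal(scale=2, size=n)
)

# 'a * b' in the formula expands to both main effects plus their interaction.
# A significant interaction alongside a weak tech_access main effect would
# support the intervening-variable story over "tech causes participation."
model = smf.ols("participated ~ tech_access * prior_interest", data=df).fit()
print(model.params)
```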

These are just a few questions that came to mind as I followed the thread. It was a good one, and researchers interested in learning more about the space will find some strong examples of tech for political participation to pull out of it.

Rigor Versus Reality: Balancing the field with the lab

I am finally able to respond (and add) to a post by Chris Moore about the problem of the mathematicization and formalization of political science, and social science more generally, as it relates to how the social sciences inform real policy issues. As I’m finishing a Fulbright fellowship in Samoa, where I worked specifically on research supporting policy making in the ICT sector, Chris’s analysis was particularly apropos. As I read his post I thought, “indeed, I’ve seen many an article in APSR that falls into the trap he describes”: articles with formal mathematics and econometrics that are logically infallible and use superbly defined instrumental variables, but have little explanatory value outside the ontological bubble of theoretical political science. Why do academics do this? How can they (we… I’m sort of one myself) make academic research useful to non-academics, or at least bring some real-world perspective to the development of theory?

Nunn and Qian’s 2012 article on food aid’s effect on conflict is a good example of how formal methods can drive the question, instead of the question driving the method. Food aid indeed has an effect on conflict, and vice versa. Teasing out a causal path from food aid to conflict, though, requires a chain of logic that, while formally correct, adds a lot of complexity to the argument. The thing that sticks out to me is that they have to use an instrumental variable to make their argument. U.S. wheat production fits the formal requirements for an instrument, but do we really think that bumper crops in wheat lead to an increased risk of conflict? If so, is the policy prescription for decreasing conflict risk to not allow bumper crops of wheat? In the end they do a fair amount of complex logical modeling, then conclude by saying the data isn’t good enough, that we don’t really know the interactive effects of other aid on conflict, and that to really understand the relationship between food aid and conflict likelihood we need to explore the question in a different way.
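For readers who haven’t run into instrumental variables, here’s a minimal sketch of the two-stage least squares logic on simulated data. The variable names echo the paper’s setup, but this is an illustration of the general technique, not the authors’ actual specification:

```python
import numpy as np

# Simulated data: an instrument (wheat production), an endogenous regressor
# (food aid), an outcome (conflict), and an unobserved confounder that
# drives both food aid and conflict, which is what biases naive OLS.
rng = np.random.default_rng(0)
n = 5000
wheat = rng.normal(size=n)                                   # instrument Z
confound = rng.normal(size=n)                                # unobserved
food_aid = 0.8 * wheat + confound + rng.normal(size=n)       # endogenous X
conflict = 0.5 * food_aid + confound + rng.normal(size=n)    # outcome Y

def ols(y, X):
    """OLS with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS is biased upward because the confounder hits both X and Y.
print("naive OLS:", ols(conflict, food_aid)[1])          # well above 0.5

# Stage 1: regress the endogenous regressor on the instrument.
a0, a1 = ols(food_aid, wheat)
food_aid_hat = a0 + a1 * wheat

# Stage 2: regress the outcome on the fitted values from stage 1. Those
# fitted values carry only the instrument-driven variation in food aid,
# which is (by assumption) unrelated to the confounder.
print("2SLS estimate:", ols(conflict, food_aid_hat)[1])  # close to 0.5
```

The whole strategy stands or falls on that “by assumption”: the instrument must move the outcome only through the endogenous regressor, which is exactly the kind of claim that’s hard to evaluate from outside the method’s own logic.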

Is there value in this type of exercise? Perhaps, but it’s probably limited to the small set of academics who specialize in this kind of intellectual exercise. Is this article useful to non-specialist readers or policy makers? Highly (99%) unlikely. Most policy makers don’t have the mathematical or statistical training to understand the authors’ empirical strategy, and those who do probably don’t have time to really digest it. That’s a fundamental problem, and it’s compounded by the use of an instrumental variable, which is a pretty abstract thing in itself. It’s not that the analysis is wrong; it’s that when we step outside the methodological confines the authors are working in, it begins to lack inherent value. I don’t say this to shame or castigate Nunn and Qian; academics write for their peers, since that’s who gives them job security.

So how do we derive value from this work if we want to inform policy? One way is for academic departments to encourage doctoral students to try policy work during the summers of the coursework phase. The summer between years one and two is a good time for this: it’s pre-dissertation, so a student isn’t in research mode yet, and the lessons learned during a summer in the field can feed into the writing of a dissertation. For faculty, departments can look for ways to reward writing for a general audience about one’s field of specialization. Making public intellectualism part of the tenure file would probably be welcomed by many of the academics I know, who have a passion for their fields and would happily share their insights with the public.

This has the added benefit of reducing the groupthink and herd mentality to which academics, like any other professional group, are prone. Possibly more prone, since academic work is internally referential (academics cite each other). In such an environment it’s easy to stop asking why we’re adding a variable to a statistical analysis, or what value it has in a practical sense. Stepping out of the academic intellectual bubble, whether as a summer intern or by writing an op-ed that has to be understood by a non-expert, is a chance to be in the field, physically or intellectually, and reassess why we’re analyzing particular variables and using particular methods.

At the very least it gives academics some raw material to take back to the lab, even if the ‘field’ is a disconcerting, statistically noisy place.