I’m excited to be invited to present some of my work November 5 at Georgetown University’s Center for Australian, New Zealand and Pacific Studies – here’s some info, I think it’ll be a fun talk!
For those who were curious about what I discussed with USAID’s Office on Conflict Management and Mitigation on September 4, wonder no more. TechChange’s video guru got me on camera to record the presentation – hopefully it’s useful (or leads to some good arguments at least).
I’ll be teaching a course for TechChange on ICTs and peacebuilding next month. I’m really excited to be facilitating it, and I was really thrilled to see the final cut of the course introduction video we produced today:
Hopefully you’ll join us, it’s going to be a lot of fun and some awesome guests will be joining us to talk about their work in the peacebuilding and technology spaces!
GDELT just released their new Global Visualization dashboard, and it’s pretty cool. It blinks and flashes, glows and pulses, and is really interesting to navigate. Naturally, as a social scientist who studies conflict, I have some thoughts.
1) This is really cool. The user interface is attractive, it’s easy to navigate, and it’s intuitive. I don’t need a raft of instructions on how to use it, and I don’t need to be a programmer or have any background in programming to make use of all its functionality. If the technology and data sectors are going to make inroads into the conflict analysis space, they should take note of how GDELT did this, since most conflict specialists don’t have programming backgrounds and will ignore tools that are too programming intensive. Basically, if it takes more than about ten minutes for me to get a tool or data program working, I’m probably not going to use it, since I’ve already mastered other analytic techniques that can achieve the same outcome.
2) Beware the desire to forecast! As I dug through the data a bit, I realized something important. This is not a database of information that will be particularly useful for forecasting or predictive analysis – replicable predictive analysis, at least. You might be able to identify some trends, but since the data itself consists of news reports there’s going to be a lot of variation in tone, lag between event and publication, and a whole host of other things that will make quasi-experiments difficult. The example I gave a friend was the challenge of predicting election results using Twitter: it worked when political scientists tried to predict the distribution of seats in the German Bundestag by party, but when they replicated the experiment in the 2010 U.S. midterm elections it didn’t work at all. Most of this stemmed from the socio-linguistics of political commentary in the two countries. Germans aren’t particularly snarky or sarcastic in their political tweeting (apparently), while Americans are. This caused a major problem for the algorithm that was tracking key words and phrases during the American campaign season. Consider: if we have trouble predicting relatively uniform events like elections using language-based data, how much harder will it be to predict something like violence, which is far more complex?
3) Do look for qualitative details in the data! A friend of mine pointed out that the data contained on this map is a treasure trove of sentiment, perception and narrative about how the media at a very local level conceptualizes violence. Understanding how media, especially local media, perceive things like risk or frame political issues is incredibly valuable for conflict analysts and peacebuilding professionals. I would argue that this is actually more valuable than forecasting or predictive modeling; if we’re honest with ourselves, I think we’d have to admit that ‘predicting’ conflict and then rushing to stop it before it starts has proven to be a losing endeavor. But if we understand at a deeper level why people would turn to violence, and how their context helps distill their perception of risk into something hard enough to fight over, then interventions such as negotiation, mediation and political settlements are going to be better tailored to the specific conflict. This is where the GDELT dashboard really shines as an analytic tool.
I’m excited to see how GDELT continues to make the dashboard better – there are already plans to provide more options for layering and filtering data, which will be helpful. Overall though, I’m excited to see what can be done with some creative qualitative research using this data, particularly for understanding sentiment and perception in the media during conflict.
I stumbled across an article in the New York Times a few days ago by Tyler Cowen of George Mason University, a regular contributor to the blog Marginal Revolution. Entitled “Income Inequality Is Not Rising Globally. It’s Falling.”, it argues that while country-level income inequality is increasing, the overall effects of globalization are leading to less aggregate income inequality globally, and that this is a good thing. I always enjoy reading Cowen’s stuff even when I don’t agree with him, and in this case I have a few contentions as a political scientist about his argument.
These contentions developed after seeing a comment from a friend on Facebook about the article. He noted that the key problem isn’t income inequality, but wealth inequality. The way income and growth are structured in the modern world, the more wealth and assets you start with, the more you benefit from the structure of the global economy. If you rely on a bi-weekly paycheck, though, you face nothing but downward pressure on your economic position, unless you work in the information, research, governance, or financial sectors (which happen to all play key roles in globalization). Cowen, though, says that while this country-level trend is unfortunate, we shouldn’t miss the point that globally income inequality has dropped. This is where I have my biggest contentions with the argument, since economics is about politics, and as Tip O’Neill said, all politics is local.
To make his argument Cowen has to invert the relationship between people, politics and economic systems. In effect, he argues that we should be happy that while at the local (or national) level the economy might be a mess, it’s important that at a global system level income inequality is decreasing. For this to hold up, we have to assume that systems, in this case the global economic system, are what people are responsive to, things that people can’t or shouldn’t be motivated to change. While Cowen is more humane than many of his libertarian counterparts, believing that safety nets should still exist for the workers who lose in national wealth inequality, he still makes what I think is a problematically common mistake in economics. Implicit to Cowen’s argument is that economic systems exist in parallel or outside the impact of politics. Instead of discussing the tangible problem of increasing wealth and income inequality at the national level as something that can be changed through policy and intervention, he finds an abstract way to claim the system is working. This is a huge problem from a public policy perspective.
At a fundamental level Cowen’s argument subverts the notion of representative democracy. Models of the economy have become ends in themselves, things that politicians and policy makers have applied normative value to, and thus try to shape laws and policy around. This is where the democracy problem comes in. In the United States, we ostensibly elect officials to create policies that support the public interest. When those representatives make economic policy based on a set of models that actually leads to massive inequality and economic hardship, they are no longer representing their constituents and instead are representing the abstract notion of market economics. If my congressional representative’s response to a total failure of the economy in my district is to say “there may be no jobs and wages might be way too low, but at least on a global scale income inequality is down,” then they are not representing the needs of their constituents.
This is the inherent problem with Cowen’s argument, and it has knock-on effects, since policy makers listen to him and others from his school of thought. Essentially he is arguing that a system that has failed at the level where it matters (the citizen level), due to particular aspects of the socio-political nature of finance-driven markets, shouldn’t be changed at the local level because it seems, depending on how you cook the numbers, to be working at an abstract global level. It dehumanizes economics, which is an inherently very human enterprise. In case we forget our history, such things as the Reign of Terror, the Communist revolutions, and Jesus’s life and teachings were responses to fundamentally broken and/or exploitative economic systems. If we tally the score in those three cases, it would be: System Maintenance 0 : 3 Revolutionary Uprising (and Violence).
Politicians and public intellectuals who focus on abstract and contorted ways to justify the maintenance of an economic system that tangibly fails the public would do well to heed the lessons of history. Abstract arguments about the way the global system is working won’t mean much when the pitchforks come out at the local level.
I followed (and even participated!) in NDI’s Twitter chat today on using technology to increase political party and electoral participation. If you’re interested you can find the thread by searching the hashtag ‘#Tech4PP’. There were a lot of good examples of tech being used to increase participation, make processes more transparent, and boost inclusion in the political process. Below are a few quick thoughts that exceed the character limit:
1) I thought it was interesting that the chat tended to center around software and hardware, of which there were many interesting examples, but I tended to see less about the human or legal components of the process. I think it’s going to get really interesting to do experimental and empirical research on changes in political participation as social media and mobile based tools become increasingly available. ProTip for my academic friends who study political participation: look at this thread since it has a ton of examples you’d be interested in.
2) I saw a theme in the chat that asked about how we transition from digital outreach to human participation. I thought the framing was interesting since it set up technology as the causal mechanism of participation. I’m not sure I buy that directionality in a generalizable way; perhaps there are examples of this, but on average across cases I’d be inclined to think that the technology/participation relationship hinges more on the intervening variable of pre-existing political interest and knowledge of the issues within the community. I see a use for regression analysis here.
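To make that intervening-variable point concrete, here’s a minimal sketch in Python using entirely simulated, hypothetical data (the variable names and coefficients are mine, not from any real study). If pre-existing political interest drives both adoption of digital tools and offline participation, a naive regression of participation on tech use shows a sizable effect, but the effect largely disappears once interest is controlled for:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: pre-existing political interest drives BOTH
# adoption of digital tools and offline participation.
interest = rng.normal(size=n)
tech_use = 0.8 * interest + rng.normal(size=n)       # digital tool adoption
participation = 0.7 * interest + rng.normal(size=n)  # e.g., a turnout index

def ols(y, cols):
    """OLS coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive model: participation ~ tech_use (omits interest)
naive = ols(participation, [tech_use])

# Controlled model: participation ~ tech_use + interest
controlled = ols(participation, [tech_use, interest])

print(f"naive tech coefficient:      {naive[1]:.2f}")
print(f"controlled tech coefficient: {controlled[1]:.2f}")
```

In this simulation the naive coefficient is positive purely because interest is an omitted variable; the controlled coefficient is near zero. That’s the directionality problem in miniature: observing a tech/participation correlation doesn’t tell you the technology caused the participation.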
3) I threw a comment into the mix about the need to understand the regulatory and legal environment in a country where any kind of digital political participation software is being used. I’ll admit I’m surprised I didn’t see more on this topic, since it’s a pretty fraught space. Some of the more interesting questions around data ownership, regulatory effects on access to technology, and the cost of broadband could play a significant role in the overall impact of technology on political participation.
These are just a few questions that came to mind as I followed the thread – it was a good one, and I think there are some really good examples of tech for political participation that can be pulled out of it by researchers who are interested in learning more about the space.
I am finally able to respond (and add) to a post by Chris Moore about the problem of mathematicization and formalization of political science, and social science more generally, as it relates to how the social sciences inform real policy issues. As I’m finishing a Fulbright fellowship in Samoa, where I worked specifically on research supporting policy making in the ICT sector, Chris’s analysis was particularly apropos. As I read his post I thought “indeed, I’ve seen many an article in APSR that falls into the trap he describes” – articles with formal mathematics and econometrics that are logically infallible and use superbly defined instrumental variables, but have little explanatory value outside of the ontological bubble of theoretical political science. Why do academics do this? How can they (we…I’m sort of one myself) make academic research useful to non-academics, or at least bring some real-world perspective to the development of theory?
Qian and Nunn’s 2012 article on food aid’s effect on conflict is a good example of how formal methods can drive the question, instead of the question driving the method. Food aid indeed has an effect on conflict, and vice versa. To tease out a causal path from food aid to conflict, though, requires a logical stream that, while formally correct, adds a lot of complexity to the argument. The thing that sticks out to me is that they have to use an instrumental variable to make their argument. U.S. wheat production fits the requirements for the variable they use, but do we really think that bumper crops in wheat actually lead to an increased risk of conflict? If so, is the policy prescription for decreasing conflict risk not allowing bumper crops of wheat? In the end they do a fair amount of complex logical modeling, then conclude by saying the data’s not good enough, that we don’t really know the interactive effects of other aid on conflict, and that to really understand the relationship between food aid and conflict likelihood we need to explore the question in a different way.
Is there value in this type of exercise? Perhaps, but it’s probably limited to the small number of academics who specialize in this type of intellectual exercise. Is this article useful to non-specialist readers or policy makers? Highly (99%) unlikely. Most policy makers don’t have the mathematical/statistical training to really understand the authors’ empirical strategy, and if they do, they probably don’t have time to really digest it. That’s a fundamental problem, but it’s compounded by the use of an instrumental variable, which is a pretty abstract thing in itself. It’s not that the analysis is wrong; it’s that when we step outside the methodological confines the authors are working in, it begins to lack inherent value. I don’t say this to shame or castigate Qian and Nunn; academics write for their peers since that’s who gives them job security.
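For readers who haven’t worked with instrumental variables, a toy illustration of the general logic may help. This is my own simulated sketch of the standard IV setup, not Qian and Nunn’s actual model or data; all variable names and coefficients here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical setup mirroring the IV logic: an unobserved confounder
# (say, regional instability) drives both food aid and conflict, while
# the instrument (a wheat supply shock) shifts aid but affects conflict
# only through aid.
confounder = rng.normal(size=n)
instrument = rng.normal(size=n)
aid = instrument + confounder + rng.normal(size=n)
conflict = 0.5 * aid + 2.0 * confounder + rng.normal(size=n)

# Naive OLS slope of conflict on aid is biased upward by the confounder.
naive = np.cov(conflict, aid)[0, 1] / np.var(aid)

# IV (Wald) estimator: with a single instrument this is equivalent to
# two-stage least squares, and it recovers the true effect of 0.5.
iv_estimate = np.cov(conflict, instrument)[0, 1] / np.cov(aid, instrument)[0, 1]

print(f"true effect 0.50 | naive OLS {naive:.2f} | IV {iv_estimate:.2f}")
```

The sketch shows why the method is attractive (the naive slope is badly biased; the IV estimate isn’t) but also why it’s abstract: the correction only works if the exclusion restriction holds, i.e., if wheat production affects conflict only through aid – exactly the assumption that makes the policy interpretation so awkward.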
So how do we derive value from this work if we want to inform policy? One way is for academic departments to encourage doctoral students to try policy work during the summers of the coursework phase. The summer between years one and two is a good time for this: it’s pre-dissertation, so a student isn’t in research mode yet, and the lessons learned from a summer in the field during coursework can feed into the writing of a dissertation. For faculty, departments can look for ways to reward writing for a general audience (about one’s field of specialization). Making public intellectualism part of the tenure file would probably be welcomed by many of the academics I know, who have a passion for their fields and would happily share their insights with the public.
This has the added benefit of reducing groupthink or herd mentality, which academics are prone to like any other professional group – possibly more so, since academic work is internally referential (academics cite each other). It’s easy in such an environment to stop asking why we’re adding a variable to a statistical analysis, or what value it has in a practical sense. Having to step out of the academic intellectual bubble, whether as a summer intern or by writing an op-ed that has to be understood by a non-expert, is a chance to be in the field, physically or intellectually, and re-assess why we’re analyzing particular variables and using particular methods.
At the very least it gives academics some raw material to take back to the lab, even if the ‘field’ is a disconcerting, statistically noisy place.
I spent the last two months managing a research collaboration between Samoa’s Ministry of Communications and Information Technology (MCIT) and the National University of Samoa, collecting nationwide data on how people use information and information technology to respond to natural disasters. This data will feed into my dissertation, and will also be useful to the Ministry and the National University, who will be using it for policy development and research. The research team wanted to make this data publicly available: since funding for the research came from MCIT, we see it as a public good. You can download the data here, and below is the suggested citation:
Martin-Shields, Charles, Ioana Chan Mow, Lealaolesau Fitu & Hobert Sasa. (2014) “ICTs and Information Use During Emergencies: Data from Samoa,” MCIT/NUS Data Project. Dataset available at: https://charlesmartinshields.files.wordpress.com/2014/06/mcitnus-survey.xlsx
The research design served several purposes. For MCIT, it’s important to know how people get their information, especially when trying to allocate spectrum or regulate communication providers. The research team from NUS does quite a bit of work on ICT4D and the social aspects of access to communication technology, so having data on use preferences from around the country is helpful to their research agenda. My own research looks at technologies as proxies for socio-political behavior, aiming to understand how social and political context affects the way people use technology to manage collective action problems during crisis.
The dataset takes inspiration from the work I’ve done with Elizabeth Stones, whose dataset on Kenyan information use and trust inspired my thinking on doing a tailored replication in Samoa. We welcome feedback on the data and its structure, and hope that it can be useful to others working on ICT policy.
I came across an article a friend posted on Facebook yesterday about the work that the MasterCard Foundation is doing to reduce poverty in Africa. Since some of my work is in the ‘techno-innovation 4 development’ sector, I was curious to give it a read. It was everything that makes me *sigh* and/or *shake my fist* at the ‘development innovation’ field.
The article starts from a logical premise that misunderstands what poverty is. Poverty, fundamentally, is when there’s not enough stuff available for all the people in a polity or community to meet their needs. In the modern world we measure capacity to gather the stuff we need in terms of money. I read the article waiting for the part where the MasterCard Foundation addresses the fundamental dilemma of people not having enough money to get the stuff to meet their needs; it never came. There were other things about the article that could be highlighted as problematic, but they are all secondary to the fact that the poverty reduction program being discussed doesn’t address poverty reduction. So what does it address?
“The MasterCard Foundation, with huge assets of $9 billion, is an independent entity without a single MasterCard executive on its board. But its financial work in Africa syncs up nicely with the efforts of Mastercard, the company, to nurture a cashless society as the African continent continues its economic rise.” Basically, they’re developing a market for non-cash monetary services. This is fine; I appreciate the convenience of my debit card, and my bank that allows me to access my money when I’m working abroad. But providing these services in Africa is not poverty reduction, and presenting it as such is at best intellectually dishonest.
There’s a lot more I could say about this article, but the point is that it highlights a consistent problem in the development innovation space. At times we are too easily captivated by ‘solutions’, losing sight of the fundamental causes of the problems we’re trying to solve.
My colleague Dr. Pamina Firchow and I are organizing a panel for next year’s ISA meeting in New Orleans (Feb. 15-21, 2015) on crowdsourcing and the study of violence and violence prevention. Below you’ll find our panel description, and instructions for submitting an abstract to us. We’ll need them by May 23 so we can make decisions on the five papers we will include in the panel proposal that we’ll be submitting before the June 1 deadline. We’d love to see what you all are working on, and look forward to your proposals!
Crowdsourcing Peace and Violence: Methods and Technologies in the Field
Over the last five years crowdsourcing has been increasingly used by researchers and practitioners who study peace and violence. The primary goals of this panel are to discuss examples of successful projects, highlight ongoing challenges of using crowdsourcing and crowdseeding, and situate crowd-based research methodologies within the framework of established social science methods. The technologies used in crowdsourcing are readily available and inexpensive; these include mobile phones, social media, and open source software systems like Ushahidi maps. With all this expansion, however, there have been persistent challenges to using crowdsourcing and crowdseeding for peace and conflict research. Some of these are methodological, including problems with sampling bias, validity, and data integrity. Others are techno-social, such as how people use crowdsourcing technologies in their daily life, privacy concerns, and information security. This panel will feature papers from researchers who are actively using crowdsourcing and crowdseeding methods in their research, continuing the theme of ISA 2014’s panel “Crowdsourcing in the Study of Violence (WD26).”
Panelists will also be invited to submit their papers to be included in a special journal issue on crowdsourcing in violence prevention and peacebuilding. Abstracts for the ISA panel should be submitted to Pamina Firchow (pfirchow[at]nd.edu) and Charles Martin-Shields (cmarti17[at]gmu.edu) by May 23, 2014 via email in Word format. Titles need to be less than 50 words and abstracts need to be less than 200 words. Please include affiliation and contact information in your abstract!