To wrap up the year we published a Discussion Paper on state fragility and forced displacement – this will be revised further in January for publication in a peer-reviewed journal, so comments and critiques are welcome!
My colleague Nicholas Bodanac and I have been working on this for about a year now, and we finally have a published version of our paper where we argue that a digital turn in peacekeeping can have positive economic effects in post-conflict settings. It’s currently online at International Peacekeeping – anyone who wants the full text just needs to send a message, and I’ll be happy to share!
This was a challenging paper to take from start to finish – it actually started as my PhD qualifying exam question back in 2013. Later this week I’ll write a post about the writing process and the challenge of taking the idea from exam response to finished article.
This week I was the featured writer for the Deutsches Institut für Entwicklungspolitik/German Development Institute’s Current Column. I shared my thoughts and observations on how development and technical cooperation can support livelihoods in countries where people may otherwise migrate, often taking on extraordinary risks, to seek work and economic opportunities. A German version is available here.
Enjoy, and thanks to DIE-GDI for posting it!
I’m excited to announce that I’ll be joining the Deutsches Institut für Entwicklungspolitik (German Institute for Development Policy) in Bonn, Germany! I’ll be working in their Governance, Statehood, and Security group, doing research and providing policy advice on forced displacement in fragile and conflict-affected countries.
I’m excited to have the opportunity to put my skills and knowledge to use on this topic – I’ll be able to continue applying my expertise in technology and development, while also working with experts on migration, geography and economics to produce policy-relevant scientific research.
The intersection of academia and public policy is the space I most enjoyed occupying during my PhD studies, so I’m thrilled to be in a place where my research can speak directly to critical policy issues in the development and peacebuilding fields!
While scanning Twitter this morning I came across a post from Duncan Green that caught my eye:
— Duncan Green (@fp2p) September 29, 2016
The blogpost he was linking to raises some excellent questions about the benefits of closer relations between academics and practitioners in the development space, and how to increase the overlapping parts of the academic/practice Venn diagram. It resonated with me because if I hadn’t had a close relationship with institutions like the World Bank, TechChange Inc, UNDP, the U.S. Institute of Peace and many smaller NGOs during my PhD studies, I wouldn’t have been able to develop my dissertation topic, gather my dissertation data, or have a policy audience that would find my dissertation results useful. Green proposes some good points about how to create these linkages that I completely agree with, and as an academic who has spent time as a practitioner in INGOs and IOs I felt motivated to chime in.
So why is it so hard to get academics to work with international organizations, NGOs and policy bodies? Green’s post makes two major points about larger mandates (knowledge for the sake of knowledge versus knowledge to make evidence-based decisions) and time scale (academic research isn’t responsive to what’s happening in real time) that I think will forever dog the ability of practitioners and academics to work together. While more academic departments are recognizing the value of research that feeds into policy development, for the most part academics receive little to no benefit (and indeed sometimes incur a cost) for engaging with government/international agencies and INGOs. This doesn’t mean there’s no space to collaborate, but it’s an issue that I think will always be there.
This gets at a major point that is critical for folks at INGOs and IOs to recognize. Academia is quite stringent about what counts toward advancement. With some rare exceptions, there’s little if any institutional reward for working on policy issues. Even if a department is supportive of it, tenure and rank decisions are made at the university level, so any policy work has to come on top of the expected publications and research funding that count in the eyes of the wider university. I’d argue the way to work around this as an INGO is to focus on relationships with PhD students and senior faculty. PhD students benefit from being ‘out there’ and having their research seen more broadly – it’s been a huge help in my academic experience to have a wide range of contacts in INGOs and IOs who know my work. For senior (tenured) faculty the advantage is building relationships that can be useful to their students (many of whom won’t go into academia, but would love to work at Oxfam!), as well as having access to things like public service sabbaticals, which make it easier for them to take time off the research production line.
Indeed, Green mentions having a PhD student who was developing his topic in coordination with an Oxfam thematic policy. This is a fantastic way to bring academia and practice closer together, and it’s what I did a lot of during my PhD. It was great: I got the occasional nice consultant’s payday, got interesting feedback from non-academics, helped with policy issues, and still did work that is academically relevant. The main issue I found, though, was that none of this was systematic – my relationship with every INGO, IO, or agency was based on a specific personal connection. As long as the colleague was in that agency, I had a good relationship with that agency. When they left, the relationship with the agency ended (or more accurately, followed that colleague to their new agency). There are some good systemic efforts in the U.S. Government to bring academics into the policy fold, such as the AAAS fellowships, which place social and natural science PhDs in government agencies for 1-2 years. To bring academics and policy people closer, there need to be more of these kinds of system-level programs in place that cover the costs of having researchers working in organizations that are funded primarily to respond to current events.
It’s not impossible to work with junior faculty, but this is where understanding the idiosyncrasies of academia is crucial. One example is the idea of 50/50 action/research funding; it’s a good idea, but it’s hard to pull off in a way that also benefits the academic partner. If an academic puts in many hours drafting a proposal and the only money they see is through a consulting arrangement, then it doesn’t count as grant money for them departmentally. If the research that then comes out of that money isn’t peer-review grade, then they’ve potentially spent months working on a proposal and project that won’t move them toward tenure or their next academic rank. Most academic departments won’t count consulting work, even if it’s in-field, toward a junior faculty member’s tenure file. The way to solve this is for the INGO and an academic department to be co-applicants, so that the research side of the funding goes straight to the academic department with the academic’s name on it as a principal investigator. This allows the academic, at the very least, to count it on their CV as research money brought into the university, even if the research outputs never see peer review.
The discussion about how to find new, better ways to bring academia and practice into a mutually beneficial relationship is important – a lot of public money goes into research, and the results should have public as well as theoretical value. While many departments are recognizing the importance of professors being involved in real-world work, it’s also important that INGOs and NGOs recognize that academia places very specific, often idiosyncratic, demands on researchers. By understanding those demands and working with academics to shape projects that meet them, I think there will be many more opportunities for academia and the practice community to create an increasingly overlapping Venn diagram.
The last two posts I wrote focused on the social and political structures that drive data collection and availability. In those posts I was primarily talking about statistics in wealthy countries, as well as developing countries that aren’t affected by conflict or violence. When it comes to countries beset by widespread conflict and violence, all the standard administrative structures that would normally gather, process and post data are either so compromised by the politics of conflict that the data can’t be trusted, or, worse, they just don’t exist. Without human security and reliable government structures, talking about data selection and collection is a futile exercise.
Conflict data, compared to other administrative data, is a bit of a mash-up. There are long-term data collection projects like the Correlates of War project and the UCDP data program, both of which measure macro issues in conflict and peace such as combatant types, conflict typologies, and fatalities. Because both projects have long timelines in their data they are considered the best resources for quantitatively studying violence and war. Newer data programs include the Armed Conflict Location and Event Data project and the Global Database of Events, Language, and Tone. These projects take advantage of geographic and internet-based data sources to examine the geographic elements of conflict. There are other conflict data projects that use communication technologies to gather local-level data on conflict and peace, including Voix des Kivus and the Everyday Peace Indicators project.
This is just a sample of projects and programs, but the main thing to note is that they are generally hosted by universities and the data they gather is oriented toward research as opposed to public administration. Administrative data is obviously a different animal than research data (though researchers often use administrative data and vice versa). To be useful it has to be consistent, statistically valid in terms of sampling and collection technique, and available through some sort of website or institutional application. If the aim of the international community is to measure the twelve Goal 16 Targets in the Sustainable Development Goals, particularly in countries affected by conflict, international organizations and donors need to focus on how to develop the structures that collect administrative data.
We can look to existing models of how to gather data, particularly sensitive data on things like violence. Household surveys are a core tool for gathering administrative data, but gathering representative samples takes a lot of work. It also requires a stable population and reliable census data. For example, if a statistical office gets tasked by a ministry of justice to run a survey on crime victimization, the stats office would need to interview as many victims as possible to develop sampling tranches. The U.S. Bureau of Justice Statistics National Crime Victimization Survey is an excellent example of a large-scale national survey. One only needs to read the methodology section to grasp how large an undertaking this is: the government needs the capacity to interview over 150,000 respondents twice a year, citizens need to be stable enough to have a household, and policing data needs to be good enough at the local level to identify victims of crime. Reliable administrative statistics, especially about sensitive topics like crime victimization and violence, require three things: a functional government, stable populations, and effective local data-collection capacity.
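To see why representative sampling is such an undertaking, the basic weighting arithmetic of a stratified household survey can be sketched in a few lines. The strata, population counts, and victimization numbers below are entirely hypothetical (not the NCVS’s actual design); the point is only that each sampled household must “stand in” for a known share of the population, which requires reliable census data to begin with.

```python
# Toy sketch of design weights in a stratified household survey.
# All strata and counts here are hypothetical illustrations.
strata = {
    # stratum: (households in population, households sampled, victims found)
    "urban": (600_000, 300, 45),
    "rural": (400_000, 100, 5),
}

total_pop = sum(pop for pop, _, _ in strata.values())

# Each sampled household represents pop/sampled households in its stratum.
weighted_victims = sum(
    (pop / sampled) * victims for pop, sampled, victims in strata.values()
)
weighted_rate = weighted_victims / total_pop

# A naive unweighted rate over-represents the heavily sampled urban stratum.
unweighted_rate = sum(v for _, _, v in strata.values()) / sum(
    s for _, s, _ in strata.values()
)

print(f"weighted: {weighted_rate:.1%}, unweighted: {unweighted_rate:.1%}")
```

Without a census to supply the population counts in the numerators, none of this arithmetic is possible – which is exactly the capacity that conflict-affected states lack.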
While many countries can measure the Goal 16 Targets, countries affected by conflict and violence (the ones that we should be most interested in from a peacebuilding perspective) fundamentally lack the political and social structures necessary to gather and provide reliable administrative data. Proposing a solution like “establish a functioning state with solid data collection and output processes at the local and national level” sounds comically simplistic, but for many conflict-affected states this is the level of discussion – talking about what kind of data to collect is an academic exercise unless issues of basic security, population stability, and institutional capacity are dealt with first.
I ended up jumping into a Twitter conversation started by international development journalist Tom Murphy about how Rwanda changed the methodology for its Integrated Household Living Conditions survey (EICV), and thus demonstrated that its poverty rate had decreased. The problem is that the new methodology essentially redefines ‘poverty’ to make the numbers look good; using the previous EICV methodology, it appears that poverty hasn’t decreased but has actually increased by 6%. While a number of people have already picked apart the methodological problems, is this really a methodological problem or part of a wider indictment of how donor agencies determine success and manage their human resources? Are the people in donor agencies dupes, cynics or both? Neither, I reckon. I think they’re just overworked and probably undertrained in statistics to get to the root of the story, and have little incentive to do so anyway.
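The mechanics of how a redefined poverty line can flip a trend are easy to demonstrate with a toy example. The consumption figures and poverty lines below are invented for illustration – they are not Rwanda’s actual data – but they show how the same two surveys can tell opposite stories depending on where the line is drawn.

```python
# Hypothetical per-person consumption data for two survey rounds.
# None of these numbers are from the actual EICV.
survey_2011 = [95, 110, 130, 150, 180, 210, 260, 340, 450, 700]
survey_2014 = [98, 120, 155, 158, 159, 215, 270, 360, 480, 760]

def headcount(consumption, poverty_line):
    """Share of people whose consumption falls below the poverty line."""
    poor = sum(1 for c in consumption if c < poverty_line)
    return poor / len(consumption)

old_line, new_line = 160, 140  # moving the line changes the story

for line in (old_line, new_line):
    p11 = headcount(survey_2011, line)
    p14 = headcount(survey_2014, line)
    print(f"line={line}: 2011 poverty {p11:.0%}, 2014 poverty {p14:.0%}")
```

With the old line, measured poverty rises between rounds; with the new, lower line, it falls – even though the underlying consumption data are identical in both calculations. This is why methodology changes between survey rounds deserve close scrutiny.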
Filip Reyntjens does a really nice job of breaking down the problems with Rwanda’s EICV. He makes some good points about the problems with changing the methodology, and in the Twitter discussion many other people highlighted technical problems with the new definitions of poverty used in the EICV. While these technical issues are important, the other problem is what the survey means to the stakeholders. This group includes the Rwandan government, donor agencies, and DAC governments. Reyntjens notes that the numbers in the updated EICV make the Rwandan government look good, and by extension make donor agencies look good. Everyone wins (except for the Rwandans who are still in poverty). Setting aside why the Rwandan government would want to modify a survey to make its baseline poverty statistics look better, what do we make of the donor community’s attitude? Are the various aid and development professionals who guide policy just cynical bureaucrats happy to tick the box marked “Rwanda got better”?
Some probably are, but in my experience most development professionals take their jobs seriously and want to see people’s lives improve. So what would lead otherwise upstanding development professionals to ignore potentially blatant number cooking by a beneficiary government? Overwork and a lack of statistical training, most likely. The workloads that staff at donor agencies deal with are immense. Combine that with a tendency within agencies to stovepipe the statisticians away from the policy makers and you end up with overburdened staff who may not have the training to quickly digest the vagaries of a survey’s methodology or analyze why certain changes happen in data from year to year.
It shouldn’t surprise anyone that Rwanda’s government took the opportunity to redefine the methodology that signals how it’s doing at reducing poverty. The government stays in the good graces of allied governments and donor agencies by ‘hitting’ its poverty reduction targets. But if we’re going to demand that donor agencies be prepared to call out number cooking, the agencies need to bring on more staff to spread the workload and make sure that statistical capacity isn’t stovepiped away from the policy teams. Unfortunately the trend in donor agency funding right now is to focus on ‘efficiency’ above all else (read: too few people doing too much work), which means frayed policy staff will check the “hit the targets” box and the Rwandan government will continue cooking its data to keep donor money flowing.
Unfortunately the last few months have been fairly low output in terms of blog posts. This can be credited to resettling after returning from Samoa, getting back to work with the tech community in D.C., and of course getting a dissertation written. I have had the chance to get myself on a few panels this month and next to discuss my research, though. I’ll be joined by some awesome people too, so hopefully if you’re in D.C. you can come out and join us!
Later in November: Dissertation proposal defense at the School for Conflict Analysis and Resolution (exact date TBD). Open to the public!
Hopefully you can make it out to one or more of these, I think they’ll be really interesting!
I am finally able to respond to (and add to) a post by Chris Moore about the problem of mathematization and formalization of political science, and social science more generally, as it relates to how the social sciences inform real policy issues. As I’m finishing a Fulbright fellowship in Samoa, where I worked specifically on research supporting policy making in the ICT sector, Chris’s analysis was particularly apropos. As I read his post I thought “indeed, I’ve seen many an article in APSR that falls into the trap he describes” – articles with formal mathematics and econometrics that are logically infallible and use superbly defined instrumental variables, but have little explanatory value outside of the ontological bubble of theoretical political science. Why do academics do this? How can they (we…I’m sort of one myself) make academic research useful to non-academics, or at least bring some real-world perspective to the development of theory?
Qian and Nunn’s 2012 article on food aid’s effect on conflict is a good example of how formal methods can drive the question, instead of the question driving the method. Food aid indeed has an effect on conflict, and vice versa. To tease out a causal path from food aid to conflict, though, requires a logical stream that, while formally correct, adds a lot of complexity to the argument. The thing that sticks out to me is that they have to use an instrumental variable to make their argument. U.S. wheat production fits the requirements to be the variable they use, but do we really think that bumper crops in wheat actually lead to an increased risk of conflict? If so, is the policy prescription for decreasing conflict risk not allowing bumper crops of wheat? In the end they do a fair amount of complex logical modeling, then conclude by saying the data’s not good enough, we don’t really know the interactive effects of other aid on conflict, and that to really understand the relationship between food aid and conflict likelihood we need to explore the question in a different way.
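For readers unfamiliar with the technique, the instrumental-variable logic can be sketched with simulated data. This is a generic two-stage least squares illustration, not a reproduction of the paper’s actual model or data: the instrument plays the role of something like weather-driven wheat output, which shifts food aid but (by assumption) affects conflict only through food aid.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: instrument z shifts the endogenous regressor (food aid)
# but affects the outcome (conflict) only through it. An unobserved
# confounder u drives both food aid and conflict, biasing naive OLS.
z = rng.normal(size=n)                                     # instrument
u = rng.normal(size=n)                                     # confounder
food_aid = 0.8 * z + 0.5 * u + rng.normal(size=n)          # endogenous
conflict = 0.3 * food_aid - 0.7 * u + rng.normal(size=n)   # true effect 0.3

def ols_slope(x, y):
    """Slope from a one-regressor OLS with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS is pulled away from 0.3 by the confounder u.
naive = ols_slope(food_aid, conflict)

# 2SLS: stage 1 predicts food_aid from z; stage 2 regresses conflict
# on the prediction, recovering the causal slope.
stage1 = np.column_stack([np.ones_like(z), z])
fitted_aid = stage1 @ np.linalg.lstsq(stage1, food_aid, rcond=None)[0]
iv = ols_slope(fitted_aid, conflict)

print(f"naive OLS: {naive:.2f}, 2SLS: {iv:.2f}, truth: 0.30")
```

Even in this clean simulation, the estimate is only as credible as the exclusion restriction – the untestable assumption that the instrument touches the outcome through no other channel – which is exactly the kind of abstraction that makes the approach hard to sell to a policy audience.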
Is there value in this type of exercise? Perhaps, but it’s probably limited to the small number of academics who specialize in this type of intellectual exercise. Is this article useful to non-specialist readers or policy makers? Highly (99%) unlikely. Most policy makers don’t have the mathematical/statistical training to really understand the authors’ empirical strategy, and those who do probably don’t have time to really digest it. That’s a fundamental problem, but it’s compounded by the use of an instrumental variable, which is a pretty abstract thing in itself. It’s not that the analysis is wrong; it’s that when we step outside the methodological confines the authors are working in, it begins to lack inherent value. I don’t say this to shame or castigate Qian and Nunn; academics write for their peers, since that’s who gives them job security.
So how do we derive value from this work if we want to inform policy? One way is for academic departments to encourage doctoral students to try policy work during the summers of the coursework phase. The summer between years one and two is a good time for this: it’s pre-dissertation, so a student isn’t in research mode yet, and the lessons learned during a summer in the field can feed into the writing of a dissertation. For faculty, departments can look for ways to reward writing for a general audience (about one’s field of specialization). Making public intellectualism part of the tenure file would probably be welcomed by many of the academics I know, who have a passion for their fields and would happily share their insights with the public.
This has the added benefit of reducing groupthink or herd mentality, to which academics, like any other professional group, are prone. Possibly more so, since academic work is internally referential (academics cite each other). It’s easy in such an environment to stop asking why we’re adding a variable to a statistical analysis, or what value it has in a practical sense. Stepping out of the academic intellectual bubble, whether as a summer intern or by writing an op-ed that has to be understood by a non-expert, is a chance to be in the field either physically or intellectually and reassess why we’re analyzing particular variables and using particular methods.
At the very least it gives academics some raw material to take back to the lab, even if the ‘field’ is a disconcerting, statistically noisy place.
I came across an article a friend posted on Facebook yesterday about the work that the MasterCard Foundation is doing to reduce poverty in Africa. Since some of my work is in the ‘techno-innovation 4 development’ sector, I was curious to give it a read. It was everything that makes me *sigh* and/or *shake my fist* at the ‘development innovation’ field.
The article starts from a logical premise that misunderstands what poverty is. Poverty, fundamentally, is when there’s not enough stuff available for all the people in a polity or community to meet their needs. In the modern world we measure capacity to gather the stuff we need in terms of money. I read the article waiting for the part where the MasterCard Foundation addresses the fundamental dilemma of people not having enough money to get the stuff to meet their needs; it never came. There were other things about the article that could be highlighted as problematic, but they are all secondary to the fact that the poverty reduction program being discussed doesn’t address poverty reduction. So what does it address?
“The MasterCard Foundation, with huge assets of $9 billion, is an independent entity without a single MasterCard executive on its board. But its financial work in Africa syncs up nicely with the efforts of Mastercard, the company, to nurture a cashless society as the African continent continues its economic rise.” Basically, they’re developing a market for non-cash monetary services. This is fine; I appreciate the convenience of my debit card, and my bank that allows me to access my money when I’m working abroad. But providing these services in Africa is not poverty reduction, and presenting it as such is at best intellectually dishonest.
There’s a lot more I could say about this article, but the point is that it highlights a consistent problem in the development innovation space. At times we are too easily captivated by ‘solutions’, losing sight of the fundamental causes of the problems we’re trying to solve.