I am finally able to respond to (and build on) a post by Chris Moore about the problem of mathematicization and formalization of political science, and social science more generally, as it relates to how the social sciences inform real policy issues. As I’m finishing a Fulbright fellowship in Samoa, where I worked specifically on research supporting policy making in the ICT sector, Chris’s analysis was particularly apropos. As I read his post I thought, “indeed, I’ve seen many an article in APSR that falls into the trap he describes”: articles with formal mathematics and econometrics that are logically airtight and use superbly defined instrumental variables, but have little explanatory value outside the ontological bubble of theoretical political science. Why do academics do this? How can they (we… I’m sort of one myself) make academic research useful to non-academics, or at least bring some real-world perspective to the development of theory?
Qian and Nunn’s 2012 article on food aid’s effect on conflict is a good example of how formal methods can drive the question, instead of the question driving the method. Food aid indeed has an effect on conflict, and vice versa. Teasing out a causal path from food aid to conflict, though, requires a chain of logic that, while formally correct, adds a lot of complexity to the argument. The thing that sticks out to me is that they have to use an instrumental variable to make their argument. U.S. wheat production fits the requirements for the variable they use, but do we really think that bumper crops in wheat actually lead to an increased risk of conflict? If so, is the policy prescription for decreasing conflict risk not allowing bumper crops of wheat? In the end they do a fair amount of complex logical modeling, then conclude by saying the data aren’t good enough, that we don’t really know the interactive effects of other aid on conflict, and that to really understand the relationship between food aid and conflict likelihood we need to explore the question in a different way.
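For readers who haven’t sat through an econometrics sequence, here is a rough sketch of the two-stage logic that an instrumental-variable design like this rests on. The variable names and numbers below are invented purely for illustration; they are not Qian and Nunn’s data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy data: an unobserved confounder drives both food aid and conflict,
# which is why naively regressing conflict on food aid gives a biased answer.
confounder = rng.normal(size=n)
wheat_production = rng.normal(size=n)                   # the instrument
food_aid = 0.8 * wheat_production + confounder + rng.normal(size=n)
conflict = 0.5 * food_aid - 1.0 * confounder + rng.normal(size=n)

def ols(y, x):
    """Ordinary least squares of y on x with an intercept; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased, because food aid is correlated with the confounder.
naive_slope = ols(conflict, food_aid)[1]

# Two-stage least squares:
# Stage 1: predict food aid from the instrument (wheat production).
intercept, slope = ols(food_aid, wheat_production)
food_aid_hat = intercept + slope * wheat_production
# Stage 2: regress conflict on the *predicted* food aid.
iv_slope = ols(conflict, food_aid_hat)[1]

print(f"naive OLS estimate: {naive_slope:.2f}")  # pulled well away from the true 0.5
print(f"2SLS estimate:      {iv_slope:.2f}")     # close to the true effect of 0.5
```

The whole weight of the design sits on the stage-one assumption: wheat production has to shift food aid while having no other route to conflict, which is exactly the assumption the bumper-crop question above is poking at.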
Is there value in this type of exercise? Perhaps, but it’s probably limited to the small circle of academics who specialize in this kind of intellectual exercise. Is this article useful to non-specialist readers or policy makers? Highly (99%) unlikely. Most policy makers don’t have the mathematical and statistical training to really understand the authors’ empirical strategy, and those who do probably don’t have time to really digest it. That’s a fundamental problem, and it’s compounded by the use of an instrumental variable, which is a pretty abstract thing in itself. It’s not that the analysis is wrong; it’s that when we step outside the methodological confines within which the authors are working, it begins to lack inherent value. I don’t say this to shame or castigate Qian and Nunn; academics write for their peers, since that’s who gives them job security.
So how do we derive value from this work if we want to inform policy? One way is for academic departments to encourage doctoral students to try policy work during the summers of the coursework phase. The summer between the first and second years is a good time for this: it’s pre-dissertation, so a student isn’t in research mode yet, and the lessons learned during a summer in the field can feed into the writing of a dissertation. As for faculty, departments can look for ways to reward writing for a general audience about one’s field of specialization. Making public intellectualism part of the tenure file would probably be welcomed by many of the academics I know, who have a passion for their fields and would happily share their insights with the public.
This has the added benefit of reducing groupthink, or herd mentality, to which academics are as prone as any other professional group. Possibly more so, since academic work is internally referential (academics cite each other). It’s easy in such an environment to stop asking why we’re adding a variable to a statistical analysis, or what value it has in a practical sense. Having to step out of the academic intellectual bubble, whether as a summer intern or by writing an op-ed a non-expert has to be able to understand, is a chance to be in the field, physically or intellectually, and reassess why we’re analyzing particular variables and using particular methods.
At the very least it gives academics some raw material to take back to the lab, even if the ‘field’ is a disconcerting, statistically noisy place.
Oh, instrumental variables, the forbidden and often inedible fruit. I held my own through so many econometrics classes… only to learn that oh so much is done for a love of the method itself. We finished by taking apart the top studies: Card and Krueger on the minimum wage, some study that very cleverly used tropical death rates or something to look at the effects of colonial institutions. I dropped the series before we moved on to religion, even though the professor was only getting more excited.
I think you’re completely right that Qian and Nunn shouldn’t be castigated. As social scientists we know that institutions matter, and there is a certain business to academia, as in any other field, in that to a large extent what gets done is what gets measured and rewarded. And both your recommendations are right on. Getting students out so they at least have a feeling for what it is they are trying to represent and measure with their imperfect models and data is essential before they start climbing the ladder. And writing in natural, as opposed to technical, language is, quite frankly, good intellectual practice. Jargon can help you formulate a question, and is good shorthand among experts, but it can also shield your results from a lot of very important “So what?”s and “What if?”s.
There is also the question of role models, on which note you might enjoy my review of an article by a GMU professor on Elinor Ostrom.
Anyway, I really am enjoying this conversation and hope we can keep it going. I certainly have a lot more to say and hope that you will read and follow along.
———
Side geeky technical note: In Qian and Nunn’s defense, isn’t the fact that we don’t believe bumper crops of wheat lead to conflict exactly what allows them to use wheat production as an instrumental variable? (Right? It has been seven years since I studied econometrics seriously, but I recall the very irrelevance of the selected instrumental variable being part of the “Wow!” factor.)
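To check my own memory, here is a toy simulation, with every name and number invented (this is nothing like their actual data): the instrument doesn’t need to be irrelevant full stop; it needs to predict food aid (relevance) while reaching conflict only through food aid (the exclusion restriction), which is exactly the assumption your bumper-crop question leans on.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
wheat = rng.normal(size=n)                    # hypothetical instrument
food_aid = 0.8 * wheat + rng.normal(size=n)   # relevance: wheat shifts aid

def iv_slope(conflict):
    """Simple Wald/IV estimate: cov(instrument, outcome) / cov(instrument, treatment)."""
    return np.cov(wheat, conflict)[0, 1] / np.cov(wheat, food_aid)[0, 1]

true_effect = 0.5
conflict_ok = true_effect * food_aid + rng.normal(size=n)                  # exclusion holds
conflict_bad = true_effect * food_aid + 0.7 * wheat + rng.normal(size=n)   # wheat hits conflict directly

print(f"exclusion holds:    {iv_slope(conflict_ok):.2f}")   # ~0.50, recovers the true effect
print(f"exclusion violated: {iv_slope(conflict_bad):.2f}")  # ~1.38, badly off
```

So the “Wow!” comes less from the instrument being irrelevant than from the claim that its only road to the outcome runs through the treatment.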
Oops, not as good a self-promoter as I aspire to be. Here is the link to my blog piece on Elinor Ostrom’s methods: http://whiskeyatwater.com/2013/11/19/a-social-scientist-on-patrol/