I often get questions about the validity of using statistics to understand conflict and political behavior, especially when using predictive or confirmatory analytic methods. The questions are well founded: a recent article estimated that up to 54% of statistical results in the medical field may be spurious. This should give social scientists pause, since medical researchers work with a far more stable set of variables, in controlled experiments. Conflict researchers, by comparison, analyze human behavior outside the lab, in highly stressful environments. If medical researchers are potentially getting spurious results over half the time in highly controlled settings, what does that tell us about the results conflict researchers get? Are the statistical models even useful?
The first thing to unpack is how statistical results are judged. The key number to look at in any model is the P-value attached to each independent variable. For example, if I want to demonstrate that GDP predicts conflict likelihood, I would make the outbreak of conflict the dependent variable and test whether GDP across cases correlates with conflict outbreak. The P-value will be something between 0 and 1. Roughly speaking, P is the probability of seeing a relationship at least as strong as the one observed if the two variables were in fact unrelated. If P = .05, a pattern that strong would show up by chance alone only 5% of the time; what’s important is that chance becomes an unlikely explanation for the result. When doing statistics you want low P-values. If we run a model correlating GDP with conflict outbreak, we might get a P-value of .01, meaning that if GDP and conflict were actually unrelated, a correlation that strong would appear in only about 1% of samples.
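To make that logic concrete, here is a minimal sketch in Python using only the standard library. The "GDP" and "conflict" figures are simulated, not real data, and the functional form is invented purely for illustration; the P-value is computed by permutation — shuffling one variable and counting how often chance alone produces a correlation as strong as the observed one:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def permutation_p(xs, ys, trials=2000, seed=0):
    """Two-sided permutation P-value: the share of random shuffles whose
    |r| is at least as large as the observed |r|."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)  # work on a copy so the caller's data is untouched
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)  # break any real link between the variables
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Illustrative (made-up) data: a hypothetical GDP-per-capita figure and a
# crude conflict score that loosely tracks low GDP, plus noise.
rng = random.Random(42)
gdp = [rng.uniform(1, 50) for _ in range(40)]
conflict = [10 - 0.15 * g + rng.gauss(0, 1.5) for g in gdp]

r = pearson_r(gdp, conflict)
p = permutation_p(gdp, conflict)
print(f"r = {r:.2f}, permutation p = {p:.3f}")
```

With these simulated inputs the correlation is strongly negative and the permutation P-value is well below .05, i.e. almost no random shuffle matches the observed pattern — which is exactly what a "significant" result means, and nothing more.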
So what’s the problem with P-values? My simple explanation is to say that they measure the relationship between the variables in a model, not the vagaries of how those variables are defined in the human world. This isn’t to say that they’re useless; in complex econometric or predictive models the P-values in the results table indicate whether there is a significant relationship between two variables. From an exploratory analysis perspective, this is useful information. It can indicate where to dig deeper using comparative or ethnographic methods, and can help researchers focus their efforts. But P-values have their limits.
The limit highlighted in the medical statistics article above, and an even bigger one when studying conflict, is the problem of intervening or confounding variables. Look back at the GDP/conflict example. There might be a significant relationship between the two, but GDP is affected by a huge number of sub-variables: it is an aggregate number representing a variety of factors in a country’s economy. I would argue that if GDP and conflict outbreak correlate with a significant P-value, the policy and research response should be “what are the factors that make up GDP that might be causing social or political tension?”, not “how do we raise GDP?” Our model might show a strong statistical relationship between two aggregate numbers; understanding the deeper ‘why’ questions of that relationship has to be done qualitatively if we expect to actually understand why conflict and GDP are related. Basically, there’s a difference between establishing correlation or causation between two things (the math world), and understanding why one thing correlates with or causes the other (the human world).
This post is a plea to my counterparts to step back from the ideological ramparts of quant vs. qual, post-modernists vs. positivists, etc. If my P-value can help an ethnographer better identify where to do field work, and the ethnographer’s observations can help me better understand the intervening variables in my models, then we’re achieving the classic goal of research: the development of knowledge. The lesson for me is that if my goal is a more peaceful world (for the humans), I need to recognize the limits of my quantitative methods (the math world) and accept help from a few good ethnographers.