I published a post yesterday about how administrative data is produced. In the end I claimed that data gathering is an inherently political process. Far from being comparable, scientifically standardized representations of general behavior, public data and statistics are imbued with all the vagaries and unique socio-administrative preferences of the country or locality that collects them.
Administrative criminal statistics are an interesting starting point for understanding how data reflects the vagaries of administrative structures. Anyone who thought "I would really like to compare crime rates across European Union member states" would probably be surprised to learn that, outside of homicide rates, meaningful cross-country comparison is effectively impossible. This is not only because countries define crimes differently (though the UNODC has done a lot of work to standardize definitions), but because the actual events of crime are counted differently. For example, Germany uses what's called "principal offense" counting: when multiple crimes are committed at the same time, the final statistics count only the most serious one. Belgium doesn't use this counting method, so its crime statistics look much higher than Germany's on paper. The University of Lausanne's Marcelo Aebi, arguably the expert on comparative criminal statistics, published an excellent paper on comparing criminal statistics and the problems posed by different counting procedures (pages 17-18 for those who just want the gist).
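To make the counting-rule difference concrete, here is a minimal sketch of how the same set of incidents produces different totals under per-offense counting versus principal-offense counting. The offense names, severity ranking, and incidents below are hypothetical illustrations, not real statistical classifications:

```python
# Hypothetical severity ranking: higher number = more serious offense.
SEVERITY = {"theft": 1, "burglary": 2, "assault": 3, "robbery": 4}

# Each incident is the list of offenses committed in one event.
incidents = [
    ["theft"],
    ["burglary", "assault"],             # two offenses in one event
    ["theft", "robbery", "assault"],     # three offenses in one event
]

def count_all_offenses(incidents):
    """Count every offense separately (per-offense counting)."""
    return sum(len(event) for event in incidents)

def count_principal_offenses(incidents):
    """Count one offense per event, keeping only the most serious
    (principal-offense counting, as described for Germany)."""
    return len([max(event, key=SEVERITY.get) for event in incidents])

print(count_all_offenses(incidents))        # 6 offenses recorded
print(count_principal_offenses(incidents))  # 3 events recorded
```

The same underlying events yield a total twice as high under one rule as under the other, which is exactly why raw cross-country totals mislead.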
Aebi makes a crucial point in the conclusion of his article: statistics are social constructs, and each society has a different way of constructing them. Statistics represent the things we have valued. The past tense is important here; when we see data it's showing us the past (the 2016 Global Peace Index uses numbers from 2015, for example), and thus it represents what we valued at the time. Data can be used to build and test models of potential future events, but there is no such thing as 'future data'. The value in data is that it can help citizens and policy makers understand what worked, or didn't work, so that policies and behaviors can be adjusted going forward.
Of course, institutional and administrative behavior is often resistant to trends in data (or very comfortable with data that supports the status quo). This can be for valid, or at least non-nefarious, reasons. For example, the Sustainable Development Goals (SDGs) rely heavily on GDP as an economic indicator. The SDGs are supposed to represent sustainable growth and social development into the future, so it's interesting that they use an economic indicator that many experts and organizations view as quite flawed.
Why would the SDGs rely so heavily on GDP then? For one, it's a reliable indicator – everyone at least has some vague idea of what it represents. Two, it's got a long history – we have tracked it for decades. Three, most of the people who created the SDGs come from backgrounds where GDP is a standard indicator – they pick targets and data based on their professional and institutional experience. They didn't do this because they're jerks. They did it because GDP represents the standard, if flawed, way that we measure economic performance. They probably also did it because gathering new data is an expensive, time-consuming process that everyone says is important [for someone else to pay for].
This is all to say: if you want better public data, or at least want to understand why the public data you have seems to reflect the status quo instead of telling you how to break out of it, it's imperative to understand the qualitative political, social and administrative behaviors inherent to the place or people you're researching. Once you've got that, you can start the political process of organizing the resources to get newer, better data to formulate newer, better public policy and/or smash the status quo.