Some Observations on Development and Migration

This week I was the featured writer for the Deutsches Institut für Entwicklungspolitik/German Development Institute’s Current Column. I shared my thoughts and observations on how development and technical cooperation can support livelihoods in countries where people may otherwise migrate, often taking on extraordinary risks, to seek work and economic opportunities. A German version is available here.

Enjoy, and thanks to DIE-GDI for posting it!


Processing the Election

After an absolutely searing U.S. election season, Donald Trump has won. This result has defied everything we thought we knew in political science, from how parties manage themselves and their candidates to how likely voters make their selections. It also laid bare things that we’re going to have to figure out as Americans. I’ll just give a few overarching takes, since it’s all rather early days and I think everyone, on both sides of the aisle, is disoriented and exhausted.

What will Trump do? This is by far the question that most animates my fear and anger over this election. Fundamentally, I don’t know what Trump will do policy-wise. And it’s not just culture-war issues – I’m deeply concerned about how Trump will manage the boring but elemental aspects of public policy. Would he allow bondholders to take a haircut on their Treasury bonds? This is the kind of boring, in-the-background policy issue that could irreparably wreck the global economy, and for the first time I see a president-elect whom I don’t fundamentally trust to handle these decisions. I could have my mind changed (I don’t doubt that he’s smart enough to handle them), but in terms of temperament and outlook I have yet to be convinced.

What does this mean for political science? I responded to a Twitter post on this question the other day.

My response was that at times political science feels like it has increased its focus on quantitative methods and experiments, especially econometric and regression techniques, and is engaging in methodological navel gazing. Daniel Drezner, a Tufts professor and one of the academics most active in engaging the public through non-academic media, has also lamented what he’s seen as a retreat from theorization in political science. I like doing quant research as much as the next person who likes doing quant research, but I also think much of the debate in political science is being stunted by an increasing lack of qualitative research. I’ll write another post about what I saw missing in the models and survey techniques used during this cycle, but for research that speaks to what I think really animated this election I would suggest reading Matthew Desmond’s ethnographic work on poverty in the United States. It’s my hope that this election cycle jolts political science out of its quantitative gravity well, reinvigorates the demand for good qualitative and mixed-method research, and spurs renewed theorization.

What’s a bizarre way Trump could be a good president? I’ve read a few things basically saying he might turn out to be a functional president. I can’t disagree with those, but I also think there’s something else at work here. I don’t get the sense that Trump has a personal political center of gravity – my perception, having watched him, is that he’s a performer who reflects and acts on what he picks up from his audiences. If the loudest of the audience members are the KKK and white nationalists, that’s what he reflects (which is terrifying). If the loudest, though, are the people who voted for him and aren’t racists, misogynists, or anti-Semites – people with fundamental and valid fears about being left behind, who were willing to overlook all the terrible things the candidate animated – and they demand that he rebuke the worst of his following and actually find ways to mend bridges, he might reflect that instead. This might actually lead to some progress. Alternatively, I might have totally misjudged him (it wouldn’t be the first time this cycle a political scientist was wrong) and he’s actually a dedicated fascist/white nationalist demagogue. *I really hope that’s not the case.*

What’s a surprising way this election has increased my political dialogue? Since the end of the election I have spoken with family members for the first time in months (or years, in some cases) about their political wants and desires, and been able to articulate my political position to them as well. We actually all listened to each other, and while some threads got a bit contentious, people actively kept it civil. I brought up why an African American, Latino, female, or LGBT voter might be both terrified at the outcome and, right now, very distrustful of someone they know who is otherwise outwardly decent but voted for Trump, and I think this resonated with my more conservative family members. The quid pro quo is that I’m willing to hear them out too. This is a corollary to the previous paragraph; most of your friends and family aren’t KKK supporters, Karl Marx incarnate, or the Illuminati. There’s a lot more political overlap between Americans than I think we’re generally led to believe by our Facebook/Twitter echo chambers, so now is as good a time as any to reach out.

In the end, the best I can do is appeal to the best in us. We’re all going to need it.

Collective (Digital) Action During a Coup

The events in Turkey last night were nothing short of astounding – the world watched a NATO country, in which all was normal as late as happy hour, descend into political chaos as a coup was attempted, and by morning it had returned to a tenuous balance with President Erdoğan still (apparently) in charge. While the final outcome was driven by the loyalists being more militarily and politically powerful than the anti-Erdoğan contingent, the population’s perceptions of where authority lies, and thus what action to take, were critical as well. The process of meaning making among the population about what was going on, and the importance of both mass communication and authority in settling the events, mirrored some of the findings in my dissertation research. People maximize their information gathering after a shock to make meaning from events, and as the information cycle evolves, the authority of sources is identified and collective decisions are made. Last night’s events were live tweeted, shared on Facebook in real time, and broadcast through all manner of media throughout the night. This culminated with President Erdoğan taking to FaceTime to give an interview and reassert his control over the country.

At the end of the night it wasn’t the broadcast media that directly beamed Erdoğan’s message out; it was him on an iPhone, FaceTiming remotely. In the partial information of the social media and news churn, the person endowed with the authority to make decisive calls cut through and focused both the discussion and the collective action going forward. The medium that he used was secondary to the importance of having a voice of authority broadcast into a chaotic information environment. While the situation is still fluid, a quick check of the BBC, Washington Post, NY Times, LA Times, Süddeutsche Zeitung and Paris Match front pages shows all of them reporting that the coup has failed. That’s the power of authority, even in a complex media churn.

Kieran Healy, a sociologist at Duke University, had an interesting take on the role of internet-based media in this coup. He points out that there were people downplaying the role of social media and broadcast technology in preventing the coup, and he counters the argument with an interesting comparative analysis of King Juan Carlos’s role in stopping the attempted 23-F coup in Spain in 1981. But what really caught my eye in his post was his discussion of the importance of mass communication in supporting collective action processes. Social media and the digital information environment played a huge role in how this attempted coup played out, and the interplay between authority and information medium was key in this process.

My dissertation research looked specifically at people’s preferences for information sources and media during shocks, such as election violence, natural disasters, or, in this case, an attempted coup. Social scientists, such as James Fearon and David Laitin, know that people on the whole don’t like chaos and in most cases will find ways to cooperate and maintain stability. In my research people do this by developing a common conception of the event, then identifying the sources of authority and the media through which to find their message. In a modern, hyper-connected digital environment people can now participate in massive collective action processes because everyone has multiple options for information gathering and sharing. This connectivity keeps people involved in a collective meaning making process – even when people didn’t know exactly what was going on throughout the night, they were engaged and the narrative remained fluid. In the case of Turkey the military could never consolidate the message.

With a fluid narrative, people wait to consolidate into a collective action – there’s not enough information to decide whether to submit to the military or stick with the government. Overall it seems people preferred the government, and in spite of a broadcast media shutdown, once Erdoğan got his message out it spread quickly and provided enough information symmetry to turn the collective tide against the military faction behind the coup attempt. What last night’s attempted coup demonstrated is the importance of digital media in preventing the military from consolidating the narrative enough to control the populace, as well as the power of authority to cut through a chaotic information space and solidify collective action during a shock to the political and social system.


Where Are the Legislators (Who Ostensibly Pay for Data)?

I watched from a distance on Twitter as the World Bank hosted its annual data event. I would love to have attended – the participants were a pretty amazing collection of economists, data professionals and academics. This tweet seemed to resonate with a theme I’ve been focused on for the last week or so: there is a data shortage such that even the most advanced countries can’t measure the Sustainable Development Goals (SDGs).

The European Statistical System can only produce around 1/3 of #SDG indicators, according to Pieter Everaers of @EU_Eurostat #ABCDEwb — Neil Fantom (@neilfantom) June 21, 2016

I replied to this tweet with a query about whether there was evidence of political will among EU member states to actually collect this data. In keeping with the “data is political” line that I started on last week, political will is important because the European Statistical System relies heavily on EU member states’ statistics offices to provide data. The above tweet highlights two things for me – there needs to be a conversation about where the existing data comes from, and there need to be MPs or MEPs (legislative representatives) at meetings like the World Bank’s annual data event.

Since Eurostat and the European Statistical System were the topic of the tweet, I’ll focus on how they gather statistics. Most of my expertise is in their social and crime stats so I’ll speak to those primarily, but it’s important to note that the quality and quantity of any statistic are based on its importance to the collector and end user. Eurostat got its start as a hub for data on the coal and steel industries in the 1950s, and while its mandate has grown significantly, the quality and density of the economic and business indicators hosted on its data site reflect its founding purpose. Member states provide good economic data because states have decided that trade is important – there is a compelling political reason to provide these statistics. Much of this data is available at high levels of granularity, down to the NUTS 3 level. It’s mostly eye-wateringly boring agricultural, land use, and industrial data, but it’s the kind of stuff that’s important for keeping what is primarily an economic union running smoothly(-ish).

If we compare Eurostat’s economic data to its social and crime data, the quality and coverage decrease notably. This is when it’s important to ask where the data comes from and how it’s gathered – if 2/3 of the data necessary to measure the SDGs isn’t available for Europe (let alone, say, the Central African Republic), we need to be thinking clearly about why we have the data we have, and the values that undergird gathering good social data. Eurostat statistics that would be important to measuring the SDGs might include the SILC surveys that measure social inclusion, and general data on crime and policing. The SILC surveys are designed by Eurostat and implemented by national statistics offices in EU member states. The granularity and availability vary depending on the capacity of the national stats office and the domestic laws regarding personal data and privacy. For example, some countries run the SILC surveys at the NUTS 2 level while others administer them only at the national level. A handful of countries, such as France, do the surveys at the individual level and produce panel data. Access is also mixed because of national privacy laws – for example, if you want the SILC panel data you have to apply for it and prove you have data storage standards that meet France’s national laws for data security.

Crime and police data is even more of an issue. Eurostat generally doesn’t collect crime data directly from member states. It has an arrangement with the UN Office on Drugs and Crime whereby crime and police data reported to the UN by EU member states gets passed to Eurostat and made available through its database. One exception is a dataset of homicide, robbery and burglary in the EU from 2008-2010 that is disaggregated down to the NUTS 3 level. When I spoke with the crime stats lead at Eurostat about this dataset, he explained that it was a one-off survey in which Eurostat worked with national statistics offices to gather the data; in the end it was so time-consuming and expensive that it was canceled. Why would such a rich data collection process get the axe? Because it’s an established fact that crime statistics can’t be compared across jurisdictions due to definitional and counting differences. So funders reasonably asked: what’s the point of spending a lot of money and time collecting data that isn’t comparable in the first place?

A key problem I see in the open data discussion is a heavy focus on data availability with relatively little focus on why the data we have exists in the first place, and by extension what would go into gathering new SDG-focused data (e.g. the missing 2/3 noted in the opening tweet). Some of this is driven, in my opinion, by an overconfidence in, and fetishization of, ‘big data’ and crowdsourced statistics. Software platforms are important if you think the data availability problem is just a shortage of capacity to mine social networks, geospatial satellite feeds and passive web-produced data. I’d argue though that the problem isn’t collection ability, and that the focus on collection and validation of ‘big data’ distracts from the important political discussion of whether societies value the SDGs enough to put money and resources into filling the 2/3 gap with purpose-designed surveys, instead of mining the internet’s exhaust hoping to find data that’s good enough to build policy on.

I’m not a Luddite crank – I’m all for using technology in innovative ways to gather good data and make it available to all citizens. Indeed, ‘big data’ can provide interesting insights into political and social processes, so finding technical solutions for managing reams and reams of it is important. But there is something socially and politically important about allocating public funds for gathering purpose-designed administrative statistics. When MPs, members of Congress, or MEPs allocate public funds they are making two statements. One is that they value data-driven policy making; the other, more important in my opinion, is that they value a policy area enough to use public resources to improve government performance in it. For this reason I’d argue that data events which don’t feature legislative representatives as speakers are missing a key chance to talk seriously about the politics of data gathering. Perhaps next year, instead of having a technical expert from Eurostat tell us that 2/3 of the necessary data for measuring the SDGs is missing, have Marianne Thyssen, the Commissioner for Employment, Social Affairs and Inclusion whose portfolio covers Eurostat, come and take questions about EU and member state political will to actually measure the SDGs.

The World Bank’s data team, as well as countless other technical experts at stats offices and research organizations, are doing great work when it comes to making existing data available through better web services, APIs, and open databases. But we’re only having 50% of the necessary discussion if the representatives who set budgets and represent the interests of constituents aren’t participating in the discussion of what we value enough to measure, and what kind of public resources it will take to gather the necessary data.
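As an aside, for readers who haven’t worked with these services, here is a minimal sketch of what pulling an existing indicator from the World Bank’s public API looks like. The endpoint pattern and the GDP indicator code are the Bank’s published ones, but the parameters and error handling are deliberately simplified; treat it as an illustration rather than production code.

```python
# Minimal sketch: fetch one indicator series from the World Bank's public API.
# The v2 endpoint and the indicator code NY.GDP.MKTP.CD (GDP, current US$) are
# the Bank's published ones; pagination and error handling are kept simple.
import requests

def fetch_indicator(country_iso3: str, indicator: str, years: str = "2010:2015"):
    url = f"https://api.worldbank.org/v2/country/{country_iso3}/indicator/{indicator}"
    resp = requests.get(url, params={"format": "json", "date": years, "per_page": 200})
    resp.raise_for_status()
    meta, records = resp.json()  # element 0 is paging metadata, element 1 the data rows
    return {r["date"]: r["value"] for r in records if r["value"] is not None}

if __name__ == "__main__":
    gdp = fetch_indicator("DEU", "NY.GDP.MKTP.CD")
    for year, value in sorted(gdp.items()):
        print(year, f"{value:,.0f}")
```

The point of the sketch is the one the post makes: the technical side of access is largely solved; the hard part is whether the series you want was ever funded and collected in the first place.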


The Challenge of Conflict Data

The last two posts I wrote focused on the social and political structures that drive data collection and availability. In those posts I was primarily talking about statistics in wealthy countries, as well as developing countries that aren’t affected by conflict or violence. When it comes to countries that are beset by widespread conflict and violence, all the standard administrative structures that would normally gather, process and post data are either so compromised by the politics of conflict that the data can’t be trusted, or, worse, they just don’t exist. Without human security and reliable government structures, talking about data selection and collection is a futile exercise.

Conflict data, compared to other administrative data, is a bit of a mash-up. There are long-term data collection projects like the Correlates of War project and the UCDP data program, both of which measure macro issues in conflict and peace such as combatant types, conflict typologies, and fatalities. Because both projects have long timelines in their data, they are considered the best resources for quantitatively studying violence and war. Newer data programs include the Armed Conflict Location and Event Data (ACLED) project and the Global Database of Events, Language, and Tone (GDELT). These projects take advantage of geographic and internet-based data sources to examine the geographic elements of conflict. There are other conflict data projects that use communication technologies to gather local-level data on conflict and peace, including Voix des Kivus and the Everyday Peace Indicators project.

This is just a sample of projects and programs, but the main thing to note is that they are generally hosted by universities and the data they gather is oriented toward research as opposed to public administration. Administrative data is obviously a different animal than research data (though researchers often use administrative data and vice versa). To be useful it has to be consistent, statistically valid in terms of sampling and collection technique, and available through some sort of website or institutional application. If the aim of the international community is to measure the twelve Goal 16 Targets in the Sustainable Development Goals, particularly in countries affected by conflict, international organizations and donors need to focus on how to develop the structures that collect administrative data.

We can look to existing models of how to gather data, particularly sensitive data on things like violence. Household surveys are a core tool for gathering administrative data, but gathering representative samples takes a lot of work. It also requires a stable population and reliable census data. For example, if a statistical office gets tasked by a ministry of justice to run a survey on crime victimization, the stats office would need to interview as many victims as possible to develop sampling tranches. The U.S. Bureau of Justice Statistics’ National Crime Victimization Survey is an excellent example of a large-scale national survey. One only needs to read the methodology section to grasp how large an undertaking this is; the government needs the capacity to interview over 150,000 respondents twice a year, citizens need to be stable enough to have a household, and policing data needs to be good enough at the local level to identify victims of crime. Reliable administrative statistics, especially on sensitive topics like crime victimization and violence, require a functional government, stable populations, and effective local data collection capacity.
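To make the infrastructure point concrete, here is a toy sketch of the very last step in that chain. The numbers are invented (they are not NCVS figures); the point is that even this simple calculation depends on design weights, which can only be produced from a reliable census and a stable, locatable population.

```python
# Toy example: a design-weighted victimization rate from hypothetical survey
# records. Each weight stands in for "how many people in the population this
# respondent represents," which presupposes a trustworthy sampling frame.
respondents = [
    # (survey_weight, reported_victimization) -- invented records for illustration
    (1200.0, True),
    (950.0, False),
    (1100.0, False),
    (1300.0, True),
    (800.0, False),
]

weighted_victims = sum(weight for weight, victim in respondents if victim)
weighted_population = sum(weight for weight, _ in respondents)

print(f"Estimated victimization rate: {weighted_victims / weighted_population:.1%}")
```

In a conflict-affected state none of the inputs to those weights can be taken for granted, which is why the discussion has to start with institutions rather than indicators.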

While many countries can measure the Goal 16 Targets, countries affected by conflict and violence (the ones we should be most interested in from a peacebuilding perspective) fundamentally lack the political and social structures necessary to gather and provide reliable administrative data. Proposing a solution like “establish a functioning state with solid data collection and output processes at the local and national level” sounds comically simplistic, but for many conflict-affected states this is the level of discussion – talking about what kind of data to collect is an academic exercise unless issues of basic security, population stability and institutional capacity are dealt with first.

How Is Public Data Produced? (Part 2)

I published a post yesterday about how administrative data is produced. In the end I claimed that data gathering is an inherently political process. Far from being comparable, scientifically standardized representations of general behavior, public data and statistics are imbued with all the vagaries and unique socio-administrative preferences of the country or locality that collects them.

Administrative criminal statistics are an interesting starting point for understanding how data reflects the vagaries of administrative structures. If someone thought “I would really like to compare crime rates across European Union member states” they would probably be surprised to learn that, unless they just compare homicide rates, such comparisons are effectively impossible. This is not only because definitions of crimes differ between countries (though the UNODC has done a lot of work to at least standardize definitions), but also because the actual events of crime are counted differently. For example, Germany uses what’s called “principal offense” counting – in the event that multiple crimes are committed at the same time, the final statistics only count the most serious crime. Belgium doesn’t use this counting method, so its crime statistics look much higher than Germany’s on paper. The University of Lausanne’s Marcelo Aebi, arguably the expert on comparative criminal statistics, published an excellent paper on comparing criminal statistics and the problems posed by different counting procedures (pages 17-18 for those who just want the gist).
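To see how much the counting rule alone matters, here is a toy sketch. The severity ranking and the incidents are invented for illustration; the only point is that identical underlying events produce different totals under a principal-offense rule versus a rule that counts every offense.

```python
# Toy illustration: the same incidents counted under two different rules.
# Severity ranks and incidents are invented; only the counting logic matters.
SEVERITY = {"homicide": 4, "assault": 3, "burglary": 2, "theft": 1}

incidents = [
    ["burglary", "assault", "theft"],  # one event involving three offenses
    ["theft"],
    ["assault", "theft"],
]

# Principal-offense counting: record only the most serious offense per incident.
principal_offenses = [max(event, key=SEVERITY.get) for event in incidents]

# All-offense counting: record every offense in every incident.
all_offenses = [offense for event in incidents for offense in event]

print("Principal-offense total:", len(principal_offenses))  # 3
print("All-offense total:      ", len(all_offenses))        # 6
```

Same society, same behavior, double the “crime” on paper: that is the kind of administrative choice Aebi is pointing at.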

Aebi makes a crucial point in the conclusion of his article: statistics are social constructs, and each society has a different way of constructing them. Statistics represent the things we have valued. The past tense is important here; when we see data it’s showing us the past (the 2016 Global Peace Index uses numbers from 2015, for example), and thus represents what we valued at the time. Data can be used to build and test models of potential future events, but there is no such thing as ‘future data’. The value in data is that it can help citizens and policy makers understand what worked, or didn’t work, so that policies and behaviors can be adjusted going forward.

Of course institutional and administrative behavior is often resistant to trends in data (or very comfortable with data that supports the status quo). This can be for valid, or at least non-nefarious, reasons. For example the Sustainable Development Goals (SDGs) rely heavily on GDP as an economic indicator. The SDGs are supposed to represent sustainable growth and social development into the future, so it’s interesting that they use an economic indicator that many experts and organizations view as quite flawed.

Why would the SDGs rely so heavily on GDP then? For one, it’s a reliable indicator – everyone at least has some vague idea of what it represents. Two, it’s got a long history – we have tracked it for decades. Three, most of the people who created the SDGs come from backgrounds where GDP is a standard indicator – they pick targets and data based on their professional and institutional experience. They didn’t do this because they’re jerks. They did it because GDP represents the standard, if flawed, way that we measure economic performance. They probably also did it because gathering new data is an expensive, time-consuming process that everyone says is important [for someone else to pay for].

This is all to say: if you want better public data, or to at least understand why the public data you have seems to reflect the status quo instead of telling you how to break out of it, it’s imperative to understand the qualitative political, social and administrative behaviors inherent to the place or people you’re researching. Once you’ve got that, you can start the political process of organizing the resources to get newer, better data to formulate newer, better public policy and/or to smash the status quo.


How Is Public Data Produced?

The 2016 Global Peace Index (GPI) launched recently. Along with its usual ranking of most to least peaceful countries, it included a section analyzing the capacity of the global community to effectively measure progress toward the Sustainable Development Goals (SDGs), specifically Goal 16, the peace goal. The GPI’s analysis of statistical capacity (pp. 73-94) motivates a critical question: where does data come from, and why does it get produced? This is important because, while the GPI notes that some of the Goal 16 targets can be measured with existing data, many cannot. How will we get all this new data?

Some of the data necessary to measure the Targets for Goal 16 is available. I’d say the GPI’s findings can probably be extended to the other goals, so we’ll imagine for the sake of argument that we can measure 50-60% of the 169 Targets across all the SDGs with the data currently available globally. How will we get the other 40-50%? To deal with these questions it’s important to know who collects data: the primary answer is of course national statistics offices. These are the entities tasked by governments with managing statistics across a country’s ministries and agencies, as well as conducting population censuses. Other data organizations include international institutions and polling firms. NGOs and academic institutes gather data too, but I’d argue that the scale of the SDGs means that governments, international organizations and big polling firms are going to carry the primary load. Knowing the Who, we can now get to the How.

National statistics offices (NSOs) should be the place where all data that will be used for demonstrating a nation’s progress toward goals is gathered and reported. In a perfect world NSOs would have the necessary resources for collecting data, and the flexibility to run new surveys using innovative technologies to meet the rapidly evolving data needs of public policy. This is of course not how NSOs work. Much of what happens in a statistics office is less about gathering new data, and more about making sure what exists is accessible. In my experience NSOs have a core budget for census taking, but if new data has to be collected the funding comes from another government office. This last bit is important: NSOs do not generally have the authority to go get whatever data is necessary. If NSOs are going to be the primary source for data that will be used to measure the SDGs, it is critical that legislatures provide funding to government offices for data gathering.

International organizations are the next place we might look for data. The World Bank, in my opinion, is the gold standard for international data. United Nations agencies also collect a fair amount of data. What sets the Bank apart is that it does some of its own data collection. Most international organizations’ data, though, is actually just NSO data from member states. For example, when you go to the UN Office on Drugs and Crime’s database, most of what you’ll find are statistics that were voluntarily reported by member states’ statistical offices. The UN, World Bank, OECD and myriad other organizations do relatively little of their own data gathering; much of their effort is spent making sure that the data they are given is accessible. Unless legislatures in member states provide funding to government agencies to gather data, and the government agrees to share the data with international organizations, most international institutions won’t have much new data.

Polling firms such as Gallup gather international survey data that is timely, accurate, and covers a wide range of topics relevant to the SDGs. Unfortunately their data is expensive to access. As a for-profit entity, Gallup has a level of flexibility to gather new data that statistics offices don’t, but this flexibility is very expensive to maintain. A problem arises too when Gallup (and similar firms) decide that the data necessary to measure the SDGs is not commercially viable to gather and sell access to. In this case legislatures would need to provide funding to government agencies to hire Gallup to gather data that is relevant to measuring progress toward the SDGs.

There is a pattern in the preceding paragraphs. All of them end with the legislature or representative body of government having to provide funding for data gathering. How we gather data (the funding, budgeting, administration, and authority) is entirely political. This is a key issue that gets lost in a lot of discussion around ‘open data’ and demands for data-driven policy making. It is too easy to fall into a trap where data gets treated as a neutral, values-free thing, existing in a plane outside the messy politics of public administration. The Global Peace Index does a good service by highlighting where there are serious gaps in the necessary data for tracking the SDG Targets. This leads us to the political question of financing data collection.

If the UN and the various stakeholders who developed the Sustainable Development Goals can’t make the case to legislatures and parliaments that investments in data gathering and statistical capacity are politically worthwhile, it is entirely likely that the SDGs will go unmeasured and we’ll be back around a table in 2030 hacking away at the same development challenges while missing the harder conversation about the politics necessary to drive sustainable change.

After Paris, Now What?

Like many people I’ve been following the events in Paris with shock and sadness. I’ve watched the narratives evolve out of the tragedy, and a few resonate with me.

Western leaders have seemed incapable of any kind of creative response to ISIL and the wider risks they pose. I responded on Twitter to an article about the knee-jerk reaction to declare war on ISIL and to ban Syrian immigrants from entering Western countries. There’s something almost quaint to this thinking; it’s as if it’s the 1940s and we’re storming the beachhead, fighting another nation state’s army. There’s a place for a significant military response against ISIL, but if there isn’t a correspondingly big diplomatic and civil society effort that pulls a lot of competing sides together, ISIL will continue dividing and surviving. To those who say we don’t/can’t negotiate with our enemies, I say learn your history. The U.S. routinely negotiated with the Soviet Union for 40 years, often with the risk of a nuclear exchange on the line. The current state of affairs in the Middle East is partially the outcome of a long decline in U.S. diplomatic capacity, and an overreliance on force and securitization. Unless we change that, ISIL will continue to survive as an organization.

It’s hard to make policy or design a complex response if people are fundamentally ignorant. This person thinks the problem is that terrorists leave their own country (read: Syria) and attack the West:

[embedded tweet]

Based on the known attackers’ nationalities and residences, his solution is to keep them in…Belgium and France? While he’s just some random guy on Twitter, it’s problematic that a majority of U.S. Governors as well as Republican presidential candidates have the same outlook. Will stopping refugees make a locality safer? Unlikely; as far as we can tell the attackers weren’t refugees. Indeed, trying to sneak a terrorist cell into Europe via the refugee routes would be the worst possible and least efficient way to get them to a target. They could drown, get stuck in Serbia/Hungary/Croatia/etc, get picked up at one of the myriad check points between Syria and Western Europe, or freeze to death sleeping rough in the woods. Objectively it would be stupid for ISIL to get terror cells into the West this way; by extension it would be stupid to assume that blocking refugees will keep terrorists out (especially if they’re already citizens of the country to be targeted, and living in that country). Stupid policy decisions will neither mitigate the threat, nor address the humanitarian crisis.

This brings us to the last point. Stupid policy decisions are usually the outcome not only of objective analytic failure, but also an abdication of one’s moral grounding. 30 U.S. Governors and however many candidates remain in the Republican primary have, in saying they won’t take refugees, allowed ISIL to set the terms of their moral obligation to their fellow humans. They’re the worst kind of cowards, the kind that use a humanitarian calamity to gain political points while living in a publicly provided security bubble. It’s a sad commentary on the moral fabric of the U.S. that people of so little integrity and humanity can make it as far as they have in politics.

The only way to defeat the ISILs of the world is through a smart, humane, morally grounded set of policies. Force will be necessary, but so too will smart diplomacy, and a recognition that we have a moral obligation to aid the victims of a brutal regional conflict.


Diagnosis Matters: Preventing human trafficking on the demand side

I was watching the news this past Saturday when Australia’s Prime Minister, Tony Abbott, took time out from a talk on iron ore prices (or something along those lines) to discuss the ongoing issue of people smuggling. It’s a short video that you’ll have to follow the link to see (The Australian doesn’t provide embed code), but what’s interesting is Abbott’s prescription for stopping people smuggling. The logical issues with his argument are worth unpacking because they’re routinely used by politicians everywhere who either don’t understand what they’re dealing with, want to change the argument, or some combination thereof. None of which leads to good policy outcomes.

Abbott says in the interview that the key issue is human trafficking, and that in order to stop the human trafficking one has to stop the boats getting to their end destination. This is an interesting way of framing the issue. Others might argue that the people on these boats aren’t being trafficked so much as they’re fleeing persecution and paying people to arrange passage. But regardless of why these people are paying to get onto boats, if the problem really is the traffickers, is Abbott’s solution of turning back the boats going to stop them from sending people out on boats?

We could look at this in two ways. Both are economic, with one focused on supply and demand and the other focused on the economics of the transaction. The supply/demand argument is fairly straightforward. People are being persecuted, in this case Rohingya Muslims in Myanmar, so they pay traffickers to put them on a boat out. If you buy this model, then is it really useful to stop the boats if you want to decrease trafficking? Not really; the conditions that spur demand for traffickers’ services still exist, so people will keep paying to risk their lives at sea. Assuming they never return to Myanmar, the human trafficking problem works itself out when all the people who want to leave have done so.

The second way to look at this is transactionally. Let’s assume that demand for the traffickers’ services is fixed: people are going to pay them no matter what. The problem with turning back the boats as a solution to human trafficking is the timing of the transaction between migrant (refugee) and trafficker; the trafficker gets paid before the refugee boards the boat. Their pay isn’t dependent on the boat arriving anywhere, so turning the boats back doesn’t cut into their revenue. Indeed, by turning the boats back you’re just sending back people who will be repeat customers. If I were a trafficker I’d be all for this.
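A back-of-the-envelope sketch makes the incentive problem explicit. Every number here is invented purely for illustration; the only point is that when the fee is collected up front, turned-back passengers become repeat customers rather than lost revenue.

```python
# Toy model (all numbers invented) of the transactional argument above:
# the smuggler's fee is collected before boarding, so turning boats back
# does not reduce revenue -- repeat attempts can actually increase it.
FEE_PER_ATTEMPT = 2000   # hypothetical fee, paid up front
PASSENGERS = 100

# Scenario 1: boats arrive, each passenger pays once.
revenue_boats_arrive = FEE_PER_ATTEMPT * PASSENGERS * 1.0

# Scenario 2: boats are turned back and passengers try again (assumed average).
avg_attempts_per_passenger = 2.5
revenue_boats_turned_back = FEE_PER_ATTEMPT * PASSENGERS * avg_attempts_per_passenger

print(f"Revenue if boats arrive:      {revenue_boats_arrive:,.0f}")
print(f"Revenue if boats are turned back: {revenue_boats_turned_back:,.0f}")
```

Under these (made-up) assumptions the turn-back policy is better business for the smuggler, which is exactly the perverse incentive the paragraph above describes.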

Basically, Abbott has misdiagnosed the problem, then prescribed a solution that just makes it worse. This isn’t unique to Abbott – there are plenty of politicians the world over who have made this an art form. The main question we have to ask now is whether he and the politicians in the U.S. and Europe who face their own migration issues are up to the intellectual task of governing, are misrepresenting the problem to fit an anti-refugee policy position, or some combination thereof.