I was scanning Twitter this morning and came across this IRIN article on how aid agencies will have to rethink their data protection and privacy standards as the European Union’s General Data Protection Regulation comes into effect. It raised a number of interesting personal data issues across the full spectrum of the humanitarian and NGO space, but what particularly resonated with me was the section on biometric and personal data being used in humanitarian response to identify people who would receive aid.
The World Food Program (WFP), the UN’s main provider of emergency food aid and a major actor in refugee response, was criticized in audits of its digital data protection capacity. The individual data it captures is used to track the delivery of aid to people in need of food and refugee services. Increasingly, the identifying data is biometric: fingerprints at first, and more recently iris scans. This is usually framed as ‘innovation’ – using new and increasingly complex technology to capture a wider range of data on vulnerable populations. Ostensibly this data can be used to tailor services, predict needs, and improve the delivery of aid to individuals.
I have been involved in the research and practice of using technology in development and humanitarian response for a number of years, and an audit like this is something I have long, uncomfortably, been waiting for. There are arguments that failures like these stem from a humanitarian sector more interested in using new tech for its own sake, but my discomfort is grounded instead in the increasingly hostile political climate toward humanitarian aid among donor countries. That hostility has led me to view this kind of data collection less as innovation and improved service delivery, and more as a means of monitoring vulnerable populations to make sure no one is ‘cheating’ the system.
The lack of data security is telling. As donors have relentlessly ramped up demands on aid agencies to track every dollar and every meal, those agencies have naturally turned to an ever-wider array of data collection technologies to keep up. Under this kind of pressure, data collection becomes an end in itself, and deploying increasingly complex technologies into politically sensitive environments without the requisite data security becomes the norm. In the current political environment, the WFP and other UN programs face far higher operational risk if they can’t prove to donors that every dollar and every calorie went exactly where it was supposed to go. The long-term health and safety of the people the UN is supposed to protect have become subordinate to donors’ demands to track resources and make sure beneficiaries use them exactly as expected.
I struggle with this dark side of tech and ‘innovation’ in humanitarian and refugee response. One thing that needs to be made clear, though, is that the people working for these agencies are doing the best they can, often under incredibly difficult circumstances – my friends and colleagues who work at places like UNOCHA, WFP, and UNHCR are some of the most dedicated, innovative people I know. When we evaluate something like the data protection failures at WFP, we should of course demand that the organization do what it can to improve. But we also have to look squarely at the donors who set the monitoring demands that drive UN agencies to deploy technologies in ways that place data collection ahead of actually protecting refugees. No humanitarian agency can ethically and safely manage huge volumes of sensitive data in an environment where the donor countries that control budgets care more about surveilling refugees than helping them.