Evidence – is it in the eye of the beholder?

Over the last week, the Listening Program has been immersed in discussions about what counts as evidence and how it is used in decision-making.  Amid the current push for better evidence, including the use of randomized controlled trials (RCTs) and other methods to measure the results of aid efforts, we are concerned with how to ensure that qualitative approaches capturing the knowledge and views of local people are seen as credible sources of evidence.

There are many debates and questions, such as: what is a critical mass of evidence from the ground – in other words, how many times do you need to hear something before recognizing it as an issue? How generalizable are the conclusions: what is your sample size? Did you talk to enough people, triangulate, compare, and cross-check? How do all of the stories and anecdotes shared by aid recipients add up to solid, actionable evidence?  These are some of the challenges for those who want to bring "evidence from the ground" to decision-makers.

The issue of WHO makes decisions about international aid efforts also keeps coming up: in discussions at the ALNAP Annual Meeting last week, in the action research on effective humanitarian feedback mechanisms we are doing with ALNAP, and in the analysis of the current externally driven aid delivery system we highlight in Time to Listen.  Despite efforts to improve participation and accountability, most decisions are still largely made by those providing international assistance, not by those who are meant to be supported by it.

Decision-makers across the various donors, governments, NGOs, UN agencies, and other actors in the international aid system have different backgrounds, interests, perspectives, agendas, and biases that influence how they weigh the evidence put before them.  Some are very evidence-driven, but most rely more on their own experiences and knowledge to guide their decisions. And many are heavily influenced by politics, both in their countries and in their organizations.  So even if compelling evidence on what works (and what does not) is put before them, they may still decide to do what they have always done, or what their organization wants to do, regardless of the evidence.

The challenge for those of us working to bring the knowledge, experiences, and perspectives of local people to bear on the decisions that affect their lives is to get decision-makers to listen and to weigh all evidence equally when making decisions that will affect people they do not even know.  Ideally, we need to change decision-making processes to ensure that local people can speak for themselves.  While we are certainly working towards that goal, I was challenged last week to think about how to produce and "package" the evidence in Time to Listen so that it is actionable amidst the many facts and figures often preferred by those who see themselves as "evidence-driven."

By Dayna Brown, Director of the Listening Program, CDA