On a sweltering Port-au-Prince day in May 2013, I sat down to interview a senior staff member at an international organization in Haiti and listened to him exclaim, “We have the information! Now we need to move towards utilization!” What are the barriers, I asked. He described many, including external pressures to deliver tangible results (and reports!) and internal organizational priorities which influence how money, time and human resources are spent and to what extent (any) data, but in particular feedback data, actually informs decisions.
Information is valuable and can be powerful. In many crisis-affected communities, people lack critical information and need to know when, what, and how aid will be provided in order to make decisions affecting their lives. Indeed, valuable information is something crisis-affected people are used to providing to those who come to help. They patiently answer surveys, questionnaires, and needs assessments, sit through verification missions, and provide feedback, solicited and unsolicited, when channels exist. So is it not a fair question to ask: where does all their information go? How is it used? Who uses it? And when will they get a response? These are some of the questions we heard during listening conversations with 6,000 people in aid recipient countries across the globe.
The reality is that responding to and acting on feedback continues to be more difficult than gathering it. Donors (e.g. DFID) are now asking their partners to demonstrate functioning feedback loops in their programs, while also asking questions about the value and return on investment from these feedback processes: To what extent can feedback improve the quality and accountability of programs? And to what extent can feedback loops catalyze long-term outcomes (e.g. empowerment and civic engagement)?
There seem to be many aspirations and expectations for feedback loops in aid programs! It is important to examine whose expectations these are. When considering the effectiveness of feedback mechanisms, does it matter where the demand for feedback comes from? We think it does.
In CDA’s recent action research conducted jointly with ALNAP we focused on humanitarian contexts and observed feedback practices in operational agencies in Darfur, Pakistan, and Haiti. Our main interest was to identify the factors that enable the utilization of feedback provided by those affected by crises and by aid programs. One of the eight propositions that we tested was two-pronged:
“An effective feedback mechanism is run by staff within an agency that supports and values giving and receiving feedback as part of general management practice. The organization allocates the necessary resources for operating the feedback mechanism.”
We observed that humanitarian aid agencies with effective feedback practices make the following commitments and resource allocations:
- Decision-makers in these agencies take an interest in, ask for and value feedback data;
- Complaints and critical feedback are addressed within a predictable timeframe;
- Serious grievances (e.g. corruption or staff misconduct) are brought to the attention of senior managers for tracking and follow-up;
- Managers see feedback as one of many relevant sources of data to inform day-to-day program implementation and modification;
- Feedback data is combined with other information (e.g. market analysis, assessment/monitoring data) to provide a compelling “bundle of evidence” for decision makers to consider and act on;
- Management allocates sufficient resources for establishing and operating a feedback mechanism: hiring staff with the necessary expertise, establishing appropriate channels, and training staff to solicit feedback effectively and to analyse the data;
- The purpose of the feedback mechanism, expectations and roles are clearly communicated to all users (staff and aid recipients).
Across all these findings, the importance of leadership cannot be overstated. While frontline staff play a critical role in feedback collection, acknowledgement and analysis, they are generally not empowered to make significant course corrections in program implementation. There are internal hierarchies, power dynamics and priorities at play. In humanitarian programming, course corrections may include decisions on selection criteria, eligibility and coverage of programs, or the type of programming. Some of these decisions have very high stakes, certainly for the local population but also for program staff whose raison d’être on the ground is linked to specific types of programming.
So what is the purpose of gathering feedback if we don’t act on it? Surely, listening is not enough. Frontline staff are asked to gather all kinds of data for monitoring and assessment purposes, and they do see much of the quantitative data used for tracking and reporting. But what happens to the (often qualitative) feedback data? If senior management does not explicitly demand feedback data on a regular basis, the task of feedback collection and analysis can easily fall through the cracks amid other pressures. Those who solicit feedback and those who provide it need to see the utility of the feedback channels and evidence of use.
Culture of feedback
At a recent discussion hosted by Feedback Labs and InterAction on September 23rd, participants discussed how overcoming barriers to feedback utilization requires something more. Adaptive programming is not going to be prompted by feedback alone. It has to come from a willingness to identify what is not working and to battle inertia and the learned helplessness that often ensues in organizations. If feedback loops are expected to lead to greater empowerment and accountability vis-à-vis the local population, can they do the same internally in aid organizations? During our field research, we heard junior staff in organizations that value feedback describe how their supervisors model the culture of feedback internally by seeking and responding to it. This ought to be a standard practice for teams that want to close the feedback loop.
Originally posted on FeedbackLabs’ blog, September 30, 2014
Isabella Jean is the Director of Evaluation and Learning at CDA, based in Cambridge, MA. CDA is facilitating collaborative learning and research into effective feedback loops in humanitarian, development and peacebuilding programs. We are learning together with donors and aid agencies about what it takes for feedback loops to be effective in influencing program design, adaptation and strategy reviews. Please get in touch if you have promising feedback practices and lessons to share or want to partner in an on-going action research project! Subscribe to updates about new research and guidance materials as well as upcoming panels, here.