If we want to see change informed by local feedback, what elements are vital? While perhaps less ‘sexy’ than real-time SMS feedback channels, the decision on where to anchor your feedback mechanism within your institution has a significant impact on its effectiveness and your ability to utilize the data it generates. This blog traces CDA’s evidence regarding the institutional location of feedback systems, and provides questions for practitioners seeking to strengthen their accountability mechanisms and processes.
Anyone working on improving accountability and feedback loops these days has undoubtedly engaged in discussions about innovative, technology-based feedback channels, the role of local partners, and the aspiration to place affected people at the center. While all these dimensions hold merit in advancing effective accountability practices, we need to ask ourselves: if we want to see change informed by local feedback, what elements are vital?
In light of CDA’s case study evidence, I’d posit that the buck stops at the doorstep of each aid organization. Understanding your institutional structure, its behavior, culture, processes, and leadership structures is critical for the viability and effectiveness of your accountability and feedback system. This understanding can leverage institutional strengths, internal accountabilities and existing information flows between program teams, management and other relevant departments. While perhaps less ‘sexy’ than real-time SMS feedback channels, the decision on where to anchor your feedback mechanism within your institution has a significant impact on its effectiveness and your ability to utilize the data it generates.
During my most recent fieldwork, I observed a range of institutional arrangements, each positioning the feedback system within a different unit at the country level, managed by staff at different levels (field to headquarters – HQ), and linked (or not!) to existing structures and processes. Each of the following arrangements had features that can enable or hinder effective feedback loops.
Feedback Overseen by Monitoring and Evaluation (M&E) Team but Managed by Field Staff
This set-up allows for rapid, localized decision-making. Field teams with autonomy to solve problems locally showed stronger collaboration, as evidenced by staff who assumed collective responsibility for addressing community feedback. Closed feedback loops helped to build trust between community members and staff.
However, ensuring that local feedback informed significant course corrections at a higher level required a robust information pathway from field to senior management. In several examples, a single staff member at HQ who was committed to accountability was the sole channel for upward referral of feedback. Weak internal referral channels limited the utilization of feedback in program re-design and program quality improvement.
In another case, we saw how an organization wisely built its feedback system upon existing internal referral channels, developed by its humanitarian team during a recent response. Existing institutional knowledge and capacities strengthened feedback referral processes, a key enabler of feedback use in senior-level decision-making.
Feedback Managed by the M&E Team
Feedback functions are commonly assigned to M&E teams with the intention to supplement M&E data with community feedback and to improve programmatic design and implementation.
With M&E teams often situated in the capital or at some distance from implementation sites, this approach can allow for more durable and swift referral pathways to senior decision-makers.
However, this model can also silo and alienate program teams and hinder the communication of vital feedback data to staff who need to hear and act on it. In some cases, program staff perceive the M&E team as ‘police’ monitoring their work, as opposed to a partner committed to improving program quality and performance.
These misconceptions can be addressed by involving all internal users of feedback in clarifying roles and responsibilities, and the types of information each team needs to make timely and effective decisions about program modifications.
Feedback Managed by Program Team
In some instances, feedback mechanisms are not directly managed by M&E teams but are instead embedded within program teams. In other cases, feedback is managed by an Accountability Team reporting to a Program Manager at HQ, with functions distinct from those of the M&E team.
This set-up can support seamless collaboration and joint action among senior program staff and enable timely changes in programs and operations.
However, the flow of information from field staff (who gather the feedback) to the Accountability Team (who analyze the feedback) can still be reduced or cut off at times. Since accountability teams are generally based at HQ, it can take time to troubleshoot the breakdown in communication. In addition, interpretation of the feedback by the HQ teams may be skewed due to their relative lack of contextual knowledge.
There are also instances where an organization does not have the will or capacity to establish an agency-wide feedback system, but a program unit may instead set up its own feedback mechanism.
The advantage of this set-up is that there is usually very strong ownership of and proximity to the mechanism by program managers who can make quick decisions concerning day-to-day implementation issues.
However, given the nature of this set-up, it may be difficult to convince higher-level decision-makers to take feedback into account while advocating for significant course corrections. Without the support of M&E teams, program teams may struggle to analyze the trends in accumulated feedback, although Information Management (IM) teams can help manage the data and provide quantitative analysis.
…the institutional location of a feedback mechanism is optimal when it takes into consideration the existing internal referral pathways…
Examples such as these demonstrate that there is no ‘one-size-fits-all’ solution or single ‘best’ institutional location for a feedback mechanism. Analyzing these examples for generalizable patterns may be fruitless, given the differences in institutional mandates, structures, and operational contexts. However, I would posit that the institutional location of a feedback mechanism is optimal when it takes into consideration the existing internal referral pathways. Every organization has protocols, formal and informal, for sharing information. Tapping into these already established processes as the basis from which to build your accountability and feedback processes will strengthen feedback loops and enable utilization.
If there is no universal answer to the question regarding location, what does that mean for operational agencies? Multiple program and M&E managers have asked me, “Where do we even start?” “Easy!” I replied, “with questions!”
- What processes for feedback collection and review already exist?
- Who manages them? Do they have the necessary skills? What could help develop capacities for everyone who interacts with feedback?
- Are there internal referral processes for feedback from the field to reach decision-makers? How is the feedback data presented at different levels of the organization? What other data do decision-makers need to supplement what is heard in the field?
- How does feedback get utilized at the field level and at the senior management level? What are the existing decision-making processes? Who needs to see the information in order to make appropriate decisions at each level of the organization? What kind of information do they request to make these decisions?
- How do departments communicate, share information, and analyze it together?
- Are there internal champions? Can the feedback system function without them?
Addressing these questions at the design phase of your feedback system may increase its effectiveness. Mapping the information flows and decision-making processes in your institution may reveal both glitches and efficiencies, pointing you toward where ‘best’ to situate your feedback function. And when you do, we’d love to hear about what worked well and what can be learned from your experience that can be instructive to others! (Please email us at email@example.com).
International and local organizations and their donors continue to engage CDA as learning partners and advisors as they seek to improve their accountability to communities. You can find our case studies on feedback loops here, and subscribe to our newsletter here to be notified of new studies once they are made public.
Sarah Cechvala is a Senior Program Manager at CDA Collaborative Learning Projects. Her learning and advisory focus is on conflict-sensitivity, accountability and feedback loops, and conflict-sensitive business practice and corporate social impacts. Sarah has facilitated collaborative learning processes and field research in Africa, Asia, and Latin America. Recently, she led several case studies in Ethiopia, Pakistan, and Nepal focused on feedback utilization in long-term development programs. She holds an MA from Georgetown University and a BA from Boston University.
Share your findings with Sarah Cechvala, at firstname.lastname@example.org