A post by Cheyanne Scharbatke-Church and Diana Chigas, first published on The Global Anticorruption Blog: Law, Social Science, and Policy, and part of the CDA Perspectives’ series on corruption in fragile states.
Early last week, the Transparency International (TI) Secretariat in Berlin circulated an Invitation to Tender with a title that grabbed our attention. Framed as part of a commitment to “the highest standards of accountability, organizational effectiveness and learning,” this tender described a “Research Review and Evaluation of Anti-Corruption Work Assumptions: Grievance as a key determinant of people’s anti-corruption behavior.” The email that accompanied the tender suggested an exciting and needed inquiry into assumptions that drive anticorruption programs funded by the international community—on a topic that is closely related to some of our research team’s work on corruption in fragile states (see here and here). That TI was interested in funding a project of this sort was encouraging: Testing core assumptions, after all, is central to learning and should be a fundamental element of effective programming. We were also heartened by the fact that TI sought comparative analysis, and would give preference to counterfactual analysis over experimental designs—suggesting an interest in the type of qualitative inquiry that is necessary to penetrate the dynamics of corruption as a complex system.
Our initial enthusiasm turned to dismay, however, by the time we finished reading the Tender. The reason may seem prosaic, even banal: The time-frame for submitting proposals and for the work itself. To our knowledge the Tender was circulated the first week of July, applications are due August 5th, work is to start August 29th and be finished by October 31—with a budget for 30-35 working days. At first, that may not seem like such a big deal—and we recognize that it might seem like we are merely griping about our team’s inability to meet the application and project deadlines for this tender. But this is not about any one tender or any one research team. Rather, the practices embodied in—but by no means limited to—this particular tender are in fact representative of larger problems in the world of anticorruption and development evaluation research, problems that we suspect may be familiar to other researchers. Two stand out in particular.
– First, genuine learning requires sufficient resources; the value that any project can offer the anticorruption community is in direct proportion to the effort that is put into the investigation. To engage in a contextually sensitive assessment of complex dynamics and tease out the validity of an assumption, as the TI tender requests, will take a thoughtful and creative process. This takes time. Moreover, the TI Tender requires consultants to test the assumption using “a number of diverse TI projects and approaches” and that, in addition to the fieldwork, the consultants deliver a “detailed literature review report,” along with a methodological concept note, a final report, and a dissemination workshop. All of this is to be done in a miraculous 30-35 working days, over a two month period! Based on our experience doing work of this nature, our back-of-the-envelope estimate is that for a two-person team—the bare minimum for this kind of project—30-35 working days isn’t even enough time to begin gathering data in one country, let alone several. The mismatch between the ambitious project that the Tender describes, and the resources that it allocates, is severe. For a two-month project, budgeting only 35 days of work, the best one could expect is a superficial survey based on existing secondary sources, supplemented by discussions with the usual suspects.
– Second, the highly compressed time-frame for submitting tenders limits the talent pool and raises questions about the openness of the process. As noted above, the Tender was circulated in the first week of July, applications are due August 5th, and work is to start August 29th. This means that TI must review applications, select the winner, negotiate and sign a contract, and convey all necessary documentation to the team in 15 working days; and simultaneously the team must be available to start work within 1-2 weeks of winning the contract. How many respected professionals have this much time available on such short notice? Within the professional evaluation community, the usual suspicion about any Tender with such tight time-frames and a very detailed application process is that the agency has already decided who they want for the job, but procurement rules require (or the agency’s interest in legitimacy counsels) an open tender. Such practices, if employed by public procurement agencies, would raise red flags. To be clear, we are in no way accusing TI of being corrupt. But even the appearance of a pre-determined Tender (even if false) can make it seem like TI is actually engaging in some of the very practices that it fights against in other contexts. And there is the related concern that the pre-selected applicants may have been chosen because the Tendering entity (in this case TI) already knows what they are going to say, and likes the result—an approach antithetical to TI’s stated goal of rigorously investigating assumptions. And, again, even if this is not actually what is going on, the perception alone is problematic.
To be clear, although we were prompted to write this post by TI’s recent Tender, the issue we are raising is not about this specific Tender, nor do we mean to single out TI alone. (We also acknowledge that TI has its reasons for the decisions on this Tender, reasons to which we are not privy.) Our broader point is that tenders of this sort, which are not at all uncommon in this field, raise a serious and overlooked problem. When a leading and influential organization like TI commissions a study, the results will have great influence and credibility in the field. If they are grounded in weak or insufficient evidence, or influenced by a preconceived notion of the right answer, the exercise will only serve to perpetuate bad practices in evaluation initiatives and undermine the stated goals of learning, accountability, and organizational effectiveness.
We know funds are limited, time is short, and funding agencies are subject to a host of other constraints. But still, the ambition of the work, and the way it is presented to the world, needs to correspond to the realities of the available time and resources. We hope that TI and other organizations can reflect on this fact and adjust their practices so that the lofty goal of achieving “the highest standards of accountability, organizational effectiveness and learning” has a chance of being realized.
Cheyanne Scharbatke-Church is Principal at Besa: Catalyzing Strategic Change, a social enterprise committed to catalyzing significant change on strategic issues in places experiencing conflict and structural or overt physical violence. As a Professor of Practice at the Fletcher School, she teaches and consults on program design, monitoring, evaluation and learning. Cheyanne is also a regular author and the curator of the CDA Perspectives blog series on corruption in fragile states.
Diana Chigas is Professor of Practice of International Negotiation and Conflict Resolution at the Fletcher School of Law and Diplomacy. Since 2003, she has also been working at CDA-Collaborative Learning Projects, where she works with practitioners and policy makers globally to improve the effectiveness of peacebuilding strategies, programming, and monitoring and evaluation through conflict analysis, systems thinking, peacebuilding evaluation, participatory program design and review, theories of change, and use of Reflecting on Peace Practice. She currently co-directs CDA’s collaborative learning work.