No impact evaluations? Monitoring and evaluation amid the evolution of WASH activities
The expansion of the monitoring and accountability section of our policy, practice and advocacy site reflects the evolution of how WaterAid monitors its work. As our priorities shift to include more advocacy, sector strengthening, human rights and sector integration activities, we must also adjust and adapt our evaluation work. But should we be looking at full impact evaluation? Liza Tong, Programme Manager for Accountability and Effectiveness at WaterAid UK, discusses.
If I had £1 for every time someone has raised the issue of impact evaluation and its suitability or otherwise for international NGOs, I would probably be able to fund one.
The proper analysis of impact requires a ‘counterfactual of what outcomes would have been in the absence of the intervention’. So, as a programme manager in monitoring and evaluation, this is the conundrum I face. We want to know about the results, the outcomes, and the changes in lives we contribute towards; but, as an NGO, the elusive gold standard of impact evaluation, with the scientific rigours and costs of randomised controlled trials, is simply beyond our reach.
But the real question is: is this the sort of activity that organisations like WaterAid should be focusing our time and limited resources on? Or should we be doing what we do best, i.e. trying to secure universal access to sustainable WASH services for the most vulnerable and marginalised people?
We need to be pragmatic
However, we should be doing more ‘real-world evaluation’. An evaluation mentor of mine, Jim Rugh, used to say: “Always think like an evaluator – we should be continuously questioning whether we are doing the right thing, testing our assumptions and what we are achieving, but be pragmatic about time and resource constraints.” Hence the importance of designing evaluations that are fit for purpose, with clear evidence and analysis.
There is also the question of accountability: what do we mean by this, and where does the evidence come from to assure ourselves and others that we are really achieving positive change for our most marginalised communities?
Measuring and evidencing change is a responsibility, and part of a portfolio of work that crosses several teams at WaterAid UK. Our Country Programme Evaluations (CPEs) focus on the relevance and effectiveness of our country programme work. The global work we do at the centre exists only to support the country level, where the real work happens. Our central role within WaterAid UK is therefore to develop good processes, systems and tools that support good practice in evaluation, and to make sure knowledge exchange and South-to-South learning between our Country Programmes happens. We are currently redesigning our minimum core procedures with regard to evaluation processes.
Having the right capacity for evaluation
As every monitoring and evaluation person knows, evaluation has traditionally been the poor cousin of monitoring, and it is not uncommon for organisations like ours to lack the skills and experience to develop robust evaluation frameworks and study designs. This is not really a question of independence and credibility – without doubt, the collection, analysis and interpretation of an evaluation's findings need to be independently led (from an accountability and objectivity perspective).
From the learning and ownership perspective, however, it is clear that we need to own and be satisfied with the validity of the findings, conclusions, and recommendations in order to take these forward into meaningful, improved practice.
Maintaining independence and use of the OECD-DAC evaluation criteria
To address this, our CPEs – which will be conducted towards the end of each Country Programme’s strategy period – use the OECD-DAC criteria of effectiveness, relevance, sustainability, efficiency and impact, and are carried out by a mixed team of internal staff and independent consultants. The key here is that the final analysis, recommendations and conclusions rest with the consultant, which seems to work. But it is the WaterAid technical or policy specialists and the regional and country programme team members who offer valuable programming, policy and contextual insights, and ensure ownership and buy-in as far as possible. The very act of being involved in the evaluation itself builds staff confidence and capacity to be more evaluative.
How do the changes in WaterAid’s Global Strategy affect evaluation?
At WaterAid, overall we are entering an exciting period, but one of change. As we focus more of our efforts on achieving larger scale impact in WASH access, we will increasingly be working in the field of WASH sector strengthening, integration with health and education, policy and advocacy work, and the human right to WASH.
Although we believe these changes offer the potential for far greater impact – more people reached, services sustained, and greater resources leveraged for WASH – we are effectively working further from the front line. Attribution becomes difficult, and our evaluative questions increasingly centre on: ‘What were our contributions towards policy change or sector strengthening? What results were achieved for the most marginalised communities? What is our added value in working in coalitions to achieve universal access? And what is the evidence for this?’
Good thing, then, that we’re investing in a number of new initiatives to measure our contribution to policy change, starting with our global advocacy work – for example, the Healthy Start Campaign post-2015 evaluation work. A fresh approach to the monitoring and evaluation of policy and advocacy initiatives will give us a better understanding of the methods, tools and indicators that track and measure our policy impact, our human rights-based work, and the effectiveness of our engagement with coalitions, among other things.
So I think for us, at this moment in time, ‘no impact assessments’ is a sensible approach. For now at least…