[Photo: Hasan in World Vision Jerusalem-West Bank-Gaza programmes]

Bridging the evidence-practice gap: four things to know

By Juliana Breidy

We’re at a special point in the history of evidence use in humanitarian and development work. Donors, policy-makers and implementers are now consistently talking about, and insisting upon, the importance of evidence in decision making, and I welcome this with open arms!

At the recent ISPCAN (International Society for the Prevention of Child Abuse and Neglect) European Congress, I spoke about Act Now – the translation of child protection research into effective child protection practice in the work we do. I presented two compelling bodies of evidence from our work across the Middle East and Eastern Europe region, and shared four things I have learned along this journey.

  1. Evidence is “in”
    Humanitarian and development practitioners are taking huge strides to improve the quality of the data we use, and to strengthen the link between evidence, influence and effective decision-making. To ensure this trend continues, we must explore better ways to link evidence producers with advocates, practitioners and influencers: through scientific congresses like ISPCAN, knowledge-gap mapping, research agenda planning, better dissemination plans and evidence synthesis.
  2. The gap between evidence and practice still needs closing
    New evidence is often available only in fragmented form: on individual organisations’ websites or portals, or at times only internally. The expectations of end-users may not always be clear to those conducting research, not only in terms of the messaging but also the level of analysis needed to leverage the work successfully. All parties to the impact agenda must sit around the same table when conceiving the research: its process, outputs and products – and, importantly, the impact the work will have on the people we are trying to help.
  3. The basics of good data collection
    The old adage of “garbage in, garbage out” still stands: answers can only be as good as the data that inform them. Poor, inconsistent monitoring data and a lack of transparency about the methods used in research and evaluations compromise the quality of evidence. Through our monitoring and evaluation work, we apply quality-assurance standards that help ensure our data are accurate, relevant, complete and timely.
  4. Evaluations alone do not paint the bigger picture
    Most reports focus exclusively on a single project or programme, which poses an important challenge when attempting to synthesise findings and draw conclusions about the performance of the sector as a whole. Our monitoring and evaluation work on individual projects should be complemented by a synthesis of evidence to assess how multiple projects contribute at the sector level. This approach can also help to identify gaps in knowledge that need further exploration, and to establish an evidence base for best-practice guidance.

It is an exciting time to be advocating for greater use and understanding of evidence in decision making, and I look forward to continuing to play a role in helping children realise their right to access effective social programmes. I believe ethical research, evidence and learning offer the most effective path to driving the development and impact of our programmes for the children we serve.

Juliana Breidy is a Programme Effectiveness Advisor for World Vision Middle East and Eastern Europe region. Follow Juliana on Twitter @JulianaBreidy