Results-Based Protection Tip Sheet: Iterative Evaluation Practice for Protection

Date Published: July 2, 2018
Author: InterAction

The humanitarian community has come under increased pressure to achieve more meaningful results in protection programming and to demonstrate outcomes in the form of measurably reduced risk. Despite the challenges of measuring tangible protection outcomes in humanitarian crises, a results-based approach supports both the measurement of risk reduction and the continual adaptation of interventions as risk factors change. This depends on diagnosing risk patterns through context-specific analysis, establishing mechanisms to track and measure changes in the risk environment, and using that analysis to inform relevant strategies. Approaching learning and evaluation iteratively also supports accountability for actions taken and provides a basis for incorporating lessons into future action.

How can humanitarian actors understand changes in constantly evolving contexts and pivot programming responsively? How can evaluation methods support adaptation in the dynamic settings of humanitarian crises?

There are no easy answers or tailor-made solutions. Evaluating protection outcomes requires understanding context and grappling with interconnected systems and a diverse range of actors and relationships. Building a nuanced understanding of context entails multiple strategies and methods for data collection and analysis, grounded in the perspectives of the individuals and communities directly affected by the risk. It also requires actors to view “failure,” iteration, and adaptation as expected and necessary aspects of problem-solving rather than as regrettable lapses. Standardized checklists, indicators, and evaluations rigidly aligned with the traditional program cycle may not be well suited to assessing complexity or to real-time learning for course correction. However, some common tools you may already have can be adapted for iterative evaluation.

In the coming weeks, InterAction will be releasing a series of “tip sheets” offering helpful considerations, resources, and examples of good practice related to cultivating an evaluative mindset and using evaluation to adapt interventions for protective impact. The first installment focuses on establishing “evaluability” for protection interventions: defining the purpose of an evaluation and determining the criteria for success.

As we continue to build our evidence base, we’d love to hear from you and your teams about your experiences with continuous reflection, learning, and adaptation, and about the tools and methods that help you do so. How do you approach designing, implementing, monitoring, and evaluating programs in an iterative manner? Do you have methods, tools, or stories to share? Let us know in the form below!
