On March 5th, the U.S. Holocaust Memorial Museum convened a group of academics, practitioners, and policy-makers to take stock of existing knowledge on the role of civil society and civilian self-protection mechanisms in preventing and mitigating violence and mass atrocities, as well as to identify gaps in knowledge and areas for future inquiry and research.
In this first of a series of papers on trends in Community Innovation, Gallen Maclusky will explore two approaches that have their roots in the practice and rigour of design: Design Thinking and Social Labs. These approaches draw heavily on Community Engagement, iterative and experimental processes, and creative collaboration as pathways to effect change.
This note from USAID collects a few tips, targeted at those funding and managing projects and partnerships, on how to proactively cultivate an environment that enables collaborating, learning, and adapting (CLA) to flourish.
In this series of tip sheets, InterAction highlights helpful considerations, resources, and examples of good practice for cultivating an evaluative mindset and using evaluation to adapt interventions for protective impact. The previous installments in this series focused on establishing “evaluability” for protection interventions, defining the purpose and determining the criteria for success, and selecting evaluation approaches and methods. Iterative evaluation practice requires an enabling environment that supports feedback loops, whereby analysis and recommendations feed into decision-making and programmatic and strategic adaptation. This final tip sheet highlights a few considerations for the resources, processes, and organizational culture that support iterative evaluation for protection.
In this series of tip sheets, InterAction highlights helpful considerations, resources, and examples of good practice for cultivating an evaluative mindset and using evaluation to adapt interventions for protective impact. The first installment of this series focused on establishing “evaluability” for protection interventions, defining the purpose and determining the criteria for success. This tip sheet outlines the next stages in the process – from refining our evaluation questions to considering which methods and approaches can help us learn and adapt in an iterative way.
In this series of tip sheets, InterAction highlights helpful considerations, resources, and examples of good practice for cultivating an evaluative mindset and using evaluation to adapt interventions for protective impact. The first installment of this series focuses on establishing “evaluability” for protection interventions, defining the purpose, and determining the criteria for success.
In 2011, the IASC Principals agreed to five Commitments on Accountability to Affected Populations (CAAP) as part of a framework for engagement with communities. The revised version was developed and endorsed by the IASC Principals in November 2017 to reflect essential developments such as the Core Humanitarian Standard (CHS), the work done by the IASC on inter-agency community-based complaints mechanisms including PSEA, and the importance of meaningful collaboration with local stakeholders, which came out as a priority recommendation from the 2016 World Humanitarian Summit and in the Grand Bargain.
In the fall of 2017, InterAction initiated a Call for Examples of tools, methods, and approaches that collect and analyze data and measure changes in protection risks and protection outcomes.
In this blog post, evaluator Barbara Klugman discusses how social network analysis (SNA) can be a useful results-based method in pursuit of outcomes.
In this American Evaluation Association blog piece, blogger Jessie Tannenbaum teases out what to consider in our evaluation design and planning to ensure effective interlingual communication with affected populations.