Outcome-Oriented Methods: In Theory and Practice

Date Published: July 27, 2024

The use of outcome-oriented methods is one of the three Key Elements of Results-Based Protection. This case study explores the theory behind them and provides examples of how humanitarian protection actors have used them in practice.

It looks at outcome-oriented methods as ways of working and tools that we can use throughout the program cycle: for analysis, design, implementation, and monitoring and evaluation (M&E). Recognizing the difficulty of measuring protection outcomes, this case study examines different monitoring and evaluation techniques and tools, and also explores how learning and adaptation can be built into ways of working.

Understanding Protection Risks as Complex and Interconnected

Humanitarian emergencies are complex. Every situation is unique—and changeable. Improving protection outcomes by reducing protection risks requires us to understand, and work to influence, the multiple factors that drive protection threats, people’s vulnerabilities, and their capacities.

Factors that affect protection outcomes include the environment; behaviors and attitudes of multiple stakeholders; relationships and power structures between individuals and groups; social norms and practices; international, state, local, and organizational policies; and more. These are usually interconnected. For example, social norms influence both power structures and individuals’ behavior and attitudes. Understanding the connections and influence of these factors is critical to identifying how to change protection outcomes.

Learn More: Systems Thinking

Systems Thinking

“When striving to reduce populations’ exposure to violence, we utilize Systems Thinking to understand the systems which armed groups, state actors, communities, and every relevant stakeholder operate within.”

Put simply, this is an approach to problem-solving that looks at the whole system and the relationships (interconnectedness) within it, rather than only at individual parts.

InterAction, Systems Thinking vs Design Thinking, What’s the Difference?

What Are Outcome-Oriented Methods?

Effectively reducing risks requires being focused on protection outcomes across the entire program cycle: for analysis, design, implementation, monitoring, and evaluation.

Outcome-oriented methods are ways of working and tools that help us navigate complexity and unique contexts when designing and delivering programs.

It can sometimes feel easier when designing a program to start by selecting activities that we are familiar with, that have already been used in the context or elsewhere. However, pre-determined activities are unlikely to adequately address interconnected protection risks and lead to better protection outcomes. In contrast, outcome-oriented methods can be used to design and implement responses that are based on the unique context and are adaptable to new learning and changes in the environment.

It might require us to reconsider the traditional linear model of doing a needs assessment, then implementing, then evaluating at the end. Outcome-oriented programming can instead be imagined as a series of loops, reflecting iterative learning and adaptation.

Learn More: Clocks and Clouds

Problem Analogy

A “clock” and “cloud” problem analogy helps conceptualize the differences between types of problems and the tools to address them.

Clock Problems

“Clock problems” are simple problems. Clocks only work the way they are designed to, and when something is broken, you know how to fix it and can do so using standard tools. In a humanitarian context, these are problems that have predictable factors and can be addressed using standardized program design, including standardized indicators and logframes.

Cloud Problems

“Cloud problems” are highly dependent on the surrounding environment. They are unpredictable and can only be solved by addressing interdependent factors. Most protection risks are “cloud problems”—influenced by many variables and continuously evolving. We get the best results in addressing these when we use tools that are suited for complexity.

Analysis

For a protection response to be outcome-oriented, it must be underpinned by analysis to understand not only the risks but also the system they take place within.

A context-specific protection analysis is central to designing an outcome-oriented protection approach. This is conducted to identify—with the community—protection risks, desired outcomes, and how changes could be made. The Protection Analytical Framework or InterAction’s Results-Based Protection (RBP) Analysis Framework can be used to identify what information is needed to undertake a protection analysis and how this can be organized and structured.

It is also necessary to understand the surrounding environment (or system) that protection risks take place within and are influenced by. Analysis tools for this include stakeholder mapping and relationship mapping. These analyses can then be used to identify the outcomes (i.e. reductions in specific protection risks) we are trying to achieve.

Integrating Analysis, M&E, and Learning: Norwegian Refugee Council (NRC)


Through its Civilian Self-Protection Program, NRC aims to support community actors to strengthen their existing capacities to prevent, reduce, and mitigate protection risks, complemented by relevant external interventions.

NRC developed a Protection Prevention M&E, Analysis, and Learning Plan, modeled after InterAction’s GBV Prevention Evaluation Framework (GBV PEF), designed to be an integrated and inseparable component of the program. This aims to strengthen protection and conflict analysis, inform program design, and assess how the program contributes to protection outcomes.

Protection analysis activities—such as community mapping and perceptions of insecurity exercises—that are usually seen as having only programmatic functions are also used for measurement. By using the same tools at the beginning and end of a program, NRC teams can compare the results to understand prevention outcomes related to community capacities, perceptions, and behaviors.

Design

Once the desired outcomes have been identified, an essential step follows: developing a context-specific theory and course of action for how these outcomes can be achieved. This requires a clear causal logic: defining what steps are needed, what actions need to be taken, and by whom, with the goal of a measurable reduction in risk.

A theory of change is a planning method that starts with the long-term goal (the desired protection outcome) and sets out the process and steps to achieving it. It identifies activities or actions that will be taken and explains the causal linkages—why they are expected to lead to the specific desired change. It might outline intermediate steps along the way and usually incorporates feedback loops so that the theory can be adjusted based on new learnings. Identifying steps that need to be taken might also mean identifying other actors that would need to be involved, encouraging protection teams to think about multi-disciplinary efforts needed to achieve protection outcomes, not just their own activities.

International Committee of the Red Cross (ICRC) and Theories of Change

Example Design

In 2019, ICRC embarked on designing and testing theories of change in its Prevention of Sexual Violence Program, putting the GBV PEF into practice.

This effort aimed to bridge the gap between systematic practices and institutional knowledge on the one hand, and field implementation and learning on the other. As an initial output, ICRC produced a global-level theory of change. This was then used as inspiration for theories of change in each of four pilot countries, developed in consultation with communities and duty-bearers.

When implementing these theories of change in their programming, ICRC field teams gathered evidence to test their assumptions and adapt their country-level theories. This evidence is also being used to identify gaps in the global theory of change.

How and By Whom?

Theory of Change

Developing a theory of change might sound daunting—and some theories of change are complex and lengthy—but it can be done through a straightforward process. The core aim of a theory of change is also straightforward: helping us to understand that “IF we do this activity, THEN this change will happen, BECAUSE of these factors.”

The InterAction training tool and facilitator’s guide, Theories of Change for Protection Outcomes, provides step-by-step training support to help organizations learn about and develop a context-specific theory of change in support of protection outcomes.

Who develops it also matters. A theory of change is developed to identify the change, and the steps toward that change, needed to achieve the desired protection outcomes. That is, it identifies the intermediate results (i.e., changes in the behavior of those posing the threat, changes in the behavior of those vulnerable to the threat, and strengthened coping mechanisms used by individuals to overcome it) and the actions that will be taken. As the underpinning document for a program, it follows that it should be developed by the program team in collaboration with monitoring, evaluation, accountability, and learning (MEAL) teams, and then validated with communities before actions or strategies are developed.
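To make the IF-THEN-BECAUSE logic concrete, the short Python sketch below shows one possible way a team might record a theory of change as structured data, so that assumptions can be listed and revisited as evidence comes in. It is a minimal illustration only: the outcome, intermediate result, actors, and assumptions shown are hypothetical and are not drawn from any of the programs described in this case study.

# Minimal, hypothetical sketch of a theory of change recorded as data.
# None of the content below is taken from an actual program.
theory_of_change = {
    "desired_outcome": "Reduced risk of violence along water collection routes",
    "intermediate_results": [
        {
            "if": "community leaders agree safe collection times with local authorities",
            "then": "fewer incidents are reported along collection routes",
            "because": "agreed and publicized arrangements reduce exposure to the threat",
            "assumptions": ["authorities keep to the agreed arrangements"],
            "actors": ["community leaders", "local authorities", "protection team"],
        },
    ],
}

def print_causal_logic(toc):
    # Print each IF / THEN / BECAUSE statement for review and validation with the team.
    print(f"Desired outcome: {toc['desired_outcome']}")
    for result in toc["intermediate_results"]:
        print(f"  IF {result['if']}")
        print(f"  THEN {result['then']}")
        print(f"  BECAUSE {result['because']}")
        print(f"  Assumptions to test: {', '.join(result['assumptions'])}")
        print(f"  Actors involved: {', '.join(result['actors'])}")

print_causal_logic(theory_of_change)

Keeping the assumptions explicit in this way makes it easier to revisit them as monitoring evidence comes in and to adjust the theory of change accordingly.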

Monitoring and Evaluation Outcomes

Once we have set out to try to change protection outcomes, we must be able to measure progress and success, not only to evaluate the impact of the protection intervention (did it actually change outcomes?) but also to monitor progress along the way. To measure outcomes at the end of a program, they need to have been thought about, integrated into design, and monitored (with learnings fed back and approaches adapted) from the start.

It is easier to measure outputs—such as the number of people receiving services, number of community leaders trained, or items distributed—than outcomes, and as a result, monitoring and evaluation often focuses on these. Measuring outcomes requires different tools and methodologies than are traditional in the humanitarian sector, such as qualitative and participatory methods.

Rather than relying, for example, on baseline-midline-endline surveys, we can use methodologies designed to capture and communicate learning on an ongoing basis. We can then feed learnings back into the theory of change and adapt program approaches and activities.

Ways of working within an organization might need to adjust to be able to implement these methodologies, perhaps requiring strengthening of internal collaboration between program and MEAL teams or changing how information collection is done. For example, using outcome mapping methods requires trusting relationships with community members; a MEAL team might work jointly with a program team to design Results Journal tools, with data collection then conducted by protection staff as part of their regular work with the community, utilizing their existing relationships.

To Measure Outcomes: CIVIC’s Results Journals in Nigeria

Example Methodologies

Results Journals are a type of outcome mapping. The methodology aims to capture behavior change by identifying and tracking issues and results, such as changes in behavior, along with the influence of project activities and any follow-up that could be taken. The tools are designed to be used regularly by community-facing staff, or even by community members, with the information then analyzed and acted on by the project team. “Journals” can be simple, such as a spreadsheet in Excel, or a more detailed narrative.
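As a purely illustrative sketch, and not a representation of CIVIC’s actual tool, the Python snippet below shows how a simple Results Journal kept as a spreadsheet (here, a CSV file) might be structured; the column names and the sample entry are hypothetical.

# Hypothetical sketch of a simple Results Journal kept as a CSV file.
# Column names and the sample entry are illustrative, not an actual tool.
import csv
import os

columns = [
    "date",                   # when the observation was recorded
    "observed_change",        # the result or change in behavior observed
    "who_changed",            # the actor whose behavior changed
    "project_contribution",   # how project activities are thought to have contributed
    "follow_up",              # any follow-up action the team plans to take
]

entry = {
    "date": "2024-07-01",
    "observed_change": "Community focal points report fewer incidents at the market",
    "who_changed": "Local security personnel",
    "project_contribution": "Follow-up visits after dialogue sessions",
    "follow_up": "Verify at next monthly review and update the action plan",
}

path = "results_journal.csv"
is_new_file = not os.path.exists(path) or os.path.getsize(path) == 0

with open(path, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    if is_new_file:
        writer.writeheader()  # write the header row only once
    writer.writerow(entry)

Entries recorded month by month in this way can then be compared over time by the project team, in the spirit of the iterative feedback loop described below.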

“CIVIC has employed Results Journals in their work in Borno State, Nigeria. Its frontline staff uses Results Journals to track changes in the environment monthly, looking specifically at the behavior of Nigerian state security forces and [allied armed non-state actors] … and CIVIC’s contribution to a specific protection result. The information collected in these journals allows CIVIC to adapt its targeted action plans by measuring immediate results, a key step in achieving a meaningful reduction in risk. If the journals indicate that an action plan is not successfully changing behavior and achieving results, CIVIC re-evaluates its strategies and develops a new action plan—establishing an iterative feedback loop.”

Monitoring Context and Risk Changes

As well as monitoring progress toward changing protection outcomes, it is also important to continue to monitor the context and changes in risk (i.e. the patterns of threat, vulnerability to the threat, and capacity to overcome it). As the crisis evolves and efforts are underway to stop, prevent, and change behavior of different stakeholders, this may (and, hopefully, will) change dynamics in communities and the surrounding environment. Monitoring context and risk changes allows us to conduct continuous analysis and adapt program approaches along the way.

Learning, Reflecting…

Using outcome-oriented methods doesn’t only mean introducing new tools. It also encourages us to stay focused, throughout the program, on outcomes and how these can be changed. It prompts us to seek out learnings and to regularly reflect on and review the effectiveness of interventions. This includes considering whether, how, and why change is happening as anticipated in the theory of change developed at the outset, and whether and how shifts in the external environment might have affected that change.

Promoting learning and reflecting can be done through using some of the monitoring approaches outlined above and by providing space for the program team—and community members and participants—to reflect on what they think is working, not working, or has changed.

… and Adapting

Underpinning the use of outcome-oriented methods is the principle of adaptation. Going back to the reflections we started with—that humanitarian emergencies are complex and that every situation is unique and changeable—the consequence of this is that the crisis context, risk patterns, and behavior and capacities of different stakeholders may change, and that the protection response may need to change, too.

Being adaptable means we need to know about changes as they occur using outcome-oriented monitoring techniques centered on community perspectives that sit alongside a continuous context-specific protection analysis. It also means that our own mindsets and the structures we build our programming on need to be flexible and adaptable. An updated protection analysis might identify new protection threats, or feedback from ongoing program monitoring at the community level might demonstrate that a protection activity isn’t bringing about the result anticipated, requiring an updated theory of change and the activity to be adjusted. In order to implement flexible approaches in protection programming, donor requirements must in turn enable project activities to be adaptable. Measures donors can take to support results-based protection might involve agreeing to project frameworks in which some activities can be defined with communities during implementation and can be adapted with minimal administrative effort.

RBP Questions to Consider

·       What outcome-oriented methods could you apply in your program?

·       What would others in the organization need to do to support these? Would any organizational changes be needed?

·       Do you and your staff think with an outcome-oriented mindset? What will it take to change the organizational culture to adopt more outcome-oriented ways of working?

·       Do you still think in “clocks,” using standardized indicators and logframes, or are you thinking in “clouds,” using iterative and adaptable methods?

RBP Questions to Consider for Donors

·       To what extent do the current project proposal and implementation requirements allow protection actors to implement flexible programming?

o   For example, to what extent are activities required to be pre-defined in project proposals? Is there sufficient budget flexibility to allow activity adaptation during implementation?

o   Would any changes to donor requirements be needed to allow protection activities to be adaptable to changes in need and context, while maintaining administrative efficiency for both partner organization and donor teams?

o   Could multi-year funding be used more extensively to support outcome-oriented protection programming?

 

·       To what extent do current logframe and monitoring and evaluation requirements encourage the use of outcome-measurement (usually qualitative) as well as output-measurement (usually quantitative) methodologies?

o   Would any administrative changes or knowledge improvements be required for donor teams to promote and support the use of outcome-measurement methodologies by partner organizations?
