Contribution-focused impact measurement to balance evidence and costs
In the world of social and environmental change, understanding “what works” often feels like chasing a moving target. Impact reports proudly cite increased incomes, better literacy rates, or lower deforestation. But one question lingers:
Did our intervention directly cause this, did we play a necessary part, or would the change have happened anyway?
The answer lies in two fundamental concepts in Impact Measurement and Management (IMM): Attribution and Contribution. Both are vital, yet often misunderstood. Let’s unpack why the distinction matters, how it shapes data collection, and what this means for building credible, meaningful impact narratives.
Attribution vs. Contribution: Defining the difference
Attribution seeks to establish a direct cause-and-effect link between a program and its outcomes. Imagine an NGO implements a job training initiative, and two years later, participants report significantly higher incomes. Attribution-oriented evaluations ask: How much of this change was solely due to our program and how confidently can we prove that?
Proving attribution is the gold standard in IMM and Monitoring & Evaluation (M&E), and the most appropriate approach for this goal is the Randomized Controlled Trial (RCT). By randomly assigning participants to treatment and control groups, RCTs construct a reliable counterfactual, essentially answering the question: What would have happened without our intervention?
The result is scientifically rigorous and delivers the strongest available evidence of impact. Demonstrating attribution is, of course, compelling to funders, policymakers and investors. But is it what stakeholders actually expect, is it realistic, and is it needed to prove and improve impact and to empower our stakeholders?
Time for a reality check: RCTs are expensive, resource-intensive and time-consuming, and they are especially difficult for social enterprises or NGOs to realise. Real-world programs unfold in complex, unpredictable environments, far removed from the controlled conditions of a laboratory. Randomizing access to essential services or infrastructure isn’t always ethical or politically viable.
This is where Contribution becomes relevant.
Rather than striving to isolate an intervention as the sole cause of change, contribution-based approaches explore the program’s role within a broader, interconnected system of factors. They ask:
- To what extent did our intervention help create the observed change?
- How did our program interact with other actors, policies, or conditions?
Contribution acknowledges that outcomes, at the stakeholder and especially the societal level, are rarely the product of one actor alone. Multiple influences, such as market dynamics, parallel programs, policy shifts, or community efforts, intertwine to shape results. In this context, chasing perfect causal proof is often impossible, but that doesn’t mean we abandon rigor.
Contribution-focused evaluations aim to build a plausible, evidence-backed case that an intervention played a meaningful part in driving positive outcomes. They rely on tools like quasi-experimental designs, longitudinal studies and mixed-methods approaches to uncover relevant data that answers the following questions:
- Has the intervention influenced the observed result?
- Has the intervention made an important contribution to the observed result?
- Why has the result occurred?
- Is there credible evidence that the intervention plausibly played a part in these changes, even if other factors were involved?
Let’s have a closer look at an approach with a great balance of usability and feasibility.
Navigating complexity with longitudinal studies and counterfactual approximation
Longitudinal studies follow the same individuals, households, or communities over an extended period to track how key impact indicators evolve before, during, and after an intervention. This approach provides essential temporal context for understanding whether observed changes correspond with program activities. Typically, longitudinal studies combine:
- Baseline surveys, capturing conditions before the program starts
- Midline surveys, showing progress and identifying early changes
- Endline surveys, measuring results at the end of the intervention and/or at defined times after the intervention
This approach doesn’t offer airtight proof of causality, but it provides strong evidence of whether meaningful change occurred after the intervention and whether those changes align with the program’s timeline and intended outcomes.
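To make the idea concrete, here is a minimal sketch of how panel data from baseline, midline and endline surveys might be organised and summarised, assuming a hypothetical dataset with columns respondent_id, wave and monthly_income (all names and figures are illustrative, not real program data). The point is simply that measuring the same respondents repeatedly makes the size and spread of change over the program’s timeline visible.

```python
import pandas as pd

# Hypothetical panel: the same respondents surveyed at baseline,
# midline and endline (column names and values are illustrative).
surveys = pd.DataFrame({
    "respondent_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "wave": ["baseline", "midline", "endline"] * 3,
    "monthly_income": [120, 150, 210, 90, 95, 140, 200, 220, 260],
})

# Pivot so each respondent has one row with a column per survey wave.
panel = surveys.pivot(index="respondent_id", columns="wave",
                      values="monthly_income")

# Change per respondent between baseline and endline.
panel["change"] = panel["endline"] - panel["baseline"]

# Summaries that feed a contribution narrative: how large and how
# widespread is the change observed over the program's timeline?
print(panel["change"].describe())
print("Share of respondents with an improvement:",
      (panel["change"] > 0).mean())
```

The same structure extends naturally to additional waves, further indicators, and disaggregation by stakeholder group.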
Data from longitudinal studies strengthens contribution claims by:
- Demonstrating that change occurred after the intervention, reducing the likelihood that observed outcomes are unrelated to program activities
- Providing evidence of the depth and duration of the outcomes
- Supporting adaptive management by informing mid-course corrections
For many social programs, particularly in complex environments, longitudinal studies form the backbone of IMM by offering valuable insight into how outcomes unfold, how programs interact with broader dynamics, and where further learning is needed.
At leonardo, we complement longitudinal tracking with a pragmatic approach to approximating the counterfactual by seeking to understand what might have happened if the intervention had not taken place. Specifically, we assess contributions by asking:
- Are there alternative solutions that could plausibly have led to similar outcomes?
- Were there other events or external factors that contributed to the observed changes?
These questions allow us to situate the observed outcomes within a broader causal landscape and differentiate between program effects and parallel influences. By combining longitudinal data with this contribution lens, we build a more nuanced understanding of change, grounded in both empirical trends and contextual insight.
This blended approach strengthens our IMM practice by balancing rigor with realism, especially in settings where randomized controlled trials are impractical or undesirable.
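As a simple illustration of this contribution lens, the sketch below compares the average change among program participants with the change in a non-randomised comparison group, in the spirit of a quasi-experimental difference-in-differences estimate. The group labels and figures are hypothetical assumptions for illustration, not a prescribed method; in practice such a comparison would sit alongside the contextual questions above.

```python
import pandas as pd

# Hypothetical baseline and endline outcomes for program participants
# and a non-randomised comparison group (all figures are illustrative).
df = pd.DataFrame({
    "group": ["participant"] * 4 + ["comparison"] * 4,
    "baseline": [100, 110, 95, 105, 102, 98, 108, 100],
    "endline":  [150, 160, 140, 155, 115, 110, 120, 112],
})

# Average change per group between baseline and endline.
df["change"] = df["endline"] - df["baseline"]
avg_change = df.groupby("group")["change"].mean()

# Difference-in-differences style estimate: the change among
# participants beyond what the comparison group experienced anyway.
estimated_contribution = avg_change["participant"] - avg_change["comparison"]
print(avg_change)
print("Estimated contribution beyond the comparison trend:",
      estimated_contribution)
```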
Contribution as credible evidence of impact
Understanding the difference between attribution and contribution is essential. It influences how we measure, interpret, and communicate impact in meaningful ways.
RCTs remain the gold standard as they offer the most robust method for proving that an intervention caused specific outcomes. However, in many of the complex and resource-limited settings where impact-driven organisations work, implementing RCTs is often impractical or infeasible.
This is where contribution-focused approaches provide real value. When applied with rigor, they yield credible and actionable insights that align with the real-world dynamics of social change. They support the development of evidence-based narratives that show how programs played a significant role in achieving positive (or negative) outcomes, even without definitive proof of causality.
These approaches recognise the complexity of change, encourage continuous learning, and promote honest reflection. Instead of striving for absolute certainty, they enable transparent and responsible communication. For many organisations, this way of working is an effective path to improving programs, building trust with stakeholders, and creating meaningful, lasting impact.
Want to know more?
Get in touch with us and start measuring impact confidently.