Evaluate the Results
The evaluation phase is where impact meets accountability. This phase requires organizations to move beyond anecdotal evidence and embrace data-driven insights to assess whether a social innovation is achieving its intended outcomes effectively. Evaluation isn’t just about proving success; it’s about learning what works, what doesn’t, and why. By identifying and measuring key performance indicators using both qualitative and quantitative methods, changemakers can revise their intervention strategies and improve outcomes.
Although evaluation is represented as the “final phase” of the Social Impact Cycle, measurements, analysis, and adjustments should occur throughout all stages of the cycle whenever appropriate. However, formalized evaluation should always follow the implementation of an intervention to ensure intended outcomes are achieved and meaningful improvements are being made.
This stage ensures that ventures remain aligned with their mission and responsive to the communities they serve.
The toolbox below contains information on strengthening evaluation approaches and building a foundation for lasting impact.
Outputs, Outcomes and Impact
Know how to measure true impact.
Outputs, outcomes, and impact are all terms used to classify the changes that occur after an intervention is implemented, but each highlights a different effect.
Outputs: The direct products, services, or activities generated by a social intervention. They are typically quantitative and measure what the intervention has done or delivered, the tangible evidence of the work. (e.g. affordable housing units built).
Outcomes: The specific changes that occur among a target population following the intervention. These changes represent shifts in the negative consequences of the social problem being addressed and can be short-term or long-term. Outcomes measure what changed for the affected population and not what actions were performed by those implementing the intervention. (e.g. decrease in the number of unsheltered individuals observed locally).
Impact: The portion of outcomes that can be directly attributed to the intervention. It represents the change that happened specifically because of the intervention, excluding outside influences that might also have shaped the outcomes. (e.g. an observable drop in homelessness rates in the city during the time the intervention was present).
The goal behind measuring outputs, outcomes, and impact is to appropriately gauge whether an intervention is succeeding or falling short. However, measuring the wrong data points could create a false correlation between intervention efforts and overall results. During the evaluation process, it’s crucial to account for influences outside of the intervention that may have had an impact on project outcomes. Narrowing down target data sets can clarify intervention results.
Strive to measure at all three levels, evaluating outputs for operational management, outcomes for intervention improvement, and impact for strategic decisions and stakeholder communication.
Can you explicitly name your desired outputs, outcomes, and impact?
Evaluate Design
Deciding how to test the intervention.
Evaluation design is a conceptual plan that outlines how data will be collected and analyzed to answer key evaluation questions about an intervention. A strong evaluation design clarifies what information will be gathered, who will be included in the evaluation, how participants or comparison groups will be selected, and what methods will be used to analyze the results. These decisions help ensure that the findings are credible, meaningful, and useful for improving programs or informing future efforts. Below are three common evaluation designs that can be used to assess program effectiveness, each offering different levels of rigor and evidence.
Single-Group Design
This is the most basic and common form of evaluation, often called a “pre-post” or “before-and-after” measurement. Single-group evaluation tracks participants before and after an intervention. It measures the changes within that same group without contrasting the results with a comparison group—a similar group that did not receive the intervention. While limited in what it can prove, this single-group design is usually the most financially accessible evaluation option and can be a valuable resource for understanding whether a change occurred in the target population.
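The pre-post logic can be sketched in a few lines of code. This is a minimal illustration with hypothetical survey scores, not a full statistical analysis; the function name and data are invented for the example.

```python
def pre_post_change(pre_scores, post_scores):
    """Average change per participant (post minus pre) in a single group."""
    if len(pre_scores) != len(post_scores):
        raise ValueError("Each participant needs both a pre and a post score.")
    diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(diffs) / len(diffs)

# Hypothetical scores from the same five participants,
# measured before and after the intervention.
pre = [10, 12, 9, 14, 11]
post = [13, 15, 10, 16, 14]

avg_change = pre_post_change(pre, post)
print(avg_change)  # prints 2.4 — average improvement within the group
```

Note that a positive average change shows that something shifted, but on its own it cannot say whether the intervention caused the shift; that is exactly the limitation the next two designs address.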
Comparison-Group Design
The comparison-group design compares the outcomes for participants in the program (treatment group) to outcomes for a similar group that did not receive the intervention (comparison group). By tracking both groups, evaluators can determine whether the targeted circumstances of the treatment group improved significantly more than those of the comparison group. Marked improvement in the treatment group can then be reasonably attributed to the intervention. This evaluation design provides higher-quality data and is still a relatively accessible measurement tactic.
Experimental Design (Randomized Controlled Trials-RCTs)
Considered the “gold standard” of evaluation design, experimental design uses the random assignment of participants to eliminate selection bias and provide the strongest possible evidence of an intervention’s impact. Because participants are assigned randomly rather than based on their characteristics, the treatment group and the comparison group should be statistically identical at the start of the study. This includes both measured characteristics (such as age, income, and education) and unmeasured characteristics (such as motivation, family support, or resilience). As a result, the only systematic difference between the two groups is whether they received the intervention. With only this single difference, any variations in outcomes at the end of the study can be confidently attributed to the program rather than to preexisting disparities, external factors, or alternative explanations.
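The defining step of an RCT is random assignment itself. A minimal sketch, assuming a simple two-arm trial with an even split (real trials often use stratified or blocked randomization):

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into treatment and control groups.

    Because assignment ignores every participant characteristic,
    the two groups should be statistically similar on both measured
    and unmeasured traits before the intervention begins.
    """
    rng = random.Random(seed)          # seeded for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

participants = [f"person_{i}" for i in range(10)]
treatment, control = random_assignment(participants, seed=42)
print(len(treatment), len(control))  # prints 5 5
```

Any rule that assigns participants based on their characteristics (for example, enrolling the most motivated applicants first) reintroduces the selection bias that randomization is designed to remove.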
What would the results of a pre- and post-test evaluation say about your intervention?
Organizational Learning
Create a culture of continuous improvement.
Just as individuals must continually learn and grow, social impact organizations need to create systems that support collective learning. Organizational learning focuses on creating a culture and implementing practices that support continuous improvement and adaptation across the entire organization. This process is vital for those working in social impact to remain effective and relevant in an environment where social problems evolve, community needs shift, and new solutions emerge. Working to improve organizational systems, increase knowledge sharing, and refine communication are all potential tactics for prioritizing organizational learning.