Factual approach in project evaluation
The factual approach looks at patterns in outcomes, and is an alternative where it is not possible to create a credible comparison group.
Factual analysis compares the actual results to those expected if the theory of change or program logic were true. When, where and for whom did the outcomes occur? Are these results consistent with the theory that the program caused or contributed to the results?
Possible methods include:
- Comparative case studies – did the program produce results only in cases when the other necessary elements were in place?
- Dose-response – were there better outcomes for participants who received more of the program (for example, attended more of the workshops or received more support)?
- Beneficiary/expert attribution – did participants or key stakeholders believe the program had made a difference, and could they provide a plausible explanation of why this was the case?
- Predictions – did those participants or sites predicted to achieve the best outcomes (because of the quality of implementation and/or favourable context) do so? How can anomalies be explained?
- Temporality – did the outcomes occur at a time consistent with the theory of change – not before the program was implemented?
For an outcome evaluation of a program with a reasonably accepted theory of change or program logic, the design can be shaped by how much focus needs to be on each of the cause-and-effect links underpinning the program.
If the link between an output, immediate outcome, intermediate outcome and long-term outcome has been well established by previous research, it may be enough to evaluate only whether the immediate outcome has been achieved, because it can then be inferred that the desired longer-term outcomes are likely to follow. This approach is particularly relevant for programs whose final outcomes will not be achieved for many years – for example, some natural resource management programs.
If these links are not well established, a larger evaluation effort may be needed. The larger evaluation would need to gather information about how and why an initiative is (or isn't) working by:
- examining in greater detail the links between all outputs, outcomes and impacts
- gathering data over a longer time period.
In the early stages of a new or pilot program it is often important to begin with an exploratory assessment of outcomes, to:
- form a basis for developing suitable measures, or
- allow time for the implementation process to become firmly established.