Outcome evaluation

Outcome evaluations measure the effect of programs or projects.

Examples include:

  • the impact of teacher professional learning on teaching practices
  • the effect of a conflict resolution program on student wellbeing
  • the difference that a new teaching strategy makes to learning engagement and learning outcomes.

Outcome measurement requires us to describe what has or hasn’t changed. To do this, we need:

  • clear objectives – the specific changes that we are trying to bring about. These might include changes in teaching practices, learning environments, student wellbeing, learning engagement or learning outcomes. The objectives are what point us to the evaluation criteria for assessing effectiveness. Developing a logic model is a great way to ensure that our intended outcomes are clearly articulated and realistic
  • meaningful indicators – observable, measurable signs that show movement towards or away from the specified outcomes
  • reliable data – existing or new ways of being able to observe change in these indicators.

Outcome measurement can be based on qualitative or quantitative data, and often uses a combination of the two. Triangulation using multiple sources of data is an important principle here. The main thing is to get a clear line of sight on relevant indicators of progress towards the goals.

Evaluation standards

Interpreting outcome data requires us to be clear about our evaluation standards.

How much positive change would we need to see before we say that something has been a success? What would a disappointing outcome look like? Establishing clear standards from the outset helps to set goals and avoid cognitive bias; for example, where the bar is retrospectively set too low (positivity bias), or where the bar is retrospectively set too high (negativity bias).
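
As a purely illustrative sketch, the standards can be written down before any results are seen and then applied mechanically once the data are in. The threshold values and the observed change below are hypothetical.

```python
# Illustrative only: hypothetical standards agreed before any results are seen.
SUCCESS_THRESHOLD = 5.0        # e.g. an average gain of at least 5 points counts as success
DISAPPOINTING_THRESHOLD = 0.0  # no gain, or a decline, counts as disappointing

def judge_outcome(observed_change: float) -> str:
    """Apply the pre-agreed standards to the observed change in the outcome measure."""
    if observed_change >= SUCCESS_THRESHOLD:
        return "success"
    if observed_change <= DISAPPOINTING_THRESHOLD:
        return "disappointing"
    return "partial progress"

# A hypothetical observed change of 3.2 points against baseline.
print(judge_outcome(3.2))  # partial progress
```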

Identifying improvement

To identify impact or improvement, we need a point of comparison. This can include the following (illustrated in a brief sketch after the list):

  • comparing outcomes internally against a baseline: 'Things are better than they used to be'.
  • comparing outcomes against an external standard or benchmark: 'We have lifted performance to above state average'.
  • comparing our trajectory with others who had a similar starting point: 'Things have gone better here than elsewhere'.
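
A minimal sketch of these three comparisons is below. All of the figures (our school's results, the state average and the comparison school) are hypothetical, and in practice the data would come from whatever measures the evaluation is using.

```python
# Hypothetical mean scores, used only to illustrate the three points of comparison.
our_baseline, our_latest = 62.0, 68.5          # our school, before and after the program
state_average = 66.0                           # external benchmark
similar_baseline, similar_latest = 61.5, 63.0  # a school with a similar starting point

# 1. Internal comparison against our own baseline.
our_gain = our_latest - our_baseline
print(f"Change against baseline: {our_gain:+.1f} points")

# 2. Comparison against an external standard or benchmark.
print(f"Above state average: {our_latest > state_average}")

# 3. Comparing trajectories with others who had a similar starting point.
similar_gain = similar_latest - similar_baseline
print(f"Our gain vs similar school's gain: {our_gain:+.1f} vs {similar_gain:+.1f}")
```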

Read more about causality, contribution and effect.

Explaining change

Explaining change is important. If we find that something has changed for the better, the natural next question is 'Why?' Others will want to know what made the difference so that they can learn from the success and apply it to other aspects of their work.

Similarly, if a project or change has failed to deliver the expected results, it presents a great learning opportunity to explore why not and make sure lessons can be learnt for next time. This is one of the reasons why outcome evaluation and process evaluation are so complementary.

Explaining change is also difficult. The complexity of school contexts and individual circumstances often makes it hard to prove cause and effect. Instead, evaluators need to mount a compelling argument about the extent to which a project appears to have made a contribution.

Definitive claims about cause and effect are easier to make when:

  • we are dealing with a simple situation and intervention
  • the outcomes are directly influenced by a narrow set of activities
  • the evaluation is designed to control for other factors that may influence the outcome (as sketched below).
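
One way to build that control into the design is to track a comparison group alongside the group receiving the intervention and compare the change in each. The sketch below uses hypothetical scores and is only meant to illustrate the idea, not to stand in for a properly designed evaluation.

```python
from statistics import mean

# Hypothetical pre/post scores for a group receiving the intervention
# and a comparison group that did not, over the same period.
intervention_pre, intervention_post = [55, 60, 58, 62], [63, 68, 64, 70]
comparison_pre, comparison_post = [56, 59, 57, 61], [58, 61, 58, 63]

# Change observed in each group.
intervention_change = mean(intervention_post) - mean(intervention_pre)
comparison_change = mean(comparison_post) - mean(comparison_pre)

# The difference between the two changes gives a rough sense of the
# contribution of the intervention, over and above what happened anyway.
print(f"Intervention group change: {intervention_change:+.2f}")
print(f"Comparison group change:   {comparison_change:+.2f}")
print(f"Difference in changes:     {intervention_change - comparison_change:+.2f}")
```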

A nuanced view

Outcome evaluation seeks to provide a nuanced view of the program, exploring who the program had an impact on, to what extent, in what ways and under what circumstances.

Very few programs have the same effect for everyone, and it is rare that changes introduced to a school or a system will simply work or not work in a binary fashion. Identifying people who are not responding to the program helps to target additional or alternative courses of action.

Unintended consequences

Keep an eye out for unintended consequences. Some programs or projects might have a negative effect for certain students or schools, despite having a positive effect for others. This may require us to have multiple evaluation criteria sitting alongside each other.

Example: Our school introduced a new initiative for gifted and talented students. Our evaluation looks at the effect for all students, where our dual criteria for success are (both checked in the sketch after this list):

  1. accelerated growth for gifted and talented students, and
  2. no decline in growth rates for other students.
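
A minimal sketch of checking both criteria side by side, using hypothetical growth figures:

```python
# Hypothetical annual growth rates, used only to illustrate dual success criteria.
gifted_growth_before, gifted_growth_after = 1.0, 1.4  # gifted and talented students
other_growth_before, other_growth_after = 1.0, 1.0    # other students

criterion_1 = gifted_growth_after > gifted_growth_before   # accelerated growth for gifted students
criterion_2 = other_growth_after >= other_growth_before    # no decline in growth for other students

print(f"Criterion 1 (accelerated growth): {criterion_1}")
print(f"Criterion 2 (no decline for other students): {criterion_2}")
print(f"Overall success: {criterion_1 and criterion_2}")
```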

Of course, unintended consequences can also be positive, for example:

  • We introduced a native garden to improve the appearance of the entrance to the school. Students showed interest in the garden, which led to its use in teaching and learning about natural systems, as well as establishment of an ‘enviro club’. The ‘enviro club’ provided a valued non-sport recreation option for students during recess and lunch.