Anticipating use – evaluation purposes and questions

Before we make any decisions about evaluation design or data sources, we first need to be clear about our purpose: why we are evaluating.

Organisational learning

Evaluation plays a crucial role in organisational learning and improvement. Just as assessment of student learning can serve a variety of purposes, evaluation can contribute to our knowledge and understanding in a number of ways.

Evaluation can:

  • guide our decisions about ongoing quality improvement and adjustment (similar to the purpose of formative assessment)
  • identify emerging needs, gaps and priorities (similar to the purpose of diagnostic assessment)
  • inform our decisions as we approach the end of a planning cycle, about whether to continue a project and if so in what form (a summative purpose).

When evaluation findings are shared beyond a school or project team, they can also inform policy and practice in other organisations by contributing to the broader evidence base about ‘what works’.

Formative and summative evaluation

The maturity of a program influences the purpose of evaluation. For a new concept or pilot, the evaluation may have a more formative purpose.

For example, it may be used to:

  • test key assumptions in the logic model (follow the link below for more information on logic modelling.)
  • establish ‘proof of concept’ and viability
  • assess the effectiveness of program components and processes
  • identify barriers to implementation, and ways they can be overcome
  • enable timely improvements
  • identify factors that will need to be taken into account if the program is to be expanded.

For an established system, process or program, the evaluation may also have a summative focus. This means standing back from the program itself to assess it against its claims.

These evaluations are more likely to:

  • have a stronger outcome evaluation design, depending on the data available and methods used (follow the link below for more information about outcome evaluation.)
  • include an assessment of value for money (follow the link below for more information about economic evaluation.)
  • be released in the public domain, to contribute to the broader body of knowledge in the field.

Even though there is a difference between formative and summative purpose, questions about process and outcome can still be asked at any stage. (Follow links below to read more.)

Accountability

Evaluation can also play a valuable role in accountability.

Accountability relationships can travel upwards in the department, downwards to students and communities, or horizontally to colleagues and partners.

If evaluation is intended for accountability purposes, at least some of the findings will need to be made available beyond the school or project team. People who are consulted as part of the evaluation need to be aware of this when giving informed consent. (Follow the link below for more information about ethical conduct in evaluation.)

Stakeholder agreement

The purpose of any evaluation needs to be agreed between the stakeholders. The purpose will determine the timelines, resources and methods to be used. It may be necessary to narrow the purpose of an evaluation so that it can be completed within time, resourcing and data constraints. Negotiation with stakeholders about purpose and scope should be undertaken as part of the evaluation planning stage.

Quality questions

The secret of good evaluation is to ask the right questions, and ask them well.

Good evaluation questions will help produce useful and credible findings, so that the evaluation makes a real contribution to practice.

Evaluation questions need to:

  • reflect the purpose of the evaluation
  • reflect what key stakeholders want to know
  • have more than one possible answer
  • focus inquiry beyond describing what has happened, towards making judgements about it (‘so what?’) and distilling implications (‘what next?’)
  • be able to be answered using reliable data
  • warrant the time, effort, and resources needed to find the answers.

Clarity

Clear evaluation questions clarify our purpose and guide our efforts. Evaluation questions differ depending on what type of evaluation we are doing.

Most evaluation efforts seek to address a combination of process and outcome questions. Economic evaluations tend to come later, once we have established that the program provides a good outcome.

Process evaluation
High-level question: What did we do, and how well did we do it?
Example subsidiary questions:
  • Was the program implemented as intended? If not, what was changed and for what reasons?
  • What barriers were faced in implementing this program, and what were the enablers?
  • How efficiently were the available resources used? Was there any wastage?
  • What can be learnt about how to implement a program like this smoothly in a similar school?

Outcome evaluation
High-level question: What happened as a result of the program?
Example subsidiary questions:
  • What difference did the program make for students? To what extent? In what ways?
  • Were there any unintended outcomes (positive or negative)?
  • What was it about the program that made such a difference?

Economic evaluation
High-level question: Did the initiative deliver good value for money?
Example subsidiary questions:
  • What is the value of the outcomes produced?
  • What did it cost in money, time, space and so on, to produce those outcomes?
  • Is the initiative cost-effective compared with other options?

The three evaluation types above (process, outcome and economic) are the main focus of the resources in this evaluation hub. Follow the links below for more information about process, outcome and economic evaluation.
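To illustrate the arithmetic behind the economic evaluation questions above, the sketch below compares two program options on a simple cost-effectiveness ratio (cost per unit of outcome). The option names, costs and outcome figures are entirely hypothetical, chosen only to show how the comparison works; real economic evaluations involve more careful costing and outcome valuation.

```python
# Illustrative sketch only: all figures below are hypothetical.
def cost_effectiveness(total_cost: float, outcome_units: float) -> float:
    """Cost per unit of outcome achieved (lower is better)."""
    return total_cost / outcome_units

# Hypothetical example: Option A costs $120,000 and brings 80 students
# to a benchmark; Option B costs $90,000 and brings 50 students.
ratio_a = cost_effectiveness(120_000, 80)  # 1500.0 dollars per student
ratio_b = cost_effectiveness(90_000, 50)   # 1800.0 dollars per student

better = "Option A" if ratio_a < ratio_b else "Option B"
print(f"Option A: ${ratio_a:,.0f} per student reaching benchmark")
print(f"Option B: ${ratio_b:,.0f} per student reaching benchmark")
print(f"More cost-effective: {better}")
```

Note that the cheaper option overall (Option B) is not the more cost-effective one here, which is exactly the distinction the "value for money" question is asking stakeholders to weigh.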

Focus

A well-focused evaluation will seek to address no more than 3-4 key questions. Evaluation questions are usually answered by combining relevant data from several sources.

Evaluation efforts are always limited by resources and time, so questions need to be prioritised. It is better to provide solid answers to a small number of key questions than provide superficial answers to a long list of questions.

Audience engagement is the key to prioritising evaluation questions:

  • Identify the audience for the evaluation. Who might be interested in the findings? Who might need to use them, or do something in response to them?
  • Consult with your evaluation audience, and engage them as stakeholders. What do they want to know? Why do they want to know that? What decisions are going to be informed by the findings?
  • Discuss evaluation design options and methods. It may be difficult to answer some questions to the satisfaction of all stakeholders. It’s important to come to an agreement about whether a particular evaluation design or data set is worth the effort. Stakeholder engagement is also crucial in coming to an agreement about evaluation criteria and standards. Follow the links below for more information.
  • Discuss timing. If the evaluation is designed to inform certain decisions, when do the stakeholders need the findings? Some compromises may need to be made, particularly for programs that take time to implement and produce outcomes. It may be suitable to stage the evaluation over time. For example:
    • benchmarking at the starting point
    • addressing questions about process early on to inform the ongoing development of the program, and then
    • turning to questions about outcomes once the program has had time to mature.