Evaluation
Direction and guidance for the evaluation of department programs, projects, strategies, policies and initiatives to support the effective, efficient, appropriate and transparent use of public resources.
Audience
All department staff.
Version | Date | Description of changes | Approved by |
---|---|---|---|
V01.0.0 | 26/07/2024 | Under the 2023 Policy and procedure review program, new policy document with consolidated instructions previously provided in the Evaluation Policy and Evaluation framework. | Executive Director, Policy and Evidence, CESE |
About the policy
These procedures relate to the Enterprise management policy.
Definitions
Term | Definition |
---|---|
Administrative data | Information that organisations collect as part of their ongoing management and operations. Examples of administrative data in the department include attendance and enrolment records, and student assessment data. |
Evaluation | A rigorous, systematic, transparent and objective process to make judgments about the implementation, impacts and merits or worth of a program, usually in relation to its effectiveness, efficiency and appropriateness. |
Evidence based decision-making | The process of gathering and processing data into meaningful information and statistics, the interpretation of which builds knowledge and provides the basis for making informed decisions. |
Mixed methods | A combination of qualitative and quantitative research and/or evaluation approaches. |
Program | A set of activities managed together over a sustained period that aim to deliver benefits for communities. ‘Program’ is sometimes used interchangeably with policy, strategy, project, or initiative. Programs may include one or more projects that aim to deliver a specific product or output and achieve a strategic outcome within a specific timeframe and budget. |
Qualitative data | Information that tends to be a collection of thoughts, observations, feelings, opinions and/or lived experiences and is not easily reduced to numbers. Qualitative data helps us answer questions about the 'what', 'how' and 'why' of a phenomenon, rather than questions of 'how many' or 'how much'. |
Quantitative data | Information that can be expressed as numbers. This allows for various forms of analysis, including descriptive statistics (like averages, counts, percentages, and differences) to summarise the data, and inferential statistics to draw conclusions about a larger population from a smaller sample. Both types of analyses are instrumental in understanding large datasets, identifying trends over time, and exploring differences across groups. A brief illustrative sketch follows this table. |
Treasury Guidelines (NSW Treasury Policy and Guidelines: Evaluation) | The Treasury Guidelines set out the mandatory requirements, recommendations and guidance for NSW Government agencies and entities to plan for and conduct the evaluation of policies, projects, regulations and programs. The Treasury Guidelines are part of an investment framework that informs policy and budget setting in NSW. |
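To illustrate the descriptive and inferential analyses named in the 'Quantitative data' definition, the following sketch summarises a small hypothetical sample and estimates a range for the population mean. It is an illustrative sketch only: the attendance figures are invented and the method is a generic example, not a prescribed departmental approach.

```python
# Illustrative sketch only: hypothetical attendance rates for a small sample
# of schools. All figures are invented for illustration.
import math
import statistics

attendance_rates = [91.2, 88.5, 93.1, 90.4, 87.9, 92.6, 89.8, 91.5]

# Descriptive statistics: summarise the sample itself.
mean_rate = statistics.mean(attendance_rates)
stdev_rate = statistics.stdev(attendance_rates)
print(f"Sample mean: {mean_rate:.1f}%, standard deviation: {stdev_rate:.1f}")

# Inferential statistics: estimate a plausible range for the population mean
# from the sample (an approximate 95% confidence interval; a t distribution
# would be more appropriate for a sample this small).
standard_error = stdev_rate / math.sqrt(len(attendance_rates))
lower = mean_rate - 1.96 * standard_error
upper = mean_rate + 1.96 * standard_error
print(f"Approximate 95% CI for the population mean: {lower:.1f}% to {upper:.1f}%")
```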
Roles and responsibilities
Secretary:
- has overall responsibility for evaluation in the department.
Senior executive staff:
- ensure compliance with this policy.
Deputy Secretary, Education and Skills Reform:
- approves the annual evaluation schedule to be submitted to the Cabinet Standing Committee on Expenditure each financial year.
Executive Director, Policy and Evidence, Centre for Education Statistics and Evaluation (CESE):
- endorses this policy.
Director, Evaluation and Effectiveness, CESE:
- implements, monitors and reviews this policy.
What needs to be done
The Centre for Education Statistics and Evaluation (CESE) provides leadership, advice, expertise and support for program evaluation and evaluation capacity building across the department. CESE also recognises the role of other evaluation teams in the department that may undertake monitoring and evaluation of programs.
This policy applies to evaluations initiated by department divisions.
It also includes evaluations required by:
- NSW Treasury
- funding bodies
- external program requirements.
1. Plan for evaluation, monitoring or periodic review of all programs
1.1. Plan for evaluation of programs
All programs should undergo some form of evaluation, monitoring or periodic review.
This should happen even if the program is not identified in the department’s annual evaluation schedule or assessed as a priority based on the Treasury Guidelines.
Program monitoring and evaluation should be conducted according to annual evaluation schedules. These schedules are approved by the department’s Executive and submitted to the Cabinet Standing Committee on Expenditure each financial year.
Programs should be evaluated as part of the department’s annual evaluation schedules if they:
- are central to the achievement of department, state or national priorities
- involve large-scale investment
- are resource intensive
- are of significant complexity or risk.
Programs that are wholly or partly funded by other government agencies or non-government organisations must be:
- evaluated
- included in the department’s annual evaluation schedules if the department is the lead agency.
1.2. Develop evaluation plans for policy proposals to Cabinet
Business cases to the Cabinet Standing Committee on Expenditure should include an evaluation plan for:
- a new policy proposal
- a recurrent proposal to expand or significantly reform an existing program.
Proposals should specify and quarantine an evaluation budget and an explicit date for review or evaluation.
Staff preparing these evaluation plans must consult with CESE.
For more information on business case development with respect to evaluation planning, refer to the NSW Government Business Case Guidelines (PDF 1 MB).
2. Develop evaluations
2.1. Comply with evaluation requirements
When developing evaluations, staff must:
- comply with NSW Treasury guidelines
- embed the principles from the Culturally Responsive Evaluation Framework in the evaluation.
The NSW Treasury Policy and Guidelines: Evaluation (PDF 9 MB) require that all NSW Government agencies monitor and evaluate their programs, both ongoing and new, to assess their achievement of intended outcomes and benefits to the people of NSW.
Evaluations must be conducted in accordance with the general principles and requirements of Treasury guidelines. The scale of an evaluation should be proportionate to the size and significance of the program according to the guidelines’ 3-scale model.
The Treasury guidelines require that evaluations of programs of significant size, government priority and risk are prioritised in agencies’ annual evaluation schedules.
Evaluations must be informed by the department’s Re-imagining Evaluation: A Culturally Responsive Evaluation Framework.
The framework highlights the importance of placing Aboriginal students, their families, and communities at the heart of evaluation methodology and processes.
Evaluators should recognise, respect and be responsive to the cultural values of participants, communities and settings for which the program intends to create impact.
This includes honouring the principles of Aboriginal family sovereignty and Aboriginal data sovereignty.
2.2. Decide on the evaluation type
Program owners should consider the following types of evaluation:
- process evaluations focus on program implementation and delivery
- outcome evaluations help determine whether a program has met its objectives
- economic evaluations consider the program’s costs, benefits, and value for money.
Other forms of performance measurement may also complement the evaluation types outlined above.
Process or implementation evaluation investigates how programs are delivered, describing the current conditions and identifying issues that may support or hinder success. The evaluation assesses whether activities are being implemented as intended, which aspects of a program are working well, and which could be improved, to inform adjustments to service delivery.
Examples of process evaluation questions include:
- Is the program being implemented as planned?
- How well is the program operating?
- What are the barriers or facilitators to implementing program activities?
- Which program activities are meeting the needs of participants and other key stakeholders?
Outcome or impact evaluation determines whether a program has met its stated objectives. This evaluation type also considers:
- the intended and unintended effects of a program
- whether the program works for particular populations and under what circumstances.
Outcome evaluation requires certain conditions to produce robust findings:
- sufficient time to show an effect
- reliable data
- measurable outcomes
- an adequately sized and representative sample.
Examples of outcome evaluation questions include (a brief illustrative sketch follows this list):
- Did the program meet its stated objectives?
- What difference did the program make?
- Who has benefited from the program, how, and under what circumstances?
- Are there any unintended consequences for participants or stakeholders?
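As a minimal sketch, assuming outcome scores are available for a program group and a comparison group, the example below shows one simple way to quantify 'what difference did the program make'. All scores are hypothetical, the design is deliberately simplified, and a real outcome evaluation would also need to satisfy the conditions listed above (sufficient time, reliable data, measurable outcomes and an adequately sized, representative sample).

```python
# Illustrative sketch only: comparing hypothetical outcome scores for a
# program group and a comparison group. Requires scipy.
from scipy import stats

program_group = [72, 75, 81, 78, 74, 80, 77, 83, 76, 79]
comparison_group = [70, 73, 71, 74, 69, 72, 75, 71, 73, 70]

# Descriptive comparison: the raw difference in mean scores.
difference = (sum(program_group) / len(program_group)
              - sum(comparison_group) / len(comparison_group))

# Inferential check: is the observed difference larger than sampling
# variation alone would suggest?
result = stats.ttest_ind(program_group, comparison_group)
print(f"Mean difference: {difference:.1f} points, p-value: {result.pvalue:.3f}")
```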
Economic evaluation identifies, measures, and values the costs and benefits of a program. Economic evaluation assigns a value to a program’s inputs and outcomes. Therefore, a quality economic evaluation can only be done when a program is producing reliable outcomes data.
There are various methods for economic evaluation; however, the 2 that the Treasury guidelines emphasise are:
- cost-benefit analysis, which should be used when assessing the net social benefits and value for money of a program. It is particularly suitable for large, complex or risky programs (a worked sketch of the underlying arithmetic follows this list)
- cost-effectiveness analysis, which should be used when assessing value for money of a program where it is not feasible to quantify or monetise benefits.
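As a minimal sketch of the arithmetic underpinning cost-benefit analysis, the example below discounts hypothetical annual costs and benefits to present values and derives a net present value (NPV) and benefit-cost ratio (BCR). The figures and the 5 per cent discount rate are assumptions for illustration only; the NSW Government Guide to Cost-Benefit Analysis (listed under supporting resources below) sets out the required approach and parameters.

```python
# Illustrative sketch only: all dollar figures and the discount rate are
# hypothetical assumptions, not Treasury parameters.
annual_costs = [1_000_000, 400_000, 400_000, 400_000]  # years 0-3
annual_benefits = [0, 600_000, 900_000, 1_200_000]     # years 0-3
discount_rate = 0.05                                   # assumed for illustration

def present_value(cash_flows, rate):
    """Discount a series of annual cash flows back to year 0."""
    return sum(value / (1 + rate) ** year for year, value in enumerate(cash_flows))

pv_costs = present_value(annual_costs, discount_rate)
pv_benefits = present_value(annual_benefits, discount_rate)

print(f"NPV: ${pv_benefits - pv_costs:,.0f}")  # net social benefit in present-value terms
print(f"BCR: {pv_benefits / pv_costs:.2f}")    # value for money: benefits per dollar of cost
```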
Program owners may wish to use additional performance measurement activities, including:
- monitoring
- ‘deliverology’
- implementation-focused activities that track and target key performance metrics.
While these activities can complement the other forms of evaluation, they cannot determine the program's overall effectiveness or its achievement of intended outcomes. Consequently, on their own, these activities do not satisfy the evaluation requirements and recommendations outlined in the Treasury guidelines.
2.3. Consider the general principles of evaluation
Staff need to consider the principles outlined in Table 1 when planning for an evaluation.
Table 1 Principles of evaluation
Principle | Action and resources |
---|---|
Plan your evaluation early | Ideally, evaluations should be planned during the program design stage to ensure that programs can be evaluated, and to increase the evaluation’s robustness. Periodic program evaluation, including before, during and after program implementation, should be embedded into program design. Supporting resources are listed under ‘Supporting tools, resources and related information’ below. |
State the purpose clearly | Evaluation stakeholders should have a clear understanding of an evaluation’s purpose and how the findings are intended to be used. This includes knowing why the evaluation matters, to whom the findings will be important and why, and an awareness of the context of the evaluation. |
Ensure it is appropriately resourced | Evaluations should be appropriately resourced, considering what is feasible and realistic to achieve within time and budget constraints. Resources for evaluation should form part of a program’s overall budget. Based on the Treasury guidelines, one to five per cent of the overall program budget should be allocated to monitoring and evaluation (a worked example follows this table). |
Ensure evaluation is rigorous and methodologically sound | Evaluations should be rigorous, systematic and objective, with appropriate scale and design. Evaluations should be methodologically sound and replicable in accordance with the program’s size, risk, priority and significance. Robust evaluation design incorporates the most appropriate methods specific to that evaluation, including quantitative and/or qualitative methods. |
Consider ethical issues and requirements | Ethical considerations and requirements should be incorporated into the design and conduct of evaluations. Supporting resources are listed under ‘Supporting tools, resources and related information’ below. |
Ensure effective governance and oversight | Evaluations should have effective governance structures and processes in place to ensure oversight of the evaluation design, implementation and reporting. These should include clear roles and responsibilities and should support the independence of the evaluators and the reporting lines. |
Ensure the appropriate mix of expertise, independence and impartiality | Evaluations should be conducted with the right mix of expertise, impartiality and independence from program owners. Evaluation teams should be formed based on the principles of diversity, inclusion and equity. Stakeholders should be identified and actively involved in the evaluation process. This will ensure that the definition of outcomes, activities and outputs, as well as what is important to measure in assessing program success, is determined in a collaborative way. Stakeholders are vital in contributing to the interpretation of evaluation information and in formulating recommendations. The Treasury guidelines state that the evaluators should be independent from program owners for evaluations of initiatives of a moderate to high size, priority or risk. Responsibility for evaluation activities and outputs, including the final content of evaluation reports, will rest with the evaluators. Evaluation reports should reflect the findings and conclusions as determined by the evaluators and should not be amended without the evaluators’ agreement. |
Evaluation conduct and findings should be transparent | The conduct of evaluations should be open to scrutiny. Comprehensive information on all aspects of the evaluation should be systematically recorded and reported, including methods, analyses, and conclusions. Factual findings and conclusions should be explicitly justified and clearly distinguished from value judgments and recommendations. |
Conduct evaluations in a timely manner | Evaluations should be timely and strategic to influence decision-making. Providing valid, reliable information requires a balance of technical and time requirements with practical considerations to ensure the evaluation supports evidence-based decision-making. |
Findings should be used for decision-making | Evaluation findings should be used to inform decision-making, to improve outcomes and resource allocation, and to support accountability. The department uses evaluations to identify impacts and benefits, to expand successful programs, to redesign the delivery of existing programs, or to reprioritise resources from existing programs should they no longer be considered a government priority, or effective and efficient in achieving expected benefits. |
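As a worked example of the resourcing guide in Table 1, the sketch below applies the one to five per cent range to a hypothetical program budget; the budget figure is an assumption for illustration only.

```python
# Illustrative sketch only: the program budget is a hypothetical figure.
program_budget = 10_000_000  # total program budget in dollars (assumed)

lower_allocation = program_budget * 0.01  # 1% of the program budget
upper_allocation = program_budget * 0.05  # 5% of the program budget
print(f"Indicative monitoring and evaluation budget: "
      f"${lower_allocation:,.0f} to ${upper_allocation:,.0f}")
```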
3. Publish the findings
Evaluation findings must be made publicly available, except where there is an overriding public interest against disclosure, in line with the Government Information (Public Access) Act 2009 (GIPAA). Evaluation reports should be released in a range of forums including publication on the CESE evaluation evidence bank.
Supporting tools, resources and related information
- Centre for Education Statistics and Evaluation – Evaluation resource hub
- NSW Government Evaluation Toolkit
- NSW Treasury’s Evaluation Policy and Guidelines – Evaluation workbooks
- NSW Treasury’s Evaluation Policy and Guidelines – Resources
- Knowledge platform and global community – Better Evaluation
- Government Information (Public Access) Act 2009
- Re-imagining Evaluation: A Culturally Responsive Evaluation Framework
- NSW Treasury – NSW Government Business Case Guidelines TPP18-06 (PDF 1 MB)
- NSW Treasury – NSW Government Guide to Cost-Benefit Analysis TPG23-08 (PDF 2 MB)
- NSW Treasury Policy and Guidelines: Evaluation TPG22-22 (PDF 9 MB)
Policy contact
info@cese.nsw.gov.au
The Director, Evaluation and Effectiveness, CESE monitors the implementation of this policy, regularly reviews its contents to ensure relevance and accuracy, and updates it as needed.