Value added models for NSW government schools

This report was originally published 02 September 2014.


Identifying high performing schools is an important step in developing the evidence base about “what works” to improve educational outcomes for students. However, such a task is not straightforward. Absolute performance measures (e.g., a school’s average test scores) are commonly used by the media to construct league tables of schools, but such measures generally reflect the characteristics of the students attending a school rather than the contribution the school has made to its students’ learning.

The Centre for Education Statistics and Evaluation (CESE) has developed a suite of value added (VA) measures intended to be fair and robust indicators of the contribution schools make to their students’ development. The VA measures take into account contextual factors (both school- and student-related) that affect students’ learning and are considered to be largely beyond the control of schools. They help to identify schools that make a larger than average contribution to students’ learning, as the basis for further investigation of “what works”, providing sound evidence for educators and policy makers to continually improve teaching and learning.

Based on available state-wide student assessment data, VA measures have been developed using the following matched student test results: Year 3 to Year 5 (NAPLAN), Year 5 to Year 7 (NAPLAN), Year 7 to Year 9 (NAPLAN), and Year 9 to Year 12 (NAPLAN to HSC). In addition, an exploratory Kindergarten to Year 3 measure (Best Start to NAPLAN) is also being trialled.
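To illustrate how a matched-cohort growth measure works in principle, the sketch below regresses students’ later scores on their earlier scores and treats the residual as growth beyond expectation; a school’s raw VA would be the mean residual of its students. All scores are hypothetical, and the single-predictor model is a simplification — the actual DEC models adjust for many more contextual factors.

```python
import numpy as np

# Hypothetical matched cohort: each student's Year 3 and Year 5 scores.
# These figures are illustrative only, not real NAPLAN data.
year3 = np.array([350.0, 420.0, 390.0, 480.0, 310.0, 450.0])
year5 = np.array([470.0, 520.0, 505.0, 590.0, 430.0, 545.0])

# Fit a simple least-squares line predicting Year 5 from Year 3.
slope, intercept = np.polyfit(year3, year5, 1)
predicted = intercept + slope * year3

# A student's "value added" residual: growth beyond what the prior
# score alone would predict. Averaging residuals within a school
# gives a raw (unadjusted) school effect.
residuals = year5 - predicted
print(np.round(residuals.mean(), 6))  # OLS residuals average to zero
```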

The key features of the proposed DEC VA models include:

  • Explicitly accounting for the available school and student contextual factors that have been shown to have a persistent and significant impact on students’ learning outcomes.
  • Utilising a multilevel modelling approach that takes account of the nesting of students within schools and hence provides more reliable and accurate school effect estimates.
  • Pooling of data across two measurement periods to reduce random errors, so that estimates are more likely to reflect any persistent differences in school performance.
  • Reducing the volatility of VA estimates, for small schools especially, by applying a statistical technique that adjusts the estimates in proportion to their reliability.
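The shrinkage adjustment in the last point above can be illustrated with a minimal empirical Bayes sketch: each raw school estimate is pulled toward the state average in proportion to its reliability, so the noisier estimates of small schools are shrunk the most. All variances and school figures below are hypothetical stand-ins, not the actual DEC model parameters.

```python
import numpy as np

# Raw school VA effects (relative to the state mean of zero) and cohort
# sizes -- purely illustrative values.
raw_va = np.array([12.0, -8.0, 3.0, -15.0])
n_students = np.array([300, 25, 120, 8])
sigma2_within = 900.0   # assumed within-school (student-level) variance
tau2_between = 25.0     # assumed between-school variance

# Reliability of each raw estimate: the share of its variance that is
# true school signal rather than sampling noise. Larger cohorts give
# higher reliability.
reliability = tau2_between / (tau2_between + sigma2_within / n_students)

# Shrunken estimate: noisy small-school estimates move furthest
# toward the state mean of zero.
shrunk_va = reliability * raw_va
for raw, n, s in zip(raw_va, n_students, shrunk_va):
    print(f"n={n:4d}  raw={raw:+6.1f}  shrunk={s:+6.2f}")
```

Note how the smallest school's large raw effect is heavily discounted, which is exactly the volatility reduction described above.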

Future work planned to further enhance the validity and reliability of the VA measures includes:

  • Estimating bias arising from movements of students across schools and across sectors within a measurement period.
  • Investigation of the suitability of teacher assessments at entry to school (e.g., Best Start program data) as a reliable baseline indicator for the VA K-3 measures.
  • Ongoing work to identify other, currently unmeasured, contextual factors (such as student mobility and student disability).

The suite of VA measures that have been developed can help schools to evaluate their performance and to identify and implement improvement strategies. However, care needs to be taken when interpreting and using VA estimates. It is recommended that VA estimates always be reported with confidence intervals and, where possible, alongside estimates for previous years. Guidelines to aid the interpretation of VA estimates should also be developed and included in the reporting package so that schools can make the best use of the VA information.
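A minimal sketch of reporting a VA estimate with a 95% confidence interval, as recommended above: if the interval covers zero, the school cannot be statistically distinguished from the state average. The estimate and standard error below are hypothetical.

```python
# Hypothetical school VA estimate and its standard error.
va_estimate = 4.2
standard_error = 3.1
z = 1.96  # approximate 95% normal critical value

lower = va_estimate - z * standard_error
upper = va_estimate + z * standard_error

# The interval covers zero, so this school's VA is not
# significantly different from the state average.
significant = not (lower <= 0.0 <= upper)
print(f"VA = {va_estimate} [{lower:.2f}, {upper:.2f}], significant={significant}")
```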


  • Leadership and school improvement
  • Research report

Business Unit:

  • Centre for Education Statistics and Evaluation