School improvement frameworks: the evidence base

This literature review was originally published 11 December 2014.


Summary

Background

Continuous, systemic school improvement is increasingly seen as essential by education systems. The focus on school improvement is driven in part by a growing awareness of international educational performance (through, for instance, the Programme for International Student Assessment, or PISA), and major comparative works published by organisations such as the OECD and McKinsey & Company. One recent OECD report argues that assessment and evaluation of students, teachers and schools are ‘becoming critical’ to establishing high performance and providing feedback, with the ultimate goal of improving student outcomes1.

In his foreword to the 2007 McKinsey & Company paper, How the world’s best performing school systems come out on top, Andreas Schleicher explained the economic imperative sitting behind the drive for improvement:

The capacity of countries … to compete in the global knowledge economy increasingly depends on whether they can meet a fast-growing demand for high-level skills. This, in turn, hinges on significant improvements in the quality of schooling outcomes and a more equitable distribution in learning opportunities2.

Although it is widely accepted as necessary, systemic and continuous improvement is also acknowledged to be a complex process, requiring action over many domains. Many large initiatives for school improvement fail because they do not change day-to-day school practices, which are ‘recognised as remarkably impervious to, and self-protective against, fluctuating external policies and agendas’3.

A related challenge is sustaining change once it has been enacted: sustainability is essential if Australia wants to become and remain a top educational performer. Sustaining change means building capacity within schools, to ensure that teachers and schools are adaptive, capable of continuous learning, and able to take charge of change4. As part of this process, schools and districts must work together to share best practice5. This ‘systemic’ approach is not new; however, Barber and Fullan have argued that what is needed is a shift from systems thinking to systems action — the strategic, powerful pursuit of improvement in practice6.

There is evidence that substantial, long-lasting change is possible. McKinsey & Company’s 2010 report, How the world’s most improved school systems keep getting better, identifies education systems from across the world that have made sustained improvements from a wide range of starting points7. For instance, Singapore transformed its education system from ‘fair’ to ‘great’ over a twelve-year period, and Ontario, Canada moved from ‘good’ to ‘great’ within the space of ten years8.

The McKinsey report also provides guidance as to the kinds of practices that improving education systems have undertaken to foster that improvement. These practices include building teacher and principal technical skills; student assessment; use of data systems; facilitation of improvement through policy and law as well as revision of curriculum and standards; and ensuring appropriate reward and remuneration structures for teachers and principals9.

In recent years, many school systems have developed school improvement or performance frameworks as a means of:

  • identifying the core components of the school improvement process
  • supporting schools through this process, and
  • assessing schools’ performance against core components.

All states and territories in Australia have developed such frameworks. Australian interest has sharpened recently, with the development of a National School Improvement Tool (NSIT), intended for implementation as part of the National Plan for School Improvement.

Many of these frameworks use standards as policy levers. These are useful as they provide a clear articulation of good practice; they support self-reflection and assessment; and, as they are not relative, they support system-wide improvement. For instance, the NSIT describes school performance at ‘Low’, ‘Medium’, ‘High’ or ‘Outstanding’ levels. This allows schools to ‘make judgements about where they are on their improvement journeys, to set goals and design strategies for improvement, and to monitor and demonstrate school improvement over time’10.

This paper considers the research literature related to school improvement frameworks, to identify the core components and processes of such frameworks, and to assess evidence of their efficacy. While there is a substantial body of research bearing on the development of improvement frameworks, the evidence regarding the effect of frameworks on student outcomes is comparatively slight and inconclusive. This is because it is much easier to describe a framework than it is to test whether its implementation improves student outcomes. However, as outlined below, many of the individual components of school improvement frameworks are underpinned by robust evidence.

How is improvement measured?

The international evidence indicates that school improvement is best measured with reference to both student outcomes and school practices or processes, rather than by focusing exclusively on one or the other. Raudenbush argues that no matter how sophisticated analysis of student outcome data is, there will always be limitations as to how much the data can tell us, claiming that ‘to be successful, accountability must be informed by other sources of information … in particular, information on organizational and instructional practice’11. Similarly, Elmore argues that accountability systems must go beyond testing and regulation, to actively engage those who work in schools through ‘explicit strategies for developing and deploying knowledge and skill in classrooms and schools’12.

A dual approach (encompassing both outcomes and practices) is already evident in some school systems. For instance, the Nova Scotia School Accreditation Program (NSSAP) requires schools to select one area of practice alongside one area of student achievement for improvement. Working in ‘professional learning communities’, staff members collectively establish goals for these focal areas, while the school more broadly establishes strategies to meet them13.

The evidence so far: The impact of improvement frameworks on learning

It is difficult to find evidence about the operation or the efficacy of school improvement and accreditation frameworks, for a number of reasons.

First, it is difficult to isolate the impact of a school improvement system when it has been implemented nation-wide, with no control groups, at the same time as other major policy changes127. In the United States, school accreditation takes place against a backdrop of Federal standard-setting (particularly, No Child Left Behind) and it can be difficult to disentangle the impact of state-based accreditation from that of Federal measures.

Further, the range of models, and the range of contexts in which they have been implemented, make it difficult to isolate and identify ‘what works’128. For instance, some evidence is available on the impact of school inspections, but these inspections may occur quite separately from any explicit school improvement or accreditation agenda. Some of these studies cite ‘plausible’ evidence that school inspections lead to school improvement and to changes in teacher behaviour, but findings are far from conclusive: one literature review found that school inspections had both small positive and small negative effects on student outcomes129. The authors of that review concluded that researchers still ‘do not know how school inspections drive improvement of schools and which types of approaches are most effective and cause the least unintended consequences’130. Findings in relation to accreditation programs are also mixed: while many teachers and leaders find accreditation to be a useful process that has enhanced the overall quality of their schools131, others point to the stress and anxiety that can result from inspection and evaluation processes132.

Few studies have empirically assessed the impact of school improvement frameworks on student outcomes, and those that have been undertaken present inconclusive results133. Studies tend to use surveys or interviews of staff rather than any analysis of student outcomes134. Repeated evaluations of the Southern Association of Colleges and Schools, which operated within a school-improvement framework, did assess student outcomes, but found no difference in school performance data (as measured by standardised achievement tests in reading and mathematics) between accredited and non-accredited schools135.

A number of studies have found that teaching and learning are the school elements that benefit least from accreditation or improvement frameworks. Such findings have been reported in:

  • Nova Scotia, where some of the lowest-scoring survey items were those relating to the impact of the School Accreditation Program on teacher practice and student achievement136;
  • New England, where interviewees ‘strongly asserted’ the benefits of the program for teachers and the school, but held conflicting views as to the impact of accreditation on students137.

A similar finding was made in Queensland, where a Masters-developed tool (similar to the NSIT) was used to evaluate school performance across eight domains over time. All Queensland schools were audited with the tool in 2010, and 25 per cent of schools were re-audited in 2011. While there were improvements across some areas after the 12-month period, the teacher practice domain showed the least improvement138.

These findings seem at odds with the evidence that best practices (such as high expectations, professional development in data skills and instructional leadership) can improve teacher practice and student outcomes. This may indicate that problems arise not in the content of these frameworks, but in their implementation. Hattie’s synthesis of 800 meta-analyses identifies the challenge of realising the results of any educational initiative where it matters most, finding that while professional development was likely to change teacher learning (with an effect size of 0.90), it was less likely to change teacher behaviour (0.60) and even less likely to have an impact on student learning (0.37)139.

Enabling and sustaining school improvement through cultural change

In the view of some educational experts, accountability measures are best used as means to an end, rather than the end itself. Pressure and accountability, when divorced from support and other goals (such as development of capacity) can have a negative effect140. Elmore suggests that this may be because teachers are already operating ‘more or less at the limit of their knowledge and pedagogical skill’, and adding pressure, without also providing support or guidance as to how to reach goals, may have little impact141. Fullan similarly argues that using test results alone to punish or reward schools ‘assumes that educators will respond to these prods by putting in the effort to make the necessary changes … it assumes that educators have the capacity or will be motivated to develop the skills and competencies to get better results’142. Drivers of school improvement are far more likely to be successful if they foster intrinsic motivation; engage educators and students; inspire team work; and affect all teachers and students143.

There is some support for this in the McKinsey study, which found that teachers in successful systems received 56 per cent of all support initiatives, but only 3 per cent of accountability measures, such as teacher appraisals. Teachers in these systems were held accountable through their students’ learning and collaborative practice with their peers:

By developing a shared concept of what good practice looks like, and basing it on a fact-based inquiry into what works best to help students learn, teachers hold each other accountable to adhering to those accepted practices144.

A number of education systems provide support for their schools as part of their improvement or accreditation frameworks. For instance, in the Northern Territory, a coaching model is used to develop principals’ skills145; in South Australia, the Department of Education and Children’s Services is responsible for developing workforce capabilities and system capacity as part of its Improvement and Accountability Framework146; and in Victoria, the Performance and Development Culture framework encourages effective induction and mentoring support for teachers147.

Cultural change of the type described by Fullan may be difficult to achieve, but it is possible. Moreover, it is essential to sustaining educational improvements. Tucker observes that a ‘sustained emphasis on education quality … carries enormous implications’ in terms of garnering support at all levels, from government to educators and the broader community148.

The McKinsey study identified 13 ‘sustained improvers’ – systems with at least five years of consistent rises in student performance across multiple data points and subjects. As the study reports:

For a system’s improvement journey to be sustained over the long term, the improvements have to be integrated into the very fabric of the system pedagogy149.

1 OECD 2013, Synergies for better learning: An international perspective on evaluation and assessment, p.13.

2 McKinsey & Company 2007, How the world’s best-performing school systems come out on top, report prepared by M Barber and M Mourshed, p.8.

3 Commonwealth Department of Education, Employment and Workplace Relations (DEEWR) 2012a, Measuring and rewarding school improvement, paper prepared by G Masters, p.1.

4 L Stoll 2009, ‘Capacity building for school improvement or creating capacity for learning? A changing landscape’, Journal of Educational Change, vol.10, p.117.

5 NSW Department of Education and Training 2005, Building a more responsive system of public education, companion paper 5, prepared by M Fullan.

6 M Barber and M Fullan 2005, ‘Tri-level development: It’s the system’, Education Week, March 2.

7 ‘Sustained improvers’ are defined as education systems that have seen five years or more of consistent improvements in student performance, across multiple data sets and subjects. McKinsey & Company 2010a, How the world’s most improved school systems keep getting better, report prepared by M Mourshed, C Chijioke and M Barber, p.11.

8 Major educational reforms began in 2003 in Ontario. McKinsey & Company 2010a, p.19.

9 McKinsey & Company 2010a, p.20.

10 Commonwealth Department of Education, Employment and Workplace Relations (DEEWR) 2012b, National School Improvement Tool, prepared by the Australian Council for Educational Research, p.1.

11 S Raudenbush 2004, ‘Schooling, statistics and poverty: Can we measure school improvement?’ presented at the William H Angoff Memorial Lecture Series, Educational Testing Service, Princeton NJ, 1 April, p.37.

12 R Elmore 2006, ‘Leadership as the practice of improvement’, presented at the International Conference on Perspectives on Leadership for Systemic Improvement, London, 6 July, p.3.

13 National Scientific Council on the Developing Child 2005.

127 L Woessman 2006, Efficiency and equity of European education and training policies, CESifo Working paper No 1779 cited in Faubert 2009, p.43.

128 Faubert 2009, p.6.

129 M Ehren et al 2013, p.6.

130 M Ehren et al 2013, p.6.

131 See, eg, New England Association of Schools and Colleges 2006, The Impact of Accreditation on the Quality of Education: Results of the Regional Accreditation and Quality of Education Survey, NEASC 2005, p.188; Wood and Meyer 2011, p.12.

132 Fairman, Peirce and Harris 2009, p.21; M Ehren and A Visscher 2006, ‘Towards a theory on the Impact of school inspections’, British Journal of Educational Studies, vol.54, no.1, p.53.

133 H Gaertner and H Pant 2011, ‘How valid are school inspections? Problems and strategies for validating processes and results’, Studies in Educational Evaluation, vol 37, no.2-3. An Australian paper written as part of the Smarter Schools National Partnerships also commented on the dearth of empirical evidence regarding the impact of school improvement frameworks on student outcomes: Smarter Schools National Partnerships 2010, National collaboration project: School performance improvement frameworks, Final Report, p.5.

134 See, eg, New England Association of Schools and Colleges p.188; Wood and Meyer 2011, p.12.

135 D Bruner and L Brantley 2004, ‘Southern Association of Colleges and Schools Accreditation: Impact on Elementary Student Performance’, Education Policy Analysis Archives, vol.12, no.34, pp.3, 12-13.

136 Wood and Meyer 2011, p.12.

137 Fairman, Peirce and Harris, p.37.

138 Commonwealth Department of Education, Employment and Workplace Relations (DEEWR) 2012, pp.20-24.

139 J Hattie 2009, p.120.

140 R Elmore 2007, Educational Improvement in Victoria, p.2; M Fullan 2011, ‘Choosing the wrong drivers for whole system reform’, Centre for Strategic Education Seminar Series, Paper no. 204.

141 R Elmore 2006, ‘OECD activity on improving school leadership’ Paper presented at the International Perspectives on School Leadership for Systemic Improvement International Conference, Harvard University, 6 July.

142 Fullan 2011, p.8.

143 Fullan 2011, p.3.

144 Mourshed, Chijioke and Barber 2010, p.75.

145 Northern Territory Government, School accountability and performance improvement framework, viewed 8 September 2014, http://www.education.nt.gov.au/__data/assets/pdf_file/0005/15773/SchoolAPIF.pdf

146 Government of South Australia, DECS improvement and accountability framework, viewed 16 July 2014, http://www.decd.sa.gov.au/docs/documents/1/DecsImprovementandAccou-1.pdf

147 Department of Education and Early Childhood Development, Performance and development culture: Revised self-assessment framework, viewed 16 July 2014, http://www.education.vic.gov.au/Documents/school/principals/management/perfdevculture.pdf

148 Tucker 2012, p.40.

149 Mourshed, Chijioke and Barber 2010, p.72.

Category:

  • Literature review

Business Unit:

  • Centre for Education Statistics and Evaluation