Download a print version (PDF, 60 kB) of the glossary or view the online version below.
|Activities||The specific actions you undertake as part of an intervention or program.|
|Aggregated data||Data collected and compiled from multiple sources and/or on multiple measures. For example, data about whether individual students completed secondary school may be aggregated into a completion rate for a school or region.|
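As a minimal sketch of aggregation (the school names and completion records below are hypothetical), individual completion outcomes can be compiled into a completion rate per school:

```python
from collections import defaultdict

# Hypothetical individual-level records: (school, completed secondary school)
records = [
    ("School A", True), ("School A", True), ("School A", False),
    ("School B", True), ("School B", False),
]

# Compile the individual outcomes into counts per school
counts = defaultdict(lambda: [0, 0])  # school -> [completed, total]
for school, completed in records:
    counts[school][0] += int(completed)
    counts[school][1] += 1

# Aggregated data: one completion rate per school, not one record per student
rates = {school: done / total for school, (done, total) in counts.items()}
print(rates)
```

Note that once the data are aggregated, the individual student records can no longer be recovered from the rates alone.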
|Analysis||A systematic examination and evaluation of data or information, by breaking it into its component parts to uncover their interrelationships.|
|Average||A single value representing the typical, normal, or middle value of a set of data. See the definition for mean.|
|Baseline||An analysis or description of the situation prior to an intervention, against which progress can be assessed or comparisons made.|
|Benchmark||A standard or reference point against which performance can be assessed. For example, the NAPLAN National Minimum Standard or the performance of your Similar Schools Group.|
|Case study||A specific investigation of the current and past actions and experiences of a single person, group or organisation. Case studies can incorporate different methodologies, such as interviews or direct observation. They give a lot of detail about how things work in a specific situation, meaning that care needs to be taken when generalising to other settings.|
|Causal relationship||A relationship between two events where one event is the direct consequence of another. This is also referred to as cause and effect.|
|Census||A sample that aims to include every member of the targeted population of the research. For example, the Australian Early Development Census includes all Kindergarten students in Australia.|
|Confidence interval||A range of values within which the true value probably lies. A confidence interval is indicated by its endpoints; graphically it is indicated by ‘whiskers’ extending from the plotted value. The size of a confidence interval gives an indication of the certainty attached to the estimate – a small confidence interval indicates greater certainty that the estimate is close to the true value. See a more detailed overview of confidence intervals.|
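As an illustrative sketch (the scores and the normal-approximation value of 1.96 are assumptions for illustration, not a prescribed method), a rough 95% confidence interval for a mean can be computed with Python's standard library:

```python
import math
import statistics

# Hypothetical sample of student scores (out of 10)
scores = [7, 4, 4, 9, 6]

mean = statistics.mean(scores)
# Standard error of the mean, from the sample standard deviation
se = statistics.stdev(scores) / math.sqrt(len(scores))

# Approximate 95% confidence interval using a normal approximation (z = 1.96);
# the narrower the interval, the more certain the estimate
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

A larger sample shrinks the standard error, and therefore the interval, reflecting greater certainty about the estimate.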
|Control group||In an experiment, the control group does not receive the intervention or treatment under investigation. This group may also be referred to as the comparison group.|
|Correlation||A relationship between two things or variables. Importantly, a correlation between two variables means that the two things are likely to occur together, but does not mean that one causes the other. See definition for causal relationship.|
|Data||Facts, figures, or information. Data can be both qualitative and quantitative. See also definitions for evidence, qualitative and quantitative.|
|Dataset||A collection of information that you can analyse to form evidence. Could be a single document (such as a spreadsheet of survey responses) or a series of documents (such as a collection of student work samples).|
|Direct observation||A method of gathering data primarily through close visual inspection of a natural setting (for example, observing another teacher conducting a lesson). Unlike other methodologies, such as interviews, direct observation does not involve actively engaging with members of the setting.|
|Effect||The change, either intended or unintended, as a result of an intervention. We are interested in the causal effect – the things that would not have happened if not for the intervention.|
|Effectiveness||The extent to which the development intervention’s objectives were achieved, or are expected to be achieved.|
|Empirical evidence||Knowledge acquired by observation, experiment or experience.|
|Evaluation||A systematic and objective assessment of an ongoing or completed project, program or policy. This includes investigating how, why, and to what extent the stated objectives or goals were achieved.|
|Evaluative thinking||A form of reflective practice that integrates the skills that characterise good evaluation throughout all of an organisation’s work practices. It involves systematically thinking about what results are expected, how results can be achieved, what evidence is needed to inform future actions and judgements, and how results can be improved in the future.|
|Evidence||A general term that refers to qualitative and quantitative data that can inform a decision or course of action. The terms evidence and data are often used interchangeably; however, they have distinct meanings. Data becomes evidence when it is used to prove something or support a conclusion. For example, NAPLAN scores on their own are data. However, if you then use these performance scores to support the conclusion that your students’ performance has improved, these scores become evidence for that conclusion.|
|Evidence-based practice||The practice of applying and using the best available research and data to inform decision-making.|
|Focus group||A form of qualitative research in which a group of people are asked about their opinions, beliefs and attitudes towards a product, service or idea. This is conducted in a group setting, and participants are able to talk with other members of the group. This can be a more natural, informal setting than a one-on-one interview.|
|Family Occupation and Education Index (FOEI)||The FOEI is a school-level index of educational disadvantage related to socio-economic background. FOEI values range from 0 to approximately 300, with higher FOEI scores indicating higher levels of need (i.e. lower socio-economic status). FOEI is used as the basis of the equity loading for socio-economic background in the Department’s new Resource Allocation Model. See definition for RAM.|
|Generalise||To draw conclusions about a broader population (such as a school) based on data for a smaller group of people.|
|Hypothesis||A predictive claim put forward for testing through research or experimentation. Hypotheses are often made based on limited evidence as a starting point for investigation.|
|ICSEA||Index of Community Socio-Educational Advantage. A measure of school socio-economic status created by ACARA and calculated for most schools in Australia. Incorporates parental education and occupation, student Aboriginality, and school remoteness. For more information, see the ICSEA fact sheet.|
|Impact||A marked effect or change as a result of a particular intervention, such as a program or policy. Impacts can be short, medium or long-term.|
|Inputs||The financial, human and/or material resources used to implement activities. For example, if you are implementing a program in your school, the inputs may include the cost of the program and how many hours teachers spend working on it.|
|Intervention||A program, product, practice, or policy aimed at improving outcomes.|
|Interview||A data collection method in which participants are asked questions about a specific topic. Interviews can be structured, semi-structured or unstructured.|
|Literature review||A summary of the existing research on a particular topic. This includes discussing the methodologies, findings, strengths and limitations of a group of studies.|
|Logic model||A tool to help evaluate the effectiveness of an intervention. This identifies the chain of events that an intervention is meant to result in and how the components link together. We use: 1) inputs to undertake 2) activities; which result in 3) outputs; and which achieve 4) outcomes.|
|Mean||The sum of the value of each observation in a dataset divided by the number of observations. This is also known as the arithmetic average. For example, if you had five student scores of 7/10, 4/10, 4/10, 9/10 and 6/10, the mean score would be 6/10 [(7 + 4 + 4 + 9 + 6) ÷ 5 = 6].|
|Measure||An indicator of a more abstract concept that you are interested in. For example, a student’s NAPLAN Reading score is a measure of their reading ability.|
|Median||The middle value in a given set of numbers when they are arranged in order. For example, if you had the same student scores described above, the median score would be 6.|
|Milestones||In the School Plan, milestones are the scheduled activities relating to the implementation, achievement and impact of each strategic direction. Milestones provide schools with a pathway to validate their progress towards the achievement of their improvement measures, products and practices.|
|Mode||The value that occurs most frequently in a set of numbers. Using the same example as above, the mode is 4. If no value occurs more than once, there is no mode for that set of values.|
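The worked examples for mean, median and mode above can be checked with Python's standard `statistics` module:

```python
import statistics

# The five student scores used in the examples above (out of 10)
scores = [7, 4, 4, 9, 6]

print(statistics.mean(scores))    # 6
print(statistics.median(scores))  # 6 (middle of the sorted scores 4, 4, 6, 7, 9)
print(statistics.mode(scores))    # 4 (the only value occurring more than once)
```

Together these three summaries describe the centre of a dataset in different ways, which is why they can disagree for skewed data.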
|Normative||Relating to an expected behaviour. In the context of student performance, normative benchmarks are those that establish an expected level of performance (such as the National Minimum Standards in NAPLAN, or proficiency levels in PISA).|
|On-balance judgement||A conclusion about your school’s practices based on your evidence and the Statement of Excellence for the relevant element. ‘On-balance judgement’ acknowledges that a school may have evidence against descriptors for some ‘Excelling’ practices and some ‘Sustaining and Growing' practices within one element. An ‘on-balance judgement’ weighs these up to make the most accurate assessment of the school’s practices.|
|Outcomes||Results or changes that are observable and measurable based on a set of inputs and activities. Outcomes can be short-term, such as milestones or steps towards a final goal, or long-term.|
|Qualitative||Qualitative data is text-based. It can be derived from in-depth interviews, observations, surveys or questionnaires. Qualitative research seeks to answer questions about how and why things have happened. Surveys collect qualitative data if they involve asking people for detailed (free-text) responses.|
|Quantitative||Quantitative data is always numerical and is used to find out a quantity, such as how much, how many or how often. It aims to be objective and is analysed using mathematical and statistical methods. It can be collected via a range of methods such as observation, interviews, surveys, questionnaires, assessments or tests. Surveys are quantitative if they involve rating things on a numerical scale or counting responses.|
|Questionnaire||A survey document with questions that are used to gather information from individuals to be used in research. The terms questionnaire and survey are often used interchangeably. See definition for survey.|
|Resource Allocation Model (RAM)||The RAM is a needs-based funding model that uses a base and loadings approach. The RAM is based on student and school needs, and is made up of three components: base school allocation; equity loadings; and targeted (individual student) funding.|
|Reliability||The consistency of the findings when some analysis is repeated at different times, by different people, or using different students.|
|Representative||When the people for whom you have data are an accurate reflection of the target audience you are most interested in (for example, when the parents responding to a parent survey accurately reflect the characteristics of all parents at your school).|
|Sample||A group that is selected from a larger group (the population). By studying the sample, the researcher tries to draw valid conclusions about the population. See definition for generalise.|
|Secondary sources||Summaries of existing research, literature reviews, textbooks and similar works written by those who did not carry out the original research. Secondary sources help identify the key research studies, theories and researchers in a subject area.|
|Similar schools group (SSG)||A collection of 40 schools with similar SES to your school (20 just above and 20 just below). The students in this group may be a good comparison for the students in your school.|
|Socio-economic status (SES)||Socio-economic status or socio-educational status. A measure of social and economic position in society, often created using the education and occupation of the parents of students. One of the biggest drivers of student outcomes. See definitions for FOEI and ICSEA.|
|Statistical significance||A statistical measure of whether an observed difference represents a real change, rather than normal fluctuation in the data. For a more detailed explanation, see this introduction to statistical significance.|
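As one illustrative way of assessing significance (the two groups of scores are hypothetical, and a permutation test is just one of several approaches), you can ask how often randomly regrouping the data produces a difference as large as the one observed:

```python
import random

random.seed(0)  # make the shuffles reproducible

# Hypothetical scores for an intervention group and a comparison group
group_a = [7, 9, 6, 8, 7]
group_b = [5, 6, 4, 6, 5]

observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Permutation test: shuffle the pooled scores many times and count how often
# a random split produces a difference at least as large as the observed one
pooled = group_a + group_b
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.1f}, p = {p_value:.3f}")
# A small p-value (conventionally below 0.05) suggests the observed
# difference is unlikely to reflect normal fluctuation alone
```

If random splits frequently matched the observed difference, the difference would be indistinguishable from normal fluctuation.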
|Survey||A tool involving polling a section of the population to gather data on attitudes or experiences.|
|Triangulate||The practice of investigating and coming to a conclusion that is derived from multiple pieces of evidence, rather than only one.|
|Value-added||A statistical measure of the contribution a school makes to the growth of its students. For more information about value-added and how it is calculated, see the CESE value-added Learning Curve. Download the modified value-added methodology for progress from Kindergarten to Year 3 (PDF 772.54KB).|
|Validity||The extent to which the data collection strategies and instruments measure what they are meant to measure.|
|Variable||Any characteristic that can change (over time or between people).|
|Variability||The extent to which a characteristic changes (over time or between people). Some variability represents actual change, whereas other variability reflects normal fluctuations in the data. These types of variability are often distinguished using tests of statistical significance.|