Determine acceptable evidence

Determining acceptable evidence is the second stage of the backward design planning process. Knowing what evidence validates that the targeted learning has been achieved helps to sharpen and focus the teaching.

It is essential to decide how students will demonstrate that they have achieved the goals/learning intentions, and to determine the acceptable evidence for assessing that learning.

Essential questions to consider are:

  • What evidence needs to be collected and assessed, given the desired results identified in Stage 1?
  • What is evidence of understanding, as opposed to recall?
  • What important transfer tasks should anchor the assessment since transfer is the essence of understanding?
  • What criteria should be used to assess work related to the desired results, not just the particulars of the task?

Determining acceptable evidence involves 6 aspects:

  1. consider evidence of the learning identified in Stage 1
  2. design performance task/s
  3. consider 6 facets to identify needed elements of understanding
  4. use the GRASPS elements to plan authentic performance tasks (optional)
  5. identify appropriate criteria and use them to develop scoring rubrics
  6. identify other evidence that will be needed.

The 6 aspects

1. Consider Stage 1 evidence

Think about how all the learning goals/intentions established in Stage 1 can be assessed, for example, your transfer, meaning and acquisition goals/learning intentions.

Assessing for understanding requires evidence of the student's ability to insightfully explain or interpret their learning - to 'show their work' and 'justify' or 'support' their performance/product with commentary (meaning). It also requires evidence of the student's ability to apply their learning in new, varied, and realistic situations (transfer).

Sources of evidence

Some examples of sources of evidence are:

  • selected-response format, for example, multiple-choice and true-false quizzes and tests
  • written/oral responses to academic prompts, for example, short-answer format
  • performance assessment tasks, for example, extended written products (essays, lab reports), visual products (PowerPoint, mural), oral performances (oral report, foreign language dialogues) and demonstrations (skill performance in physical education)
  • long-term 'authentic' projects, for example, a senior exhibition
  • portfolios - collections of student work over time
  • reflective journals or learning logs
  • informal, ongoing observations of students
  • formal observations of students using observable indicators or a criterion list
  • student self-assessment
  • peer reviews and peer response groups.

2. Design performance task/s

McTighe & Wiggins (2011) distinguish between 2 broad types of assessment - performance tasks and other evidence.

Performance tasks:

  • are culminating performances for a lesson sequence/unit and require students to apply their learning to a new and authentic situation as a means of assessing their understanding
  • reflect the 6 facets of understanding: explanation, interpretation, application, perspective, empathy, and self-knowledge
  • establish real-world contexts, demands, audiences, and purposes
  • can be written in the GRASPS format to make assessment tasks more authentic and engaging
  • are evaluated using valid criteria and indicators that reflect not only quality performance but also the desired results of Stage 1.

Refer to McTighe & Wiggins (2011) for performance task examples, including constructing a performance task scenario, possible student roles and audiences, possible products and performances, and considering student interests and task variables.

Other evidence

Other evidence can include evidence from quizzes, tests, observations, and work samples that round out the assessment picture in relation to the Stage 1 goals/learning intentions. It may overlap with the performance-based evidence, increasing the reliability of the overall assessment, especially if the performance task was done by a group.

3. Consider the 6 facets of understanding

McTighe & Wiggins (2011) have identified 6 facets of understanding for assessment purposes. They are intended to serve as indicators of how understanding is revealed and to provide guidance on the kinds of assessments needed to determine the extent of student understanding.

The 6 facets are:

  1. Explanation - students will explain it in their own words, make and support an inference, represent it in a different form and teach it to someone else.
  2. Interpretation - students will make meaning of a text or data set, see and describe patterns and make new connections.
  3. Application - students will use their learning effectively in a new situation.
  4. Perspective - students will shift perspective, recognise different points of view, see the 'big picture' and take a critical stance.
  5. Empathy - students will get 'inside' another person's feelings and worldview and recognise merit in the odd, unorthodox or unfamiliar interpretation.
  6. Self-knowledge - students will realise their strengths and weaknesses, recognise the limits of their own understanding and reflect on their learning and actions.

Note: all 6 facets of understanding need not be evident in assessment all of the time. For example, in mathematics, application, interpretation and explanation are the most natural fits, whereas in social studies, empathy and perspective may be added when appropriate. Performance tasks based on one or more facets should be seen as culminating performances for a unit of study. Refer to McTighe & Wiggins (2011) for examples of Brainstorming assessment ideas using the facets, Questioning for understanding using the facets, Designing tasks using the 6 facets, and Generating assessment ideas using the facets.

4. Use the GRASPS elements

The GRASPS acronym helps construct authentic scenarios for performance tasks: goal, role, audience, situation, performance, and standards and criteria for success. Each element listed below has a corresponding set of stem statements.

These stem statements should be considered as idea starters when constructing a scenario for a performance task. Select the starters that are most suitable and avoid filling in all of the blanks.

GRASPS elements and example stems

Goal: the goal or challenge statement in the scenario

  • Your task is…
  • The obstacle/s to overcome is/are…
  • The goal is to…
  • The problem/ challenge is…

Role: the role the student plays in the scenario

  • You are…
  • Your job is…
  • You have been asked to…

Audience: the audience/client that the student must be concerned with in doing the task

  • The target audience is…
  • You need to convince…

Situation: the particular setting/context and its constraints and opportunities

  • The context you find yourself in is…
  • The challenge involves dealing with…

Performance: the specific performance or product expected

  • You will create a…in order to…
  • You need to develop…so that…

Standards and criteria for success: the standards against which the work will be judged in the scenario

  • Your performance needs to…
  • A successful result will…
  • Your work will be judged by…
  • Your product must meet the following standards…
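
For example, a scenario built from these stems might read: 'You are a museum curator (role) who has been asked to design an exhibit (performance) for a local history display (situation) that explains an important local event (goal) to visiting primary students (audience). Your exhibit will be judged on its accuracy, clarity and appeal to your audience (standards and criteria for success).'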

5. Identify criteria and develop scoring rubrics

When deciding on the criteria for understanding performances, the challenge is to:

  • ensure that what is assessed is central to the understanding, not just what is easy to score
  • identify the separate traits of performance to ensure that the student gets specific and valid feedback, for example, a written paper can be well organised but not informative and vice versa
  • consider the different types of criteria, for example, the quality of the understanding versus the quality of the performance in which it is revealed.

Different criteria

  • Content criteria: describe the degree of students' knowledge of factual information or understanding of concepts, principles, and processes. Associated indicators include accurate, appropriate, authentic, complete, correct, credible, explained, justified, important, in-depth, insightful, logical, make connections, precise, relevant, sophisticated, supported, thorough and valid.
  • Process criteria: describe the degree of skill/proficiency and the effectiveness of the process or method used. Associated indicators include careful, clever, coherent, collaborative, concise, coordinated, effective, efficient, flawless, followed process, logical/reasoned, mechanically correct, methodical, meticulous, organised, planned, purposeful, rehearsed, sequential and skilled.
  • Quality criteria: describe the degree of quality evident in products and performances. Associated indicators include attractive, competent, creative, detailed, extensive, focused, graceful, masterful, organised, polished, proficient, precise, neat, novel, rigorous, skilled, stylish, smooth, unique and well-crafted.
  • Result criteria: describe the overall impact and the extent to which goals, purposes, or results are achieved. Associated indicators include beneficial, conclusive, convincing, decisive, effective, engaging, entertaining, informative, inspiring, meets standards, memorable, moving, persuasive, proven, responsive, satisfactory, satisfying, significant, useful and understood.

Rubrics

You can use the following 6 areas, each with 4 points of gradation, to describe differences in degree when constructing a 'first-time' scoring rubric with a 4-point scale. Once the rubric is applied, an analysis of student work will yield more precise descriptive language and/or a rubric with more gradations.

1. Degrees of Understanding
  • thorough/complete
  • substantial
  • partial/incomplete
  • misunderstanding/serious misconceptions.
2. Degrees of Frequency
  • always/consistently
  • frequently/generally
  • sometimes/occasionally
  • rarely/never.
3. Degrees of Effectiveness
  • highly effective
  • generally effective
  • somewhat effective
  • ineffective.
4. Degrees of Independence
  • the student successfully completes the task:
    • independently
    • with minimal assistance
    • with moderate assistance
    • only with considerable assistance.
5. Degrees of Accuracy
  • completely accurate - all (facts, concepts, mechanics, computations) are correct
  • generally accurate - minor inaccuracies do not affect the overall result
  • inaccurate - numerous errors detract from results
  • major inaccuracies - significant errors throughout.
6. Degrees of Clarity
  • exceptionally clear - easy to follow
  • generally clear - able to follow
  • lacks clarity - difficult to follow
  • unclear - impossible to follow.
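
For example, on a 4-point scale for Degrees of Understanding, a first-draft rubric might award a 4 for thorough/complete understanding, a 3 for substantial understanding, a 2 for partial/incomplete understanding and a 1 for misunderstanding or serious misconceptions.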

Refer to McTighe & Wiggins (2011) for more information about the Rubric design process and Tips for designing effective scoring tools.

6. Identify other evidence

Consider any other evidence through which students can demonstrate achievement of the desired results, for example, quizzes, tests, academic prompts, observations, homework and journals. Consider how students can reflect upon and self-assess their learning.

Use the Determine acceptable evidence template for support with this stage of planning.

References

  • McTighe, J., & Wiggins, G. (2011). Designing an understanding-based curriculum around common core standards.