Evaluating marking rubrics

This process was developed by Adjunct Associate Professor Leonora Ritter, lritter@csu.edu.au, adapted by Kirsty Smith.

A rubric is not necessary for criterion referenced, standards based assessment (CRSBA), nor is using a rubric sufficient to ensure that CRSBA is occurring. Nevertheless, many practitioners will be looking closely at examples of marking rubrics across the sector as they move to implement CRSBA. The following checklist is designed to help practitioners evaluate sample rubrics and modify them to fit their purposes.
N.B. The overarching requirement is that the rubric cover all the criteria being assessed by the particular task, and that these criteria all reflect the subject learning outcomes.

General attributes

  • Does the rubric provide sufficiently detailed feedback, or does it require significant additional annotation on the rubric and/or a lot of additional in-text annotation from the marker for each student?
  • Is the rubric sufficiently specific to facilitate objective (reliable and valid) rather than subjective evaluations of the student’s work? E.g. your “very good” is unlikely to be the same as my “very good”; you need to specify the qualities concretely enough that all markers and students know exactly what constitutes “very good”.
  • Is the language clear and helpful?
  • Is the rubric’s format easily interpreted by students and markers?

Criteria

  • Do the criteria specifically relate to the assessment task and the subject learning outcomes?
  • Do the criteria reflect deep and/or advanced learning as well as basic competencies?
  • Have affective criteria, assessing attitudes and values, been included if they are relevant?
  • Are the criteria free of descriptors such as “satisfactory” or “effective”? (Such descriptors belong in the standards, not the criteria.)

Standards

  • Do the standards reflect a clearly stepped and consistent taxonomy? That is, is the language consistent, addressing the same quality at each level, or does it jump to different qualities at different levels?
  • Does the rubric avoid using norm-referenced descriptors such as “average” and “exceptional”?
  • Does the rubric ensure that all students who pass will be sufficiently literate?
  • Does the language avoid contradiction? E.g. a task asking for “evaluation of...” is contradicted by standards that look only for the student’s “understanding” or “knowledge of”.
  • Does the rubric risk generating a high fail rate by setting an uncompromising pass standard, or are you able to be less rigid about certain criteria?

Feedback qualities

  • Is the embedded feedback sufficiently specific to have formative value, i.e. can students easily see from the rubric what they specifically need to do to improve their performance?
  • Is the language sympathetic, encouraging and diagnostic?

Assigning grades

  • Do the weighting and the way in which marks are combined make it likely that the outcome will reflect the marker’s holistic impression? For example, if you weight quality of academic writing too low, a barely literate response may still achieve a high grade.
  • Does the language make it clear whether attaining a particular grade requires the described level of performance against all criteria, or whether performance at that level against a sufficient proportion of criteria would justify the grade?
  • Is the system for determining a summative mark from the combined criteria explicit?
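The weighting concern above can be made concrete with a small sketch (the criterion names, weights, and scores are hypothetical, not drawn from the source): if academic writing carries only 20% of the weight, a response scoring very poorly on writing can still achieve a comfortable overall mark.

```python
# Illustrative sketch only: combining per-criterion scores into a
# summative mark using explicit weights. All names and numbers are
# hypothetical examples, not a prescribed marking scheme.

def summative_mark(scores: dict, weights: dict) -> float:
    """Weighted average of per-criterion scores (each out of 100)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

weights = {"argument": 40, "evidence": 40, "academic_writing": 20}

# Strong content but very weak writing:
scores = {"argument": 85, "evidence": 80, "academic_writing": 30}
print(summative_mark(scores, weights))  # 72.0 -- still a solid pass
```

Making such a formula explicit in the rubric lets students and markers see exactly how criterion-level judgements become a summative mark, and exposes weightings that could let a weak performance on one criterion be masked by the others.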