Reviewing the validity of assessments
Access Validating Agencies are required by QAA to ensure that Access to HE Diploma assessments are appropriate for the delivery method and the level of demand required, and that they facilitate valid assessment of student achievement. However, reliance on regulation alone can encourage an overly procedural approach, with a dependence on remedial action once assessment standards are found to be insufficient or inconsistent.
Over the last few years, to support the earlier identification of issues with assessment standards, CAVA has been exploring with our providers and moderators what it means to have valid assessment, and how validity in assessment design can be measured. Informed by these discussions, we have developed a new guide which provides a definition of the term ‘validity’ and practical questions to support consistent measurement of particular aspects of assessment design. The ‘Guide for reviewing the validity of assessments’ can be found in the Resources section of the CAVA Members Area.
The guide highlights four key areas of assessment design:
Aspects of the guide may feel familiar to CAVA provider staff and moderators, as it aligns with the QAA requirements for assessment design and standards, and is mapped to the broader questions on assessment within the CAVA External Moderator’s report template. The guide can be applied to any assessment type, and we recommend that it be used as a prompt to explore and answer these broader questions at the moderation stage, and to strengthen the validity of assessments that are under development.
Beyond the use of the guide in assessment development, we recommend the following ‘top-down’ approach when designing assignments and exams, keeping a focus on:
This provides a systematic, efficient, and informed method for creating assignments and exams. Similarly, it is important during moderation to keep in mind the overarching intended purposes of the assessment: doing so supports the evaluation of the assignment or exam against the questions within the guide, and helps identify the most pressing areas for improvement.
It is hoped that using the guide will improve our evidence-based identification of good practice in assignment brief design. This benefit is already being seen in the quality and detail of this year’s moderator feedback on assessments, and in the development of our quality assurance steps for building up our library of assignment briefs.