Publication
Title
A meta-analysis on the reliability of comparative judgement
Author
Abstract
Comparative Judgement (CJ) aims to improve the quality of performance-based assessments by having multiple assessors judge pairs of performances. CJ is generally associated with high levels of reliability, but reliability also varies widely between assessments. This study investigates which assessment characteristics influence the level of reliability. A meta-analysis was performed on the results of 49 CJ assessments. Results show an effect of the number of comparisons on the level of reliability. In addition, the probability of reaching an asymptote in the reliability, i.e., the point where considerable effort yields only a slight increase in reliability, was larger for experts and peers than for novices. For a reliability of .70, between 10 and 14 comparisons per performance are needed; this rises to 26 to 37 comparisons for a reliability of .90.
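The abstract's core quantities can be illustrated with a small simulation. The sketch below is hypothetical and not the paper's actual analysis: it assumes a Bradley-Terry choice model for pairwise judgements and estimates the scale separation reliability (SSR), a reliability index commonly used in CJ. The function name `simulate_cj` and all parameter values are illustrative assumptions.

```python
import math
import random

def simulate_cj(n_items=20, comparisons_per_item=12, seed=0):
    """Simulate a CJ assessment and estimate the scale separation
    reliability (SSR) of the resulting quality scores.

    Hypothetical sketch: assumes judgements follow a Bradley-Terry
    model, P(a beats b) = logistic(theta_a - theta_b).
    """
    rng = random.Random(seed)
    # Hypothetical "true" qualities of the performances.
    true_quality = [rng.gauss(0.0, 1.0) for _ in range(n_items)]

    # Random pairs so each item appears in ~comparisons_per_item judgements.
    n_pairs = n_items * comparisons_per_item // 2
    results = []  # (winner, loser) index pairs
    for _ in range(n_pairs):
        a, b = rng.sample(range(n_items), 2)
        p = 1.0 / (1.0 + math.exp(-(true_quality[a] - true_quality[b])))
        results.append((a, b) if rng.random() < p else (b, a))

    # Estimate abilities by gradient ascent on the Bradley-Terry
    # log-likelihood, with light shrinkage so estimates stay finite
    # even when one performance wins every comparison.
    theta = [0.0] * n_items
    for _ in range(500):
        grad = [0.0] * n_items
        for w, l in results:
            p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        for i in range(n_items):
            theta[i] += 0.05 * (grad[i] - 0.01 * theta[i])
    mean_t = sum(theta) / n_items
    theta = [t - mean_t for t in theta]  # centre the scale

    # Standard errors from the observed Fisher information, then
    # SSR = (observed variance - mean error variance) / observed variance.
    info = [0.0] * n_items
    for w, l in results:
        p = 1.0 / (1.0 + math.exp(-(theta[w] - theta[l])))
        info[w] += p * (1.0 - p)
        info[l] += p * (1.0 - p)
    se2 = [1.0 / i if i > 0 else float("inf") for i in info]
    var_theta = sum(t * t for t in theta) / (n_items - 1)
    return (var_theta - sum(se2) / n_items) / var_theta
```

Rerunning this with different values of `comparisons_per_item` reproduces the qualitative pattern the abstract reports: reliability rises quickly at first and then flattens, so pushing SSR from around .70 toward .90 requires disproportionately many extra comparisons per performance.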
Language
English
Source (journal)
Assessment in Education: Principles, Policy & Practice. - Abingdon, 1994, currens
Related dataset(s)
Publication
Abingdon: Carfax, 2019
ISSN
0969-594X [print]
1465-329X [online]
DOI
10.1080/0969594X.2019.1602027
Volume/pages
26:5 (2019), p. 541-562
ISI
000490321800002
Full text (Publisher's DOI)
Full text (open access)
Full text (publisher's version - intranet only)
UAntwerpen
Faculty/Department
Research group
Project info
Development, validation and effects of a digital platform for the assessment of competences (D-PAC).
Publication type
Subject
Affiliation
Publications with a UAntwerp address
External links
Web of Science
Record
Identifier
Creation 24.04.2019
Last edited 12.11.2024
To cite this reference