
Review article

Interrater reliability: the kappa statistic

Mary L. McHugh ; Department of Nursing, National University, Aero Court, San Diego, California

Full text: English, PDF (180 KB), pages 276-282

Abstract
The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there are a variety of methods for measuring interrater reliability, it was traditionally measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued the use of percent agreement because it cannot account for chance agreement. He introduced Cohen’s kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, kappa can range from -1 to +1. While kappa is one of the most commonly used statistics for testing interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen’s suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels of both kappa and percent agreement that should be demanded in healthcare studies are suggested.
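
The contrast between raw percent agreement and chance-corrected kappa described in the abstract can be illustrated with a minimal sketch. The helper functions and the two-rater data below are hypothetical and not taken from the article; they simply apply the standard definitions (observed agreement, and Cohen's kappa as (p_o − p_e) / (1 − p_e), where p_e is the chance agreement expected from each rater's marginal score frequencies).

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Share of items on which the two raters assign the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal score frequencies."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)      # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the two raters' marginal proportions,
    # summed over all score categories.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings by two data collectors scoring the same 10 records
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "no"]

print(percent_agreement(a, b))  # 0.7
print(cohens_kappa(a, b))       # 0.4 -- noticeably lower once chance is removed
```

In this toy example the raters agree on 70% of records, yet kappa is only 0.4, just below the 0.41 threshold the abstract flags as possibly too lenient for health research, which is exactly the gap between the two measures that the article discusses.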

Keywords
kappa; reliability; rater; interrater

Hrčak ID: 89395

URI
https://hrcak.srce.hr/89395
