
Kappa statistic evaluation in SPSS

SPSS syntax is available for the following special cases; a basic two-rater example follows the list:

  • [:FAQ/kappa/kappans:Non-square tables where one rater does not give all possible ratings]
  • [:FAQ/kappa/multiple:More than 2 raters]
  • [:FAQ/ad:An inter-rater measure based on Euclidean distances]
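For the standard case of two raters using the same set of categories (a square table), no special syntax is needed: kappa can be requested directly from the built-in CROSSTABS procedure. A minimal sketch, assuming the two sets of ratings are held in hypothetical variables rater1 and rater2:

  COMMENT Cohen's kappa for two raters (rater1 and rater2 are hypothetical variable names).
  CROSSTABS
    /TABLES=rater1 BY rater2
    /STATISTICS=KAPPA.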

Note: Reliability as defined by correlation coefficients (such as kappa) requires variation in the scores to achieve a determinate result. If you have a program which produces a determinate result when the scores of one of the coders are constant, the bug is in that program, not in SPSS. Each rater must therefore use at least two distinct rating categories.
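For reference, Cohen's kappa for two raters is defined (in standard notation, not specific to SPSS) as

  \kappa = \frac{P_o - P_e}{1 - P_e}

where P_o is the observed proportion of agreement and P_e the proportion of agreement expected by chance. In the extreme case where both raters give the same constant rating, P_o = P_e = 1 and the formula reduces to the indeterminate 0/0, which is one sense in which variation in the scores is required.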

  • [:FAQ/kappa/magnitude:Benchmarks for suggesting what makes a high kappa]

There is also a weighted kappa, which allows different weights to be attached to different misclassifications; Warrens (2011) shows that a number of chance-corrected measures for 2 × 2 tables coincide with weighted kappa. This [attachment:kappa.pdf paper] by von Eye and von Eye (2005) gives a comprehensive insight into kappa and its variants. These include a variant by Brennan and Prediger (1981) which enables kappa to attain its maximum value of 1 even when the marginal totals are not fixed; however, von Eye and von Eye's paper suggests this measure has drawbacks.
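As a sketch of the quantities involved (standard definitions rather than anything taken verbatim from the papers above), weighted kappa attaches a disagreement weight w_ij to each pair of categories, while the Brennan and Prediger (1981) variant replaces the chance term P_e by 1/k for k categories:

  \kappa_w = 1 - \frac{\sum_{i,j} w_{ij} p_{ij}}{\sum_{i,j} w_{ij} p_{i.} p_{.j}} \qquad \kappa_n = \frac{P_o - 1/k}{1 - 1/k}

where p_ij are the observed cell proportions, p_i. and p_.j the marginal proportions, and the weights w_ij are zero on the diagonal. Setting all off-diagonal weights to 1 recovers ordinary (unweighted) Cohen's kappa.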

References

Brennan RL & Prediger DJ (1981). Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement 41, 687–699.

von Eye A & von Eye M (2005). Can One Use Cohen's Kappa to Examine Disagreement? Methodology 1(4) 129–142.

Warrens MJ (2011). Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa. British Journal of Mathematical and Statistical Psychology 64(2) 355–365.