Diff for "FAQ/kappa" - CBU statistics Wiki
Kappa statistic evaluation in SPSS

SPSS syntax is available for the following cases (a basic example is sketched after the list):

  • [:FAQ/kappa/kappans:Non-square tables where one rater does not give all possible ratings]
  • [:FAQ/kappa/multiple:More than 2 raters]
  • [:FAQ/ad:An inter-rater measure based on Euclidean distances]
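
As a minimal sketch of the simplest case (two raters and a square table, with hypothetical variables rater1 and rater2 holding each rater's category codes for the same cases), unweighted kappa can be requested from the built-in CROSSTABS procedure:

{{{
* Cohen's kappa for two raters: rater1 and rater2 are hypothetical
  categorical variables holding each rater's code for the same cases.
CROSSTABS
  /TABLES=rater1 BY rater2
  /STATISTICS=KAPPA.
}}}

CROSSTABS only reports kappa when the two variables form a square table (both raters use the same set of codes), which is why the non-square case linked above needs special handling.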

Note: Reliability as defined by correlation coefficients (such as kappa) requires variation in the scores to achieve a determinate result. If you have a program which produces a determinate result when the scores of one of the coders is constant, the bug is in that program, not in SPSS. Each rater must use at least two different rating categories.
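
For reference, Cohen's kappa is defined from the observed proportion of agreement P_o and the chance-expected proportion of agreement P_e (the standard definition, not specific to SPSS):

{{{
\kappa = \frac{P_o - P_e}{1 - P_e},
\qquad
P_e = \sum_{k} p_{k\cdot}\, p_{\cdot k}
}}}

where p_{k.} and p_{.k} are the two raters' marginal proportions for category k. In the extreme case where both raters place every case in the same single category, P_o = P_e = 1 and the formula reduces to the indeterminate 0/0.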

  • [:FAQ/kappa/magnitude:Benchmarks for suggesting what makes a high kappa]

There is also a weighted kappa, which allows different weights to be attached to different misclassifications. Warrens (2011) shows that weighted kappa is an example of a more general test of randomness. This [attachment:kappa.pdf paper] by von Eye and von Eye (2005) gives a comprehensive insight into kappa and its variants. These include a variant by Brennan and Prediger (1981) which enables kappa to attain its maximum value of 1 by assessing agreement against a uniform distribution when the numbers of category ratings are not fixed. Von Eye and von Eye's paper suggests, however, that this measure can give a misleadingly high value if the raters give different numbers of category ratings.
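
As a sketch of the two variants just mentioned, in standard notation (p_{ij} are the observed cell proportions, e_{ij} = p_{i.} p_{.j} the chance-expected proportions, w_{ij} the disagreement weights, P_o the observed agreement and k the number of categories):

{{{
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, p_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}}
\qquad \text{(weighted kappa)}

\kappa_n = \frac{P_o - 1/k}{1 - 1/k}
\qquad \text{(Brennan and Prediger, 1981)}
}}}

The Brennan and Prediger form simply replaces the marginal-based chance term P_e with 1/k, the agreement expected under a uniform distribution over the k categories, which is what allows it to reach 1 when the marginal totals are not fixed.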

References

Brennan RL & Prediger DJ (1981). Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement 41, 687–699.

von Eye A & von Eye M (2005). Can One Use Cohen's Kappa to Examine Disagreement? Methodology 1(4), 129–142.

Warrens MJ (2011). Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa. British Journal of Mathematical and Statistical Psychology 64(2), 355–365.
