FAQ/kappa - CBU statistics Wiki

Kappa statistic evaluation in SPSS

SPSS syntax is available for:

  • [:FAQ/kappa/kappans:Non-square tables where one rater does not give all possible ratings]
  • [:FAQ/kappa/multiple:More than 2 raters]
  • [:FAQ/ad:An inter-rater measure based on Euclidean distances]

Note: Reliability as defined by correlation coefficients (such as kappa) requires variation in the scores to achieve a determinate result. If you have a program which produces a determinate result when the scores of one of the coders are constant, the bug is in that program, not in SPSS. Each rater must give at least two different ratings.
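The reason is visible in the defining formula κ = (p,,o,, − p,,e,,)/(1 − p,,e,,), where p,,o,, is observed agreement and p,,e,, is chance agreement from the marginal proportions. The sketch below (plain Python rather than SPSS, for illustration only; the function name `cohen_kappa` is our own) shows that when one rater is constant p,,o,, and p,,e,, coincide, so kappa is pinned at 0, and when both raters are constant the formula degenerates to 0/0.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' ratings; None when the 0/0 case arises."""
    n = len(a)
    # observed agreement: proportion of items where the raters match
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # chance agreement: sum over categories of the product of marginal proportions
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2
    if p_e == 1:
        return None  # both raters constant and agreeing: kappa is 0/0
    return (p_o - p_e) / (1 - p_e)

print(cohen_kappa([1, 2, 1, 2, 1, 2], [1, 2, 1, 1, 1, 2]))  # ~0.667: normal case
print(cohen_kappa([1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1, 1]))  # 0.0: rater 2 constant
print(cohen_kappa([1, 1, 1], [1, 1, 1]))                    # None: no variation at all
```

Note that with one constant rater the result is a degenerate 0 regardless of how well the other rater performs, which is why variation from both raters is needed for a meaningful coefficient.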

  • [:FAQ/kappa/magnitude:Benchmarks for suggesting what makes a high kappa]

There is also a weighted kappa, which allows different weights to be attached to misclassifications. Warrens (2011) shows that weighted kappa is an example of a more general test of randomness. This [attachment:kappa.pdf paper] by Von Eye and Von Eye gives a comprehensive overview of kappa and its variants. These include a variant by Brennan and Prediger (1981) which enables kappa to attain the maximum value of 1 when the number of category ratings is not fixed; however, Von Eye and Von Eye's paper suggests this measure has drawbacks.
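As a rough illustration of these two variants (again in Python rather than SPSS; the function names and the choice of linear weights |i − j| are our own), the sketch below computes a weighted kappa, κ,,w,, = 1 − Σ w,,ij,,o,,ij,, / Σ w,,ij,,e,,ij,, with w,,ij,, the penalty for rating category i against category j, and the Brennan and Prediger (1981) variant, which replaces the marginal-based chance agreement with a fixed 1/k for k categories.

```python
from collections import Counter

def weighted_kappa(a, b, weight=lambda i, j: abs(i - j)):
    """Weighted kappa: 1 - (weighted observed / weighted chance disagreement)."""
    n = len(a)
    # weighted observed disagreement across the rated items
    obs = sum(weight(x, y) for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # weighted chance disagreement from the product of the marginal proportions
    exp = sum(weight(i, j) * ca[i] * cb[j] for i in ca for j in cb) / n ** 2
    return 1 - obs / exp

def brennan_prediger(a, b, k):
    """Brennan-Prediger kappa: chance agreement fixed at 1/k for k categories."""
    p_o = sum(x == y for x, y in zip(a, b)) / len(a)
    return (p_o - 1 / k) / (1 - 1 / k)

a = [1, 2, 3, 1, 2, 3]  # ordinal ratings from rater 1
b = [1, 2, 3, 2, 2, 3]  # rater 2 disagrees once, by one category
print(weighted_kappa(a, b))       # ~0.8 with linear weights
print(brennan_prediger(a, b, 3))  # ~0.75
```

With the default linear weights a near-miss between adjacent categories is penalised less than a gross misclassification, which is the usual motivation for weighting ordinal ratings.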

References

Brennan RL & Prediger DJ (1981). Coefficient kappa: Some uses, misuses, and alternatives. ''Educational and Psychological Measurement'' '''41''' 687–699.

Warrens MJ (2011). Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa. ''British Journal of Mathematical and Statistical Psychology'' '''64'''(2) 355–365.

FAQ/kappa (last edited 2019-09-24 14:25:53 by PeterWatson)