Kappa statistic evaluation in SPSS
SPSS syntax is available for:
- [:FAQ/kappa/kappans:Non-square tables where one rater does not give all possible ratings]
- [:FAQ/kappa/multiple:More than 2 raters]
- [:FAQ/ad:An inter-rater measure based on Euclidean distances]
- [:FAQ/kappa/magnitude:Benchmarks for suggesting what makes a high kappa]
Note: Reliability as defined by correlation coefficients (such as kappa) requires variation in the scores to achieve a determinate result. If a program produces a determinate result when the scores of one of the coders are constant, the bug is in that program, not in SPSS: each rater must use at least two distinct rating categories.
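For the basic case of two raters and a square agreement table, no special syntax is needed: the built-in CROSSTABS procedure reports Cohen's kappa. A minimal sketch, assuming the two sets of ratings are held in variables named rater1 and rater2 (placeholder names, substitute your own):
{{{
* Cohen's kappa for two raters rating the same cases.
* rater1 and rater2 are placeholder variable names.
CROSSTABS
  /TABLES=rater1 BY rater2
  /STATISTICS=KAPPA.
}}}
CROSSTABS only reports kappa when both variables take the same set of values, which is why the non-square case in the list above needs its own syntax.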
There is also a weighted kappa which allows different weights to be attached to misclassifications (Warrens, 2011).
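For orientation, one standard way to write the weighted statistic (a common parameterisation, not taken from the Warrens paper itself) uses disagreement weights on the cells of the agreement table:
{{{
% Weighted kappa with disagreement weights w_{ij} (w_{ii} = 0),
% observed cell proportions p_{ij}, and chance-expected proportions
% e_{ij} = p_{i+} p_{+j} (products of the row and column marginals):
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, p_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}}
}}}
With 0/1 weights (0 on the diagonal, 1 off it) this reduces to the ordinary unweighted kappa.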
__Reference__
Warrens MJ (2011). Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa. ''British Journal of Mathematical and Statistical Psychology'' '''64(2)''', 355–365. First published online 7 December 2010. DOI: 10.1348/2044-8317.002001