Diff for "FAQ/kappa" - CBU statistics Wiki
location: Diff for "FAQ/kappa"
Differences between revisions 12 and 15 (spanning 3 versions)
Revision 12 as of 2012-07-06 14:29:51
Size: 90
Editor: 188
Comment: This is an atrclie that makes you think "never thought of that!"
Revision 15 as of 2012-07-09 13:31:37
Size: 1104
Editor: PeterWatson
Comment:
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
This is an atrclie that makes you think "never thought of that!"
----
CategoryHomepage
== Kappa statistic evaluation in SPSS ==

SPSS syntax is available for:

 * [:FAQ/kappa/kappans:Non-square tables where one rater does not give all possible ratings]

 * [:FAQ/kappa/multiple:More than 2 raters]

 * [:FAQ/ad:An inter-rater measure based on Euclidean distances]


'''Note:''' Reliability as defined by correlation coefficients (such as Kappa)
requires variation in the scores to achieve a determinate result. If you
have a program which produces a determinate result when the scores of one
of the coders are constant, the bug is in that program, not in SPSS. Each rater must therefore give at least two different ratings.
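
As a rough illustration of the chance correction described in the note above, the sketch below computes Cohen's kappa directly from two lists of ratings. It is plain Python with made-up data, not the SPSS syntax linked from this page, and it simply raises an error in the indeterminate case where the ratings do not vary.

{{{#!python
# Illustrative sketch only (plain Python, not SPSS syntax).
# Ratings are assumed to be coded as category labels; the data are made up.
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o
    corrected for the agreement p_e expected by chance from the two
    raters' marginal distributions."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    marg1, marg2 = Counter(rater1), Counter(rater2)
    categories = set(marg1) | set(marg2)
    p_e = sum((marg1[c] / n) * (marg2[c] / n) for c in categories)
    if p_e == 1:
        # No variation to correct for (e.g. both raters give one constant,
        # identical rating): the denominator is zero and kappa is indeterminate.
        raise ValueError("kappa is indeterminate: the ratings do not vary")
    return (p_o - p_e) / (1 - p_e)

rater1 = [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]
rater2 = [1, 2, 3, 1, 2, 1, 1, 3, 3, 1]
print(round(cohen_kappa(rater1, rater2), 3))   # 0.692 for this made-up data
}}}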

 * [:FAQ/kappa/magnitude:Benchmarks for suggesting what makes a high kappa]

There is also a weighted kappa, which allows different weights to be attached to different misclassifications. Warrens (2011) shows that weighted kappa is an example of a more general test of randomness.
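
As a rough sketch of how the misclassification weights enter the calculation (again plain Python with made-up data rather than SPSS syntax, and with linear and quadratic weighting schemes assumed here for demonstration):

{{{#!python
# Illustrative sketch of weighted kappa, assuming an ordinal scale coded as
# integers and a caller-supplied disagreement weight w(i, j); data are made up.
from collections import Counter
from itertools import product

def weighted_kappa(rater1, rater2, weight=lambda i, j: abs(i - j)):
    """kappa_w = 1 - sum(w_ij * o_ij) / sum(w_ij * e_ij): o_ij are observed
    cell proportions, e_ij chance-expected proportions from the marginals,
    and w_ij the cost attached to confusing category i with category j."""
    n = len(rater1)
    cats = sorted(set(rater1) | set(rater2))
    observed = Counter(zip(rater1, rater2))
    marg1, marg2 = Counter(rater1), Counter(rater2)
    num = sum(weight(i, j) * observed[(i, j)] / n for i, j in product(cats, cats))
    den = sum(weight(i, j) * marg1[i] * marg2[j] / n ** 2 for i, j in product(cats, cats))
    return 1 - num / den

rater1 = [1, 2, 3, 1, 2, 3, 1, 2, 3, 1]
rater2 = [1, 2, 3, 1, 2, 1, 1, 3, 3, 1]
print(round(weighted_kappa(rater1, rater2), 3))                             # linear weights
print(round(weighted_kappa(rater1, rater2, lambda i, j: (i - j) ** 2), 3))  # quadratic weights
}}}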

__Reference__

Warrens MJ (2011). Chance-corrected measures for 2 × 2 tables that coincide with weighted kappa. ''British Journal of Mathematical and Statistical Psychology'' '''64(2)''' 355–365.
