Diff for "FAQ/icc" - CBU statistics Wiki
location: Diff for "FAQ/icc"
Differences between revisions 26 and 27
Revision 26 as of 2010-11-17 11:23:55
Size: 3125
Editor: PeterWatson
Comment:
Revision 27 as of 2010-11-17 11:25:06
Size: 3175
Editor: PeterWatson
Comment:
Deletions are marked like this. Additions are marked like this.
Line 14: Line 14:

where n is the number of subejcts being rated.

Intraclass correlations

An alternative to the kappa statistic, one which uses analysis of variance output to estimate rater reliability, is the intraclass correlation coefficient (ICC).

For a repeated measures ANOVA involving k raters, assuming both subjects and raters are fixed effects, it follows that

$$\mbox{ICC1} = \frac{\mbox{MS(subjects)} - \mbox{MS(subjects x raters)}}{\mbox{MS(subjects)} + (k-1)\,\mbox{MS(subjects x raters)}}$$

where MS is the mean square from the repeated measures analysis of variance. It follows that the intra-class correlation (ICC), unlike the Pearson correlation, is useful for pooling data where each subject has three or more observations. Einfield and Tonge (1992, p 12) prefer the ICC to the Pearson correlation as it is more conservative, owing to the fact that it "takes account of the absolute as well as the relative difference between the scores of two raters".
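
As a rough illustration (not part of the original page), a minimal Python sketch of ICC1 computed directly from the two mean squares; the function name icc1_from_ms and its argument names are just labels chosen for this example.

{{{#!python
def icc1_from_ms(ms_subjects, ms_subj_x_raters, k):
    """ICC1 from repeated measures ANOVA mean squares, with k raters."""
    return (ms_subjects - ms_subj_x_raters) / (
        ms_subjects + (k - 1) * ms_subj_x_raters
    )
}}}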

Howell (1997) recommends an alternative, most widely used ICC which assumes that the raters are a random sample from a larger population (called one-way random in SPSS). This has an extra term in the denominator and is of the form

$$\mbox{ICC2} = \frac{\mbox{MS(subjects)} - \mbox{MS(subjects x raters)}}{\mbox{MS(subjects)} + (k-1)\,\mbox{MS(subjects x raters)} + k\left[\mbox{MS(raters)} - \mbox{MS(subjects x raters)}\right]/n}$$

where n is the number of subjects being rated.
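
Again as a sketch only (the names are illustrative, not from this page), ICC2 simply adds the extra rater term to the denominator, where n is the number of subjects:

{{{#!python
def icc2_from_ms(ms_subjects, ms_raters, ms_subj_x_raters, k, n):
    """ICC2 from repeated measures ANOVA mean squares, with k raters and n subjects."""
    return (ms_subjects - ms_subj_x_raters) / (
        ms_subjects
        + (k - 1) * ms_subj_x_raters
        + k * (ms_raters - ms_subj_x_raters) / n
    )
}}}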

ICC2 is to be preferred because, unlike ICC1, it takes account of differences in absolute ratings between raters. For example, suppose we have two raters and one rater always gives exactly half the rating of the other; then only ICC2 has an appropriately low value. For instance, if two raters rate three subjects giving ratings 1,2; 2,4; 3,6 then ICC1 = 0.80 and ICC2 = 0.46.
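
These figures can be checked with a short, self-contained Python sketch (an illustration, not taken from this page) that builds the two-way ANOVA mean squares from the raw ratings and then applies the two formulae above.

{{{#!python
# Three subjects (rows) rated by two raters (columns); rater B gives twice rater A's score.
ratings = [[1, 2], [2, 4], [3, 6]]
n, k = len(ratings), len(ratings[0])                      # n = 3 subjects, k = 2 raters

grand = sum(x for row in ratings for x in row) / (n * k)  # grand mean = 3.0
subj_means = [sum(row) / k for row in ratings]            # 1.5, 3.0, 4.5
rater_means = [sum(row[j] for row in ratings) / n for j in range(k)]  # 2.0, 4.0

ss_subjects = k * sum((m - grand) ** 2 for m in subj_means)       # 9.0
ss_raters = n * sum((m - grand) ** 2 for m in rater_means)        # 6.0
ss_total = sum((x - grand) ** 2 for row in ratings for x in row)  # 16.0
ss_inter = ss_total - ss_subjects - ss_raters                     # 1.0

ms_subjects = ss_subjects / (n - 1)        # 4.5
ms_raters = ss_raters / (k - 1)            # 6.0
ms_inter = ss_inter / ((n - 1) * (k - 1))  # 0.5

icc1 = (ms_subjects - ms_inter) / (ms_subjects + (k - 1) * ms_inter)
icc2 = (ms_subjects - ms_inter) / (
    ms_subjects + (k - 1) * ms_inter + k * (ms_raters - ms_inter) / n
)
print(round(icc1, 2), round(icc2, 2))      # 0.8 0.46
}}}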

Examples of its use in SPSS [http://www.nyu.edu/its/statistics/Docs/intracls.html are available here] and [attachment:ICC.doc here.]

The fixed ICC correlations called sfsingle, sfrandom and sffixed in the above article are of the form

$$\frac{\mbox{true inter-rater variance}}{\mbox{true inter-rater variance} + \mbox{common error in rating variance}}$$

which is mentioned as a reliability correlation in the two-rater case, for example, in [http://www-users.york.ac.uk/%7Emb55/talks/oxtalk.htm a paper by Martin Bland and Doug Altman.]

An overview of approaches to inter-rater reliability, including the ICC, is given by Darroch and McCloud (1986).

  • [:FAQ/iccpr: Inferiority of using a Pearson correlation compared to an ICC]

References

Darroch, JN and McCloud, PI (1986) Category distinguishability and observer agreement. Australian Journal of Statistics, 28, 371-88.

Howell, DC (1997) Statistical methods for psychology. Fourth edition. Wadsworth: Belmont, CA. (pages 490-493).

Einfield, SL and Tonge, BJ (1992) Manual for the Developmental Behaviour Checklist (DBC) (Primary Carer version). Melbourne: School of Psychiatry, University of New South Wales, and Centre for Developmental Psychiatry, Monash University, Clayton, Victoria.

Shrout, PE and Fleiss, JL (1979) Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 86(2), 420-428. (A good primer showing how ANOVA output can be used to compute ICCs).
