
Intraclass correlations

An alternative to the kappa statistic for estimating rater reliability is the intraclass correlation coefficient (ICC), which is computed from analysis of variance output.

For a repeated measures ANOVA involving k raters it takes the form

$$\mbox{ICC} = \frac{\mbox{MS(subjects)} - \mbox{MS(subjects} \times \mbox{raters)}}{\mbox{MS(subjects)} + (k-1)\,\mbox{MS(subjects} \times \mbox{raters)}}$$

where MS denotes a mean square from the repeated measures analysis of variance.
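
As a rough illustration (not part of the original page), the sketch below computes this ICC in Python directly from a subjects x raters matrix of ratings; the data and variable names are invented for the example.

{{{
import numpy as np

# rows = subjects, columns = raters; made-up illustrative ratings
ratings = np.array([
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
], dtype=float)
n, k = ratings.shape

grand_mean = ratings.mean()
subject_means = ratings.mean(axis=1)
rater_means = ratings.mean(axis=0)

# Sums of squares for the two-way subjects x raters decomposition
ss_subjects = k * np.sum((subject_means - grand_mean) ** 2)
ss_raters = n * np.sum((rater_means - grand_mean) ** 2)
ss_total = np.sum((ratings - grand_mean) ** 2)
ss_interaction = ss_total - ss_subjects - ss_raters

# Mean squares used in the ICC formula above
ms_subjects = ss_subjects / (n - 1)
ms_interaction = ss_interaction / ((n - 1) * (k - 1))

icc = (ms_subjects - ms_interaction) / (ms_subjects + (k - 1) * ms_interaction)
print(f"ICC = {icc:.3f}")
}}}

The same mean squares can be read off the repeated measures ANOVA table produced by SPSS or any other package and plugged into the formula by hand.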

Examples of its use in SPSS [http://www.nyu.edu/its/statistics/Docs/intracls.html are available].

The fixed ICC correlations called sfsingle, sfrandom and sffixed in the above article are of the form $$\frac{\mbox{true inter-rater variance}}{\mbox{true inter-rater variance} + \mbox{common error in rating variance}}$$ which is mentioned as a reliability correlation in the two-rater case, for example, in [http://www-users.york.ac.uk/%7Emb55/talks/oxtalk.htm a paper by Martin Bland and Doug Altman].
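
To see why the ANOVA formula above estimates a quantity of this form, one can use the standard expected mean squares for the two-way subjects x raters layout (a textbook result, sketched here for completeness). Writing $\sigma^2_{s}$ for the true inter-subject (inter-rater reliability) variance and $\sigma^2_{e}$ for the error variance,

$$E[\mbox{MS(subjects)}] = \sigma^2_{e} + k\,\sigma^2_{s}, \qquad E[\mbox{MS(subjects} \times \mbox{raters)}] = \sigma^2_{e}$$

so that

$$\frac{\mbox{MS(subjects)} - \mbox{MS(subjects} \times \mbox{raters)}}{\mbox{MS(subjects)} + (k-1)\,\mbox{MS(subjects} \times \mbox{raters)}} \approx \frac{k\,\sigma^2_{s}}{k\,\sigma^2_{s} + k\,\sigma^2_{e}} = \frac{\sigma^2_{s}}{\sigma^2_{s} + \sigma^2_{e}}$$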

An overview of approaches to inter-rater reliability, including the ICC, is given by Darroch and McCloud (1986).

Reference:

Darroch JN, McCloud PI (1986) Category distinguishability and observer agreement. Australian Journal of Statistics 28, 371-388.