
An inter-rater agreement measure based on Euclidean distances, $$a_\text{d}$$

Kreuzpointner, Simon and Theis (2010) suggest an alternative measure of inter-rater reliability called $$a_\text{d}$$ which also takes values between 0 and 1, with values near 1 indicating agreement between raters.
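In generic form (a sketch consistent with the verbal description below, not necessarily the paper's exact notation or normalisation), a distance-based agreement coefficient of this kind can be written as

$$a_\text{d} = 1 - \frac{d_\text{obs}}{d_\text{max}}$$

where $$d_\text{obs}$$ is the observed sum of squared differences between rater pairs and $$d_\text{max}$$ is the largest value that sum can take on the rating scale, so that identical ratings across raters give $$a_\text{d} = 1$$.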

It is based on summing the squares of the differences between all pairs of raters' ratings across items (Euclidean distances). This is an intuitive approach: if, for example, a pair of raters give the same rating to the same item, the difference is zero for that pair of raters on that item.
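As a concrete illustration of that description, the sketch below sums the squared pairwise rating differences for a small raters-by-items matrix and rescales the sum to a 0-1 agreement score. The function name ad_sketch, the example data and the rescaling by the largest attainable squared difference are illustrative assumptions rather than the published formula; use the linked program or R code below for the exact coefficient.

{{{
# Illustrative sketch only: rescales the summed squared pairwise
# differences to [0, 1]; the exact normalisation in Kreuzpointner,
# Simon and Theis (2010) may differ, so treat this as a toy version.
ad_sketch <- function(ratings, scale_min, scale_max) {
  # ratings: raters in rows, items in columns
  n_items <- ncol(ratings)
  pairs   <- combn(nrow(ratings), 2)        # all pairs of raters (columns)
  # sum of squared differences over all rater pairs and items
  ssd <- sum(apply(pairs, 2, function(p) {
    sum((ratings[p[1], ] - ratings[p[2], ])^2)
  }))
  # largest possible squared difference per pair per item
  max_ssd <- ncol(pairs) * n_items * (scale_max - scale_min)^2
  1 - ssd / max_ssd                         # 1 = perfect agreement
}

# Three raters rating four items on a 1-5 scale
ratings <- rbind(c(4, 5, 3, 4),
                 c(4, 4, 3, 5),
                 c(5, 4, 3, 4))
ad_sketch(ratings, scale_min = 1, scale_max = 5)
}}}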

Raw data may be entered into the online program available here, which produces the agreement measure and a test of statistical significance. R code for producing the agreement measure and its 95% critical threshold is given here.

This measure gives the same results as the within-group agreement coefficient recommended by Bliese (2000), Chan (1998) and others, based on the work of Finn (1970).
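For comparison, a minimal sketch of Finn's (1970) coefficient is given below: it divides the observed rating variance by the variance expected if raters chose uniformly at random among the $$A$$ scale categories, $$(A^2 - 1)/12$$, and subtracts the ratio from 1. The function name finn_sketch, the use of the sample variance and the averaging over items are my assumptions about applying it to a multi-item matrix.

{{{
# Sketch of Finn's (1970) agreement coefficient: 1 minus the ratio of
# the observed rating variance to the variance of a discrete uniform
# distribution over the A scale categories, (A^2 - 1) / 12.
# Sample variances and averaging across items are assumptions here.
finn_sketch <- function(ratings, n_categories) {
  chance_var <- (n_categories^2 - 1) / 12  # discrete uniform variance
  per_item   <- apply(ratings, 2, var)     # observed variance per item
  mean(1 - per_item / chance_var)
}

# Three raters, four items, 1-5 scale (same matrix as above)
ratings <- rbind(c(4, 5, 3, 4),
                 c(4, 4, 3, 5),
                 c(5, 4, 3, 4))
finn_sketch(ratings, n_categories = 5)
}}}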

References

Bliese PD (2000) Within-group agreement, non-independence, and reliability: implications for data aggregation and analysis. In KJ Klein & SWJ Kozlowski (Eds.), Multilevel theory, research, and methods in organizations (pp. 349-381). San Francisco: Jossey-Bass.

Chan D (1998) Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83(2), 234-246.

Finn RH (1970) A note on estimating the reliability of categorical data. Educational and Psychological Measurement, 30, 71-76.

Kreuzpointner L, Simon P and Theis FJ (2010) The $$a_\text{d}$$ coefficient as a descriptive measure of the within-group agreement of ratings. British Journal of Mathematical and Statistical Psychology, 63, 341-360.
