
R code for computing the ad agreement measure and its 95% critical value

The code below can be pasted directly at the R prompt.

First, install (if necessary) and load the fields and foreign packages.

install.packages(c("fields","foreign"))
library(fields)
library(foreign)

Next read in the raw data, which can be taken from an SPSS spreadsheet with items as rows and raters as columns. An example input for 3 raters (r1, r2 and r3) each rating 5 items (up to a maximum rating of 7), taken from section 5.2 of Kreuzpointner et al. (2010), is read in here.

score <- read.spss("U:\\R_Work\\items.sav")
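
If the SPSS file is not to hand, the same layout can be set up directly in R as a data frame with items as rows and one column per rater. The ratings below are purely illustrative and are not the values used by Kreuzpointner et al.

# illustrative ratings only (not the Kreuzpointner et al. data): 5 items, 3 raters
score <- data.frame(r1 = c(5, 6, 7, 6, 5),
                    r2 = c(6, 6, 7, 5, 5),
                    r3 = c(5, 7, 7, 6, 6))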

Now paste the function below into R.

adval <- function(b, perc) {
  # assumes the ratings have already been read into 'score' as above;
  # b is the maximum possible rating, perc the required percentile (e.g. 0.95)
  score <- data.frame(score)
  score <- as.matrix(score)
  score <- t(score)               # rows are now raters, columns are items
  nrate <- nrow(score)            # number of raters
  nitem <- ncol(score)            # number of items
  a <- 1                          # minimum possible rating

  rate <- matrix(score, nrow=nrate, ncol=nitem)
  adboot <- matrix(0, 10000, 1)   # holds the bootstrapped ad values
  out <- 0

  # observed disagreement: sum of squared pairwise rating differences within each item
  for (i in 1:nitem) {
    out <- out + sum(rdist(rate[,i])^2)/2
  }

  # maximum possible disagreement, which differs for odd and even numbers of raters
  if (nrate - floor(nrate/2)*2 > 0) {
    dmax <- nitem * (b-a)*(b-a)*0.25*(nrate*nrate - 1)
  } else {
    dmax <- nitem * (b-a)*(b-a)*0.25*(nrate*nrate)
  }

  ad <- 1 - (out/dmax)

  # bootstrap to obtain the requested percentile (e.g. the 95th) of the null
  # distribution of ad based on the binomial distribution, using 10000 samples
  # as suggested by Kreuzpointner et al.

  p <- (mean(rate)-1)/(b-1)
  n <- b-1

  for (ict in 1:10000) {
    outboo <- 0
    rb <- rbinom(nitem*nrate, n, p) + 1
    rboo <- matrix(rb, nrow=nrate, ncol=nitem)

    for (i in 1:nitem) {
      outboo <- outboo + sum(rdist(rboo[,i])^2)/2
    }

    adboot[ict] <- 1 - (outboo/dmax)
  }

  cat("ad = ", ad, "\n")
  cat(100*perc, "percentile for ad = ", quantile(adboot, probs=perc), "\n")
}
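
As a sanity check on the disagreement term, the sum(rdist(x)^2)/2 expression simply adds up the squared pairwise rating differences within an item, each pair counted once. A minimal standalone illustration, using three made-up ratings of a single item (not data from the paper), is:

library(fields)
x <- c(2, 5, 7)                # hypothetical ratings of one item by three raters
sum(rdist(x)^2)/2              # sum of squared pairwise differences, counted once
(2-5)^2 + (2-7)^2 + (5-7)^2    # the same quantity written out directly (= 38)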

You are then ready to run the function above, giving the maximum possible rating and 1 minus the significance level (i.e. the required percentile) as inputs.

adval(7,0.95)

which should output something like this:

ad =  0.9722222 
95 percentile for ad =  0.9777778 

Since the observed ad (0.972) does not exceed its 95th percentile under the null distribution (0.978), there is no evidence of agreement between the raters. Note: we obtain a slightly different result from Kreuzpointner et al. because we use a binomial probability of 0.15 rather than the 0.2 used in their paper to estimate the critical 5% threshold for statistical significance.
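
The binomial probability is estimated inside the function from the data as p = (mean(rate)-1)/(b-1), so it can be checked directly once the data are loaded (a quick check, assuming score has been read in as above and b = 7):

rate <- t(as.matrix(data.frame(score)))
(mean(rate) - 1)/(7 - 1)       # the estimated binomial probability used in the bootstrap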

Reference

Kreuzpointner L, Simon P and Theis FJ (2010). The ad coefficient as a descriptive measure of the within-group agreement of ratings. British Journal of Mathematical and Statistical Psychology, 63, 341-360.
