= Effect size for multilevel models =

$$\rho = \frac{\mbox{variance(between subjects)}}{\mbox{variance(between subjects)} + \mbox{variance(within subjects)}}$$

Wikipedia states that this ratio (the intraclass correlation) is used with clustered observations to form a "design effect". If we fit a mixed model with a '''random subject-specific intercept''', the clusters are the observations within a participant, e.g. the 7 times the participant chose a fruit or a snack. The "design effect" is

$$D_{\mbox{eff}} = 1 + (m - 1)\rho$$

where m is the number of observations in each cluster (e.g. the number of repeated measures per subject).

The [[attachment:sandb.pdf | Snijders and Bosker]] effect size, formed by pooling the intercept and residual variances, may be evaluated using the R syntax below, which incorporates a function written by Adam Wagner. T statistics associated with fixed regression terms may also be used (see [[FAQ/td|here]]); these can be converted into Cohen's d and r (correlation) effect sizes for pairwise group and covariate predictors respectively.
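Before the full syntax, here is a minimal sketch (not part of the original code) showing how $$\rho$$ and $$D_{\mbox{eff}}$$ can be read off the variance components of a random-intercept lme fit such as bdiModelA defined in the syntax below; iccDesignEffect is just an illustrative helper name.

{{{
# Illustrative sketch (assumes an nlme::lme fit with a single random subject
# intercept, e.g. bdiModelA in the syntax below); iccDesignEffect is a
# hypothetical helper name, not part of the original syntax
library(nlme)

iccDesignEffect <- function(model, m){
  vc    <- VarCorr(model)                             # variance components table
  vsubj <- as.numeric(vc["(Intercept)", "Variance"])  # between-subject variance
  vres  <- as.numeric(vc["Residual", "Variance"])     # within-subject (residual) variance
  rho   <- vsubj/(vsubj + vres)                       # intraclass correlation
  c(rho = rho, Deff = 1 + (m - 1)*rho)                # design effect for m observations per cluster
}

# e.g. iccDesignEffect(bdiModelA, m = 7) for 7 repeated measures per subject
}}}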
{{{
#install.packages("stringr")
library(stringr)
library(foreign)   # makes data import/export commands available
library(graphics)  # makes graphics available
library(stats)     # makes some important stats available, e.g. nlm
library(nlme)      # makes nlme and lme available
#library(lme4)     # not needed as the code only uses lme, which has a VarCorr function of its own;
                   # the function below does not work with VarCorr produced by the lme4 library

BDIData <- read.spss("U:\\My Documents\\BDItest.sav")
BDIData <- data.frame(BDIData)
BDIData$BDI[BDIData$BDI==999] <- NA
BDIData <- na.omit(BDIData)
attach(BDIData)

bdiModelA <- lme(BDI~grp*time, random=~1|id, BDIData, method="ML", na.action=na.omit)
summary(bdiModelA)

############################################################################################
# Function calculates the proportion of variance explained by each term in an nlme LME model
# Assumes:
#  - stringr package loaded
#  - nlme package loaded (VarCorr is key for the following code to work)
# Not sure whether this code works in the presence of interactions of order greater than 2
calcPropExplain <- function(mod){
  # Inner function for calculating pooled variance - ignores any slope
  pooledVar <- function(model)
    sum(as.numeric(VarCorr(model)[c("(Intercept)","Residual"), "Variance"]))

  # Extract the names of the terms in the model
  termLabels <- attr(terms(mod), "term.labels")

  # Refit the model length(termLabels) times, where each fit excludes one term
  # (and any interactions associated with that term); lapply collects the
  # refitted models in the list modDroppedTerms
  modDroppedTerms <- lapply(termLabels, function(term){
    termsToDrop <- termLabels[str_detect(termLabels, term)]
    update(mod, paste(".~.", paste("-", termsToDrop, sep="", collapse="")))})

  # Find comparison models for looking at the change in proportion of variance explained.
  # Essentially, where there are interactions involving the term being tested, drop them
  # from the model. This checks for any terms in the model with names 'a:' or ':a', e.g.
  # grp:time will be flagged as an interaction to drop for grp, since 'grp:' is part of
  # grp:time, and also for time, since ':time' is part of the string grp:time
  compModels <- lapply(termLabels, function(term){
    intsToDrop <- c(termLabels[str_detect(termLabels, paste(term, ":", sep=""))],
                    termLabels[str_detect(termLabels, paste(":", term, sep=""))])
    if(length(intsToDrop)==0) return(mod)
    else return(update(mod, paste(".~.", paste("-", intsToDrop, sep="", collapse=""))))
  })

  # Find the pooled residual variance for each set of models; sapply produces a vector
  # of values and puts these in residVars and compModelVars
  residVars <- sapply(modDroppedTerms, pooledVar)
  compModelVars <- sapply(compModels, pooledVar)

  # Calculate the variance explained by each term
  propExplained <- 1-compModelVars/residVars
  names(propExplained) <- termLabels

  propExplained
}

calcPropExplain(bdiModelA)
}}}
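The conversion of fixed-effect t statistics mentioned above can be sketched as follows. This is not part of the Wagner function or the linked page; it assumes the common approximations r = t/sqrt(t^2 + df) and d = 2t/sqrt(df), with the t values and degrees of freedom taken from the lme summary table, and tToEffectSizes is an illustrative name.

{{{
# Illustrative sketch (not part of the Snijders and Bosker syntax above):
# converts the lme fixed-effect t statistics into r and Cohen's d using the
# usual approximations r = t/sqrt(t^2 + df) and d = 2t/sqrt(df)
tToEffectSizes <- function(model){
  tt <- summary(model)$tTable            # fixed effects: Value, Std.Error, DF, t-value, p-value
  t  <- tt[, "t-value"]
  df <- tt[, "DF"]
  data.frame(r = t/sqrt(t^2 + df),       # correlation effect size (covariate predictors)
             d = 2*t/sqrt(df))           # Cohen's d (pairwise group comparisons)
}

# e.g. tToEffectSizes(bdiModelA)
}}}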