FAQ/td - CBU statistics Wiki

Revision 86 as of 2017-06-05 14:42:28


How do I convert a t-statistic (and Odds Ratio) into an effect size?

One sample t

Since t = Sqrt(n) (mean - constant) / sd

Cohen's d = t / Sqrt(n) = (mean - constant) / sd

This value of Cohen's d is used by Lenth (2006) in his one-sample (and paired-sample) t-test options.

Notice that as n goes up, t will increase provided the new observations are close to the mean, which should be the case if, as assumed, the response is normally distributed.
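As a quick numerical sketch (not part of the original wiki, which uses R), the one-sample conversion can be checked in Python with illustrative made-up data:

```python
import math
from statistics import mean, stdev

# Hypothetical sample and test constant (illustrative values only)
x = [4.1, 5.3, 3.8, 6.0, 5.2, 4.7, 5.9, 4.4]
constant = 4.0
n = len(x)

# One-sample t statistic: t = sqrt(n) * (mean - constant) / sd
t = math.sqrt(n) * (mean(x) - constant) / stdev(x)

# Cohen's d recovered from t agrees with the direct definition
d_from_t = t / math.sqrt(n)
d_direct = (mean(x) - constant) / stdev(x)
print(round(d_from_t, 6), round(d_direct, 6))
```

The two values printed are identical, as the algebra above implies.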

Two sample unpaired t

Pustejovsky (2014) states that for an unpaired t-test with sample sizes n1 and n2

Cohen's d = t [Sqrt(1/n1 + 1/n2)]

When n1 = n2

Cohen's d = 2t / Sqrt(df) (see Rosenthal (1994) and Howell (2013), p.649). This formula is used to compute Cohen's d from a t ratio via the CohensD.unpairedT function in R. For example, for a t statistic equal to 2 based upon two groups each of size 12 we have

install.packages("BaylorEdPsych")
library(BaylorEdPsych)
CohensD.unpairedT(2,12,12)

giving

        d 
0.8164966 

As n1 and n2 increase, the t value should also increase if the new observations are close to their group means (which will tend to be the case for the assumed Normal distributions within each group), since the group means should remain roughly constant and the variance of each mean is related to the spread about the group means and 1/(group size). You can also obtain a Cohen's d from an F ratio (see formula 5 here).
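For readers without R, a minimal Python sketch of Pustejovsky's unpaired conversion reproduces the CohensD.unpairedT output above:

```python
import math

def cohens_d_unpaired(t, n1, n2):
    """Cohen's d from an unpaired t statistic (Pustejovsky, 2014):
    d = t * sqrt(1/n1 + 1/n2)."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Same example as the R snippet: t = 2, two groups each of size 12
print(cohens_d_unpaired(2, 12, 12))  # -> 0.816496... as in the R output
```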

Paired t

Baguley (2012, p.271) gives a formula, amongst other conversion formulae, for converting a paired t to d using a joint group size equal to n:

d = (mean difference) / (population sd of the differences), where the population sd is the square root of 1/(n-1) times the sum of squared deviations of each difference from the mean difference.

d, above, can also be expressed as

d = t Sqrt(1/n) Sqrt(n/(n-1)) = (mean difference) / (sd of the sample differences) (see p.248 of Baguley (2012)), where the sd of the sample differences is the square root of 1/n times the sum of squared deviations of each difference from the overall sample mean difference (as defined on page 23 of Baguley (2012)), and Sqrt[n/(n-1)] is the correction factor for estimating a population sd from a sample sd (pages 26-27 of Baguley).
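Since Sqrt(1/n) Sqrt(n/(n-1)) simplifies to 1/Sqrt(n-1), Baguley's paired conversion can be sketched in Python as follows (the t and n values here are hypothetical, purely for illustration):

```python
import math

def paired_d(t, n):
    # Baguley's conversion: d = t * sqrt(1/n) * sqrt(n/(n-1)),
    # which simplifies algebraically to t / sqrt(n - 1)
    return t * math.sqrt(1 / n) * math.sqrt(n / (n - 1))

# Hypothetical example: paired t = 3 from n = 10 paired observations
print(round(paired_d(3, 10), 4))  # 3 / sqrt(9) = 1.0
```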

Dunlap, Cortina, Vaslow, & Burke (1996), however, convincingly argue that the original standard deviations (or the between-group t-test value) should be used to compute the ES for correlated designs. They argue that if the pooled standard deviation is corrected for the amount of correlation between the measures, then the ES estimate will be an overestimate of the actual ES. See here for further details. Jake Westfall also advocates here using unpaired t-tests irrespective of whether the comparison is paired or unpaired. (A copy, in case the link is broken, is here.)

2 way interaction

Abelson and Prentice (1997) suggest a way of converting a F statistic from a two-way interaction into Cohen's d:

Cohen's d = Sqrt(2) Sqrt(F)/Sqrt(n)

where n is the assumed equal number of observations for each combination of the two factors. If these are unequal then we use the harmonic mean of the sample sizes.

The two sample t-test with equal sample sizes is a special case since t equals Sqrt(F) and df is made equal to 2n.

Pearson Correlation

Rosenthal (1994) and Field, Miles and Field (2012, p.581) also give a conversion formula to turn a t-statistic into a correlation:

Correlation = Sqrt[t^2 / (t^2 + df)]
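In Python this conversion is one line; the t and df values in the example are hypothetical:

```python
import math

def t_to_r(t, df):
    # r = sqrt(t^2 / (t^2 + df))  (Rosenthal, 1994)
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Hypothetical example: t = 2 on 22 degrees of freedom
print(round(t_to_r(2, 22), 4))  # sqrt(4/26) = 0.3922
```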

General Conversions

Jamie DeCoster (2012) has written a spreadsheet to convert a range of commonly used effect sizes such as Cohen's d, Pearson's r and odds ratios. He also has other Excel spreadsheets for computing effect sizes, located here. Anwar Alhenshiri has also written a spreadsheet which also computes the 95% confidence interval for Cohen's d and converts a chi-square on 1 degree of freedom to a Cohen's d and correlation. This spreadsheet is also available from Anwar's website here. Pustejovsky (2014) gives simple-to-use formulae for computing effect sizes from t and F statistics and converting between d, r and z.

Converting an Odds Ratio to d

From equation (15) in Sanchez-Meca et al. (2003)

d = ln(OR) x Sqrt(3) / pi
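A Python sketch of the Sanchez-Meca et al. conversion (the odds ratio in the example is an arbitrary illustrative value):

```python
import math

def odds_ratio_to_d(odds_ratio):
    # Equation (15) of Sanchez-Meca et al. (2003): d = ln(OR) * sqrt(3) / pi
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

# Hypothetical example: an odds ratio of 3
print(round(odds_ratio_to_d(3.0), 4))  # -> 0.6057
```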

Converting F to an R2

R2 = df1*F / (df1*F + df2), where F is distributed as F(df1,df2). See here for further details. If this link is broken the working out is reproduced below:

Let SST be the total (corrected) sum of squares, let SSR be the sum of squares from the regression model (which must contain df1 predictors in addition to the mean), and let the error sum of squares be SSE = SST - SSR. Then R^2 = SSR / SST and F = (SSR/df1) / (SSE/df2), and the stated relationship can be obtained with a little algebra.

Similarly, F = (df2/df1) * R2 / (1 - R2).
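Both directions of this conversion can be sketched in Python and round-tripped to check the algebra (the F and degrees of freedom in the example are hypothetical):

```python
def f_to_r2(F, df1, df2):
    # R^2 = df1*F / (df1*F + df2)
    return df1 * F / (df1 * F + df2)

def r2_to_f(r2, df1, df2):
    # Inverse relationship: F = (df2/df1) * R^2 / (1 - R^2)
    return (df2 / df1) * r2 / (1 - r2)

# Hypothetical example: F = 5 on (3, 40) degrees of freedom
r2 = f_to_r2(5, 3, 40)
print(round(r2, 4))                   # 15/55 = 0.2727
print(round(r2_to_f(r2, 3, 40), 4))  # round-trip recovers F = 5
```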

References

Abelson, R. P. and Prentice, D. A. (1997) Contrast tests of interaction hypotheses. Psychological Methods 2(4) 315-328.

Baguley, T. (2012) Serious Stats. A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan:New York. In addition to those mentioned above, Chapter 7 gives some conversion formulae including converting from r to g, where g is an effect size estimator which is very closely related to d.

DeCoster, J. (2012) Spreadsheet for converting effect size measures. Available from: http://www.stat-help.com/spreadsheets/Converting%20effect%20sizes%202012-06-19.xls (accessed 04.09.2014)

Dunlap, W. P., Cortina, J. M., Vaslow, J. B., & Burke, M. J. (1996). Meta-analysis of experiments with matched groups or repeated measures designs. Psychological Methods 1 170-177.

Field, A., Miles, J. and Field, Z. (2012) Discovering statistics using R. Sage:London.

Howell, D. C. (2013) Statistical methods for psychologists. 8th Edition. International Edition. Wadsworth:Belmont, CA.

Lenth, R. V. (2006) Java Applets for Power and Sample Size [Computer software]. Retrieved month day, year, from http://www.stat.uiowa.edu/~rlenth/Power.

Pustejovsky, J. E. (2014) Converting from d to r to z when the design uses extreme groups, dichotomization, or experimental control. Psychological Methods 19(1) 92-112.

Rosenthal, R. (1994) Parametric measures of effect size. In H. Cooper and L.V. Hedges (Eds.). The handbook of research synthesis. New York: Russell Sage Foundation.

Sanchez-Meca J, Chacon-Moscoco S and Marin-Martinez F (2003) Effect-size indices for dichotomized outcomes in meta-analysis. Psychological Methods 8(4) 448-467.