Diff for "EffectSize" - CBU statistics Wiki
location: Diff for "EffectSize"
Differences between revisions 21 and 22
Revision 21 as of 2007-03-14 14:15:01
Size: 2398
Comment:
Revision 22 as of 2013-03-08 10:17:40
Size: 2398
Editor: localhost
Comment: converted to 1.6 markup
Deletions are marked like this. Additions are marked like this.
Line 15: Line 15:
Effect Size $$d$$ was defined by Cohen (1988)[[FootNote(Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York:Academic Press)]] as the difference between the two condition means divided by the common standard deviation: Effect Size $$d$$ was defined by Cohen (1988)<<FootNote(Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York:Academic Press)>> as the difference between the two condition means divided by the common standard deviation:

Effect Size

The purpose of the various measures of effect size is to provide a statistically valid reflection of the size of the effect of some feature of an experiment. As such it is a rather loose concept. However, there is an underlying assumption that the measurement takes place in some parametric design, and that the effect of the feature of interest (or manipulation) can be measured by some estimable function of the parameters.

This is certainly the case in the paradigmatic model for the evaluation of effect size, namely the two-condition, two-group design. Suppose that a test $$\mathbf{T}$$ is administered to two groups of sizes $$n_A$$ and $$n_B$$ in two conditions $$A$$ and $$B$$.

The samples are assumed to be independently and normally distributed with the same variance:

  • $$\{a_i \mid i=1,\ldots,n_A\} \;\sim\; \textrm{i.i.d.}\; N(\mu_A,\sigma^2)$$

and

  • $$\{b_i \mid i=1,\ldots,n_B\} \;\sim\; \textrm{i.i.d.}\; N(\mu_B,\sigma^2)$$.

Effect Size $$d$$ was defined by Cohen (1988)[1] as the difference between the two condition means divided by the common standard deviation:

  • $$d = \frac{\mu_A - \mu_B}{\sigma}.$$

That is to say, it is a signal-to-noise ratio. There are obvious connections with the definition of the classical Signal Detection Theory parameter $$d'$$.
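For a concrete illustration with made-up numbers: if $$\mu_A = 105$$, $$\mu_B = 100$$ and $$\sigma = 10$$, then

  • $$d = \frac{105 - 100}{10} = 0.5,$$

which is a "medium" effect in Cohen's (1988) terms.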

Let $$\bar{a}=\frac{\sum_{i=1}^{n_A} a_i}{n_A}$$, $$\bar{b}=\frac{\sum_{i=1}^{n_B} b_i}{n_B}$$ and $$SS_A=\sum_{i=1}^{n_A}(a_i-\bar{a})^2$$, $$SS_B=\sum_{i=1}^{n_B}(b_i-\bar{b})^2$$. Then $$\hat{\sigma}^2 = \frac{SS_A + SS_B}{n_A+n_B-2}$$ is the conventional pooled estimator of $$\sigma^2$$, and $$\hat{d}=\frac{\bar{a}-\bar{b}}{\hat{\sigma}}$$ serves as an estimator for $$d$$.
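As a minimal computational sketch (not part of the original page; it assumes numpy is available, and the sample sizes and true $$d$$ of 0.5 are arbitrary choices for illustration), $$\hat{d}$$ can be computed exactly as defined above:

{{{#!python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters (not from the text above):
# common sigma = 1, true effect size d = 0.5.
n_A, n_B = 30, 40
a = rng.normal(loc=0.5, scale=1.0, size=n_A)
b = rng.normal(loc=0.0, scale=1.0, size=n_B)

# SS_A, SS_B and the pooled variance estimate (SS_A + SS_B)/(n_A + n_B - 2).
SS_A = np.sum((a - a.mean()) ** 2)
SS_B = np.sum((b - b.mean()) ** 2)
sigma_hat = np.sqrt((SS_A + SS_B) / (n_A + n_B - 2))

# d-hat = (mean(a) - mean(b)) / sigma-hat.
d_hat = (a.mean() - b.mean()) / sigma_hat
print(d_hat)
}}}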

It is instructive to compare the estimator $$\hat{d}$$ with the standard two-sample $$t$$-test statistic. In fact,

  • $$\hat{d} = t \sqrt{\frac{1}{n_A} + \frac{1}{n_B}}$$.

The effect size measure $$d$$ deliberately ignores design aspects that relate to sample size, and expresses things in terms of the variance of a single observation from either of the two underlying distributions. It is instructive to rewrite the above relationship as

  • $$\hat{d} = t / \sqrt{n_h/2}$$ where $$n_h$$ is the harmonic mean of $$n_A$$ and $$n_B$$.
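Both identities are straightforward to check numerically. A self-contained sketch, assuming scipy is installed (scipy.stats.ttest_ind with equal_var=True returns the pooled two-sample $$t$$ statistic; sizes and parameters are again arbitrary):

{{{#!python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_A, n_B = 30, 40                          # arbitrary sizes for the check
a = rng.normal(0.5, 1.0, size=n_A)
b = rng.normal(0.0, 1.0, size=n_B)

# d-hat from the pooled standard deviation, as defined above.
pooled_var = (np.sum((a - a.mean()) ** 2)
              + np.sum((b - b.mean()) ** 2)) / (n_A + n_B - 2)
d_hat = (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Pooled-variance two-sample t statistic.
t, _ = stats.ttest_ind(a, b, equal_var=True)

n_h = 2 / (1 / n_A + 1 / n_B)              # harmonic mean of n_A and n_B
print(d_hat)
print(t * np.sqrt(1 / n_A + 1 / n_B))      # equals d_hat
print(t / np.sqrt(n_h / 2))                # also equals d_hat
}}}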

  1. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). New York: Academic Press.
