
Partial and Generalized omega-squared as effect sizes in analysis of variance

Partial $$\omega^2$$ has been suggested by Field (2013, pp. 473-4), Keppel (1991, pp. 222-224) and Olejnik and Algina (2003) as an unbiased alternative to partial $$\eta^2$$ when comparing the size of sources of variation across studies using analysis of variance. In the formula below MSE is the mean square of the error term for the effect.

(Partial) $$\omega^2$$ = [ SS(effect) - df(effect) MSE ] / [ SS(effect) + (N - df(effect)) MSE ]

This takes the value zero when the F ratio is one (no group differences): F = 1 implies that SS(effect)/df(effect) = MSE, so SS(effect) = df(effect) MSE, making the numerator zero, and the expression becomes 0 / (MSE [df(effect) - df(effect) + N]) = 0 / [N MSE] = 0.

In the special case of a between subjects ANOVA with b groups and a total of N subjects the denominator in the above may be rewritten as below.

SS(effect)+(N-df(effect))MSE = SS(group) + (N-df(group))MSE

= SS(group) + (N-(b-1))MSE = SS(group) + (N-b+1)MSE = SS(group) + (N-b)MSE + MSE

= Total SS + MSE = SS(group) + SSE + MSE = SS(group) + ((df(error term) + 1) x MSE)

since in a one-way between subjects ANOVA the total SS = SS(group) + (N-b)MSE. The latter form of the denominator (SS(group) + ((df(error term) + 1) x MSE)) is also correct for any main effect in a factorial ANOVA since df(effect) takes the same form as above, i.e. equal to the number of groups minus 1, provided the remaining factors are non-random, i.e. chosen by the experimenter. Baguley (2012, p.483) and Field (2013, p.473) give the above formula for $$\omega^2$$ with Total SS + MSE in the denominator. There is also a calculator for the one-way ANOVA and other effect sizes here.
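
As an illustrative sketch (not taken from the sources above), partial $$\omega^2$$ for a one-way between subjects ANOVA can be computed directly from the ANOVA table quantities; the function and variable names below are hypothetical and simply mirror the formula given earlier.

{{{#!python
# Minimal sketch: partial omega-squared for a one-way between subjects ANOVA,
# computed from the ANOVA table quantities used in the formula above.
# Names (ss_effect, df_effect, mse, n_total) are illustrative, not from any package.

def partial_omega_squared(ss_effect, df_effect, mse, n_total):
    """[SS(effect) - df(effect)*MSE] / [SS(effect) + (N - df(effect))*MSE]"""
    return (ss_effect - df_effect * mse) / (ss_effect + (n_total - df_effect) * mse)

# Example: 3 groups (df = 2), N = 30 subjects, SS(group) = 100, MSE = 10
# so SSE = (30 - 3)*10 = 270 and Total SS = 370.
ss_group, df_group, mse, n = 100.0, 2, 10.0, 30

w2 = partial_omega_squared(ss_group, df_group, mse, n)

# Equivalent form for the one-way design: denominator = Total SS + MSE
total_ss = ss_group + (n - 3) * mse
w2_alt = (ss_group - df_group * mse) / (total_ss + mse)

print(round(w2, 4), round(w2_alt, 4))  # both give the same value (about 0.2105)
}}}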

Partial $$\omega^2$$ is also equal to the ratio of variance components

$$\sigma^2$$(effect) / ( $$\sigma^2$$(effect) + $$\sigma^2$$(error) )

For example, in a repeated measures analysis of variance with a within subjects factor W and N participants, the expected mean squares are N $$\sigma^2$$(W) + $$\sigma^2$$(error) for MS(W) and $$\sigma^2$$(error) for MS(error), so that

N $$\sigma^2$$(W) = [N $$\sigma^2$$(W) + $$\sigma^2$$(error)] - $$\sigma^2$$(error) giving

$$\sigma^2$$(W) = [MS(W) - MS(error)] / N

and

$$\sigma^2$$(W) + $$\sigma^2$$(error) = [MS(W) - MS(error)] / N + MS(error)

So the variance component ratio equals [ (MS(W) - MS(error)) / N ] / [ (MS(W) - MS(error)) / N + MS(error) ]

Multiplying top and bottom by N df(W) gives

[ SS(W) - df(W) MS(error) ] / [ SS(W) - df(W) MS(error) + N df(W) MS(error) ]

= [ SS(effect) - df(effect) MS(error) ] / [ SS(effect) + (N - df(effect)) MS(error) ]

= Partial $$\omega^2$$
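
As a quick numerical check of the algebra above (an illustrative sketch only; the numbers and names are made up), the variance component ratio and the sums-of-squares form obtained after multiplying top and bottom by N df(W) give the same value:

{{{#!python
# Illustrative check that the variance component ratio
# [ (MS(W) - MS(error))/N ] / [ (MS(W) - MS(error))/N + MS(error) ]
# equals the sums-of-squares form obtained by multiplying top and bottom by N*df(W).
# All values below are made-up example numbers.

n_participants = 10          # N participants
k_levels = 3                 # levels of the within subjects factor W
df_w = k_levels - 1
ms_w, ms_error = 50.0, 10.0  # mean squares from a repeated measures ANOVA table
ss_w = ms_w * df_w           # SS(W) = MS(W) * df(W)

# Variance component form
sigma2_w = (ms_w - ms_error) / n_participants
ratio_vc = sigma2_w / (sigma2_w + ms_error)

# Sums-of-squares form from the derivation above
ratio_ss = (ss_w - df_w * ms_error) / (ss_w - df_w * ms_error + n_participants * df_w * ms_error)

print(round(ratio_vc, 4), round(ratio_ss, 4))  # both about 0.2857
}}}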

Olejnik and Algina (2003) quote the partial $$\omega^2$$ formula and then state immediately afterwards that partial $$\omega^2$$ (and also partial $$\eta^2$$) “eliminate the influence of other factors in the design on the denominator”. Lakens (2013) states that the same formula for partial $$\omega^2$$ is used for both between and within subjects designs.

Olejnik and Algina (2003, Equation 7 on page 441) further present and illustrate an alternative to the above for ANOVAs (either entirely between subjects or including a single repeated measures factor), using a more general formula for $$\omega^2$$ to take into account any non-manipulated factors in the ANOVA.

(Generalized) $$\omega^2$$ = [ SS(effect) - df(effect) MSE ] / [ d (SS(effect) - df(effect) MSE) + $$\sum_M$$ [SS(M) - df(M) MSE(M)] + N MS(Cells) ]

where d = 1 if the effect of interest is non-random and 0 otherwise, the M are the sources of variation which include at least one random factor, N is the total number of observations (scores) and MS(Cells) is a pooled error term obtained by summing the error terms (those connected to variation between subjects) in the ANOVA and dividing by the sum of their respective degrees of freedom. A spreadsheet is available to work out generalized $$\omega^2$$ using this formula.
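
A minimal sketch of this calculation in code (purely illustrative, not the spreadsheet mentioned above; the function and argument names are hypothetical and the inputs are the ANOVA table quantities appearing in the formula):

{{{#!python
# Minimal sketch of the generalized omega-squared formula quoted above.
# Inputs mirror the terms in the formula; all names and values are illustrative.

def generalized_omega_squared(ss_effect, df_effect, mse,
                              random_sources,   # list of (SS(M), df(M), MSE of M) tuples
                              n_obs, ms_cells, effect_is_random=False):
    d = 0 if effect_is_random else 1
    numerator = ss_effect - df_effect * mse
    sum_random = sum(ss_m - df_m * mse_m for ss_m, df_m, mse_m in random_sources)
    denominator = d * numerator + sum_random + n_obs * ms_cells
    return numerator / denominator

# Hypothetical example for a design with one source of variation M involving a random factor
print(round(generalized_omega_squared(ss_effect=100.0, df_effect=2, mse=10.0,
                                      random_sources=[(300.0, 9, 12.0)],
                                      n_obs=30, ms_cells=11.0), 4))
}}}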

$$\omega^2$$ may take values between $$\pm$$1 with a value of zero indicating no effect. A negative value will result if the observed F is less than one.

The numerator of $$\omega^2$$ compares the mean square of the effect (= SS(effect)/df(effect)) with its mean square error. Under the null hypothesis of no effect these should be approximately equal, giving a numerator close to zero, since their ratio then follows an F distribution with an expected value close to one, indicating no statistical evidence of the effect.
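
Equivalently (a small piece of algebra following directly from the formula above rather than quoted from the references), partial $$\omega^2$$ can be written in terms of the observed F ratio, which makes it clear that it is zero when F = 1 and negative when F < 1:

Partial $$\omega^2$$ = [ df(effect) (F - 1) ] / [ df(effect) (F - 1) + N ]

since SS(effect) = df(effect) F MSE, so the numerator equals df(effect) MSE (F - 1), the denominator equals MSE [ df(effect) (F - 1) + N ] and the MSE cancels.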

Olejnik and Algina present simplified versions of their general formula above for specific types of term in different types of ANOVA with up to a total of three factors, quoting these in tables in their paper. Note that some of the terms in these tables are incorrect and use of the general formula above is advised. Field (2013, pp. 537-8) illustrates an application of the Olejnik and Algina formula, computing $$\omega^2$$ for between subjects ANOVAs with more than one factor using variance components.

He also mentions (pages 566-567) that a different form of $$\omega^2$$ from that discussed above is needed for a factor in a repeated measures analysis of variance, since the estimate used for a between subjects factor overestimates the effect size if used in a repeated measures design.

In particular he applies the Olejnik and Algina formula to a factor in a one-way repeated measures ANOVA with k levels and n subjects and obtains the simplified form for its effect size, namely

$$\omega^2$$ = [ [(k-1)/(nk)] (MS(effect) - MSE) ] / [ MSE + (MS(B) - MSE)/k + [(k-1)/(nk)] (MS(effect) - MSE) ]

where MS(B) = (total variance x (N-1) - SS(effect) - SSE)/(N-1), with SSE the error sum of squares for the error term of the repeated measures factor. For repeated measures designs with more than one factor (including mixed between-within factor ANOVAs) Field suggests only using effect sizes based upon pairwise group comparisons, using a simpler effect size equal to sqrt[ F(1,dfe) / (F(1,dfe) + dfe) ] which is interpreted as a correlation.
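
A sketch of these two calculations in code (illustrative only, written directly from the two formulas above with made-up input values; the helper names are not from any package):

{{{#!python
import math

# Sketch of Field's simplified omega-squared for a one-way repeated measures factor,
# transcribed from the formula quoted above. All names and values are illustrative.

def omega_squared_rm(ms_effect, mse, ms_b, k, n):
    adj = ((k - 1) / (n * k)) * (ms_effect - mse)
    return adj / (mse + (ms_b - mse) / k + adj)

# MS(B) as described above: (total variance * (N-1) - SS(effect) - SSE) / (N-1),
# where N is the total number of observations (scores).
def ms_between_subjects(total_variance, n_obs, ss_effect, sse):
    return (total_variance * (n_obs - 1) - ss_effect - sse) / (n_obs - 1)

# Simpler pairwise effect size, interpreted as a correlation: sqrt[F / (F + dfe)]
def pairwise_r(f, df_error):
    return math.sqrt(f / (f + df_error))

# Hypothetical example: k = 3 levels, n = 10 subjects (N = 30 scores)
ms_b = ms_between_subjects(total_variance=15.0, n_obs=30, ss_effect=100.0, sse=180.0)
print(round(omega_squared_rm(ms_effect=50.0, mse=10.0, ms_b=ms_b, k=3, n=10), 4))
print(round(pairwise_r(f=9.0, df_error=9), 4))
}}}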

An important characteristic of $$\omega^2$$ estimates is that they are unaffected by small sample sizes, unlike $$\eta^2$$ estimates which tend to overestimate the effect in smaller samples (Larson-Hall (2010), p.120; Field (2013), p.473). This is because $$\omega^2$$ estimates, unlike $$\eta^2$$ estimates, are population-based measures and consequently take smaller values.

References

Baguley T (2012). Serious Stats. A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan:New York.

Cohen J (1988). Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: Lawrence Erlbaum Associates, pp. 284-288.

Field AP (2005). Discovering statistics using SPSS. Sage: London, pp. 417-419. Gives formulae using ANOVA output to compute omega-squared for factorial designs.

Field A (2013). Discovering statistics using IBM SPSS statistics. Fourth Edition. Sage: London. On page 474 it is suggested that values for $$\omega^2$$ of 0.01, 0.06 and 0.14 be used to indicate small, medium and large effects respectively. These same cut-offs are also given by Cohen (1988) above and here.

Olejnik S and Algina J (2003). Generalized Eta and Omega Squared Statistics: Measures of effect size for some common research designs. Psychological Methods 8(4) 434-447. A pdf of this is available for free to CBUers using PsychNet and also here.

Keppel G (1991). Design and analysis: A researcher's handbook. Prentice-Hall: Englewood Cliffs, NJ.

Lakens D (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Frontiers in Psychology 4, 1-12. (Gives a formula for converting the f-squared given by SPSS into the f-squared used by G*Power and discusses omega-squared and partial omega-squared.)

Larson-Hall J (2010). A Guide to Doing Statistics in Second Language Research Using SPSS. Routledge:New York.
