FAQ/meta - CBU statistics Wiki

Meta-analysis issues: 1) How do I measure publication bias? 2) How do I obtain a confidence interval for pooled estimates of a log odds ratio, Cohen's d or a Pearson correlation using meta-analysis?

Measuring Publication Bias

This [attachment:james.pdf paper] by Peters et al. (2006) proposes using a linear regression to assess publication bias. The regression uses the log odds from each study as the outcome and the inverse of the sample size of each study as the only predictor. Funnel scatterplots can be drawn plotting standardized differences (effect size/s.e.) against precision (1/s.e. of effect size); if there is no publication bias the plot should be symmetric. Peters et al. criticise the usual publication bias approach of Egger et al. (1997) [http://www.bmj.com/content/315/7109/629 full details here], who proposed a test for asymmetry of the funnel plot. This is a test of whether the Y intercept equals zero in a linear regression of the normalized effect estimate (estimate divided by its standard error) on precision (the reciprocal of the standard error of the estimate). The power of this method to detect bias will be low when the number of studies is small.
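
As an illustration (not part of the original page), Egger's test amounts to an ordinary regression of the standardized effect (estimate divided by its standard error) on precision (1/standard error), followed by a t-test of the intercept. A minimal Python sketch, using made-up effect estimates and standard errors, might look like this:

{{{#!python
import numpy as np
from scipy import stats

# Made-up study effect estimates (e.g. log odds ratios) and their standard errors
effect = np.array([0.42, 0.31, 0.55, 0.12, 0.68, 0.25])
se = np.array([0.10, 0.15, 0.22, 0.08, 0.30, 0.12])

y = effect / se                      # standardized (normalized) effect
x = 1.0 / se                         # precision
X = np.column_stack([np.ones_like(x), x])

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)   # y = b0 + b1*precision
resid = y - X @ beta
df = len(y) - 2
cov = (resid @ resid / df) * np.linalg.inv(X.T @ X)
t_int = beta[0] / np.sqrt(cov[0, 0])                # test of intercept = 0
p = 2 * stats.t.sf(abs(t_int), df)
print("Egger intercept = %.3f, t = %.2f, p = %.3f" % (beta[0], t_int, p))
}}}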

Obtaining pooled effect sizes

It is also usually of interest to obtain a pooled estimate of the effect size. This is typically done by weighting each study by the inverse of its effect size variance. In the case of the log odds ratio, the variance is found by simply summing the inverses of the four cells of the frequency table used to compute the odds ratio for a particular study (see an example [:FAQ/oddsr:here.])
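
For illustration only (the frequencies below are invented), the log odds ratio, its variance and a fixed-effect inverse-variance pooled estimate with a 95% confidence interval can be computed along these lines in Python:

{{{#!python
import numpy as np

# Invented 2x2 frequency tables for three studies (rows = group, columns = outcome)
tables = [np.array([[20, 80], [10, 90]]),
          np.array([[15, 45], [12, 48]]),
          np.array([[30, 70], [18, 82]])]

log_or, var = [], []
for t in tables:
    a, b = t[0]
    c, d = t[1]
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1/a + 1/b + 1/c + 1/d)    # variance = sum of inverses of the four cells

log_or, var = np.array(log_or), np.array(var)
w = 1 / var                              # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)
se_pooled = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print("pooled log OR = %.3f, 95%% CI (%.3f, %.3f), pooled OR = %.3f"
      % (pooled, lo, hi, np.exp(pooled)))
}}}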

Schmidt, Oh and Hayes (2009) illustrate methods for performing meta-analyses that combine estimates of Cohen's d from several studies assuming either (a) that the same population value underlies all the studies being combined (fixed effects) or (b) that different population values underlie the studies being combined (random effects). They conclude that, although the majority of studies in the literature use fixed effects, in many cases combined estimates based on random effects give better coverage of the population group differences. Both combined estimates use the variance of Cohen's d suggested in the 'distributions of effect sizes' section roughly half-way down the page given [http://en.wikipedia.org/wiki/Effect_size here.]
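
As a rough sketch of the distinction (not the exact estimators used by Schmidt, Oh and Hayes), the code below pools invented Cohen's d values with fixed-effect inverse-variance weights and with DerSimonian-Laird random-effects weights, using the approximate variance of d given on the Wikipedia page linked above:

{{{#!python
import numpy as np

# Invented Cohen's d values and group sizes for four studies
d = np.array([0.35, 0.10, 0.62, 0.28])
n1 = np.array([25, 40, 18, 60])
n2 = np.array([25, 35, 20, 55])

# Approximate variance of d
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Fixed-effect pooling
w = 1 / var_d
d_fixed = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird random-effects pooling (one common choice of random-effects estimator)
Q = np.sum(w * (d - d_fixed)**2)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / C)
w_re = 1 / (var_d + tau2)
d_random = np.sum(w_re * d) / np.sum(w_re)
print("fixed = %.3f, random = %.3f, tau^2 = %.3f" % (d_fixed, d_random, tau2))
}}}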

The above article also briefly mentions combining study correlations using Fisher's z transform. An involved iterative method for producing a random effects meta-analysis of correlations using the WLS weight option in SPSS is described in the introductory pdf article [attachment:metaSPSS.pdf here.]

Wilson (2005) has written a comprehensive set of SPSS macros for meta-analysis which pool Cohen's d, the log odds ratio, the Fisher transformed Pearson correlation, ANOVA effect sizes and regression effect sizes, assuming either fixed or random effects. These are available for download from [http://mason.gmu.edu/~dwilsonb/ma.html here.] Note that you may need to add a full stop to comment lines (starting with an asterisk) in the downloaded macros.

There is also a 'help' file detailing how to use these macros on that website (or [attachment:wilson.pdf here if the link is broken]). The help file explains and illustrates the use of simple formulae for adjusting Cohen's d for small samples and for computing its variance, which is inverted to give the study weights prior to running the macro.
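
The snippet below is only a sketch of the kind of adjustment the help file describes (the exact formulae should be taken from the help file itself): the usual Hedges small-sample correction applied to Cohen's d, and an inverse-variance study weight based on the approximate variance of d.

{{{#!python
import numpy as np

def hedges_g(d, n1, n2):
    """Usual small-sample (Hedges) correction applied to Cohen's d."""
    J = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return J * d

def study_weight(d, n1, n2):
    """Inverse of the approximate variance of d, used as the study weight."""
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return 1 / var

g = hedges_g(0.5, 10, 10)                 # a small study: d = 0.5, two groups of 10
print(g, study_weight(g, 10, 10))
}}}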

These macros are used by Lindberg et al. (2010) to obtain a pooled Cohen's d across 242 studies. A [attachment:metacd.xls spreadsheet] performs Wilson's inverse variance pooling of Cohen's d using the formulae in his MeanES.sps macro.

Formulae used in Wilson's macros are also explained more fully [http://www.zoology.ubc.ca/~schluter/bio548/workshop-meta.html here.]

An alternative procedure, the Mantel-Haenszel method, available in the CROSSTABS procedure in SPSS, is recommended by Cochrane (see the above primer and [attachment:cochraneor.pdf here]) for pooling odds ratios (ORs), with each odds ratio weighted according to the number of observations on which it is based. The approach used by Wilson is instead to weight the log of each odds ratio by its inverse variance (also related to the sample sizes, but using different weights from the Mantel-Haenszel method). The log odds ratios are used because their variance has [:FAQ/oddsr:a simple form]. Results can be backtransformed to odds ratios if desired. This method is the one computed in the above macros or using this [attachment:metaor.xls spreadsheet.]
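
A short Python illustration of the two approaches, reusing the invented 2x2 tables from the earlier sketch (so not real data), might look like this:

{{{#!python
import numpy as np

tables = [np.array([[20, 80], [10, 90]]),   # invented 2x2 tables, as above
          np.array([[15, 45], [12, 48]]),
          np.array([[30, 70], [18, 82]])]

# Mantel-Haenszel pooled odds ratio
num = sum(t[0, 0] * t[1, 1] / t.sum() for t in tables)
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in tables)
or_mh = num / den

# Inverse-variance pooling of the log odds ratios, backtransformed to an OR
log_or = np.array([np.log(t[0, 0] * t[1, 1] / (t[0, 1] * t[1, 0])) for t in tables])
var = np.array([np.sum(1.0 / t) for t in tables])
or_iv = np.exp(np.sum(log_or / var) / np.sum(1 / var))

print("Mantel-Haenszel OR = %.3f, inverse-variance OR = %.3f" % (or_mh, or_iv))
}}}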

The variance of the Fisher transformed Pearson correlation also has a simple form (see [:FAQ/corrs: here]) and correlations can, therefore, be pooled using the inverse variance method of Wilson (2005) in this [attachment:metafr.xls spreadsheet.]
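
A minimal sketch of this pooling (with invented correlations and sample sizes), using the Fisher z transform and its variance of 1/(n-3):

{{{#!python
import numpy as np

r = np.array([0.30, 0.45, 0.15])   # invented study correlations
n = np.array([50, 80, 120])        # invented sample sizes

z = np.arctanh(r)                  # Fisher's z = 0.5*ln((1+r)/(1-r))
w = n - 3                          # inverse of var(z) = 1/(n-3)
z_pooled = np.sum(w * z) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
lo, hi = z_pooled - 1.96 * se, z_pooled + 1.96 * se
print("pooled r = %.3f, 95%% CI (%.3f, %.3f)"
      % (np.tanh(z_pooled), np.tanh(lo), np.tanh(hi)))
}}}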

The WLS weight option in SPSS LINEAR REGRESSION can be used to assess the influence of covariates on effect sizes, combining summary measures with fixed or random effect variances used as the weights for each summary measure. For an application of this using the log odds ratio effect size see the PowerPoint slides [attachment:WLSmeta.ppt here.]
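
The slides should be consulted for the exact SPSS setup; as a rough numpy equivalent of what the WLS weighting does, a fixed-effect meta-regression of invented log odds ratios on a single invented study-level covariate could be sketched as:

{{{#!python
import numpy as np

y = np.array([0.40, 0.10, 0.75, 0.25, 0.55])     # log odds ratios (invented)
var = np.array([0.04, 0.02, 0.09, 0.03, 0.06])   # their variances (invented)
x = np.array([1.0, 2.5, 0.5, 2.0, 1.2])          # study-level covariate (invented)

w = 1 / var                                      # fixed-effect weights
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)

# Weighted least squares: solve (X'WX) beta = X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("intercept = %.3f, covariate slope = %.3f" % (beta[0], beta[1]))
}}}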

A further primer on meta-analysis (in pdf format) is given [attachment:PLane.pdf here.]

References

Egger, M., et al. (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal 315, 629-634.

Lindberg, S. M., Hyde, J. S., Linn, M. C. and Petersen, J. L. (2010). New trends in gender and mathematics performance: a meta-analysis. Psychological Bulletin 136(6), 1123-1135.

(A pdf copy is available for free download via Science Direct for CBSU users.)

Morris, S. B. and Deshon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods 7(1), 105-125.

(A pdf copy is available for free download via Science Direct for CBSU users.)

Schmidt, F. L., Oh, I. S. and Hayes, T. L. (2009). Fixed- versus random-effects models in meta-analysis: model properties and an empirical comparison of differences in results. British Journal of Mathematical and Statistical Psychology 62, 97-128.

Wilson, D. B. (2005). Meta-analysis macros for SAS, SPSS, and Stata [Computer software]. Retrieved from http://mason.gmu.edu/~dwilsonb/ma.html
