# Meta-analysis issues: 1) How do I measure publication bias? 2) How do I obtain a confidence interval for pooled estimates from sets of log odds, Cohen's d and Pearson correlations using meta-analysis?

Measuring Publication Bias

Peters et al. (2006) propose using a weighted linear regression to assess publication bias in reported log odds ratios as an alternative to the usual funnel plot. Funnel scatterplots plot differences/associations, either standardized (effect size/s.e.) or unstandardized, on the x-axis against precision (1/s.e. of the effect size) on the y-axis. If there is no publication bias the plot should be symmetric, forming a funnel shape. (As an aside, forest plots are also commonly presented in meta-analysis; these plot the effect size confidence interval for each study, or set of studies, on a single plot (Peters et al., 2010).)

The method of Peters et al. (2006) corresponds to a weighted linear regression with the log odds ratio, obtained from the 2x2 table of frequencies in each study, as the outcome and the reciprocal of each study's sample size as the only predictor, with the variance measure (1/n11 + 1/n12 + 1/n21 + 1/n22)^{-1} used as the study weight, where nij is the frequency in the (i,j)th cell of each study's table (Peters et al., 2010). This may be carried out using the *WLS weight* option in *LINEAR REGRESSION* in SPSS (see this powerpoint talk) or the metabias procedure from the meta library in R. It follows from the form of the weights that Peters et al.'s regression procedure, unlike that of Egger, can only be used for odds ratios from 2x2 tables.
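As a concrete sketch of the regression just described, the weighted least squares fit can be written out directly. This is an illustrative Python translation of the formulae above, not the SPSS or R implementation, and the function names `wls_fit` and `peters_inputs` are mine:

```python
import math

def wls_fit(x, y, w):
    """Weighted least squares for y = b0 + b1*x with weights w.
    Returns (b0, b1, se_b1); se_b1 uses a multiplicative dispersion
    estimate from the weighted residuals (needs at least 3 studies)."""
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    sxx = swxx - swx * swx / sw
    b1 = (swxy - swx * swy / sw) / sxx
    b0 = (swy - b1 * swx) / sw
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    k = len(x)
    s2 = sum(wi * ei * ei for wi, ei in zip(w, resid)) / (k - 2) if k > 2 else float("nan")
    return b0, b1, math.sqrt(s2 / sxx) if k > 2 else float("nan")

def peters_inputs(tables):
    """Build (x, y, w) from 2x2 tables (n11, n12, n21, n22):
    y = log odds ratio, x = 1/n, w = (1/n11 + 1/n12 + 1/n21 + 1/n22)^-1."""
    x, y, w = [], [], []
    for a, b, c, d in tables:
        n = a + b + c + d
        x.append(1.0 / n)
        y.append(math.log((a * d) / (b * c)))
        w.append(1.0 / (1 / a + 1 / b + 1 / c + 1 / d))
    return x, y, w
```

Peters et al.'s test then examines whether the slope `b1` (the coefficient on 1/n) differs from zero, using `b1 / se_b1` as a t-statistic.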

Peters et al. (2006) criticise the usual publication bias approach of Egger et al. (1997) (details here), who proposed a test for asymmetry of the funnel plot. In particular, Peters et al. (2006) say the p-values of Egger et al.'s test are less reliable than theirs because they have inflated type I error rates. They instead assess the regression coefficient relating sample size to the log odds ratio and test whether this is zero. Both these methods may be fitted using this spreadsheet.

Egger et al.'s test is a test of whether the y-intercept equals zero in a linear regression of the normalized effect estimate (estimate divided by its standard error) on precision (the reciprocal of the standard error of the estimate). Note that this regression may equivalently be carried out using the unstandardized effect size as outcome, its standard error as predictor, and weighting each effect size by the reciprocal of its squared standard error. The power of this method to detect bias will be low with small numbers of studies.
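The intercept test can be sketched as follows. This is a minimal illustration of the regression described above (the function name `egger_test` is mine, and it needs at least three studies); in practice the packaged implementations mentioned earlier would be used:

```python
import math

def egger_test(estimates, ses):
    """Egger regression: standardized effect (estimate/se) regressed on
    precision (1/se).  Returns (intercept, se_intercept); asymmetry is
    suggested when the intercept differs from zero."""
    y = [e / s for e, s in zip(estimates, ses)]
    x = [1.0 / s for s in ses]
    k = len(x)
    mx, my = sum(x) / k, sum(y) / k
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    b0 = my - b1 * mx                      # the intercept under test
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (k - 2)
    se_b0 = math.sqrt(s2 * (1.0 / k + mx * mx / sxx))
    return b0, se_b0
```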

More recently, van Assen, van Aert and Wicherts (2015) have proposed an easy-to-use method which sums p-values from the studies in the meta-analysis and assesses publication bias by comparing this sum to a Gamma distribution. Their estimate, however, assumes that there is no between-study heterogeneity.

Obtaining pooled effect sizes

It is also usually of interest to obtain a pooled estimate of the effect size. This is usually done by weighting each study's estimate by the inverse of its effect size variance. In the case of the log odds ratio, the variance is found by simply summing the reciprocals of the four cell frequencies in the 2x2 table used to compute that study's odds ratio (see an example here.)
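Putting the two sentences above together, a fixed-effect inverse-variance pooling of log odds ratios, with a 95% confidence interval back-transformed to the odds ratio scale, can be sketched as (the function name `pool_log_or` is mine):

```python
import math

def pool_log_or(tables):
    """Fixed-effect inverse-variance pooling of log odds ratios from
    2x2 tables (a, b, c, d), where var(log OR) = 1/a + 1/b + 1/c + 1/d.
    Returns the pooled log OR and a 95% CI on the odds ratio scale."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        v = 1 / a + 1 / b + 1 / c + 1 / d   # variance of this study's log OR
        num += log_or / v                   # inverse-variance weighting
        den += 1.0 / v
    pooled = num / den
    se = math.sqrt(1.0 / den)               # se of the pooled log OR
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    return pooled, (math.exp(lo), math.exp(hi))
```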

Schmidt, Oh and Hayes (2009) illustrate methods for performing meta-analyses combining estimates of Cohen's d from various studies assuming (a) that the same population value underlies all the studies being combined (fixed effects) or (b) that different population values underlie the studies being combined (random effects). They conclude that, although the majority of studies in the literature use fixed effects, in many cases random-effects pooled estimates have better coverage of the population group differences. Both approaches use the variance of Cohen's d suggested in the 'distributions of effect sizes' section just short of half-way down the page given here. Note that the variation across studies (the random element) is usually measured by the variance component tau^2. Tau^2 can take the value zero, usually because it is estimated as negative, which is conceptually inadmissible for a variance; a zero tau^2 indicates no variation in effect sizes across studies. Lopez-Lopez et al. (2014) have performed simulation studies which suggest at least 20 studies are needed for precise estimates of the heterogeneity variance, tau^2.
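To make the fixed/random distinction and the truncation of tau^2 at zero concrete, here is a sketch of the common DerSimonian-Laird moment estimator (one standard way of estimating tau^2; the papers cited above discuss alternatives, and the function name is mine):

```python
def random_effects_pool(d, v):
    """DerSimonian-Laird random-effects pooling of effect sizes d
    (e.g. Cohen's d) with within-study variances v.
    Returns (pooled estimate, tau^2)."""
    w = [1.0 / vi for vi in v]              # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * di for wi, di in zip(w, d)) / sw
    # Q statistic: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, d))
    k = len(d)
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)      # truncated at zero if negative
    # random-effects weights add tau^2 to each within-study variance
    wstar = [1.0 / (vi + tau2) for vi in v]
    pooled = sum(wi * di for wi, di in zip(wstar, d)) / sum(wstar)
    return pooled, tau2
```

When the studies are homogeneous, Q falls below its degrees of freedom, tau^2 is truncated to zero, and the random-effects estimate coincides with the fixed-effect one.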

The above article also briefly mentions combining study correlations using Fisher's z transform. An involved iterative method for producing a random-effects meta-analysis of correlations using the WLS weight option in SPSS is described in the introductory pdf article here.

McShane and Bockenholt (2018) introduce their website, available to use here, illustrating its use to compare interaction and main effects in a meta-analysis of between-subjects studies.

Wilson (2005) has written a comprehensive set of SPSS macros for meta-analysis pooling Cohen's d, the log odds ratio or the Fisher-transformed Pearson correlation, as well as anova and regression effect sizes, assuming either fixed or random effects. These macros and the spreadsheets mentioned below also perform a test for random effects using the Q statistic (Peters et al., 2010). The macros are available for download from here. Note that you may need to add a full stop to comment lines (starting with an asterisk) in the downloaded macros to make them work. There are also some SPSS macros, MeanES, MetaF and MetaReg, from Lipsey and Wilson (2001), as used in Guilera et al. (2013). In addition, David Wilson's website also hosts a web calculator for effect size 95% confidence intervals here.

There is also a 'help' file detailing how to use these macros on that website (or here if that link is broken). The help file explains and illustrates simple formulae for adjusting Cohen's d for small samples and for computing its variance, which is inverted to give the study weight prior to running the macro.
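A standard version of these two formulae (Hedges' small-sample correction and the usual large-sample variance of d) is sketched below; Wilson's macro may use a slightly different approximation of the correction factor, so treat this as illustrative:

```python
def small_sample_adjust(d, n1, n2):
    """Hedges' small-sample correction for Cohen's d from two groups of
    sizes n1 and n2, plus the usual large-sample variance, which is
    inverted to give the study weight.  Illustrative textbook forms."""
    df = n1 + n2 - 2
    j = 1.0 - 3.0 / (4.0 * df - 1.0)        # correction factor (< 1)
    g = j * d                               # adjusted (unbiased) d
    var_g = (n1 + n2) / (n1 * n2) + g * g / (2.0 * (n1 + n2))
    return g, 1.0 / var_g                   # adjusted d and its weight
```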

These macros are used by Lindberg et al. (2010) to obtain a Cohen's d pooled across 242 studies! A spreadsheet performs Wilson's inverse variance pooling of Cohen's d using the formulae in his MeanES.sps macro.

Anderson and Maxwell (2016) give formulae (equations (4)-(8)) for combining two Cohen's ds from two studies to assess the reproducibility of a result.

Obtaining a pooled SD from a pooled weighted difference in means in a meta-analysis

Stewart et al. (2012) suggest on page 4 of this paper using random effects regression, e.g. the lme function (nlme) or the lme4 package in R, to obtain effect sizes for single case meta-analysis. They call this a one-stage procedure. They also give R code for using the output in metafor to make a two-stage procedure.

Further to the above, R code for a worked example of the two-stage approach, using summary measures computed from individuals' repeated scores (stage 1) as inputs to a meta-analysis (stage 2), is given here.

Formulae used in Wilson's macros are also explained more fully here.

An alternative procedure, the Mantel-Haenszel method, which is available in the CROSSTABS procedure in SPSS, is recommended by the Cochrane handbook (Higgins and Green, 2008) (see the above primer and here) for pooling odds ratios (ORs), with the weight given to each odds ratio related to the number of observations on which it is based. The approach used by Wilson is to weight the *log* of each odds ratio by its inverse variance (also related to the sample sizes, but using different weights to the Mantel-Haenszel method). The log odds ratios are used because their variance has a simple form. Results can be back-transformed to odds ratios if desired. This method is the one computed in the above macros or using this spreadsheet.
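For comparison with the inverse-variance pooling of log odds ratios shown earlier, the standard Mantel-Haenszel pooled odds ratio estimator can be sketched in a few lines (an illustration of the estimator itself, not of the SPSS CROSSTABS output; the function name is mine):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio from 2x2 tables (a, b, c, d),
    where each study's OR is (a*d)/(b*c) and n = a + b + c + d.
    Studies are pooled on the OR scale, weighted by cell products / n."""
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den
```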

The variance of the Fisher-transformed Pearson correlation also has a simple form (see here), so correlations can be pooled using the inverse variance method of Wilson (2005) in this spreadsheet.
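Since var(z) = 1/(n - 3) for the Fisher transform of a correlation from a sample of size n, the inverse-variance pooling of correlations reduces to weighting each z by n - 3. A minimal fixed-effect sketch (function name mine):

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of Pearson correlations via Fisher's z
    transform.  var(z) = 1/(n - 3), so each z gets weight n - 3.
    Returns the pooled r and a 95% CI, both back-transformed."""
    zs = [math.atanh(r) for r in rs]        # Fisher's z = atanh(r)
    ws = [n - 3 for n in ns]                # inverse-variance weights
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    # back-transform the pooled z and its 95% CI to the r scale
    return (math.tanh(zbar),
            (math.tanh(zbar - 1.96 * se), math.tanh(zbar + 1.96 * se)))
```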

In addition to Wilson's MetaReg SPSS macro, the *WLS weight* option in *LINEAR REGRESSION* in SPSS can be used to assess the influence of covariates on effect sizes, combining summary measures using fixed- or random-effects variances as the weight on each summary measure. For an application using the log odds ratio effect size see the powerpoint slides located here.

A further primer on meta-analysis (in pdf format) is given here.

Moderators in meta-analyses

The R package **metafor** (Viechtbauer, 2007) is recommended for more advanced meta-analyses involving moderator variables, where the effect of one or more predictors on variation across studies is of interest. It computes statistics such as the tau^2 described above, as well as Q statistics for assessing the influence of the moderators, using its 'rma' procedure.

References

Anderson, S. F. and Maxwell, S. E. (2016). There's more than one way to conduct a replication study: beyond statistical significance. *Psychological Methods* **21(1)** 1-12.

Borenstein, M., Hedges, L. V., Higgins, J. P. and Rothstein, H. R. (2009). Introduction to Meta-Analysis (Statistics in Practice). Wiley: New York. Good primer covering differences between fixed and random effects meta-analyses and moderation analyses relating predictors to variation across studies.

Egger, M., Smith, G. D., Schneider, M. and Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. *British Medical Journal* **315** 629-634.

Field, A. P. and Gillett, R. (2010). How to do a meta-analysis. *British Journal of Mathematical and Statistical Psychology* **63** 665-694.

The Field and Gillett article is a useful primer with illustrations of the main issues and presentation of results. This website contains further information files including SPSS and R programmes which can be downloaded to carry out aspects of a standard meta-analysis including pooled effect size calculations and funnel plots.

Guilera, G., Gomez-Benito, J., Hidalgo, M.D. and Sanchez-Meca, J. (2013). Type I error and statistical power of the Mantel-Haenszel procedure for detecting DIF: A meta-analysis. *Psychological Methods* **18(4)** 553-571.

Higgins, J. P. T. and Green, S. (2008). *Cochrane Handbook for Systematic Reviews of Interventions, Version 5.0.1.* Cochrane Collaboration:Oxford. (Available from http://www.cochrane-handbook.org).

Lindberg, S. M., Hyde, J. S., Linn, M. C. and Petersen, J. L. (2010). New trends in gender and mathematics performance: a meta-analysis. *Psychological Bulletin* **136(6)** 1123-1135.

(A PDF COPY OF THE ABOVE ARTICLE IS AVAILABLE FOR FREE DOWNLOAD VIA SCIENCE DIRECT FOR CBSU USERS)

Lipsey, M. W. and Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

Lopez-Lopez, J. A., Marin-Martinez, F., Sanchez-Meca, J., Van den Noortgate, W. and Viechtbauer, W. (2014). Estimation of the predictive power of the model in mixed-effects meta-regression: A simulation study. *British Journal of Mathematical and Statistical Psychology* **67** 30-48.

McShane, B. B. and Bockenholt, U. (2018). Want to make behavioural research more replicable? Promote single paper meta-analysis. *Significance* **15(6)** 38-40.

Morris, S. B. and Deshon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. *Psychological Methods* **7(1)** 105-125.

(A PDF COPY OF THE ABOVE ARTICLE IS AVAILABLE FOR FREE DOWNLOAD VIA SCIENCE DIRECT FOR CBSU USERS)

Peters, J. L., Sutton, A. J., Jones, D. R. and Abrams K. R. (2010). Assessing publication bias in meta-analyses in the presence of between-study heterogeneity. *Journal of the Royal Statistical Society A* **173(3)** 575-591. (see here.)

(THE ABOVE JOURNAL ARTICLE IS ALSO AVAILABLE FOR CBSUERS FROM PETER WATSON UPON REQUEST AND IS A GOOD PRIMER PROVIDING AN OVERVIEW OF META-ANALYSES INCLUDING ILLUSTRATIONS OF THE ABOVE IDEAS)

Schmidt, F. L., Oh, I. S. and Hayes, T. L. (2009). Fixed- versus random-effects models in meta-analysis: model properties and an empirical comparison of differences in results. *British Journal of Mathematical and Statistical Psychology* **62** 97-128.

Stewart, G. B., Altman, D. G., Askie, L. M., Duley, L., Simmonds, M. C. and Stewart, L. A. (2012). Statistical analysis of individual participant data meta-analyses: a comparison of methods and recommendations for practice. *PLoS ONE* **7(10)** e46042.

Stram, D. O. (1996). Meta-analysis of published data using a linear mixed-effects model. *Biometrics* **52(2)** 536-544.

van Assen, M. A. L. M., van Aert, R. C. M. and Wicherts, J. M. (2015) Meta-analysis using effect size distributions of only statistically significant studies. *Psychological Methods* **20(3)** 293-309.

VanDerwerken, D. (2012). What Petri dishes have to do with your research. *Significance* **9(3)** 40-42. Runner-up in a young statisticians writing competition, this article illustrates advantages such as increased power and precision afforded by meta-analysis (using 1/variance as weighting) in an example from microbiology.

Viechtbauer, W. (2007). Confidence intervals for the amount of heterogeneity in meta-analysis. *Statistics in Medicine* **26(1)** 37-52. Illustrates using the R procedure 'metafor'.

Wilson, D. B. (2005). Meta-analysis macros for SAS, SPSS, and Stata [Computer software]. Retrieved from http://mason.gmu.edu/~dwilsonb/ma.html