Diff for "FAQ/meta" - CBU statistics Wiki
Meta-analysis issues: 1) How do I measure publication bias? 2) How do I obtain a confidence interval for pooled estimates from sets of log odds, Cohen's d and Pearson correlations using meta-analysis?

__Measuring Publication Bias__

This [[attachment:james.pdf|paper]] of Peters et al. (2006) proposes using a weighted linear regression to assess publication bias in reporting log odds ratios, as an alternative to the usual funnel plot. Funnel scatterplots plot the differences/associations, either standardized (effect size/s.e.) or unstandardized, on the x-axis against precision (1/s.e. of the effect size) on the y-axis. If there is no publication bias the plot should be symmetric, forming a funnel shape. (As an aside, forest plots are also commonly presented in meta-analyses; these plot the effect size confidence interval for each study, or set of studies, on a single plot (Peters et al., 2010).)
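
The funnel and forest plots described above can be produced in R with the '''metafor''' package (which also appears in the moderators section below). The following is a minimal sketch, not code from the attached materials, assuming a hypothetical data frame ''dat'' holding the 2x2 cell counts ai, bi, ci and di for each study:

{{{
# Funnel and forest plots for a set of odds ratios (illustrative sketch).
library(metafor)

dat <- escalc(measure = "OR", ai = ai, bi = bi, ci = ci, di = di, data = dat)
res <- rma(yi, vi, data = dat)        # random-effects pooled log odds ratio

funnel(res)                           # default: standard error on the vertical axis
funnel(res, yaxis = "seinv")          # precision (1/s.e.) on the vertical axis, as above
forest(res, atransf = exp)            # study confidence intervals, shown as odds ratios
}}}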

The method of Peters et al. (2006) corresponds to a weighted linear regression using the log odds ratio, obtained from the 2x2 table of frequencies for each study, as the outcome and the reciprocal of each study's sample size as the only predictor, with a variance measure, (1/n11 + 1/n12 + 1/n21 + 1/n22)^-1^, used as the study weight, where nij is the frequency in the i,jth cell of each study's table (Peters et al., 2010). This may be carried out using the ''WLS weight'' option in ''SPSS LINEAR REGRESSION'' (see this [[attachment:wlsregppt.ppt|powerpoint talk]]) or the metabias procedure from the meta library in R. It follows from the form of the weights that Peters' regression procedure, unlike that of Egger, can only be used for odds ratios from 2x2 tables.
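
As a rough illustration, the regression just described can also be written directly in R (a sketch only, following the weight formula given above with hypothetical vectors of cell counts; it is not an extract from the spreadsheet or talk):

{{{
# Peters-style publication bias regression: log odds ratio on 1/N with the
# weights described above. n11, n12, n21, n22 are vectors of 2x2 cell counts,
# one entry per study.
logOR <- log((n11 * n22) / (n12 * n21))
N     <- n11 + n12 + n21 + n22
w     <- 1 / (1/n11 + 1/n12 + 1/n21 + 1/n22)

fit <- lm(logOR ~ I(1/N), weights = w)
summary(fit)   # a non-zero slope on 1/N suggests small-study/publication bias
}}}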

Peters et al. (2006) criticise the usual publication bias approach of Egger et al. (1997) ([[http://www.bmj.com/content/315/7109/629.full|details here]]) who proposed a test for asymmetry of the funnel plot. In particular, Peters et al. (2006) say the p-values of Egger et al.'s test are not as reliable as theirs because they have inflated type I error rates. Instead they assess the regression coefficient representing the association between sample size and the log odds ratio and test whether it is zero. Both these methods may be fitted using this [[attachment:eggpe.xls|spreadsheet]].

Egger et al.'s test is a test of whether the intercept is zero in a linear regression of the normalized effect estimate (the estimate divided by its standard error) against precision (the reciprocal of the standard error of the estimate). Note that this regression may also be carried out, equivalently, using the unstandardized effect size as the outcome, its standard error as the predictor, and weighting each effect size by the reciprocal of its squared standard error. The power of this method to detect bias will be low with small numbers of studies.
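
A minimal sketch of the two equivalent formulations just described, assuming vectors ''yi'' (effect sizes, e.g. log odds ratios) and ''sei'' (their standard errors); metafor's regtest() provides a packaged version of the same idea:

{{{
# Egger-type regression in the two equivalent forms described above.

# Form 1: standardized effect against precision; the intercept is the bias term
egger1 <- lm(I(yi / sei) ~ I(1 / sei))
summary(egger1)

# Form 2: effect size against its standard error, weighted by 1/s.e.^2;
# here the coefficient on sei plays the role of the intercept in Form 1
egger2 <- lm(yi ~ sei, weights = 1 / sei^2)
summary(egger2)

# Packaged equivalent in metafor: regtest(rma(yi, sei = sei), model = "lm")
}}}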

More recently van Assen, van Aert and Wicherts (2015) have proposed an easy-to-use method which sums p-values from the studies in the meta-analysis and assesses publication bias by comparing this sum to a Gamma distribution. Their estimate, however, assumes that there is no between-study heterogeneity.

__Obtaining pooled effect sizes__

It is also usually of interest to obtain a pooled estimate of the effect size. This is usually done by weighting each study's estimate by the inverse of its effect size variance. In the case of the log odds ratio, the variance is found by simply summing the inverses of the four cells in the table of frequencies used to compute the odds ratio for that study (see an example [[FAQ/oddsr|here]]).
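
For example, the inverse variance pooling of log odds ratios just described can be written in a few lines of R (a sketch with hypothetical vectors of cell counts ai, bi, ci, di; it is not the contents of the spreadsheets mentioned below):

{{{
# Fixed-effect inverse-variance pooling of log odds ratios.
logOR    <- log((ai * di) / (bi * ci))
varLogOR <- 1/ai + 1/bi + 1/ci + 1/di    # variance = sum of inverses of the four cells
w        <- 1 / varLogOR                 # inverse-variance weights

pooled <- sum(w * logOR) / sum(w)        # pooled log odds ratio
se     <- sqrt(1 / sum(w))
ci     <- pooled + c(-1.96, 1.96) * se
exp(c(estimate = pooled, lower = ci[1], upper = ci[2]))   # back on the odds ratio scale
}}}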

Schmidt, Oh and Hayes (2009) illustrate methods for performing meta-analyses which combine estimates of Cohen's d from various studies assuming either (a) that the same population value underlies all the studies being combined (fixed effects) or (b) that different population values underlie the studies being combined (random effects). They conclude that, although the majority of studies in the literature use fixed effects, in many cases combined estimates based on pooling random terms have better coverage of the population group differences. Both combined effects use the variance of Cohen's d suggested in the 'distributions of effect sizes' section just short of half-way down the page given [[http://en.wikipedia.org/wiki/Effect_size|here]]. Note that the variation across studies (the random element) is usually measured using the variance component tau^2^. Tau^2^ can take the value zero (usually because it is estimated as negative, which is conceptually inadmissible given that it is a variance); a zero tau^2^ indicates no variation in effect sizes across studies. Lopez-Lopez et al. (2014) have performed simulation studies which suggest __at least 20 studies are needed for precise estimates of the heterogeneity variance__ (tau-squared).
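
A minimal illustration of the fixed- versus random-effects distinction in R using metafor (hypothetical per-study group means, SDs and sample sizes; escalc()'s "SMD" measure gives the small-sample corrected standardized mean difference):

{{{
# Fixed- versus random-effects pooling of standardized mean differences.
library(metafor)

dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

fixed  <- rma(yi, vi, data = dat, method = "FE")    # common ("fixed") effect
random <- rma(yi, vi, data = dat, method = "REML")  # random effects; reports tau^2

confint(random)   # confidence intervals for tau^2 (and I^2, H^2)
}}}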

The above article also briefly mentions combining study correlations using Fisher's z transform. An involved iterative method for producing a random effects meta-analysis on correlations using the WLS weight option in SPSS is described in the introductory pdf article [[attachment:metaSPSS.pdf|here]].
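
The same pooling of correlations on the Fisher z scale can be done directly in R (a sketch assuming vectors ''ri'' of correlations and ''ni'' of sample sizes):

{{{
# Random-effects pooling of correlations via Fisher's z transform.
library(metafor)

dat <- escalc(measure = "ZCOR", ri = ri, ni = ni)   # z = atanh(r), variance = 1/(n - 3)
res <- rma(yi, vi, data = dat)

predict(res, transf = transf.ztor)   # pooled estimate and CI back on the correlation scale
}}}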

McShane and Bockenholt (2018) introduce their website ([[http://www.singlepapermetaanalysis.com|available to use here]]), illustrating its use to compare interaction and main effects in a meta-analysis of between-subjects studies.

Wilson (2005) has written a comprehensive set of SPSS macros for meta-analysis pooling Cohen's d, the logged odds ratio or the Fisher-transformed Pearson correlation, as well as anova and regression effect sizes, assuming either fixed or random effects. These macros and the spreadsheets mentioned below also perform a test for random effects using the Q statistic (Peters et al., 2010). The macros are available for download from [[http://mason.gmu.edu/~dwilsonb/ma.html|here]]. __Note that you may need to add a full stop to comment lines (starting with an asterisk) in the downloaded macros to make them work.__

There are also some SPSS macros, MeanES, MetaF and MetaReg, from Lipsey and Wilson (2001), as used in Guilera et al. (2013). In addition, David Wilson's website hosts a web calculator for effect size 95% confidence intervals [[http://cebcp.org/practical-meta-analysis-effect-size-calculator/|here]].

There is also a 'help' file detailing how to use these macros on that website (or [[attachment:wilson.pdf|here]] if the link is broken). The help file explains and illustrates the use of simple formulae for adjusting Cohen's d for small samples and for computing its variance, which is inverted to give the study weights prior to running the macro.
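
The small-sample adjustment and variance formulae referred to above are commonly given in the following form (a sketch of the standard textbook formulae, e.g. Lipsey and Wilson (2001); consult the help file itself for the exact expressions the macros use):

{{{
# Commonly used small-sample correction and variance for Cohen's d.
d_adjust <- function(d, n1, n2) {
  df <- n1 + n2 - 2
  d * (1 - 3 / (4 * df - 1))                       # Hedges' small-sample correction
}
d_var <- function(d, n1, n2) {
  (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))    # approximate variance of d
}

# Study weights for inverse-variance pooling:
# w <- 1 / d_var(d_adjust(d, n1, n2), n1, n2)
}}}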

These macros are used by Lindberg et al. (2010) to obtain a pooled Cohen's d across 242 studies! A [[attachment:metacd.xls|spreadsheet]] performs Wilson's inverse variance pooling of Cohen's d using the formulae in his MeanES.sps macro.

Anderson and Maxwell (2016) give formulae (equations (4)-(8)) for combining two Cohen's ds from two studies to assess the reproducibility of a result.

 * [[FAQ/metapoolsd|Obtaining a pooled SD from a pooled weighted difference in means in a meta-analysis]]

 * [[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3463584/|Stewart et al. (2012) suggest on page 4 of this paper using random effects regression, e.g. lme or lme4 in R, to obtain effect sizes for single case meta-analysis.]] They call this a one-stage procedure. They also give R code for using the output in metafor to make a two-stage procedure; a sketch of such a two-stage analysis is given after this list.

 * Further to the above, [[http://www.metafor-project.org/doku.php/tips:two_stage_analysis|a worked example showing two-stage R code, with summary measures from the repeated scores of individuals (stage 1) inputted into a meta-analysis (stage 2), is given here.]]
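
The following is a minimal sketch of the two-stage idea mentioned above, assuming a hypothetical long-format data frame ''ipd'' with columns study, score and group (group coded 0/1); it is not the code from Stewart et al. (2012) or the metafor website, and with repeated measures the within-study model would be a mixed model (e.g. lmer) rather than lm:

{{{
# Two-stage individual participant data meta-analysis (illustrative sketch).
# Stage 1: estimate the group effect and its standard error within each study.
# Stage 2: combine the study estimates in a random-effects meta-analysis.
library(metafor)

ests <- do.call(rbind, lapply(split(ipd, ipd$study), function(d) {
  fit <- lm(score ~ group, data = d)            # within-study model
  data.frame(study = d$study[1],
             yi  = coef(fit)[2],                # group effect
             sei = sqrt(vcov(fit)[2, 2]))       # its standard error
}))

res <- rma(yi = yi, sei = sei, data = ests)     # stage 2: random-effects model
summary(res)
}}}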

Formulae used in Wilson's macros are also explained more fully [[http://www.zoology.ubc.ca/~schluter/bio548/workshop-meta.html|here]].

An alternative procedure, the Mantel-Haenszel method, which is available in the CROSSTABS procedure in SPSS, is recommended by the Cochrane handbook (Higgins and Green, 2008) for pooling odds ratios (ORs), with the weighting of each odds ratio related to the number of observations on which that odds ratio is based (see the above primer and [[attachment:cochraneor.pdf|here]]). The approach used by Wilson is to weight the ''log'' of each odds ratio by its inverse variance (also related to sample size but using different weights from the Mantel-Haenszel method). The log odds ratios are used because their variance has [[FAQ/oddsr|a simple form]]. Results can be back-transformed to odds ratios if desired. This method is the one computed in the above macros or using this [[attachment:metaor.xls|spreadsheet]].
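
Both approaches are available in R through the metafor package; a minimal sketch assuming a hypothetical data frame ''dat'' of 2x2 cell counts:

{{{
# Mantel-Haenszel pooling of odds ratios versus inverse-variance pooling
# of log odds ratios (dat holds per-study cell counts ai, bi, ci, di).
library(metafor)

mh <- rma.mh(ai = ai, bi = bi, ci = ci, di = di, data = dat, measure = "OR")

dat2 <- escalc(measure = "OR", ai = ai, bi = bi, ci = ci, di = di, data = dat)
iv   <- rma(yi, vi, data = dat2, method = "FE")

exp(coef(mh))                  # pooled odds ratio from the Mantel-Haenszel method
predict(iv, transf = exp)      # pooled odds ratio (and CI) from inverse-variance weighting
}}}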

The variance of Fisher-transformed Pearson correlations also has a simple form (see [[FAQ/corrs|here]]) and such correlations can therefore be pooled using the inverse variance method of Wilson (2005) in this [[attachment:metafr.xls|spreadsheet]].

In addition to Wilson's Metareg SPSS macro, the ''WLS weight'' option in ''SPSS LINEAR REGRESSION'' can be used to assess the influence of covariates on effect sizes, combining summary measures using fixed and random variances as the weight on each summary measure. For an application of this using the log odds ratio effect size see the powerpoint slides located [[attachment:WLSmeta.ppt|here]].

A further primer on meta-analysis (in pdf format) is given [[attachment:PLane.pdf|here]].

__Moderators in meta-analyses__

The R procedure '''metafor''' (Viechtbauer, 2007) is recommended for more advanced meta-analyses involving moderator variables, where the effect of one or more predictors on variation across studies is of interest. It works out statistics such as the tau^2^ described above, and also Q statistics to assess the influence of the moderators, using the 'rma' (random-effects meta-analysis) procedure.
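
A minimal sketch of such a moderator analysis, assuming effect sizes ''yi'', variances ''vi'' and a hypothetical study-level moderator ''year'' in a data frame ''dat'':

{{{
# Mixed-effects meta-regression with a moderator in metafor.
library(metafor)

res <- rma(yi, vi, mods = ~ year, data = dat)
summary(res)   # QM tests the moderator, QE the residual heterogeneity,
               # and tau^2 is the residual between-study variance
}}}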

__References__

Anderson, S. F. and Maxwell, S. E. (2016). There's More Than One Way to Conduct a Replication Study: Beyond Statistical Significance. ''Psychological Methods'' '''21(1)''' 1-12.

Borenstein, M., Hedges, L.V., Higgins, J.P. and Rothstein, H.R. (2009). [[attachment:bhhr.pdf|Introduction to Meta-Analysis (Statistics in Practice). Wiley: New York.]] Good primer covering differences between fixed and random effects meta-analyses and moderation analyses relating predictors to variation across studies.

Egger, M., Smith, G. D., Schneider, M. and Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. British Medical Journal 315 629-634.

Field, A. P. and Gillett, R. (2010). How to do a meta-analysis. British Journal of Mathematical and Statistical Psychology 63 665-694.

The Field and Gillett article is a useful primer with illustrations of the main issues and presentation of results. This [[http://www.statisticshell.com/meta_analysis|website]] contains further information files including SPSS and R programmes which can be downloaded to carry out aspects of a standard meta-analysis including pooled effect size calculations and funnel plots.

Guilera, G., Gomez-Benito, J., Hidalgo, M.D. and Sanchez-Meca, J. (2013). Type I error and statistical power of the Mantel-Haenszel procedure for detecting DIF: A meta-analysis. Psychological Methods 18(4) 553-571.

Higgins, J. P. T. and Green, S. (2008). Cochrane Handbook for Systematic Reviews of Interventions, Version 5.0.1. Cochrane Collaboration:Oxford. (Available from http://www.cochrane-handbook.org).

Lindberg, S. M., Hyde, J. S., Linn, M. C. and Petersen, J. L. (2010). New trends in gender and mathematics performance: a meta-analysis. Psychological Bulletin 136(6) 1123-1135.

(A PDF COPY OF THE ABOVE ARTICLE IS AVAILABLE FOR FREE DOWNLOAD VIA SCIENCE DIRECT FOR CBSU USERS)

Lipsey, M. W. and Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

Lopez-Lopez, J. A., Marin-Martinez, F., Sanchez-Meca, J., Van den Noortgate, W. and Viechtbauer, W. (2014). Estimation of the predictive power of the model in mixed-effects meta-regression: A simulation study. ''British Journal of Mathematical and Statistical Psychology'' '''67''' 30-48.

McShane, B. B. and Bockenholt, U. (2018). [[attachment:Mcshame.pdf|Want to make behavioural research more replicable? Promote single paper meta-analysis.]] ''Significance'' '''15(6)''' 38-40.

Morris, S. B. and Deshon, R. P. (2002). Combining effect size estimates in meta-analysis with repeated measures and independent-groups designs. Psychological Methods 7(1) 105-125.

(A PDF COPY OF THE ABOVE ARTICLE IS AVAILABLE FOR FREE DOWNLOAD VIA SCIENCE DIRECT FOR CBSU USERS)

Peters, J. L., Sutton, A. J., Jones, D. R. and Abrams, K. R. (2010). Assessing publication bias in meta-analyses in the presence of between-study heterogeneity. ''Journal of the Royal Statistical Society A'' '''173(3)''' 575-591. (See [[http://onlinelibrary.wiley.com/doi/10.1111/j.1467-985X.2009.00629.x/full|here]].)

(THE ABOVE JOURNAL ARTICLE IS ALSO AVAILABLE FOR CBSUERS FROM PETER WATSON UPON REQUEST AND IS A GOOD PRIMER PROVIDING AN OVERVIEW OF META-ANALYSES INCLUDING ILLUSTRATIONS OF THE ABOVE IDEAS)

Schmidt, F. L., Oh, I. S. and Hayes, T. L. (2009). Fixed- versus random-effects models in meta-analysis: model properties and an empirical comparison of differences in results. ''British Journal of Mathematical and Statistical Psychology'' '''62''' 97-128.

Stewart, G. B., Altman, D. G., Askie, L. M., Duley, L., Simmonds, M. C. and Stewart, L. A. (2012). Statistical analysis of individual participant data meta-analyses: a comparison of methods and recommendations for practice. ''PLoS ONE'' '''7(10)''' e46042.

Stram, D. O. (1996). Meta-analysis of published data using a linear mixed-effects model. ''Biometrics'' '''52(2)''' 536-544.

van Assen, M. A. L. M., van Aert, R. C. M. and Wicherts, J. M. (2015) Meta-analysis using effect size distributions of only statistically significant studies. Psychological Methods 20(3) 293-309.

VanDerwerken, D. (2012). What Petri dishes have to do with your research. Significance 9(3) 40-42. Runner-up in a young statisticians writing competition, this article illustrates advantages such as increased power and precision afforded by meta-analysis (using 1/variance as weighting) in an example from microbiology.

Viechtbauer, W. (2007). Confidence intervals for the amount of heterogeneity in meta-analysis. Statistics in Medicine 26(1) 37-52. Illustrates using the R procedure 'metafor'.

Wilson, D. B. (2005). Meta-analysis macros for SAS, SPSS, and Stata [Computer software]. Retrieved from http://mason.gmu.edu/~dwilsonb/ma.html
