MTS 525-0
Special Topics Research Seminar
Section 20: Generalizing about Message Effects
Spring 2020
SYLLABUS: TOPIC 4
TOPIC 4: Research synthesis via
meta-analysis
Outline:
4.1 Overviews and summary treatments
4.2 Formulating the research question(s)
4.3 Retrieving relevant research
4.3.1 Locating relevant research
4.3.2 Inclusion criteria
4.4 Managing and coding cases
4.5 Effect sizes
4.5.1 Effect-size indices
4.5.2 Adjusting effect sizes
4.6 Analytic procedures
4.6.1 Vote-counting
4.6.2 Combining raw data
4.6.3 Combining p levels
4.6.4 Combining effect sizes: Fixed-effect vs. random-effects models
4.6.5 Detecting publication bias
4.6.6 Handling multiple outcome variables
4.6.7 Statistical power of meta-analytic procedures
4.7 Reporting meta-analyses
4.8 Single-paper meta-analysis
4.9 Reflections
4.1 Overviews and summary treatments
For further reading:
Field, A. (n.d.). A bluffer’s guide to meta-analysis. Available at http://www.discoveringstatistics.com/docs/meta.pdf or http://users.sussex.ac.uk/~andyf/meta.pdf
Field, A. P., & Gillett, R. (2010). How to do a meta-analysis. British Journal of Mathematical & Statistical Psychology, 63, 665-694. doi: 10.1348/000711010X502733
Noar, S. M., & Snyder, L. B. (2014). Building cumulative knowledge in health communication: The application of meta-analytic methods. In B. B. Whaley (Ed.), Research methods in health communication: Principles and application (pp. 232-253). New York: Routledge.
Zebregs, S., & de Bruijn, G.-J. (2017). Meta-analysis in health and risk messaging. Oxford Research Encyclopedia of Communication. doi:10.1093/acrefore/9780190228613.013.523
Eisend, M. (2017). Meta-analysis in advertising research. Journal of Advertising, 46, 21-35. doi:10.1080/00913367.2016.1210064
Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: a best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70, 747-770. https://doi.org/10.1146/annurev-psych-010418-102803
Borenstein, M. (2019). Common mistakes in meta-analysis and how to avoid them. Englewood, NJ: Biostat.
4.2 Formulating the research question(s)
For further reading:
Cooper, H. (1998). Chapter 2: The problem formulation stage. In H. Cooper, Synthesizing research: A guide for literature reviews (3rd ed., pp. 12-33).
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Chapter 40: When does it make sense to perform a meta-analysis? In Introduction to meta-analysis (pp. 357-364). Chichester, West Sussex, UK: Wiley.
Card, N. A. (2012). Chapter 2: Questions that can and questions that cannot be answered through meta-analysis. In Applied meta-analysis for social science research (pp. 16-33). New York: Guilford.
Carpenter, C. J. (in press as of December 2019). Meta-analyzing apples and oranges: How to make applesauce instead of fruit salad. Human Communication Research. https://doi.org/10.1093/hcr/hqz018
4.3 Retrieving relevant research
4.3.1 Locating relevant research
For further reading:
Ogilvie, D., Hamilton, V., Egan, M., & Petticrew, M. (2005). Systematic reviews of health effects of social interventions: 1. Finding the evidence: How far should you go? Journal of Epidemiology and Community Health, 59, 804-808. doi:10.1136/jech.2005.034181
Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61, 726-728.
White, H. D. (2009). Scientific communication and literature retrieval. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 51-71). New York: Russell Sage Foundation.
Reed, J. G., & Baxter, P. M. (2009). Using reference databases. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 73-101). New York: Russell Sage Foundation.
Rothstein, H. R., & Hopewell, S. (2009). Grey literature. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 103-125). New York: Russell Sage Foundation.
Ecker, E. D., & Skelly, A. C. (2010). Conducting a winning literature search. Evidence-Based Spine-Care Journal, 1, 9–14. doi:10.1055/s-0028-1100887
Mahood Q., Eerd D. V., & Irvin E. (2014). Searching for grey literature for systematic reviews: Challenges and benefits. Research Synthesis Methods, 5, 221–234. doi:10.1002/jrsm.1106
Marshall, I. J., Johnson, B. T., Wang, Z., Rajasekaran, S., & Wallace, B. C. (2020). Semi-automated evidence synthesis in health psychology: Current methods and future prospects. Health Psychology Review, 14(1), 145-158. doi:10.1080/17437199.2020.1716198
4.3.2 Inclusion criteria
For further reading:
Kraemer, H. C., Gardner, C., Brooks, J. O., & Yesavage, J. A. (1998). Advantages of excluding underpowered studies in meta-analysis: Inclusionist versus exclusionist viewpoints. Psychological Methods, 3, 23–31. doi: 10.1037/1082-989x.3.1.23
Ogilvie, D., Egan, M., Hamilton, V., & Petticrew, M. (2005). Systematic reviews of health effects of social interventions: 2. Best available evidence: How low should you go? Journal of Epidemiology and Community Health, 59, 886-892.
Eisend, M., & Tarrahi, F. (2014). Meta-analysis selection bias in marketing research. International Journal of Research in Marketing, 31, 317-326. doi:10.1016/j.ijresmar.2014.03.006
Friese, M., & Frankenbach, J. (in press as of January 2020). p-hacking and publication bias interact to distort meta-analytic effect size estimates. Psychological Methods. https://doi.org/10.1037/met0000246
4.4 Managing and coding cases
For further reading:
Yeaton, W. H., & Wortman, P. M. (1993). On the reliability of meta-analytic reviews: The role of intercoder agreement. Evaluation Review, 17, 292-309.
Woodworth, G. (1994). Managing meta-analytic databases. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 177-189). New York: Russell Sage Foundation.
Juni, P., Witschi, A., Bloch R., & Egger, M. (1999). The hazards of scoring the quality of clinical trials for meta-analysis. JAMA, 282(11), 1054–1060. doi:10.1001/jama.282.11.1054
Lipsey, M. W., & Wilson, D. B. (2001). Chapter 4: Developing a coding scheme and coding study reports. In M. W. Lipsey & D. B. Wilson, Practical meta-analysis (pp. 73-90).
Lipsey, M. W., & Wilson, D. B. (2001). Chapter 5: Data management. In M. W. Lipsey & D. B. Wilson, Practical meta-analysis (pp. 91-104).
Valentine, J. C. & Cooper, H. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The study design and implementation assessment device (Study DIAD). Psychological Methods, 13, 130-149. doi:10.1037/1082-989X.13.2.130
Wilson, D. B. (2009). Systematic coding. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 159-176). New York: Russell Sage Foundation.
Valentine, J. C. (2009). Judging the quality of primary research. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 129-146). New York: Russell Sage Foundation.
Balshem, H., Helfand, M., Schünemann, H. J., Oxman, A. D., Kunz, R., Brozek, J., Vist, G. E., Falck-Ytter, Y., Meerpohl, J., Norris, S., & Guyatt, G. H. (2011). GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology, 64, 401-406.
Guyatt, G. H., Oxman, A.D., Vist, G., Kunz, R., Brozek, J., Alonso-Coello, P., Montori, V., Akl, E. A., Djulbegovic, B., Falck-Ytter, Y., Norris, S. L., Williams, J. W., Atkins, D., Meerpohl, J., & Schünemann, H. J. (2011). GRADE guidelines: 4. Rating the quality of evidence-study limitations (risk of bias). Journal of Clinical Epidemiology, 64, 407-415.
Johnson, B. T., Low, R. E., & MacDonald, H. V. (2015). Panning for the gold in health research: Incorporating studies’ methodological quality in meta-analysis. Psychology and Health, 30, 135-152. doi:10.1080/08870446.2014.953533
4.5 Effect sizes
4.5.1 Effect-size indices
For further reading:
Becker, B. J. (1988). Synthesizing standardized mean-change measures. British Journal of Mathematical and Statistical Psychology, 41, 257-278.
Haddock, C. K., Rindskopf, D., & Shadish, W. R. (1998). Using odds ratios as effect sizes for meta-analysis of dichotomous data: A primer on methods and issues. Psychological Methods, 3, 339-353.
Viera, A. J. (2008). Odds ratios and risk ratios: What's the difference and why does it matter? Southern Medical Journal, 101, 730-734. doi:10.1097/SMJ.0b013e31817a7ee4
Cummings, P. (2009). The relative merits of risk ratios and odds ratios. Archives of Pediatric and Adolescent Medicine, 163, 438-445. doi:10.1001/archpediatrics.2009.31
Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 221-235). New York: Russell Sage Foundation.
Fleiss, J. L., & Berlin, J. A. (2009). Effect sizes for dichotomous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 237-253). New York: Russell Sage Foundation.
Rosnow, R. L., & Rosenthal, R. (2009). Effect sizes: Why, when, and how to use them. Zeitschrift fur Psychologie, 217, 6-14. doi:10.1027/0044-3409.217.1.6
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4(863). doi:10.3389/fpsyg.2013.00863
Cuijpers, P., Weitz, E., Cristea, I., & Twisk, J. (2017). Pre-post effect sizes should be avoided in meta-analyses. Epidemiology and Psychiatric Sciences, 26, 364-368. doi:10.1017/S2045796016000809
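Optional illustration (not an assigned reading): the two effect-size families these readings treat at greatest length can each be computed in a few lines. The Python sketch below, with function names and numbers of my own invention, computes a pooled-SD standardized mean difference (Cohen's d) and a log odds ratio from a 2x2 table.

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled SD (Cohen's d)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def log_odds_ratio(a, b, c, d):
    """Log odds ratio for a 2x2 table: a, b = treatment yes/no; c, d = control yes/no."""
    return math.log((a * d) / (b * c))

# Two groups of 50 whose means differ by half a (common) SD:
print(round(cohens_d(10.5, 10.0, 1.0, 1.0, 50, 50), 2))  # prints 0.5
```

Meta-analyses of dichotomous outcomes usually pool the log odds ratio (which is roughly normally distributed) rather than the raw odds ratio.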
4.5.2 Adjusting effect sizes
For further reading:
Schmidt, F. L., & Hunter, J. E. (1999). Theory testing and measurement error. Intelligence, 27, 183–198.
Borsboom, D., & Mellenbergh, G. J. (2002). True scores, latent variables, and constructs: A comment on Schmidt and Hunter. Intelligence, 30, 505-514.
Schmidt, F. L., Le, H., & Oh, I.-S. (2009). Correcting for the distorting effects of study artifacts in meta-analysis. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 317-333). New York: Russell Sage Foundation.
Michel, J. S., Viswesvaran, C., & Thomas, J. (2011). Conclusions from meta-analytic structural equation models generally do not change due to corrections for study artifacts. Research Synthesis Methods, 2, 174–187. doi: 10.1002/jrsm.47
Card, N. A. (2012). Chapter 6: Corrections to effect sizes. In Applied meta-analysis for social science research (pp. 126-146). New York: Guilford.
Wigley, C. J., III. (2013). Dispelling four myths about correction for attenuation in communication trait research. Communication Research Reports, 30, 175-182. doi: 10.1080/08824096.2012.762908
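Optional illustration: the artifact correction that dominates these readings is disattenuation for measurement unreliability, i.e., dividing the observed correlation by the square root of the product of the two measures' reliabilities. A one-function Python sketch (the function name and example values are illustrative, not from any assigned source):

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Disattenuate an observed correlation for measurement error by
    dividing by the square root of the two reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed r = .30 with reliabilities .80 and .90 is corrected upward
# toward the construct-level correlation:
print(round(correct_for_attenuation(0.30, 0.80, 0.90), 3))  # prints 0.354
```

Note that corrected effect sizes have larger sampling variances, which is part of what the Borsboom and Mellenbergh exchange above is about.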
4.6 Analytic procedures
4.6.1 Vote-counting
For further reading:
Hedges, L. V., & Olkin, I. (1980). Vote-counting methods in research synthesis. Psychological Bulletin, 88, 359-369.
Bushman, B. J., & Wang, M. C. (2009). Vote-counting procedures in meta-analysis. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 207-220). New York: Russell Sage Foundation.
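Optional illustration: in its simplest form, vote-counting treats each study as a directional "vote" and applies a sign test, as in the procedures the Bushman and Wang chapter reviews. This Python sketch (function name and numbers are mine) computes the two-sided binomial p-value for a given split of study directions under a 50/50 null:

```python
from math import comb

def vote_count_sign_test(n_positive, n_studies):
    """Sign-test vote-counting: two-sided binomial p-value for observing
    n_positive positive-direction results out of n_studies, under a null
    of a 50/50 split of directions."""
    k = max(n_positive, n_studies - n_positive)
    tail = sum(comb(n_studies, i) for i in range(k, n_studies + 1)) / 2**n_studies
    return min(1.0, 2 * tail)

# 9 of 10 studies in the positive direction:
print(vote_count_sign_test(9, 10))
```

The readings' central complaint is visible here: this procedure ignores both effect magnitudes and sample sizes, so its power can actually decrease as studies accumulate.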
4.6.2 Combining raw data
For further reading:
Cooper, H., & Patall, E. A. (2009). The relative benefits of meta-analysis conducted with individual participant data versus aggregated data. Psychological Methods, 14, 165-176.
Curran, P. J., & Hussong, A. M. (2009). Integrative data analysis: The simultaneous analysis of multiple data sets. Psychological Methods, 14, 81-100.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. Chichester, West Sussex, UK: Wiley. (see pp. 303-309)
4.6.3 Combining p levels
For further reading:
Becker, B. J. (1987). Applying tests of combined significance in meta-analysis. Psychological Bulletin, 102, 164‑171.
Rosenthal, R. (1991). Chapter 5: Combining probabilities. In R. Rosenthal, Meta-analytic procedures for social research (2nd ed., pp. 89-109).
Hedges, L. V., Cooper, H., & Bushman, B. J. (1992). Testing the null hypothesis in meta-analysis: A comparison of combined probability and confidence interval procedures. Psychological Bulletin, 111, 188-194.
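Optional illustration: Stouffer's method, one of the combined-significance procedures covered in the Rosenthal chapter, converts each one-tailed p-value to a z-score, sums the z-scores, divides by the square root of the number of studies, and converts back to a p-value. A Python sketch using only the standard library (names are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Stouffer's method: combine one-tailed p-values via their z-scores."""
    nd = NormalDist()
    z_sum = sum(nd.inv_cdf(1 - p) for p in p_values)
    z_combined = z_sum / sqrt(len(p_values))
    return 1 - nd.cdf(z_combined)

# Three studies, none individually decisive:
print(stouffer_combined_p([0.05, 0.10, 0.20]))
```

As the Hedges, Cooper, and Bushman paper above emphasizes, this tests only the combined null hypothesis; it says nothing about the size of the effect.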
4.6.4 Combining effect sizes: Fixed-effect vs. random-effects models
For further reading:
Field, A. P. (2001). Meta-analysis of correlation coefficients: A Monte Carlo comparison of fixed- and random-effects methods. Psychological Methods, 6, 161-180.
Schulze, R. (2004). Meta-analysis: A comparison of approaches. Cambridge, MA: Hogrefe and Huber Publishing.
Kisamore, J. L., & Brannick, M. T. (2008). An illustration of the consequences of meta-analysis model choice. Organizational Research Methods, 11, 35-53. doi:10.1177/1094428106287393
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Chapter 13: Fixed-effect versus random-effects models. In Introduction to meta-analysis (pp. 77-86). Chichester, West Sussex, UK: Wiley.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1, 97–111. doi: 10.1002/jrsm.12
Anker, A. E., Reinhart, A. M., & Feeley, T. H. (2010). Meta-analysis of meta-analyses in communication: Comparing fixed effects and random effects analysis models. Communication Quarterly, 58, 257-278. doi: 10.1080/01463373.2010.503154
Rice, K., Higgins, J. P. T., & Lumley, T. (2018). A re-evaluation of fixed effect(s) meta-analysis. Journal of the Royal Statistical Society A, 181(1), 205-227. doi:10.1111/rssa.12275
Hall, J. A., & Rosenthal, R. (2018). Choosing between random effects models in meta‐analysis: Units of analysis and the generalizability of obtained results. Social and Personality Psychology Compass, 12, e12414. doi:10.1111/spc3.12414
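Optional illustration: the fixed-effect estimate weights each study by the inverse of its sampling variance; the random-effects estimate adds an estimate of the between-study variance (tau-squared) to each study's variance before weighting, which pulls the weights toward equality. A hedged Python sketch using the DerSimonian-Laird tau-squared estimator, one of several estimators discussed in these readings (function and variable names are mine):

```python
def pooled_effect(effects, variances, model="random"):
    """Inverse-variance pooled effect. Under model="random", a
    DerSimonian-Laird estimate of between-study variance (tau^2)
    is added to each study's variance before weighting."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    if model == "fixed":
        return fixed
    # Q statistic and the DerSimonian-Laird tau^2 (truncated at zero)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
```

With heterogeneous studies the two models can disagree noticeably: the random-effects estimate gives relatively more weight to small studies, which is one thread of the Rice, Higgins, and Lumley re-evaluation above.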
4.6.5 Detecting publication bias
For further reading:
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56, 455-463.
Tang, J.-L., & Liu, J. L. Y. (2000). Misleading funnel plot for detection of bias in meta-analysis. Journal of Clinical Epidemiology, 53, 477-484. https://doi.org/10.1016/S0895-4356(99)00204-8
Sterne, J. A. C., Becker, B. J., & Egger, M. (2005). The funnel plot. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 75-98). Chichester, UK: John Wiley.
Duval, S. (2005). The trim and fill method. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 127-144). Chichester, UK: John Wiley.
Borenstein, M. (2005). Software for publication bias. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 193-220). Chichester, UK: John Wiley.
Sterne, J. A. C., & Egger, M. (2005). Regression methods to detect publication and other bias in meta-analysis. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment and adjustments (pp. 99-110). Chichester, UK: John Wiley.
Kromrey, J. D., & Rendina-Gobioff, G. (2006). On knowing what we do not know: An empirical comparison of methods to detect publication bias in meta-analysis. Educational and Psychological Measurement, 66, 357-373. https://doi.org/10.1177/0013164405278585
Peters, J. L., Sutton, A. J., Jones, D. R., Abrams, K. R., & Rushton, L. (2007). Performance of the trim and fill method in the presence of publication bias and between-study heterogeneity. Statistics in Medicine, 26(25), 4544–4562. doi:10.1002/sim.2889
Ioannidis, J. P. A., & Trikalinos, T. A. (2007). The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey. Canadian Medical Association Journal, 176, 1091–1096. https://doi.org/10.1503/cmaj.060410
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Chapter 30: Publication bias. In Introduction to meta-analysis (pp. 277-292). Chichester, West Sussex, UK: Wiley.
Sterne, J. A. C., Sutton, A. J., Ioannidis, J. P. A., Terrin, N., Jones, D. R., Lau, J., Carpenter, J., Rücker, G., Harbord, R. M., Schmid, C. H., Tetzlaff, J., Deeks, J. J., Peters, J., Macaskill, P., Schwarzer, G., Duval, S., Altman, D. G., Moher, D., & Higgins, J. P. T. (2011). Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ, 343, d4002. https://doi.org/10.1136/bmj.d4002
Ferguson, C. J., & Brannick, M. T. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17, 120-128. https://doi.org/10.1037/a0024445
Hunter, J. P., Saratzis, A., Sutton, A. J., Boucher, R. H., Sayers, R. D., & Bown, M. J. (2014). In meta-analyses of proportion studies, funnel plots were found to be an inaccurate method of assessing publication bias. Journal of Clinical Epidemiology, 67(8), 897-903. https://doi.org/10.1016/j.jclinepi.2014.03.003
Renkewitz, F., & Keiner, M. (2019). How to detect publication bias in psychological research. Zeitschrift für Psychologie, 227(4), 261-279. https://doi.org/10.1027/2151-2604/a000386
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2, 115–144. doi:10.1177/2515245919847196
Fernández-Castilla, B., Declercq, L., Jamshidi, L., Beretvas, S. N., Onghena, P., & Van den Noortgate, W. (in press). Detecting selection bias in meta-analyses with multiple outcomes: A simulation study. Journal of Experimental Education. doi:10.1080/00220973.2019.1582470
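Optional illustration: the regression approach that the Sterne and Egger chapter covers regresses each study's standardized effect (effect divided by its standard error) on its precision (1 over the standard error); an intercept far from zero indicates funnel-plot asymmetry, which may reflect publication bias. A minimal ordinary-least-squares sketch (names and data are illustrative, and real applications would also test the intercept's significance):

```python
def egger_intercept(effects, std_errors):
    """Egger's regression intercept: regress standardized effects (y/SE)
    on precision (1/SE); a nonzero intercept suggests small-study effects."""
    x = [1 / se for se in std_errors]
    y = [e / se for e, se in zip(effects, std_errors)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # OLS intercept

# Small (high-SE) studies reporting bigger effects push the intercept up:
print(egger_intercept([0.1, 0.3, 0.6], [0.05, 0.1, 0.2]))
```

As several of the readings above warn (Tang & Liu; Ioannidis & Trikalinos), asymmetry has causes other than publication bias, so a nonzero intercept is a prompt for investigation, not a verdict.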
4.6.6 Handling multiple outcome variables
For further reading:
Van den Noortgate, W., López-López, J. A., Marín-Martínez, F., & Sánchez-Meca, J. (2013). Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45, 576–594. doi:10.3758/s13428-012-0261-6
O’Keefe, D. J. (2013). The relative persuasiveness of different message types does not vary as a function of the persuasive outcome assessed: Evidence from 29 meta-analyses of 2,062 effect sizes for 13 message variations. Annals of the International Communication Association, 37, 221-249. doi:10.1080/23808985.2013.11679151
Moeyaert, M., Ugille, M., Beretvas, S. N., Ferron, J., Bunuan, R., & Van den Noortgate, W. (2017). Methods for dealing with multiple outcomes in meta-analysis: A comparison between averaging effect sizes, robust variance estimation and multilevel meta-analysis. International Journal of Social Research Methodology, 20, 559-572. doi:10.1080/13645579.2016.1252189
Park, S., & Beretvas, S. N. (2019). Synthesizing effects for multiple outcomes per study using robust variance estimation versus the three-level model. Behavior Research Methods, 51, 152-171. doi:10.3758/s13428-018-1156-y
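Optional illustration: the simplest of the strategies compared in these readings handles dependence by averaging the multiple effect sizes within each study, yielding one (independent) effect per study for pooling, at the cost of discarding outcome-level information that the robust-variance and multilevel approaches retain. A trivial Python sketch (names are mine):

```python
def one_effect_per_study(effect_lists):
    """Collapse dependent effect sizes by averaging within each study,
    returning a single effect per study suitable for standard pooling."""
    return [sum(es) / len(es) for es in effect_lists]

# Three studies, two of which report multiple outcomes:
print(one_effect_per_study([[0.2, 0.4], [0.5], [0.1, 0.3, 0.2]]))
```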
4.6.7 Statistical power of meta-analytic procedures
For further reading:
Hedges, L. V., & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6, 203-217. doi:10.1037/1082-989X.6.3.203
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Chapter 29: Power analysis for meta-analysis. In Introduction to meta-analysis (pp. 257-276). Chichester, West Sussex, UK: Wiley.
Valentine, J. C., Pigott, T. D., & Rothstein, H. R. (2010). How many studies do you need? A primer on statistical power for meta-analysis. Journal of Educational and Behavioral Statistics, 35, 215-247. doi:10.3102/1076998609346961 [correction notice: Valentine, J. C., Pigott, T. D., & Rothstein, H. R. (2010). Erratum to: How many studies do you need? A primer on statistical power for meta-analysis. Journal of Educational and Behavioral Statistics, 35, 375. doi: 10.3102/1076998610376621]
Liu, X. S. (2014). Chapter 9: Meta-analysis. In Statistical power analysis for the social and behavioral sciences: Basic and advanced techniques (pp. 253-268). New York: Routledge.
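Optional illustration: in the spirit of the Hedges and Pigott paper, the power of the two-tailed z-test of the pooled mean in a fixed-effect meta-analysis can be approximated from the true effect, the per-study sampling variance, and the number of studies. A sketch under simplifying assumptions (equal variances across studies; function name and example values are mine):

```python
from math import sqrt
from statistics import NormalDist

def fixed_effect_power(delta, v_per_study, k, alpha=0.05):
    """Approximate power of the two-tailed z-test of the pooled mean
    effect in a fixed-effect meta-analysis: k studies, each with
    sampling variance v_per_study, true effect delta."""
    nd = NormalDist()
    lam = delta / sqrt(v_per_study / k)  # noncentrality of the pooled z
    c = nd.inv_cdf(1 - alpha / 2)
    return 1 - nd.cdf(c - lam) + nd.cdf(-c - lam)

# e.g., 10 studies of d = 0.2, per-study variance 0.04:
print(round(fixed_effect_power(0.2, 0.04, 10), 2))
```

The same logic underlies the Valentine, Pigott, and Rothstein "how many studies do you need?" question: solve for the k that pushes power past the desired level.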
4.7 Reporting meta-analyses
For further reading:
Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin, 118, 183-192.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Chapter 41: Reporting the results of a meta-analysis. In Introduction to meta-analysis (pp. 365-370). Chichester, West Sussex, UK: Wiley.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & the PRISMA Group. (2009). Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Annals of Internal Medicine, 151, 1-5.
Anzures-Cabrera, J., & Higgins, J. P. T. (2010). Graphical displays for meta-analysis: An overview with suggestions for practice. Research Synthesis Methods, 1, 66-80. doi:10.1002/jrsm.6
Card, N. A. (2012). Chapter 13: Writing meta-analytic results. In Applied meta-analysis for social science research (pp. 313-343). New York: Guilford.
4.8 Single-paper meta-analysis
For further reading:
Jackson, S. (1991). Meta-analysis for primary and secondary data analysis: The super-experiment metaphor. Communication Monographs, 58, 449-462. doi:10.1080/03637759109376241
McShane, B. B., & Böckenholt, U. (2017). Single-paper meta-analysis: Benefits for study summary, theory testing, and replicability. Journal of Consumer Research, 43, 1048-1063. doi:10.1093/jcr/ucw085
McShane, B. B., & Böckenholt, U. (2018). Want to make behavioural research more replicable? Promote single paper meta‐analysis. Significance, 15(6), 38-40. doi:10.1111/j.1740-9713.2018.01214.x
4.9 Reflections
For further reading:
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Chapter 43: Criticisms of meta-analysis. In Introduction to meta-analysis (pp. 377-387). Chichester, West Sussex, UK: Wiley.
Ioannidis, J. P. A. (2010). Meta-research: The art of getting it wrong. Research Synthesis Methods, 1, 169-184. doi: 10.1002/jrsm.19
Aguinis, H., Pierce, C. A., Bosco, F. A., Dalton, D. R., & Dalton, C. M. (2011). Debunking myths and urban legends about meta-analysis. Organizational Research Methods, 14, 306-331. doi:10.1177/1094428110375720
Van Elk, M., Matzke, D., Gronau, Q. F., Guan, M., Vandekerckhove, J., & Wagenmakers, E.-J. (2015). Meta-analyses are no substitute for registered replications: A skeptical perspective on religious priming. Frontiers in Psychology, 6, 1365. doi:10.3389/fpsyg.2015.01365
Lakens, D., Hilgard, J., & Staaks, J. (2016). On the reproducibility of meta-analyses: Six practical recommendations. BMC Psychology, 4, 24. doi:10.1186/s40359-016-0126-3
Ioannidis, J. P. A. (2016). The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. The Milbank Quarterly, 94, 485–514. doi:10.1111/1468-0009.12210
Gurevitch, J., Koricheva, J., Nakagawa, S., & Stewart, G. (2018). Meta-analysis and the science of research synthesis. Nature, 555, 175-182.
Corker, K. S. (in press). Strengths and weaknesses of meta-analysis. In L. Jussim, S. Stevens, & J. Krosnick (Eds.), Research integrity in the behavioral sciences. PsyArXiv preprint available at: https://psyarxiv.com/6gcnm/
Moreau, D., & Gamble, B. (2020, January 7). Conducting a meta-analysis in the age of open science: Tools, tips, and practical recommendations. PsyArXiv manuscript. https://doi.org/10.31234/osf.io/t5dwg