Methods for testing publication bias in ecological and evolutionary meta-analyses

This is a Preprint and has not been peer reviewed. The published version of this Preprint is available: https://doi.org/10.1111/2041-210X.13724. This is version 1 of this Preprint.


Authors

Shinichi Nakagawa, Malgorzata Lagisz, Michael Jennions, Julia Koricheva, Daniel W.A. Noble, Timothy H. Parker, Alfredo Sánchez-Tójar, Yefeng Yang, Rose E. O'Dea

Abstract

1. Publication bias threatens the validity of quantitative evidence from meta-analyses as it results in some findings being overrepresented in meta-analytic datasets because they are published more frequently or sooner (e.g., ‘positive’ results). Unfortunately, methods to test for the presence of publication bias, or assess its impact on meta-analytic results, are unsuitable for datasets with high heterogeneity and non-independence, as is common in ecology and evolutionary biology.
2. We first review both classic and emerging publication bias tests (e.g., funnel plots, Egger’s regression, cumulative meta-analysis, fail-safe N, trim-and-fill tests, p-curve and selection models), showing that some tests cannot handle heterogeneity, and, more importantly, none of the methods can deal with non-independence. For each method we estimate current usage in ecology and evolutionary biology, based on a representative sample of 102 meta-analyses published in the last ten years.
3. Then, we propose a new method using multilevel meta-regression, which can model both heterogeneity and non-independence by extending existing regression-based methods (i.e., Egger’s regression). We describe how our multilevel meta-regression can test not only for publication bias but also for time-lag bias, and how it can be supplemented by residual funnel plots.
4. Overall, we provide ecologists and evolutionary biologists with practical recommendations on which methods are appropriate to employ given independent and non-independent effect sizes. No method is ideal, and more simulation studies are required to understand how Type I and II error rates are affected by complex data structures. Still, the limitations of these methods do not justify ignoring publication bias in ecological and evolutionary meta-analyses.
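To make the regression-based idea in the abstract concrete, here is a minimal sketch of a classic Egger-style funnel-asymmetry test (the starting point the authors extend, not their multilevel method itself): effect sizes are regressed on their standard errors using inverse-variance weights, and a slope on the standard-error term that differs from zero suggests small-study asymmetry. The function name and the example data are illustrative assumptions.

```python
import numpy as np

def egger_test(yi, sei):
    """Egger-style funnel asymmetry sketch (assumed helper, not the
    authors' multilevel model): weighted least squares of effect sizes
    `yi` on their standard errors `sei`, with weights 1/sei**2.
    Returns the slope on `sei` and its t-statistic; a slope far from
    zero indicates funnel-plot asymmetry."""
    yi = np.asarray(yi, dtype=float)
    sei = np.asarray(sei, dtype=float)
    sw = 1.0 / sei                      # sqrt of inverse-variance weights
    X = np.column_stack([np.ones_like(sei), sei])
    # WLS via the transformed OLS problem: sqrt(w)*y ~ sqrt(w)*X
    Xw, yw = sw[:, None] * X, sw * yi
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    resid = yw - Xw @ beta
    dof = len(yi) - 2
    sigma2 = (resid @ resid) / dof
    cov = sigma2 * np.linalg.inv(Xw.T @ Xw)
    t_slope = beta[1] / np.sqrt(cov[1, 1])
    return beta[1], t_slope

# Illustrative data: effects that grow with their standard errors,
# mimicking small-study asymmetry
sei = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
yi = 0.2 + 0.5 * sei + np.array([0.01, -0.01, 0.02, -0.02, 0.0])
slope, t = egger_test(yi, sei)
```

Note that this simple form assumes one independent effect size per study; as the abstract stresses, it does not account for heterogeneity or non-independence, which is precisely the gap the proposed multilevel meta-regression is meant to fill.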

DOI

https://doi.org/10.32942/osf.io/k7pmz

Subjects

Ecology and Evolutionary Biology, Life Sciences, Other Ecology and Evolutionary Biology

Keywords

decline effect, effective sample size, multilevel meta-analysis, outcome reporting bias, p-hacking, radial plot, selection bias, time-lag bias

Dates

Published: 2021-04-09 03:33

License

CC BY-NC-ND (Attribution-NonCommercial-NoDerivatives) 4.0 International