Publication bias impacts on effect size, statistical power, and magnitude (Type M) and sign (Type S) errors in ecology and evolutionary biology

This is version 1 of this preprint; it has not been peer reviewed.

Authors

Yefeng Yang, Alfredo Sánchez-Tójar, Rose E O'Dea, Daniel W.A. Noble, Julia Koricheva, Michael D Jennions, Timothy H Parker, Malgorzata Lagisz, Shinichi Nakagawa

Abstract

Collaborative assessments of direct replicability of empirical studies in the medical and social sciences have exposed alarmingly low rates of replicability, a phenomenon dubbed the ‘replication crisis’. Poor replicability has spurred cultural changes targeted at improving reliability in these disciplines. Given the absence of equivalent replication projects in ecology and evolutionary biology, two inter-related indicators offer us the possibility to retrospectively assess replicability: publication bias and statistical power. This registered report assesses the prevalence and severity of small-study effects (i.e., smaller studies reporting larger effect sizes) and decline effects (i.e., effect sizes decreasing over time) across ecology and evolutionary biology using 87 meta-analyses comprising 4,250 primary studies and 17,638 effect sizes. Further, we estimate how publication bias might distort the estimation of effect sizes, statistical power, and errors in magnitude (Type M, or exaggeration ratio) and sign (Type S). We show strong evidence for the pervasiveness of both small-study and decline effects in ecology and evolution. Publication bias was widespread and resulted in meta-analytic means being over-estimated by (at least) 0.12 standard deviations. Publication bias also distorted confidence in meta-analytic results, with 66% of initially statistically significant meta-analytic means becoming non-significant after correcting for publication bias. Ecological and evolutionary studies consistently had low statistical power (15%), with a 4-fold exaggeration of effects on average (Type M error rate = 4.4). Notably, publication bias aggravates low power (from 23% to 15%) and Type M error rates (from 2.7 to 4.4) because it creates a non-random sample of effect size evidence. Sign errors of effect sizes (Type S error) increased from 5% to 8% because of publication bias.
Our research provides clear evidence that many published ecological and evolutionary findings are inflated. Our results highlight the importance of designing high-power empirical studies (e.g., via collaborative team science), promoting and encouraging replication studies, testing and correcting for publication bias in meta-analyses, and embracing open and transparent research practices, such as (pre)registration, data- and code-sharing, and transparent reporting.
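The Type M (exaggeration ratio) and Type S (sign error) quantities discussed in the abstract can be illustrated for a simple two-sided z-test using Gelman and Carlin's "retrodesign" logic: given a hypothesised true effect and standard error, compute the probability of statistical significance in each direction, then the expected exaggeration among significant estimates. A minimal sketch, not code from the paper; the function name and the illustrative effect and standard-error values below are assumptions for demonstration:

```python
import random
from statistics import NormalDist

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=1):
    """Estimate power, Type S error rate, and Type M error (exaggeration
    ratio) for a two-sided z-test with a hypothesised true effect and
    standard error, in the spirit of Gelman & Carlin's retrodesign."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)

    # Analytic probability of a significant result in each direction.
    p_hi = 1 - norm.cdf(z_crit - true_effect / se)   # significant, correct sign
    p_lo = norm.cdf(-z_crit - true_effect / se)      # significant, wrong sign
    power = p_hi + p_lo
    type_s = p_lo / power  # share of significant results with the wrong sign

    # Type M by simulation: mean |estimate| among significant results,
    # relative to the true effect size.
    rng = random.Random(seed)
    sig = [abs(est)
           for est in (rng.gauss(true_effect, se) for _ in range(n_sims))
           if abs(est) / se > z_crit]
    type_m = (sum(sig) / len(sig)) / abs(true_effect)
    return power, type_s, type_m

# Illustrative values (assumed, not from the paper): a true effect equal
# to its standard error yields power near the ~15% the abstract reports,
# with significant estimates exaggerated roughly 2.5-fold.
power, type_s, type_m = retrodesign(true_effect=0.12, se=0.12)
```

Conditioning on statistical significance is exactly the filter that publication bias applies to the literature, which is why low power mechanically inflates the Type M error of published effects.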

DOI

https://doi.org/10.32942/osf.io/97nv6

Subjects

Biology, Ecology and Evolutionary Biology, Environmental Sciences, Life Sciences, Physical Sciences and Mathematics, Statistics and Probability

Keywords

generalizability, many labs, meta-research, open science, P-hacking, questionable research practices, registered report, selective reporting, transparency

Dates

Published: 2022-09-12 01:03

License

CC-BY Attribution-NonCommercial 4.0 International
