This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.
Abstract
Researchers have incentives to search for and selectively report findings that appear to be statistically significant and/or conform to prior beliefs. Such selective reporting practices, including p-hacking and publication bias, can lead to a distorted set of results being published, potentially undermining the process of knowledge accumulation and evidence-based decision making. We take stock of the state of empirical research in the environmental sciences using 67,947 statistical tests obtained from 547 meta-analyses. We find that 59% of the p-values that were reported as significant are not actually expected to be statistically significant. The median power of these tests is between 6% and 12%, which is the lowest yet identified for any discipline. Only 8% of tests are adequately powered, with statistical power of 80% or more. Exploratory regressions suggest that increased statistical power and the use of experimental research designs reduce the extent of selective reporting. Differences between subfields can mostly be explained by methodological differences. To improve the environmental sciences evidence base, researchers should pay more attention to statistical power, but incentives for selective reporting may remain even with adequate statistical power. Ultimately, a paradigm shift towards open science is needed to ensure the reliability of published empirical research.
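The power figures quoted above are typically obtained by treating a meta-analytic summary estimate as the presumed true effect and asking how likely each study's test would be to detect it at the conventional 5% level. Below is a minimal sketch of that calculation; the function name study_power, the effect size of 0.2, and the listed standard errors are illustrative assumptions, not values or code from the paper.

```python
from scipy.stats import norm

def study_power(true_effect, se, alpha=0.05):
    """Approximate power of a two-sided z-test for one estimate,
    treating `true_effect` (e.g., a meta-analytic summary) as the
    true underlying effect and `se` as the study's standard error."""
    z_crit = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    ncp = abs(true_effect) / se        # noncentrality (signal-to-noise ratio)
    # P(|Z + ncp| > z_crit) for Z ~ N(0, 1)
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Illustrative use: share of studies reaching the conventional 80% power bar
hypothetical_ses = [0.10, 0.25, 0.40]
powers = [study_power(0.2, se) for se in hypothetical_ses]
share_adequately_powered = sum(p >= 0.8 for p in powers) / len(powers)
```

Applied across all tests in a meta-analysis, the median of such per-study power values and the share of studies at or above 80% power yield the kinds of summary statistics reported in the abstract.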
DOI
https://doi.org/10.32942/X24G6Z
Subjects
Life Sciences, Medicine and Health Sciences, Social and Behavioral Sciences
Dates
Published: 2023-01-25 05:13
Last Updated: 2023-01-25 10:13
License
CC BY-NC-ND 4.0 (Attribution-NonCommercial-NoDerivatives 4.0 International)