Measuring the effects of publication bias in political science



Authors Justin Esarey, Ahra Wu
Journal/Conference Name Research & Politics
Paper Category
Paper Abstract Prior research finds that statistically significant results are overrepresented in scientific publications. If significant results are consistently favored in the review process, published results could systematically overstate the magnitude of their findings even under ideal conditions. In this paper, we measure the impact of this publication bias on political science using a new data set of published quantitative results. Although any measurement of publication bias depends on the prior distribution of empirical relationships, we determine that published estimates in political science are on average substantially larger than their true value under a variety of reasonable choices for this prior. We also find that many published estimates have a false positive probability substantially greater than the conventional α = 0.05 threshold for statistical significance if the prior probability of a null relationship exceeds 50%. Finally, although the proportion of published false positives would be reduced if si...
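The abstract's central quantities can be illustrated with a short R sketch. This is not the authors' replication code; it is a minimal example under assumed values for the prior probability of a null relationship, statistical power, the true effect size, and the standard error, showing (1) the posterior false positive probability of a significant result and (2) how conditioning publication on significance inflates the average published estimate.

```r
## Minimal sketch (not the paper's replication code). All numeric inputs
## below are assumptions chosen for illustration only.

## 1. False positive probability of a published significant result, given a
##    prior probability p0 that the null is true:
##    P(null | significant) = alpha * p0 / (alpha * p0 + power * (1 - p0))
false_positive_prob <- function(p0, power, alpha = 0.05) {
  (alpha * p0) / (alpha * p0 + power * (1 - p0))
}
false_positive_prob(p0 = 0.5, power = 0.8)  # ~0.059, just above alpha = 0.05
false_positive_prob(p0 = 0.7, power = 0.5)  # ~0.19, well above alpha

## 2. Exaggeration of published estimates when only significant results are
##    published: simulate estimates around an assumed true effect and keep
##    those with |z| > 1.96.
set.seed(1)
true_beta <- 0.2    # assumed true effect
se        <- 0.15   # assumed standard error
est       <- rnorm(1e5, mean = true_beta, sd = se)
published <- est[abs(est / se) > qnorm(0.975)]
mean(published) / true_beta  # average published estimate, inflated relative to the truth
```

Under these assumed values the surviving estimates average roughly twice the true effect, which is the kind of systematic overstatement the abstract describes.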
Date of publication 2016
Code Programming Language R
Comment
