Data and Code for: Unpacking P-hacking and Publication Bias
Principal Investigator(s): Abel Brodeur, University of Ottawa; Scott Carrell, University of Texas at Austin; David Figlio, University of Rochester; Lester Lusher, University of Pittsburgh
Version: V1
Name | File Type | Size | Last Modified |
---|---|---|---|
 | application/x-stata-dta | 3.7 MB | 06/07/2023 04:24 PM |
 | application/x-stata-dta | 4.8 MB | 06/07/2023 04:24 PM |
 | application/x-stata-dta | 659.1 KB | 10/25/2023 06:33 PM |
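The deposited files are Stata datasets (.dta), which can be read without a Stata installation. A minimal sketch using pandas is below; the filename is hypothetical, since the file names did not carry over to this listing.

```python
# Minimal sketch for loading one of the deposited Stata (.dta) files.
# The filename "test_statistics.dta" is hypothetical -- substitute the
# actual file name from the deposit.
import pandas as pd

# pd.read_stata parses .dta files directly into a DataFrame
tests = pd.read_stata("test_statistics.dta")
print(tests.shape)   # number of rows (test statistics) and columns
print(tests.dtypes)  # variable names and types
```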
Project Description
Summary:
We use unique data from journal submissions to identify and unpack publication bias and p-hacking. We find that initial submissions display significant bunching, suggesting that the distribution of published test statistics cannot be fully attributed to publication bias in peer review. Desk-rejected manuscripts display greater heaping than those sent for review; i.e., marginally significant results are more likely to be desk rejected. Reviewer recommendations, in contrast, are positively associated with statistical significance. Overall, the peer review process has little effect on the distribution of test statistics. Lastly, we track rejected papers and present evidence that the prevalence of publication bias is perhaps not as prominent as feared.
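As an illustration of the kind of diagnostic the summary describes (not the authors' estimation procedure), bunching can be inspected by comparing the mass of |z| statistics just above versus just below the conventional 5% threshold of 1.96. A minimal sketch, assuming a hypothetical column of z-statistics:

```python
# Minimal sketch of a bunching check around the 5% threshold (|z| = 1.96).
# This is an illustration, not the paper's method; the z-statistic series
# passed in is assumed to come from the deposited data.
import numpy as np
import pandas as pd

def bunching_ratio(z: pd.Series, threshold: float = 1.96,
                   width: float = 0.2) -> float:
    """Ratio of test statistics just above vs. just below the threshold.

    Under a smooth, declining density there is no bunching and the ratio
    sits below 1; heaping of marginally significant results at the
    threshold pushes it above 1.
    """
    z = z.abs()
    below = ((z >= threshold - width) & (z < threshold)).sum()
    above = ((z >= threshold) & (z < threshold + width)).sum()
    return above / below if below else np.nan

# Sanity check with simulated standard-normal z-stats (no bunching):
# the declining normal density gives a ratio of roughly 0.68 here.
rng = np.random.default_rng(0)
z_sim = pd.Series(rng.standard_normal(100_000))
print(f"simulated bunching ratio: {bunching_ratio(z_sim):.2f}")
```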
Scope of Project
Subject Terms:
Publication bias;
p-hacking
JEL Classification:
A11 Role of Economics; Role of Economists; Market for Economists
C13 Estimation: General
C40 Econometric and Statistical Methods: Special Topics: General
Data Type(s):
other
Methodology
Data Source:
Our sample of data from the JHR contains all manuscripts submitted for review from 2013 to 2018. During this time frame, there were 2,365 submissions that were desk rejected, 1,018 submissions that were rejected after receiving reviewer recommendations (i.e., "reviewer rejections"), and 223 (eventually) accepted manuscripts. We then kept a random sample of 250 desk rejections, 250 reviewer rejections, and all 223 accepted manuscripts, stratified by year of submission. Lastly, upon reading the papers, we removed manuscripts that did not contain a clear experimental or quasi-experimental statistical inference (difference-in-differences, instrumental variables, regression discontinuity, and/or randomized controlled trials and experiments). A sketch of the stratified sampling step appears below.
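A minimal sketch of the stratified draw described above, using pandas. The frame and column names (`desk_rejects`, `year`) are hypothetical, and the proportional allocation across years is an assumption about how the stratification was implemented:

```python
# Minimal sketch of sampling ~n_total manuscripts stratified by submission
# year. Names and proportional allocation are assumptions for illustration.
import pandas as pd

def stratified_sample(df: pd.DataFrame, n_total: int,
                      strata: str = "year", seed: int = 42) -> pd.DataFrame:
    """Draw approximately n_total rows, allocated proportionally by stratum."""
    frac = n_total / len(df)
    return (df.groupby(strata, group_keys=False)
              .apply(lambda g: g.sample(frac=frac, random_state=seed)))

# e.g., ~250 of the 2,365 desk rejections, stratified by submission year:
# desk_sample = stratified_sample(desk_rejects, n_total=250, strata="year")
```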
Unit(s) of Observation:
Test statistic