Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability
Original study
Plain English Summary
Here's a wake-up call for science itself. The authors tried to replicate their own splashy psychology finding and, with nearly 1,300 participants, got absolutely nothing. That personal experience fueled a deep dive into why science keeps producing results that don't hold up. The culprit? A publish-or-perish culture that rewards exciting, positive findings and buries boring-but-honest null results. They catalogue nine sneaky tricks (called questionable research practices) that researchers use, often without realizing it, like peeking at data early and stopping when it looks good, or only reporting the analyses that worked. Their fix is a bold transparency overhaul: share your data openly, pre-register your study plans before collecting data so you can't move the goalposts, and build systems that actually reward replication (re-running studies to verify them). This paper became a rallying cry for the open-science movement and remains the yardstick against which modern parapsychology research is judged.
Actual Paper Abstract
An academic scientist's professional success depends on publishing. Publishing norms emphasize novel, positive results. As such, disciplinary incentives encourage design, analysis, and reporting decisions that elicit positive results and ignore negative results. Prior reports demonstrate how these incentives inflate the rate of false effects in published science. When incentives favor novelty over replication, false results persist in the literature unchallenged, reducing efficiency in knowledge accumulation. Previous suggestions to address this problem are unlikely to be effective. For example, a journal of negative results publishes otherwise unpublishable reports. This enshrines the low status of the journal and its content. The persistence of false findings can be meliorated with strategies that make the fundamental but abstract accuracy motive (getting it right) competitive with the more tangible and concrete incentive (getting it published). We develop strategies for improving scientific practices and knowledge accumulation that account for ordinary human motivations and self-serving biases.
Research Notes
A foundational open-science manifesto that directly informs the library's meta-debate controversy (#10). Its QRP taxonomy provides a checklist for evaluating every empirical psi paper in the collection, and its reform proposals (pre-registration, open data) are the standards against which modern parapsychology studies are increasingly judged.
Publication norms in academic science emphasize novel, positive results, creating incentives that inflate false-positive rates and discourage replication. Drawing on the authors' own failed replication of a provocative embodied-cognition finding (original p = .01, N = 1,979; replication p = .59, N = 1,300), the paper catalogues nine common practices that increase publishability at the expense of accuracy, including optional stopping, selective reporting, HARKing, and avoidance of direct replication. Existing remedies (negative-results journals, education campaigns, reviewer vigilance) are judged insufficient. The proposed solutions restructure incentives around open data, open methods, open workflow with pre-registration, post-publication review, and Replication Value metrics, arguing that transparency and accountability can make the abstract accuracy motive competitive with the concrete publication motive.
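As a concrete illustration of the first of those practices, here is a minimal Python simulation (not from the paper; the batch size, sample cap, and seed are arbitrary assumptions) showing how optional stopping, i.e. peeking at accumulating data and stopping as soon as p < .05, inflates the false-positive rate even when the true effect is exactly zero:

import numpy as np
from scipy import stats

# Simulate a researcher who runs a test after every batch of participants
# and stops as soon as p < .05. The true mean is zero, so every
# "significant" result here is a false positive.
rng = np.random.default_rng(42)        # seed chosen arbitrarily
n_sims, n_start, n_max, batch = 5000, 20, 100, 10
false_positives = 0

for _ in range(n_sims):
    data = rng.standard_normal(n_max)  # null data: true mean is 0
    for n in range(n_start, n_max + 1, batch):
        _, p = stats.ttest_1samp(data[:n], 0.0)
        if p < .05:                    # peek, and stop if "significant"
            false_positives += 1
            break

print(f"False-positive rate with peeking: {false_positives / n_sims:.3f}")
# Typically lands above 0.10, more than double the nominal .05 level.

And a hypothetical sketch of the Replication Value idea, which prioritizes findings that are widely cited but imprecisely estimated; the weighting below is a placeholder assumption for illustration, not the authors' formula:

def replication_value(citations: int, total_n: int) -> float:
    """Toy RV: citation impact divided by a precision proxy (sqrt of pooled N).
    High citations plus small samples -> high priority for direct replication."""
    return citations / (total_n ** 0.5)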
Links
Related Papers
Cites
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant – Simmons, Joseph P. (2011)
- The "File Drawer Problem" and Tolerance for Null Results – Rosenthal, Robert (1979)
- How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data – Fanelli, Daniele (2009)
- Scientists behaving badly – Martinson, Brian C. (2005)
- Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi – Wagenmakers, Eric-Jan (2011)
Companion
- Registered Reports: A Method to Increase the Credibility of Published Results – Nosek, Brian A. (2014)
- Why Science Is Not Necessarily Self-Correcting – Ioannidis, John P. A. (2012)
- Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology – Meehl, Paul E. (1978)
- Options for Prospective Meta-Analysis and Introduction of Registration-Based Prospective Meta-Analysis – Watt, Caroline A. (2017)
- Estimating the Reproducibility of Psychological Science – Open Science Collaboration (2015)
More in Methodology
- Paranormal belief, conspiracy endorsement, and positive wellbeing: a network analysis
- Planning Falsifiable Confirmatory Research
- Addressing Researcher Fraud: Retrospective, Real-Time, and Preventive Strategies – Including Legal Points and Data Management That Prevents Fraud
- Quantum Aspects of the Brain-Mind Relationship: A Hypothesis with Supporting Evidence
- Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research
Cite this paper
Nosek, Brian A., Spies, Jeffrey R., & Motyl, Matt (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science. https://doi.org/10.1177/1745691612459058
@article{nosek_spies_motyl_2012_scientific_utopia,
title = {Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability},
author = {Nosek, Brian A and Spies, Jeffrey R and Motyl, Matt},
year = {2012},
journal = {Perspectives on Psychological Science},
doi = {10.1177/1745691612459058},
}