
Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology

πŸ“„ Original study β†—
Bierman, Dick J., Spottiswoode, James P., & Bijl, Aron • 2016 • Current Era • methodology

πŸ“Œ Appears in:

Plain English Summary

Scientists sometimes cut corners -- peeking at data early, tweaking analyses until something looks good. These 'questionable research practices' (QRPs) can inflate results across a field. So how much of the Ganzfeld telepathy results, where subjects score 31% when chance predicts 25%, can QRPs explain? This study simulated seven QRPs at realistic rates. The headline: QRPs account for roughly 60% of the effect -- hefty! But even after scrubbing that noise away, a small but stubbornly significant residual remains. This makes the paper a rare honest broker, giving ammunition to both skeptics and proponents: the evidence is messy but not fully explainable by sloppy methods.
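To make one of these QRPs concrete, here is a toy Monte Carlo of optional stopping ("peeking at data early"): the experimenter checks the hit rate partway through and stops if it looks good. All parameter names and rates here are illustrative assumptions for the sketch, not the paper's actual values.

```python
import random

def run_experiment(n_trials=40, peek_at=20, hit_p=0.25, stop_hr=0.35, rng=random):
    """Simulate one forced-choice experiment; optionally stop early on a 'good' peek."""
    hits = 0
    for trial in range(1, n_trials + 1):
        hits += rng.random() < hit_p          # each trial hits with probability hit_p
        if trial == peek_at and hits / trial >= stop_hr:
            return hits / trial               # QRP: stop early, keeping the lucky streak
    return hits / n_trials

random.seed(1)
n_sims = 20000
# Peek point beyond n_trials means the experimenter never peeks (no QRP).
no_qrp = sum(run_experiment(peek_at=10**9) for _ in range(n_sims)) / n_sims
with_qrp = sum(run_experiment() for _ in range(n_sims)) / n_sims
print(f"mean hit rate without optional stopping: {no_qrp:.3f}")
print(f"mean hit rate with optional stopping:    {with_qrp:.3f}")
```

Even though every simulated trial is pure chance, the optional-stopping arm reports a mean hit rate above 25%, because lucky early streaks get frozen in while unlucky runs are averaged out over the full experiment.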

Actual Paper Abstract

We describe a method of quantifying the effect of Questionable Research Practices (QRPs) on the results of meta-analyses. As an example we simulated a meta-analysis of a controversial telepathy protocol to assess the extent to which these experimental results could be explained by QRPs. Our simulations used the same numbers of studies and trials as the original meta-analysis and the frequencies with which various QRPs were applied in the simulated experiments were based on surveys of experimental psychologists. Results of both the meta-analysis and simulations were characterized by 4 metrics, two describing the trial and mean experiment hit rates (HR) of around 31%, where 25% is expected by chance, one the correlation between sample-size and hit-rate, and one the complete P-value distribution of the database. A genetic algorithm optimized the parameters describing the QRPs, and the fitness of the simulated meta-analysis was defined as the sum of the squares of Z-scores for the 4 metrics. Assuming no anomalous effect a good fit to the empirical meta-analysis was found only by using QRPs with unrealistic parameter-values. Restricting the parameter space to ranges observed in studies of QRP occurrence, under the untested assumption that parapsychologists use comparable QRPs, the fit to the published Ganzfeld meta-analysis with no anomalous effect was poor. We allowed for a real anomalous effect, be it unidentified QRPs or a paranormal effect, where the HR ranged from 25% (chance) to 31%. With an anomalous HR of 27% the fitness became F = 1.8 (p = 0.47 where F = 0 is a perfect fit). We conclude that the very significant probability cited by the Ganzfeld meta-analysis is likely inflated by QRPs, though results are still significant (p = 0.003) with QRPs. Our study demonstrates that quantitative simulations of QRPs can assess their impact. Since meta-analyses in general might be polluted by QRPs, this method has wide applicability outside the domain of experimental parapsychology.

Research Notes

First systematic simulation of multiple QRPs' combined impact on a parapsychological meta-analysis. Central to the Ganzfeld telepathy (#1) and meta-debate (#10) controversies. The conclusion that QRPs explain ~60% of the effect, but not all of it, makes this relevant to both pro-psi and skeptical positions.

Using Monte Carlo simulations and a genetic algorithm, the authors developed a method to quantify the impact of Questionable Research Practices (QRPs) on meta-analytic results. Applied to 78 post-1985 Ganzfeld telepathy experiments (3,494 trials, mean hit rate 31% vs. 25% chance), seven QRPs were modeled at prevalence rates drawn from published surveys of psychologists. With realistic QRP parameters and no anomalous effect, simulations failed to reproduce the empirical database (F = 10.15, p < 0.05). Allowing a 2% excess hit rate yielded an acceptable fit (F = 1.79, p = 0.47). QRPs explain approximately 60% of the reported effect size, but a residual effect remains significant (p = 0.003).
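The fitness measure the abstract describes, the sum of squared Z-scores across the four metrics (trial hit rate, mean experiment hit rate, sample-size/hit-rate correlation, and the P-value distribution), can be sketched as follows. All metric values and standard errors below are illustrative assumptions, not numbers from the paper.

```python
def z_score(simulated, empirical, standard_error):
    """Standardized deviation of a simulated metric from its empirical target."""
    return (simulated - empirical) / standard_error

def fitness(simulated_metrics, empirical_metrics, standard_errors):
    """F = sum of squared Z-scores over the metrics; F = 0 is a perfect fit."""
    return sum(
        z_score(s, e, se) ** 2
        for s, e, se in zip(simulated_metrics, empirical_metrics, standard_errors)
    )

# Toy example with four made-up metric values: a simulation that tracks the
# empirical database closely yields a small F (the paper reports F = 1.79
# when a 2% anomalous excess hit rate is allowed).
empirical = [0.31, 0.31, -0.20, 0.05]   # illustrative targets, one per metric
simulated = [0.30, 0.315, -0.18, 0.06]  # illustrative simulation output
ses = [0.01, 0.01, 0.05, 0.02]          # illustrative standard errors

print(round(fitness(simulated, empirical, ses), 3))
```

In the paper this fitness is what the genetic algorithm minimizes: QRP prevalence parameters are varied, each candidate parameter set is run through the Monte Carlo simulation, and parameter sets with lower F survive to the next generation.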


πŸ“‹ Cite this paper
APA
Bierman, D. J., Spottiswoode, J. P., & Bijl, A. (2016). Testing for questionable research practices in a meta-analysis: An example from experimental parapsychology. PLOS ONE. https://doi.org/10.1371/journal.pone.0153049
BibTeX
@article{bierman_2016_questionable_practices,
  title = {Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology},
  author = {Bierman, Dick J and Spottiswoode, James P and Bijl, Aron},
  year = {2016},
  journal = {PLOS ONE},
  doi = {10.1371/journal.pone.0153049},
}