A growing consensus has emerged that psychology faces a replicability “crisis of confidence”: across all areas of the field, a growing number of findings have failed to replicate in high-powered independent replication attempts. These include findings from evolutionary psychology (effects of ovulation on men’s testosterone [1]; sex differences in infidelity distress [2]) and from judgment and decision making (the unconscious thought advantage [3]; choice overload [4]).
Broader evidence of a general replicability problem comes from an ambitious and unprecedented large-scale effort, the Reproducibility Project, in which researchers were unable to replicate 60 out of 100 findings sampled from the 2008 issues of three major psychology journals. In another large-scale effort, only 30% (7 out of 23) of highly cited findings from cognitive and social psychology could be replicated.
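To put these headline rates in context, here is a minimal sketch (my illustration, not part of the original reports) that attaches Wilson 95% confidence intervals to the two replication proportions just cited:

```python
# Illustrative only: Wilson 95% confidence intervals for the reported
# replication rates (40/100 successes; 7/23 successes).
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

for label, k, n in [("Reproducibility Project", 40, 100),
                    ("Highly cited findings", 7, 23)]:
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} replicated, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Even with the sampling uncertainty these intervals express, both estimates sit well below what one would expect if most published findings were robust.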
Though there are different ways to interpret successful versus unsuccessful replication results, taken together these observations strongly suggest that psychology currently has a general replicability problem (as do several other areas of science, including the cancer cell biology and cardiovascular health literatures).
New Initiatives and Reforms
Several new initiatives have been launched to improve research practices and thereby increase the reliability of findings in psychology. For instance, higher reporting standards have recently been instituted at several prominent psychology journals (e.g., Psychological Science; Memory & Cognition; Attention, Perception, & Psychophysics; Psychonomic Bulletin & Review; Personality and Social Psychology Bulletin). Authors submitting a manuscript to these journals must now confirm that they have disclosed the basic methodological details critical for accurately evaluating and interpreting reported findings: all excluded observations, all tested experimental conditions, all assessed outcome measures, and the data collection termination rule.
There is also a significant push to incentivize “open data”, the public posting of the raw data underlying studies reported in a published article. For instance, at Psychological Science, authors who make their data publicly available now earn an open data badge that is prominently displayed alongside their published article. Furthermore, the new Journal of Open Psychology Data publishes data papers featuring publicly posted data sets. Such open data practices not only facilitate the independent verification of analyses and results that is so crucial to identifying errors, but also substantially facilitate meta-analyses and re-analyses from different theoretical perspectives, which can accelerate knowledge development.
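As a concrete illustration of the verification that open data enables, the following sketch re-computes a published group comparison from a posted data set; the file name, column names, and test are hypothetical stand-ins:

```python
# A minimal sketch of independent verification from open data.
# The file name, column names, and comparison below are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("posted_study_data.csv")  # hypothetical publicly posted data set

# Re-compute the key comparison exactly as reported in the paper.
treatment = df.loc[df["condition"] == "treatment", "outcome"]
control = df.loc[df["condition"] == "control", "outcome"]
t, p = stats.ttest_ind(treatment, control)

print(f"Re-analysis: t = {t:.2f}, p = {p:.4f}")
# Any mismatch with the published statistics flags a potential error
# worth querying with the original authors.
```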
In addition, several journals (e.g., Cortex; Perspectives on Psychological Science; Attention, Perception, & Psychophysics; Comprehensive Results in Social Psychology) now offer pre-registered publication options, whereby authors submit a study proposal that pre-specifies the methodology and analytical approach to be used to test a specific hypothesis. Proposals are evaluated on the soundness of the methodology and the theoretical importance of the research question. Once a proposal is accepted, the study is executed and the article is published regardless of the results, eliminating the questionable research practices and researcher biases that can grossly mischaracterize the evidence.
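The following sketch illustrates the core logic of such pre-registration: the analysis plan is frozen before data collection, and the analysis script runs only what was pre-specified. All names and numbers here are hypothetical:

```python
# Sketch of a pre-registered analysis: hypothesis, sample size, and test are
# fixed in advance; anything outside the plan is exploratory. Hypothetical values.
from scipy import stats

PREREGISTERED_PLAN = {
    "hypothesis": "Condition A > Condition B on the primary outcome",
    "n_per_condition": 120,  # fixed in advance (no optional stopping)
    "primary_outcome": "outcome",
    "test": "independent-samples t-test, two-tailed, alpha = .05",
}

def run_preregistered_analysis(df):
    """Run only the analysis named in the plan."""
    col = PREREGISTERED_PLAN["primary_outcome"]
    a = df.loc[df["condition"] == "A", col]
    b = df.loc[df["condition"] == "B", col]
    return stats.ttest_ind(a, b)
```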
Figure 4: Replication effect size estimates across labs.
A final development is the growing practice among prominent journals of publishing independent direct replication results, including replication results inconsistent with those originally published in the same journal (e.g., Psychological Science; Psychonomic Bulletin & Review; Journal of Research in Personality; Journal of Experimental Social Psychology; Social Psychological & Personality Science). Though calls to publish replication results have been made for decades, the systematic publication of such results by prominent journals is unprecedented and has immense potential to increase the reliability of findings in psychology. This practice directly incentivizes researchers to execute the independent replications so crucial to corroborating past findings and hence to accelerating theoretical progress. It may also reduce the tendency for researchers to report unexpected, exploratory, or tenuous results as confirmatory or conclusive findings.
Video 1: Brian Nosek (Center for Open Science) discusses the Reproducibility Project: Psychology (video source).
Though this final development is particularly exciting, many researchers remain afraid of, or unsure about, the social and career-related risks of executing and publishing independent replication results, given several recent high-profile cases in which the publication of replication results led to nasty threats, retaliation, and accusations of incompetence from original authors. This situation is a serious hurdle that substantially interferes with the development of a research culture in which executing and publishing independent direct replications is seen as a routine part of the research process rather than something done mostly by selfless “open science” psychologists. To overcome this hurdle, I propose a new replication norm that has the potential to substantially increase the execution and publication of the independent direct replications so important to ensuring a self-correcting, cumulative knowledge base. As Cohen propounded, “…we must finally rely, as have the older sciences, on replication” (p. 1002). Similarly, as Sir Ronald Fisher stated: “A scientific fact should be regarded as experimentally established only if a properly designed (independent) experiment rarely fails to give this level of significance [referring to p < .05]” (p. 504).
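Fisher’s criterion can be made concrete with a short simulation: assuming a true medium-sized effect and conventionally powered samples (illustrative values only, not drawn from any of the studies above), a properly designed independent replication should indeed rarely fail to reach p < .05:

```python
# Simulation of Fisher's point: with a real effect and adequate power,
# an independent replication should "rarely fail" to reach p < .05.
# Effect size and sample size are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, reps = 0.5, 64, 10_000  # true effect d = 0.5; n = 64 per group (~80% power)

hits = 0
for _ in range(reps):
    a = rng.normal(d, 1, n)   # "treatment" group with a true effect
    b = rng.normal(0, 1, n)   # control group
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits += 1

print(f"Replications reaching p < .05: {hits / reps:.1%}")  # roughly 80%
```

Under these assumptions roughly four of five high-powered replications succeed, which is precisely why observed success rates of 30–40% are so difficult to reconcile with a literature of robust effects.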