In the area of meta-science, I study the credibility of scientific research. I've developed a unified framework to measure and track transparency and various forms of replication, which guides the creation of transparency- and replication-related web tools for researchers (see CurateScience.org). As a trail-blazing meta-scientist, unafraid of controversy, I helped raise transparency and replication standards by leading several status-quo-disrupting initiatives.
As a side project, I am leading the development of JamBox, a voice-driven music app that lets musicians play their favorite songs with the fewest possible device disruptions. I use JamBox to play music as a daily spiritual practice (mainly as a cover-song musician; see BrainTicklerBand.com).
Curate Science transparency standard: Empirical
Gold/diamond/green open access
Open/public data when ethically possible
COIs/funding disclosures
EP LeBel (2011)
Saarbrücken, Germany: LAP Lambert Academic Publishing.
Public Study Materials
Public Data
Public Code
Inspired by the history of the development of instruments in the physical sciences, and by past psychology giants, the following dissertation aimed to advance basic psychological science by investigating the metric calibration of psychological instruments. The over-arching goal of the dissertation was to demonstrate that it is both useful and feasible to calibrate the metric of psychological instruments so as to render their metrics non-arbitrary. Concerning utility, a conceptual analysis was executed delineating four categories of proposed benefits of non-arbitrary metrics: (a) help in the interpretation of data, (b) facilitation of construct validity research, (c) contribution to theory development, and (d) facilitation of general accumulation of knowledge. With respect to feasibility, the metric calibration approach was successfully applied to instruments of seven distinct constructs commonly studied in psychology, across three empirical demonstration studies and re-analyses of other researchers' data. Extending past research, metric calibration was achieved in these empirical demonstration studies by finding empirical linkages between scores of the measures and specifically configured theoretically-relevant behaviors argued to reflect particular locations (i.e., ranges) of the relevant underlying psychological dimension. More generally, such configured behaviors can serve as common reference points to calibrate the scores of different instruments, rendering the metric of those instruments non-arbitrary. Study 1 showed a meaningful metric mapping between scores of a frequently used instrument to measure need for cognition and the probability of choosing to complete a cognitively effortful over a cognitively simpler task. Study 1 also found an interesting metric linkage between scores of a practically useful self-report measure of task persistence and actual persistence in an anagram persistence task. Study 2, set in the context of the debate over pan-cultural self-enhancement, found theoretically interesting metric mappings between a trait rating measure of self-enhancement often used in the debate and a specifically configured behavioral measure of self-enhancement (i.e., over-claiming of knowledge). Study 3 demonstrated the metric calibration approach for popular behavioral measures of risk-taking often used in experimental studies and found meaningful metric linkages to risky gambles in binary lottery choices involving the possibility of winning real money. Re-analyses of relevant datasets shared by other researchers also revealed meaningful metric mappings for instruments assessing extraversion, conscientiousness, and self-control: Gregariousness facet scores (extraversion) were empirically linked to the number of social parties attended per month, Dutifulness facet scores (conscientiousness) were connected to maximum driving speed, and trait self-control scores were calibrated to GPA. In addition, to further demonstrate the utility of non-arbitrary metrics for basic psychological research, some of my preliminary metric calibration findings were applied to actual research findings from the literature. Limitations and obstacles of metric calibration and promising future directions are also discussed.
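To make the calibration logic concrete, here is a minimal sketch of how a scale score can be empirically linked to a configured behavioral reference point, using the need-for-cognition example from Study 1. The data and modeling choices are illustrative assumptions, not the dissertation's actual materials or code:

```python
# Illustrative sketch only (not the dissertation's code or data): link
# need-for-cognition (NFC) scale scores to a behavioral reference point,
# here the probability of choosing a cognitively effortful task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: NFC scores (1-5) and observed task choices
# (1 = chose the effortful task, 0 = chose the simpler task).
nfc = rng.uniform(1, 5, size=300)
choice = rng.binomial(1, p=1 / (1 + np.exp(-(nfc - 3))))

# Empirically link scale scores to the behavioral reference point.
model = LogisticRegression().fit(nfc.reshape(-1, 1), choice)

# The calibrated (non-arbitrary) meaning of a score: its predicted
# probability of the theoretically relevant behavior.
for score in (2.0, 3.0, 4.0):
    p = model.predict_proba([[score]])[0, 1]
    print(f"NFC = {score:.1f} -> P(effortful choice) = {p:.2f}")
```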
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (Canada, SSHRC doctoral fellowship # 767-2007-1425: EP LeBel)
Examiners: R Goffin, L Campbell, J O'Brien, B Donnellan
Prior experimental studies of anomalous information reception (AIR) have been touted as strong evidence for postmortem survival of consciousness, yet they are plagued by several methodological weaknesses that preclude clear interpretation of positive results. The present team undertook an adversarial collaboration to identify and compensate for the major limitations of these previous approaches. We outline a more rigorous pre-registered study design that eliminates or minimizes researcher bias in (a) data cleaning and (b) statistical analysis. Obtaining positive results with our recommended design would arguably yield data that skeptics and sympathetic researchers alike would agree are more clearly interpretable and offer stronger support for a survivalist interpretation. However, this proposed study is not intended to be definitive but rather only a next step in a research program that aims to improve on earlier published efforts. It would also admittedly be time-consuming and expensive to implement, as well as raise ethical considerations in utilizing vulnerable research populations. However, these costs are required to achieve the rigor necessary to advance scientific knowledge in survival research.
Funding sources: First author LeBel received a small research grant provided by the Institute for the Study of Anomalous and Religious Experience (ISRAE; https://www.israenet.org/). No other research funding was received.
Competing interests: No other competing interests to declare.
The importance of replication is becoming increasingly appreciated; however, considerably less consensus exists about how to evaluate the design and results of replications. We make concrete recommendations on how to evaluate replications with more nuance than is currently typical in the literature. We highlight six study characteristics that are crucial for evaluating replications: replication method similarity, replication differences, investigator independence, method/data transparency, analytic result reproducibility, and auxiliary hypotheses' plausibility evidence. We also recommend a more nuanced approach to statistically interpreting replication results at the individual-study and meta-analytic levels, and propose clearer language to communicate replication results.
EP LeBel conceived the general idea, drafted and revised the manuscript, and created the figure and table. W Vanpaemel, I Cheung, and L Campbell contributed to the conceptual development of the ideas, made substantial contributions to the writing and revision of the manuscript, and provided critical commentary and feedback. All authors approved the final submitted version of the manuscript.
Competing interests: EP LeBel was remunerated as an independent scientific consultant for a subset of his work on this manuscript (FASS research grant, Huron University College).
Funding sources: European Commission (Marie-Curie grant, Project ID: 793669: EP LeBel, W Vanpaemel), Huron University College (FASS research grant: I Cheung).
Editor: R Carlsson. Reviewers: MB Nuijten, U Schimmack
Societies invest in scientific studies to better understand the world and attempt to harness such improved understanding to address pressing societal problems. Published research, however, can be useful for theory or application only if it is credible. In science, a credible finding is one that has repeatedly survived risky falsification attempts. However, state-of-the-art meta-analytic approaches cannot determine the credibility of an effect because they do not account for the extent to which each included study has survived such attempted falsification. To overcome this problem, we outline a unified framework for estimating the credibility of published research by examining four fundamental falsifiability-related dimensions: (a) transparency of the methods and data, (b) reproducibility of the results when the same data-processing and analytic decisions are reapplied, (c) robustness of the results to different data-processing and analytic decisions, and (d) replicability of the effect. This framework includes a standardized workflow in which the degree to which a finding has survived scrutiny is quantified along these four facets of credibility. The framework is demonstrated by applying it to published replications in the psychology literature. Finally, we outline a Web implementation of the framework and conclude by encouraging the community of researchers to contribute to the development and crowdsourcing of this platform.
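As a rough illustration of the framework's bookkeeping, the four facets could be recorded per finding along the lines below. The field names and summary rule are our own simplification for exposition, not the framework's actual schema or scoring workflow:

```python
# Minimal sketch (illustrative assumptions, not the framework's schema):
# record the four falsifiability-related facets for a single finding.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CredibilityRecord:
    finding: str
    methods_transparent: bool                   # (a) transparency of methods/data
    data_public: bool
    analytically_reproducible: Optional[bool]   # (b) same decisions, same results?
    robust_to_alternatives: Optional[bool]      # (c) different decisions, same results?
    replications_attempted: int                 # (d) replicability
    replications_successful: int

    def survived_scrutiny(self) -> bool:
        """Crude summary: has the finding survived all checks applied so far?"""
        checks = [self.analytically_reproducible, self.robust_to_alternatives]
        replicated = (self.replications_attempted > 0
                      and self.replications_successful == self.replications_attempted)
        return all(c is not False for c in checks) and replicated

record = CredibilityRecord(
    finding="example effect",
    methods_transparent=True, data_public=True,
    analytically_reproducible=True, robust_to_alternatives=None,
    replications_attempted=2, replications_successful=1,
)
print(record.survived_scrutiny())  # False: one replication attempt failed
```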
E. P. LeBel conceived the general idea, drafted and revised the manuscript, created the figures, and executed the analytic-reproducibility checks and meta-analyses for the application of the framework to the infidelity-distress effect. W. Vanpaemel provided substantial contributions to the conceptual development of the ideas presented. W. Vanpaemel, R. J. McCarthy, B. D. Earp, and M. Elson provided critical commentary and made substantial contributions to writing and revising the manuscript. All authors approved the final submitted version of the manuscript.
Competing interests: None to declare.
Funding sources: Ministry of Culture and Science (Germany: M Elson)
Editor: S Vazire. Reviewers: Anonymous reviewer 1, Anonymous reviewer 2
Finkel, Eastwick, and Reis (2016; FER2016) argued that the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree that multiple scientific goals are essential but argue that a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that “bigger samples are generally better, but... that very large samples could have the downside of commandeering resources that would have been better invested in other studies” (abstract). We identify problematic assumptions involved in FER2016's modifications of our original research-economic model and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender potential trade-offs. Sufficiently powering studies (i.e., 80%) maximizes both research efficiency and confidence in the literature (research quality). Given that we are in agreement with FER2016 on all key open science points, we are eager to start seeing the accelerated rate of cumulative knowledge development of social psychological phenomena that such a sufficiently transparent, powered, and falsifiable approach will generate.
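The 80%-power recommendation is easy to make concrete. The following sketch (simple power arithmetic, not the article's research-economic model) computes the per-group sample sizes a two-group design needs to reach 80% power across plausible effect sizes:

```python
# Illustrative power arithmetic (not the article's model): per-group n
# needed for 80% power in a two-group t-test, across effect sizes d.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.4, 0.6, 0.8):
    n = analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"d = {d:.1f}: n per group ~ {n:.0f}")
```

With d = 0.4, roughly 100 participants per group are required; below that, much of what an underpowered study buys is false negatives.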
Curate Science transparency standard: Non-empirical
Gold/diamond/green open access
COIs/funding disclosures
EP LeBel, L Campbell, & TJ Loving (2017)
Journal of Personality and Social Psychology
Public Code
Several researchers recently outlined unacknowledged costs of open science practices, arguing these costs may outweigh benefits and stifle discovery of novel findings. We scrutinize these researchers' (a) statistical concern that heightened stringency with respect to false-positives will increase false-negatives and (b) metascientific concern that larger samples and executing direct replications engender opportunity costs that will decrease the rate of making novel discoveries. We argue their statistical concern is unwarranted given open science proponents recommend such practices to reduce the inflated Type I error rate from .35 down to .05 and simultaneously call for high-powered research to reduce the inflated Type II error rate. Regarding their metascientific concern, we demonstrate that incurring some costs is required to increase the rate (and frequency) of making true discoveries because distinguishing true from false hypotheses requires a low Type I error rate, high statistical power, and independent direct replications. We also examine pragmatic concerns raised regarding adopting open science practices for relationship science (preregistration, open materials, open data, direct replications, sample size); while acknowledging these concerns, we argue they are overstated given available solutions. We conclude that the benefits of open science practices outweigh costs for both individual researchers and the collective field in the long run, but that short-term costs may exist for researchers because of the currently dysfunctional academic incentive structure. Our analysis implies our field's incentive structure needs to change so that researchers' career interests are better aligned with the field's cumulative progress. We delineate recent proposals aimed at such incentive structure realignment.
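The statistical argument reduces to simple arithmetic on the positive predictive value of significant results. A minimal sketch, with an assumed prior that 25% of tested hypotheses are true (our illustrative number, not the article's):

```python
# Quick arithmetic behind the claim (illustrative numbers): with an
# inflated Type I error rate of .35 and low power, most "discoveries"
# are false; alpha = .05 plus high power flips this.
def ppv(alpha, power, prior_true=0.25):
    """Positive predictive value: P(effect is real | significant result)."""
    true_pos = power * prior_true
    false_pos = alpha * (1 - prior_true)
    return true_pos / (true_pos + false_pos)

print(f"alpha=.35, power=.35: PPV = {ppv(0.35, 0.35):.2f}")  # ~ .25
print(f"alpha=.05, power=.80: PPV = {ppv(0.05, 0.80):.2f}")  # ~ .84
```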
Finkel, Rusbult, Kumashiro, and Hannon (2002, Study 1) demonstrated a causal link between subjective commitment to a relationship and how people responded to hypothetical betrayals of that relationship. Participants primed to think about their commitment to their partner (high commitment) reacted to the betrayals with reduced exit and neglect responses relative to those primed to think about their independence from their partner (low commitment). The priming manipulation did not affect constructive voice and loyalty responses. Although other studies have demonstrated a correlation between subjective commitment and responses to betrayal, this study provides the only experimental evidence that inducing changes to subjective commitment can causally affect forgiveness responses. This Registered Replication Report (RRR) meta-analytically combines the results of 16 new direct replications of the original study, all of which followed a standardized, vetted, and preregistered protocol. The results showed little effect of the priming manipulation on the forgiveness outcome measures, but they also showed no effect of priming on subjective commitment, so the manipulation did not work as it had in the original study. We discuss possible explanations for the discrepancy between the findings from this RRR and the original study.
Curate Science transparency standard: Non-empirical
Gold/diamond/green open access
COIs/funding disclosures
EP LeBel (2015)
Collabra
In recent years, there has been a growing concern regarding the replicability of findings in psychology, including a mounting number of prominent findings that have failed to replicate via high-powered independent replication attempts. In the face of this replicability “crisis of confidence”, several initiatives have been implemented to increase the reliability of empirical findings. In the current article, I propose a new replication norm that aims to further boost the dependability of findings in psychology. Paralleling the extant social norm that researchers should peer review about three times as many articles as they themselves publish per year, the new replication norm states that researchers should aim to independently replicate important findings in their own research areas in proportion to the number of original studies they themselves publish per year (e.g., a 4:1 original-to-replication studies ratio). I argue this simple approach could significantly advance our science by increasing the reliability and cumulative nature of our empirical knowledge base, accelerating our theoretical understanding of psychological phenomena, instilling a focus on quality rather than quantity, and facilitating our transformation toward a research culture where executing and reporting independent direct replications is viewed as an ordinary part of the research process. To help promote the new norm, I delineate (1) how each of the major constituencies of the research process (i.e., funders, journals, professional societies, departments, and individual researchers) can incentivize replications and promote the new norm and (2) the obstacles each constituency faces in supporting the new norm.
Competing interests: None to declare.
Funding sources: None to declare.
Editor: S Vazire. Reviewers: K Corker, Anonymous reviewer 2
Excluded data (subjects/observations): Full details reported in article.
Experimental conditions: Full details reported in article.
Outcome measures: Full details reported in article.
Sample size determination: Full details reported in article.
Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
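One of the replication criteria reported here is directly computable: whether the original effect size falls inside the replication's 95% confidence interval. A sketch with made-up values, using the standard Fisher z interval for a correlation:

```python
# Illustrative check of one replication criterion (made-up numbers):
# is the original effect size inside the replication's 95% CI?
import numpy as np

r_orig, r_rep, n_rep = 0.40, 0.20, 120   # hypothetical effect sizes and n

z_rep = np.arctanh(r_rep)                # Fisher z transform
se = 1 / np.sqrt(n_rep - 3)              # standard error of z
lo, hi = np.tanh(z_rep - 1.96 * se), np.tanh(z_rep + 1.96 * se)

print(f"replication 95% CI for r: [{lo:.2f}, {hi:.2f}]; "
      f"original r = {r_orig} inside: {lo <= r_orig <= hi}")
```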
Correll (2008; Study 2, Journal of Personality and Social Psychology, 94, 48–59) found that instructions to use or avoid race information decreased the emission of 1/f noise in a weapon identification task (WIT). These results suggested that 1/f noise in racial bias tasks reflected an effortful deliberative process, providing new insights regarding the mechanisms underlying implicit racial biases. Given the potential theoretical and applied importance of understanding the psychological processes underlying implicit racial biases, and in light of the growing demand for independent direct replications of findings to ensure the cumulative nature of our science, we attempted to replicate Correll's finding in two high-powered studies. Despite considerable effort to closely duplicate all procedural and methodological details of the original study (i.e., same cover story, experimental manipulation, implicit measure task, original stimuli, task instructions, sampling frame, population, and statistical analyses), both replication attempts were unsuccessful in replicating the original finding, challenging the theoretical account that 1/f noise in racial bias tasks reflects a deliberative process. However, the emission of 1/f noise did consistently emerge across samples in each of our conditions. Hence, future research is needed to clarify the psychological significance of 1/f noise in racial bias tasks.
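For readers unfamiliar with the measure, 1/f noise emission in a reaction-time series is commonly quantified as the slope of log power on log frequency: a slope near -1 indicates 1/f noise, a slope near 0 white noise. A minimal sketch with synthetic data (not the original study's analysis code):

```python
# Illustrative sketch of how 1/f noise is commonly quantified (not the
# study's code): fit the slope of log(power) on log(frequency).
import numpy as np

rng = np.random.default_rng(1)
rt_series = rng.normal(500, 50, size=1024)  # stand-in for trial-by-trial RTs

freqs = np.fft.rfftfreq(rt_series.size)[1:]                       # drop DC
power = np.abs(np.fft.rfft(rt_series - rt_series.mean()))[1:] ** 2

slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
print(f"spectral slope = {slope:.2f}  (0 = white noise, -1 = 1/f noise)")
```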
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (SSHRC grant # 756-2011-0090, Canada: EP LeBel)
Excluded data (subjects/observations): Full details reported in article.
Experimental conditions: Full details reported in article.
Outcome measures: Full details reported in article.
Sample size determination: Full details reported in article.
Slepian, Masicampo, Toosi, and Ambady (2012, Study 1, Journal of Experimental Psychology: General, 141, 619–624) found that individuals recalling and writing about a big, meaningful secret judged a pictured hill as steeper than did those who recalled and wrote about a small, inconsequential secret (with estimates unrelated to physical effort unaffected). From an embodied cognition perspective, this result was interpreted as suggesting that important secrets weigh people down. Answering mounting calls for independent direct replications of published findings to ensure the self-correcting nature of our science, we sought to corroborate Slepian et al.'s finding in two extremely high-powered, preregistered studies that were very faithful to all procedural and methodological details of the original study (i.e., same cover story, study title, manipulation, measures, item order, scale anchors, task instructions, sampling frame, population, and statistical analyses). In both samples, we were unsuccessful in replicating the target finding. Although Slepian et al. reported three other studies supporting the secret burdensomeness phenomenon, we advise that these three other findings be independently corroborated before the general phenomenon informs theory or health interventions.
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (SSHRC grant # 756-2011-0090, Canada: EP LeBel)
Editor: C Moore. Reviewers: A Glenberg, Anonymous reviewer 2, Anonymous reviewer 3
Curate Science transparency standard: Empirical
Gold/diamond/green open access
Open/public data when ethically possible
COIs/funding disclosures
EP LeBel, D Borsboom, F Hasselman et al. (2013)
Perspectives on Psychological Science
Public Study Materials
Public Data
Public Code
There is currently an unprecedented level of doubt regarding the reliability of research findings in psychology. Many recommendations have been made to improve the current situation. In this article, we report results from PsychDisclosure.org, a novel open-science initiative that provides a platform for authors of recently published articles to disclose four methodological design specification details that are not required to be disclosed under current reporting standards but that are critical for accurate interpretation and evaluation of reported findings. Grassroots sentiment, as manifested in the positive and appreciative response to our initiative, indicates that psychologists want to see changes made at the systemic level regarding disclosure of such methodological details. Almost 50% of contacted researchers disclosed the requested design specifications for the four methodological categories (excluded subjects, nonreported conditions and measures, and sample size determination). Disclosed information provided by participating authors also revealed several instances of questionable editorial practices, which need to be thoroughly examined and redressed. On the basis of these results, we argue that the time is now for mandatory methods disclosure statements for all psychology journals, which would be an important step forward in improving the reliability of findings in psychology.
Past research on close relationships has increasingly focused on the assessment of implicit constructs to shed new light on relationship processes. However, virtually nothing is known about the role of such constructs in understanding ongoing affective and behavioral romantic realities and how implicit and explicit relationship constructs interact in the context of daily relationship outcomes. Using a 21-day diary approach, the present research examined the unique and interactive role of implicit partner evaluations and explicit partner perceptions on relationship outcomes (daily relationship quality and positive relationship behaviors enacted toward partner). Results showed that more positive implicit partner evaluations uniquely predicted more positive relationship outcomes during the 21-day diary period, but that this was especially pronounced in individuals who did not explicitly perceive their partner’s attributes in an overly positive manner. Implications for the close relationship literature are discussed.
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (SSHRC grant # 756-2011-0090, Canada: EP LeBel), SSHRC grant (Canada: L Campbell)
Editor: C Finkenauer. Reviewers: Anonymous reviewer 1
Curate Science transparency standard: Non-empirical
Gold/diamond/green open access
COIs/funding disclosures
SV Paunonen & EP LeBel (2012)
Journal of Personality and Social Psychology
Public Code
Past studies of socially desirable self-reports on the items of personality measures have found inconsistent effects of the response bias on the measures’ predictive validities, with some studies reporting small effects and other studies reporting large effects. Using Monte Carlo methods, we evaluated various models of socially desirable responding by systematically adding predetermined amounts of the bias to the simulated personality trait scores of hypothetical test respondents before computing test–criterion validity correlations. Our study generally supported previous findings that have reported relatively minor decrements in criterion prediction, even with personality scores that were massively infused with desirability bias. Furthermore, the response bias failed to reveal itself as a statistical moderator of test validity or as a suppressor of validity. Large differences between some respondents’ obtained test scores and their true trait scores, however, meant that the personality measure’s construct validity would be severely compromised and, more specifically, that estimates of those individuals’ criterion performance would be grossly in error. Our discussion focuses on reasons for the discrepant results reported in the literature pertaining to the effect of socially desirable responding on criterion validity. More important, we explain why the lack of effects of desirability bias on the usual indicators of validity, moderation, and suppression should not be surprising.
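The logic of the simulation design can be sketched in a few lines: infuse observed trait scores with varying amounts of desirability bias, then track the test-criterion validity correlation. This toy version (our own, not the article's simulation code) reproduces the qualitative pattern of relatively minor decrements:

```python
# Toy Monte Carlo in the spirit of the design (not the article's code):
# add desirability bias to trait scores, check criterion validity.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
true_trait = rng.normal(size=n)
# Criterion correlates .30 with the true trait.
criterion = 0.3 * true_trait + rng.normal(size=n) * np.sqrt(1 - 0.3**2)

desirability = rng.normal(size=n)   # individual differences in bias
for bias_weight in (0.0, 0.5, 1.0):
    observed = true_trait + bias_weight * desirability
    r = np.corrcoef(observed, criterion)[0, 1]
    print(f"bias weight {bias_weight:.1f}: validity r = {r:.3f}")
```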
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (Canada, SSHRC grant # 410-2010-2586: SV Paunonen), SSHRC (Canada, SSHRC doctoral fellowship # 767-2007-1425: EP LeBel)
Editor: L King. Reviewers: Anonymous reviewer 1, Anonymous reviewer 2
Curate Science transparency standard: Non-empirical
Gold/diamond/green open access
COIs/funding disclosures
EP LeBel & KR Peters (2011)
Review of General Psychology
In this methodological commentary, we use Bem’s (2011) recent article reporting experimental evidence for psi as a case study for discussing important deficiencies in modal research practice in empirical psychology. We focus on (a) overemphasis on conceptual rather than close replication, (b) insufficient attention to verifying the soundness of measurement and experimental procedures, and (c) flawed implementation of null hypothesis significance testing. We argue that these deficiencies contribute to weak method-relevant beliefs that, in conjunction with overly strong theory-relevant beliefs, lead to a systemic and pernicious bias in the interpretation of data that favors a researcher’s theory. Ultimately, this interpretation bias increases the risk of drawing incorrect conclusions about human psychology. Our analysis points to concrete recommendations for improving research practice in empirical psychology. We recommend (a) a stronger emphasis on close replication, (b) routinely verifying the integrity of measurement instruments and experimental procedures, and (c) using stronger, more diagnostic forms of null hypothesis testing.
Curate Science transparency standard: Non-empirical
Gold/diamond/green open access
COIs/funding disclosures
EP LeBel & SV Paunonen (2011)
Personality and Social Psychology Bulletin
Public Code
Implicit measures have contributed to important insights in almost every area of psychology. However, various issues and challenges remain concerning their use, one of which is their considerable variation in reliability, with many implicit measures having questionable reliability. The goal of the present investigation was to examine an overlooked consequence of this liability with respect to replication, when such implicit measures are used as dependent variables in experimental studies. Using a Monte Carlo simulation, the authors demonstrate that a higher level of unreliability in such dependent variables is associated with substantially lower levels of replicability. The results imply that this overlooked consequence can have far-reaching repercussions for the development of a cumulative science. The authors recommend the routine assessment and reporting of the reliability of implicit measures and also urge the improvement of implicit measures with low reliability.
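A toy version of the simulation logic (not the article's actual code): add measurement error to an experiment's dependent variable to hit a target reliability, then track how often a true effect replicates at p < .05:

```python
# Toy Monte Carlo (illustrative, not the article's simulation code):
# lower DV reliability -> lower replication rate for a true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def replication_rate(reliability, n=50, d=0.5, n_sims=2000):
    # With true-score variance 1, error variance 1/rel - 1 yields
    # the target reliability.
    error_sd = np.sqrt(1 / reliability - 1)
    hits = 0
    for _ in range(n_sims):
        ctrl = rng.normal(0, 1, n) + rng.normal(0, error_sd, n)
        expt = rng.normal(d, 1, n) + rng.normal(0, error_sd, n)
        if stats.ttest_ind(expt, ctrl).pvalue < 0.05:
            hits += 1
    return hits / n_sims

for rel in (0.9, 0.7, 0.5):
    print(f"reliability {rel:.1f}: replication rate = {replication_rate(rel):.2f}")
```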
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (Canada, SSHRC doctoral fellowship # 767-2007-1425: EP LeBel), SSHRC grant (Canada, grant # 410-2010-2586: SV Paunonen)
Editor: DA Stapel. Reviewers: Anonymous reviewer 1, Anonymous reviewer 2
A key question in the self-esteem literature involves the conditions under which implicit and explicit self-esteem correspond. The current investigation adds to this literature by using a novel strategy capitalizing on natural variation in self-report response latencies to shed further light on the conditions of implicit and explicit self-esteem consistency. The current study demonstrated that implicit and explicit self-esteem corresponded for highly accessible self-attitudes (as indexed by response latencies to the Rosenberg Self-Esteem Scale items, RSES; Rosenberg, 1965) whereas implicit and explicit self-esteem were virtually unrelated for less accessible self-attitudes. This effect was found using both the Name Letter Task (NLT; Nuttin, 1985) and the Self-Esteem Implicit Association Test (SE-IAT; Greenwald & Farnham, 2000) as measures of implicit self-esteem.
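The analytic approach is a standard moderation test: regress implicit self-esteem on explicit self-esteem, attitude accessibility, and their interaction. A sketch with synthetic data (the variable construction here is illustrative, not the study's code):

```python
# Illustrative moderation test (synthetic data, not the study's code):
# does explicit-implicit correspondence depend on accessibility?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
explicit = rng.normal(size=n)
accessibility = rng.normal(size=n)   # e.g., reverse-scored RSES latency
# Built-in pattern: correspondence only for accessible attitudes.
implicit = 0.3 * explicit * (accessibility > 0) + rng.normal(size=n)

df = pd.DataFrame(dict(implicit=implicit, explicit=explicit,
                       accessibility=accessibility))
model = smf.ols("implicit ~ explicit * accessibility", data=df).fit()
print(model.params)  # the explicit:accessibility term carries the moderation
```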
Competing interests: None to declare.
Funding sources: Social Science and Humanities Research Council (Canada, SSHRC doctoral fellowship # 767-2007-1425: EP LeBel)
Editor: JK Bosson. Reviewers: Anonymous reviewer 1, Anonymous reviewer 2
Curate Science transparency standard: Empirical
Gold/diamond/green open access
Open/public data when ethically possible
COIs/funding disclosures
EP LeBel & B Gawronski (2009)
European Journal of Personality
Public Data
Public Code
Although the name-letter task (NLT) has become an increasingly popular technique to measure implicit self-esteem (ISE), researchers have relied on different algorithms to compute NLT scores, and the psychometric properties of these differently computed scores have never been thoroughly investigated. Based on 18 independent samples, including 2690 participants, the current research examined the optimality of five scoring algorithms based on the following criteria: reliability; variability in reliability estimates across samples; types of systematic error variance controlled for; systematic production of outliers; and shape of the distribution of scores. Overall, an ipsatized version of the original algorithm exhibited the most optimal psychometric properties and is recommended for future research using the NLT.
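For readers who want the gist of the recommended scoring, here is our reading of the ipsatized algorithm's steps in code form; consult the article for the exact definition, as the details below are an assumption:

```python
# Sketch of the ipsatized NLT scoring logic as we read it (consult the
# article for the exact algorithm; function and steps are illustrative):
# 1) center each participant's letter ratings on their own mean rating,
# 2) subtract each initial's baseline (mean centered rating of that
#    letter among participants who do NOT have it in their initials),
# 3) average the two corrected initial ratings.
import numpy as np
import pandas as pd

def ipsatized_nlt(ratings: pd.DataFrame, initials: pd.Series) -> pd.Series:
    """ratings: participants x letters (A-Z); initials: e.g. 'EL' per participant."""
    centered = ratings.sub(ratings.mean(axis=1), axis=0)    # step 1: ipsatize
    scores = {}
    for pid, inits in initials.items():
        corrected = []
        for letter in inits:
            others = initials[~initials.str.contains(letter)].index
            baseline = centered.loc[others, letter].mean()  # step 2: baseline
            corrected.append(centered.loc[pid, letter] - baseline)
        scores[pid] = np.mean(corrected)                    # step 3: average
    return pd.Series(scores)
```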
Funding sources: Social Science and Humanities Research Council (Canada, SSHRC doctoral fellowship # 767-2007-1425: EP LeBel), Canada Research Chairs Program Grant (Canada, grant # 202555: B Gawronski), SSHRC (grant # 410-2008-2247: B Gawronski)
The current research investigated the role of spontaneous partner feelings (implicit partner affect) in the dynamics of relationship satisfaction, commitment, and romantic dissolution. Participants completed a variant of the name-letter task as a measure of implicit partner affect, and self-report measures of relationship satisfaction and commitment. Approximately 4 months later, participants were contacted to assess their current relationship status. Overall, participants showed a biased preference for their partner’s initials (after adjusting for proper baselines), indicating the presence of positive implicit partner affect. Participants with more positive implicit partner affect were more satisfied with, but not more committed to, their relationship. Additionally, implicit partner affect exerted a significant indirect effect on relationship stability. These effects were independent of relationship length, age, and gender. Implications for the role of automatic affective processes in relationship processes and the utility of indirect measures for shedding light on relationship dynamics are discussed.
Funding sources: Social Science and Humanities Research Council (Canada, SSHRC doctoral fellowship # 767-2007-1425: EP LeBel), SSHRC grant (Canada: L Campbell)
A common assumption in research on attitudes is that indirect measures assess relatively stable implicit attitudes, whereas traditional self-report measures assess more recently acquired explicit attitudes that coexist with old, presumably stable implicit attitudes. This assumption seems difficult to reconcile with research showing experimentally induced changes on implicit but not explicit measures. The present research tested a process-account of such asymmetrical patterns. Specifically, we argue that implicit measures show experimental effects that do not emerge on explicit measures when (a) the pairing of an attitude object with positive or negative valence creates new automatic associations in memory, and, at the same time, (b) the consideration of additional information about the attitude object eliminates the impact of automatic associations on self-reported evaluative judgments. Results from three studies support these predictions. Implications for research on attitude change are discussed.
Competing interests: None to declare.
Funding sources: Canada Research Chairs Program Grant (Canada, grant # 202555: B Gawronski), SSHRC (Canada, grant # 410-2005-1339: B Gawronski), Academic Development Fund (Canada, UWO, grant # 05-303: B Gawronski)
Experimental paradigms designed to assess ‘implicit’ representations are currently very popular in many areas of psychology. The present article addresses the validity of three widespread assumptions in research using these paradigms: that (a) implicit measures reflect unconscious or introspectively inaccessible representations; (b) the major difference between implicit measures and self-reports is that implicit measures are resistant or less susceptible to social desirability; and (c) implicit measures reflect highly stable, older representations that have their roots in long-term socialization experiences. Drawing on a review of the available evidence, we conclude that the validity of all three assumptions is equivocal and that theoretical interpretations should be adjusted accordingly. We discuss an alternative conceptualization that distinguishes between activation and validation processes.
Competing interests: None to declare.
Funding sources: Canada Research Chairs Program Grant (Canada, grant # 202555: B Gawronski), SSHRC (Canada, grant # 410-2005-1339: B Gawronski), Academic Development Fund (Canada, UWO, grant # 05-303: B Gawronski)
I formerly offered science consulting services in the following areas:
Transparency Curation & Labeling
For authors, curate the transparency of articles as per transparency badges (eg) or the transparency level of articles as per transparency standards (eg)
Curate & display the transparency of articles for journals (eg1; eg2; eg3), professors' publications for uni departments (eg), grantees' articles for funders (eg), or articles in open science platforms/academic search engines via custom user interfaces (UIs)
Credibility Curation & Evaluation
Curate the credibility of scientific evidence, including follow-up critical commentaries, robustness reanalyses, & replications at the article (eg1; eg2) or effect/hypothesis level (eg)
Advise legal teams and others on evaluating (& labeling) the credibility of scientific evidence (e.g., evidence cited in expert testimony affidavits or public policy papers)
Scientific Study Design
Design scientific studies to answer a question, test a hypothesis, or investigate a topic (any field; experimental or observational)
Clarify the question(s) & what specific aspect(s) of a topic you are most interested in shedding new light on
Designed a longitudinal study of open science behaviors for Berkeley's BITSS commissioned by an anonymous donor
Older Essays
EP LeBel (2005). The effect of subliminal self-affirmation on stereotype activation. Unpublished Honors thesis, University of Waterloo. PsyArXiv Preprint
EP LeBel (2004). Semantic priming in the dual task paradigm. Unpublished manuscript, University of Waterloo. PsyArXiv Preprint