schiekiera/metascience_experiment_psychology


Publication bias in psychology

Overview

This repository contains anonymized data from a metascientific experiment on the influence of publication bias on psychologists' assessments of clinical psychology abstracts. The corresponding paper has been accepted as a Stage 1 registered report (pre-data collection) in Advances in Methods and Practices in Psychological Science.


How to use the data

The scripts load the data directly from the GitHub repository, so no manual download is required. All main hypothesis tests are fully reproducible. Parts of the data for the exploratory analyses are anonymized because they contain sensitive information (e.g., age, gender, conscientiousness).
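As a minimal sketch of this loading pattern in Python (the file name `data_main.csv` and the branch name `main` are assumptions for illustration; check the repository for the actual file paths):

```python
# Build the raw.githubusercontent.com URL for a file in a GitHub repository,
# so the data can be read directly over HTTPS without a manual download.

REPO = "schiekiera/metascience_experiment_psychology"

def raw_github_url(repo: str, path: str, branch: str = "main") -> str:
    """Return the raw-content URL for a file in a GitHub repository."""
    return f"https://raw.githubusercontent.com/{repo}/{branch}/{path}"

# Hypothetical file name -- replace with an actual data file from the repo.
url = raw_github_url(REPO, "data_main.csv")

# With pandas installed, the data could then be loaded directly:
#   import pandas as pd
#   df = pd.read_csv(url)
print(url)
```

The same URL works from R as well (e.g., `read.csv(url)`), which is how analysis scripts in psychology repositories typically pull the data.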


Abstract

Background:

Publishing all study results, including non-significant and hypothesis-inconsistent findings, is central to scientific knowledge production. Yet review studies suggest that results that are statistically significant or consistent with hypotheses are preferred both in the publication process and in reception. Questions remain about the mechanisms underlying publication bias, and previous work has relied on between-subjects designs rather than within-subjects experiments. To address this, we conducted a within-subjects experimental study based on Dual Process Theories of decision making.

Methods:

Across four online experiments, clinical psychology researchers (n = 303) evaluated 16 fictitious research abstracts. Participants first gave fast, intuitive evaluations of each abstract’s likelihood of being submitted for publication, read, or cited. They then rated their Feeling of Rightness (FOR) about that response and finally provided a more considered evaluation after deliberation. Abstracts varied systematically in statistical significance and hypothesis-consistency. We used multilevel and mediation models to analyze how these factors influenced intuitive and considered evaluations, and whether FOR mediated any changes.

Results:

Researchers rated statistically non-significant abstracts as less likely to be submitted, read, or cited than significant ones. No such bias was found for hypothesis-inconsistent results. In most cases, initial intuitive evaluations were not revised after deliberation, and FOR did not predict decision changes.

Conclusion:

The findings suggest a persistent bias against statistically non-significant results in researchers' publication, citation, and reception decisions, whereas hypothesis-consistency does not seem to affect these decisions. Contrary to expectations, deliberation and FOR played only minor roles in explaining them.


Keywords

file drawer, null hypothesis, publication bias, decision making, heuristics, preferences, meta-science, positive results

About

This repository contains anonymized data from a metascientific experiment on publication bias in clinical psychologists' decision making.
