This page lists 23 peer-reviewed papers in the research area of Crowdsourcing Research in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: RG Rinderknecht, L Doan
Year: 2025
Published in: Sociological ..., 2025 - journals.sagepub.com
Institution: RAND
Research Area: Crowdsourcing Research Methods, Time Use Studies, Social Science
Discipline: Sociology
Time use patterns of MTurk and Prolific respondents differ significantly from those of the general U.S. population (ATUS): respondents report less housework and care work and more time at home and alone, even after accounting for demographic differences.
Methods: Time diaries were collected and analyzed for 136 MTurk and 156 Prolific respondents, then compared with 468 ATUS responses.
Key Findings: Daily time use patterns, including work, housework, travel, leisure, and time spent alone or at home.
Citations: 6
Sample Size: 760
-
Authors: B Aksoy, S Nevo
Year: 2025
Published in: Participant Behavior and Motivations (March 21, 2025) - papers.ssrn.com
Institution: Rensselaer Polytechnic Institute
Research Area: Crowdsourcing Research, Participant Behavior
Discipline: Computational Social Science
Research on Prolific reveals that participant compensation significantly affects sample selection, potentially introducing bias, and offers insights into participant motivations and behavior that can improve study reliability and design.
Methods: A carefully designed experiment analyzed the relationships between participants' reservation wages, socioeconomic attributes, and study compensation; sensitivity analyses provided further guidance for study design (a toy simulation of this selection mechanism appears below).
Key Findings: Participant reservation wages, socioeconomic attributes, perceptions of general behavior and motivations, and implications of study design decisions.
Citations: 3
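The selection mechanism described above can be illustrated with a toy simulation. This is not the authors' model: the pool size, the income distribution, and the `reservation_wage` relationship below are all invented for illustration; the only idea taken from the paper is that participants opt in only when study pay meets their reservation wage.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical participant pool: reservation wage (minimum acceptable
# pay, $/hour) rises with income, a socioeconomic attribute. All
# distributions and constants here are illustrative assumptions.
n = 10_000
income = rng.lognormal(mean=10.5, sigma=0.6, size=n)          # annual income, $
reservation_wage = 2 + income / 20_000 + rng.normal(0, 1, n)  # $/hour

for pay in (4, 8, 16):  # study compensation, $/hour
    joiners = income[reservation_wage <= pay]
    print(f"pay=${pay}/h -> {len(joiners) / n:6.1%} of pool joins, "
          f"median income ${np.median(joiners):,.0f}")
```

Running this shows that low pay recruits mostly participants with low reservation wages (here, disproportionately lower-income ones), so the sample's socioeconomic composition shifts with compensation; that is the selection effect the study documents.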
-
Authors: J Li, E Huusko, NN Ahooie, M Kuutila
Year: 2025
Published in: International Journal of Human-Computer Interaction, 2025 - Taylor & Francis
Institution: University of Oulu
Research Area: Social Media Credibility, Human-Computer Interaction (HCI) in Social Media, Crowdsourcing Research
Discipline: Human-Computer Interaction (HCI)
Credtwi, a browser plugin for assessing tweet credibility, revealed that perceived credibility of Twitter content declines with use and that an author's verification status heavily influences perceived credibility.
Methods: A browser plugin was used for crowdsourced credibility assessment through participant questionnaires during a week-long field study.
Key Findings: Perceptions of online tweet credibility, factors affecting tweet credibility (e.g., verification status, bio), and variations in credibility assessments across genders.
DOI: https://doi.org/10.1080/10447318.2025.2480885
Citations: 2
Sample Size: 150
-
Authors: Y Ba, MV Mancenido, EK Chiou, R Pan
Year: 2025
Published in: Behavior Research Methods, 2025 - Springer
Institution: University of Delaware, National Taiwan University, University of British Columbia, Monash University
Research Area: Crowdsourcing, Data Quality, Spamming Behavior Detection, LLM Applications in Behavioral Research
Discipline: Computer Science, Artificial Intelligence
The paper introduces a systematic method for evaluating crowdsourced data quality and detecting spamming behavior through variance decomposition, proposing a spammer index and credibility metrics to improve the consistency and reliability of labeling tasks.
Methods: Variance decomposition, Markov chain models, and generalized random-effects models were used to assess annotator consistency and credibility; the metrics were applied to both simulated and real-world data from two crowdsourcing platforms (a simplified sketch of the variance-decomposition idea appears below).
Key Findings: Quality of crowdsourced data, spamming behavior, and annotator consistency and credibility.
Citations: 2
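As a loose illustration of the variance-decomposition idea, the sketch below scores each annotator by how little their labels track a leave-one-out consensus. The function name `spammer_index`, the consensus proxy, and the correlation-based score are assumptions made for illustration; the paper's actual index is built on Markov chain and generalized random-effects models.

```python
import numpy as np

def spammer_index(labels: np.ndarray) -> np.ndarray:
    """Toy spammer index for an (n_annotators, n_items) label matrix.

    Variance-decomposition intuition: a reliable annotator's labels
    co-vary with the item-level signal, while a spammer's labels are
    noise that ignores the items. The item signal is proxied here by
    the leave-one-out consensus of the other annotators.
    """
    n_annotators, _ = labels.shape
    scores = np.empty(n_annotators)
    for a in range(n_annotators):
        consensus = np.delete(labels, a, axis=0).mean(axis=0)
        if labels[a].std() == 0 or consensus.std() == 0:
            scores[a] = 1.0  # constant labeling carries no item signal
            continue
        r = np.corrcoef(labels[a], consensus)[0, 1]
        scores[a] = 1.0 - max(r, 0.0)  # 0 = tracks the crowd, 1 = pure noise
    return scores

# Example: three annotators follow the item signal, one answers at random.
rng = np.random.default_rng(0)
signal = rng.normal(size=50)
labels = np.vstack(
    [signal + rng.normal(scale=0.3, size=50) for _ in range(3)]
    + [rng.normal(size=50)]
)
print(spammer_index(labels).round(2))  # last score should be near 1
```

A score near 0 means the annotator co-varies with the crowd; a score near 1 flags labels that ignore the items, the signature of spamming behavior.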
-
Authors: D O'Connell, A Bautista
Year: 2025
Published in: ... Student Journal of ..., 2025 - journals.library.columbia.edu
Institution: University of Houston, Webster University
Research Area: Crowdsourcing Research Methodology, Human-Computer Interaction (HCI)
Discipline: Computational Social Science, Behavioral Research
Prolific outperforms MTurk in participant data quality and affordability for online survey-based research.
Methods: Data from participants recruited via MTurk and Prolific were analyzed for cost, attention measures, participation duration, and internal consistency.
Key Findings: Comparison of data quality and cost-effectiveness between MTurk and Prolific for online survey recruitment.
Citations: 1
Sample Size: 699
-
Authors: JS Michel, G Sawhney, GP Watson
Year: 2025
Published in: How to Conduct and ..., 2025 - elgaronline.com
Institution: Auburn University
Research Area: Crowdsourcing, Research Methods, Social Science
Discipline: Social Science
Crowdsourcing is a versatile tool that leverages collective intelligence for efficient task completion, with applications across fields including decentralized finance, blockchain technologies, and industrial-organizational (I-O) psychology research and practice.
Methods: The paper discusses the theoretical and practical applications of crowdsourcing in various domains, referencing prior work and examples such as Wikipedia, crowdfunding platforms, and blockchain networks.
Key Findings: The applications and impact of crowdsourcing in different fields, particularly its role in Industrial-Organizational Psychology for data collection and analysis.
Citations: 1
-
Authors: DT Esch, N Mylonopoulos, V Theoharakis
Year: 2025
Published in: Behavior Research Methods, 2025 - Springer
Institution: University of Cologne, University of Piraeus, Aristotle University of Thessaloniki
Research Area: Crowdsourcing Behavioral Research, Mobile Data Collection
Discipline: Behavioral Research
Mobile-based responses collected via platforms like Pollfish are comparable in quality to computer-based responses from MTurk and Prolific, though attentiveness varies significantly across platforms and is influenced by factors such as incentives, distractions, and System 1 thinking.
Methods: Two studies distributed the same survey across MTurk, Prolific, Pollfish, and Qualtrics panels to compare data quality and analyze attentiveness scores.
Key Findings: Attentiveness, device usage (mobile vs. computer), and factors influencing data quality such as incentives, respondent activity, distractions, and survey familiarity.
Citations: 1
-
Authors: C Heath, JM Williams, D Leightley
Year: 2025
Published in: JMIR mHealth and uHealth, 2025 - mhealth.jmir.org
Institution: Swansea University, King's College London, Reykjavík University
Research Area: mHealth Interventions, Crowdsourcing, Social Media Recruitment, Mental Health Research (PTSD, Harmful Gambling)
Discipline: Digital Health, Mental Health Research
Social media and online platforms such as Facebook and Prolific proved effective for recruiting military veterans with PTSD or harmful gambling into a digital mHealth intervention pilot study, though recruitment and retention remained challenging.
Methods: Multiple recruitment strategies were used, including paid and unpaid advertisements on Facebook, Prolific, direct mailing, event hosting with veterans' charities, snowball sampling, and incentives.
Key Findings: The effectiveness of different recruitment strategies for enrolling military veterans with PTSD or harmful gambling into a digital intervention study.
Sample Size: 79
-
Authors: F Joessel, S Denkinger, PE Joessel, CS Green
Year: 2025
Published in: Acta Psychologica, 2025 - Elsevier
Institution: Max Planck Institute, University of Potsdam, University of Maryland, University of Zurich, University of Arizona
Research Area: Online cognitive training, Automated psychological studies, Crowdsourcing, behavioral research
Discipline: Psychology
The study introduces a fully online method for conducting cognitive training experiments using Prolific, significantly reducing resource demands while achieving robust results and diverse participant recruitment.
Methods: Participants were recruited via Prolific, assigned to groups using a pseudo-randomized procedure (one common variant is sketched below), and completed a 12-hour remote cognitive training study with pre- and post-test assessments monitored via custom dashboards.
Key Findings: Impact of a 12-hour cognitive training intervention on participants' cognitive functions, conducted in a remote and automated manner.
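For readers unfamiliar with the term, "pseudo-randomized" assignment often means blocked randomization, which keeps group sizes balanced as participants trickle in from Prolific. The sketch below shows that common variant; the paper's exact procedure may differ, and `pseudo_randomize` and its parameters are illustrative assumptions.

```python
import random

def pseudo_randomize(participants, n_groups=2, block_size=4, seed=7):
    """Blocked randomization: within each consecutive block of
    arrivals, group labels are shuffled, so group sizes stay nearly
    equal over time even if recruitment stops early."""
    rng = random.Random(seed)
    assignment = {}
    for start in range(0, len(participants), block_size):
        block = participants[start:start + block_size]
        labels = [i % n_groups for i in range(len(block))]
        rng.shuffle(labels)  # random order within the block
        for pid, group in zip(block, labels):
            assignment[pid] = group
    return assignment

# Example with eight hypothetical participant IDs.
print(pseudo_randomize([f"P{i:02d}" for i in range(8)]))
```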
-
Authors: DA Albert, D Smilek
Year: 2023
Published in: Scientific Reports, 2023 - nature.com
Institution: University of Waterloo
Research Area: Crowdsourcing Research Methods, Behavioral Science, Human-Computer Interaction (HCI)
Discipline: Psychological Science
Prolific participants exhibited lower attentional disengagement than MTurk participants, with risk conditions and platform characteristics influencing task performance and disengagement.
Methods: Participants from Prolific and MTurk completed an attention task with varying error risk levels (high vs. low), and attentional disengagement was measured using task performance, self-reported mind wandering, and multitasking.
Key Findings: Attentional disengagement through task performance, mind wandering, and multitasking under different risk conditions across two recruitment platforms (Prolific and MTurk).
Citations: 150
Sample Size: 80
-
Authors: C Clemmow, I van der Vegt, B Rottweiler
Year: 2024
Published in: Terrorism and Political Violence, 2024 - Taylor & Francis
Institution: University College London
Research Area: Crowdsourcing for Violent Extremism Research
Discipline: Computational Social Science
Citations: 12
-
Authors: E Christoforou, G Demartini
Year: 2024
Published in: Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), 2024 - ojs.aaai.org
Institution: University of Sheffield, University of Southampton
Research Area: Crowdsourcing, Generative AI, Web and Social Media Research, LLM
Discipline: Artificial Intelligence
DOI: https://doi.org/10.1609/icwsm.v18i1.31452
Citations: 10
-
Authors: A Berke, R Mahari, A Pentland, K Larson
Year: 2024
Published in: Proceedings of the ACM ..., 2024 - dl.acm.org
Institution: Stanford's CodeX Center, Harvard Law School, MIT Media Lab, Stanford Institute for Human-Centered AI, The Larson Institute, Massachusetts Institute of Technology, Stanford University
Research Area: Crowdsourcing, Transparency, Human-Computer Interaction (HCI) in Social Science Research
Discipline: Computational Social Science, Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1145/3687005
Citations: 9
-
Authors: KD Wang, Z Chen, C Wieman
Year: 2024
Published in: Proceedings of the 14th Learning Analytics and Knowledge Conference (LAK), 2024 - dl.acm.org
Institution: Delft University of Technology, University of Queensland
Research Area: Crowdsourcing for Educational Research
Discipline: Educational Research, Computer Science
Citations: 8
-
Authors: Eyal Peer
Year: 2024
Published in: Cambridge University Press, 2024
Institution: Hebrew University, University of Cambridge
Research Area: Crowdsourcing, Research Methodology in Behavioral and Social Sciences
Discipline: Social, Behavioral Sciences
Citations: 7
-
Authors: M Hirth, J Jacques, P Rodgers, O Scekic
Year: 2017
Published in: Evaluation in the Crowd ..., 2017 - Springer
Research Area: Crowdsourcing, Research Methodology, Human-Computer Interaction (HCI) in Academic Research
Discipline: Social Science, Human-Computer Interaction (HCI), Research Methodology
Citations: 15
-
Authors: R Kapitany, C Kavanagh
Year: 2023
Published in: OSF, 2023
Research Area: Crowdsourcing ethics, best practices in behavioral science research
Discipline: Behavioral Science
Citations: 4
-
Authors: SG Lopez, SV Rouse
Year: 2023
Published in: Psi Chi Journal of Psychological Research, 2023 - search.ebscohost.com
Research Area: Crowdsourcing, Mechanical Turk in Psychological Research
Discipline: Psychological Research
Citations: 2
-
Authors: D Russo
Year: 2022
Published in: arXiv preprint arXiv:2203.14695, 2022 - arxiv.org
Institution: Università della Svizzera italiana
Research Area: Crowdsourcing, Software Engineering Research, Participant Recruitment
Discipline: Software Engineering
Citations: 30
-
Authors: J Oppenlaender, T Abbas, U Gadiraju
Year: 2024
Published in: Proceedings of the ACM on Human-Computer Interaction, 2024 - dl.acm.org
Institution: University of Oulu, University of Tübingen, Aston University, Delft University of Technology
Research Area: Crowdsourcing, Research Best Practices, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
The paper shows that pilot studies are widely used but poorly reported in crowdsourcing and HCI research, making studies harder to interpret, replicate, and trust. It proposes clear reporting guidelines to improve transparency, rigor, and data quality in human-subject experiments.
Citations: 6