This page lists 48 peer-reviewed papers (2024–2025) in the research area of Crowdsourcing from the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: J Li, M Kuutila, E Huusko, N Kariyakarawana
Year: 2025
Published in: Proceedings of the 15th ..., 2023 - dl.acm.org
Institution: University of Oulu
Research Area: Social Media Credibility, Crowdsourcing, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Credibility of short-form health-related social media posts is influenced by factors such as author profession and post engagement metrics, with experts being encouraged to actively participate in information correction online.
Methods: Crowdsourced online credibility assessment using health-themed social media posts with varied content features deployed across three platforms; quantitative and qualitative data collection.
Key Findings: Credibility factors like author profession, engagement metrics (likes/shares), and personal strategies influencing perceived trustworthiness of social media posts.
DOI: 10.1145/3605390.3605406
Citations: 11
-
Authors: P Thwaites, N Vandeweerd, M Paquot
Year: 2025
Published in: Applied Linguistics, 2025 - academic.oup.com
Institution: UCLouvain, Radboud University Nijmegen, Fonds de la Recherche Scientifique – FNRS
Research Area: Applied Linguistics, Educational Assessment, Crowdsourcing
Discipline: Applied Linguistics
The study demonstrates that crowdsourcing platforms can recruit judges to evaluate learner texts with reliability and validity comparable to assessments conducted by trained linguists.
Methods: Judges recruited via an online crowdsourcing platform conducted comparative judgement assessments of learner texts to measure writing proficiency.
Key Findings: Reliability and concurrent validity of learner text evaluations performed via crowdsourced judges compared to linguist evaluations.
Citations: 10
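The comparative judgement method this study relies on can be sketched as a Bradley–Terry model over pairwise "text A is better than text B" decisions. The function below is an illustration only, not the authors' implementation; the name, the simple iterative-scaling fit, and the normalization step are all assumptions.

```python
# Minimal Bradley-Terry sketch for comparative judgement of learner texts.
# Illustrative only: the paper's actual estimation procedure may differ.
from collections import defaultdict

def bradley_terry(wins, items, iters=200):
    """Estimate latent quality scores from pairwise judgements.

    wins: list of (winner, loser) pairs; items: iterable of item ids.
    Returns a dict of non-negative strengths (larger = judged better).
    """
    strength = {i: 1.0 for i in items}
    win_count = defaultdict(int)
    pair_count = defaultdict(int)
    for w, l in wins:
        win_count[w] += 1
        pair_count[frozenset((w, l))] += 1
    for _ in range(iters):
        new = {}
        for i in items:
            # Denominator of the standard MM update for Bradley-Terry.
            denom = sum(pair_count[frozenset((i, j))] /
                        (strength[i] + strength[j])
                        for j in items
                        if j != i and pair_count[frozenset((i, j))])
            new[i] = win_count[i] / denom if denom else strength[i]
        # Rescale so strengths stay comparable across iterations.
        total = sum(new.values())
        strength = {i: s * len(items) / total for i, s in new.items()}
    return strength
```

For example, if judges prefer text A over B twice, B over C once, and A over C once, the fitted strengths rank A above B above C, mirroring how comparative judgement converts many quick pairwise calls into a proficiency scale.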
-
Authors: RG Rinderknecht, L Doan
Year: 2025
Published in: Sociological ..., 2025 - journals.sagepub.com
Institution: RAND
Research Area: Crowdsourcing Research Methods, Time Use Studies, Social Science
Discipline: Social Science
Time use patterns of MTurk and Prolific respondents differ significantly from the general U.S. population (ATUS): they report less housework and care work and more time at home and alone, even after accounting for demographic differences.
Methods: Time diaries were collected and analyzed for 136 MTurk and 156 Prolific respondents, then compared with 468 ATUS responses.
Key Findings: Daily time use patterns including work, housework, travel, leisure, and time spent alone or at home.
Citations: 6
Sample Size: 760
-
Authors: TK Koh
Year: 2025
Published in: Organization Science, 2025 - pubsonline.informs.org
Institution: University of North Carolina Chapel Hill
Research Area: Crowdsourcing Contests, Feedback Use, Priming Intervention, Organizational Science
Discipline: Behavioral Sciences
The paper examines how solvers in crowdsourcing contests prioritize feedback from seekers over feedback from peers, even when both are equally constructive, and proposes an intervention to improve feedback usage for better outcomes.
Methods: The study involved a field survey and three online experiments to test the theorized source effect and the proposed feedback evaluation intervention.
Key Findings: Solvers' feedback usage patterns, the source effect of feedback (seeker vs. peer), and the influence of feedback constructiveness on idea quality and solvers’ winning prospects.
Citations: 5
-
Authors: N Byrd
Year: 2025
Published in: Analysis, 2025 - academic.oup.com
DOI: 10.1093/analys/anaf015
Preprint: https://osf.io/preprints/psyarxiv/y8sdm
Institution: Stevens Institute of Technology
Research Area: Behavioral Research Methods, Experimental Psychology, Crowdsourcing Platforms
Discipline: Psychology
Reflective reasoning correlates with certain philosophical decisions, and the study suggests bidirectional causal paths between reflection and philosophy; test order affected reflection test outcomes but not philosophical decisions.
Methods: Participants from four sources (Amazon Mechanical Turk, CloudResearch, Prolific, and a university) were tested on reflective reasoning and their decisions on 10 philosophical thought experiments.
Key Findings: Impact of reflective reasoning on philosophical decisions and the effect of test order on reflection and philosophy outcomes.
Citations: 4
-
Authors: L Gienapp, T Hagen, M Fröbe, M Hagen, B Stein, M Potthast, H Scells
Year: 2025
Published in: ArXiv
Institution: Bauhaus-Universitat Weimar, Friedrich-Schiller-Universitat Jena, Leipzig University, University of Kassel, ScaDS.AI, hessian.AI
Research Area: Crowdsourcing, RAG Evaluation, Artificial Intelligence, AI Evaluation, RAG
Discipline: Artificial Intelligence
The study investigates the feasibility of using crowdsourcing for RAG evaluation, finding that human pairwise judgments are reliable and cost-effective compared to LLM-based or automated methods.
Methods: Two complementary studies on response writing and response utility judgment using 903 human-written and 903 LLM-generated responses for 301 topics; pairwise judgments across seven utility dimensions were collected via human and LLM evaluators.
Key Findings: Human effectiveness in writing and judging responses in RAG scenarios, considering discourse styles and utility dimensions like coverage and coherence.
Citations: 4
Sample Size: 903
-
Authors: B Aksoy, S Nevo
Year: 2025
Published in: Participant Behavior and Motivations (March 21 ..., 2025 - papers.ssrn.com
Institution: Rensselaer Polytechnic Institute
Research Area: Crowdsourcing Research, Participant Behavior
Discipline: Computational Social Science
Research on Prolific reveals that participant compensation significantly impacts sample selection, potentially introducing biases, and offers insights into participant motivations and behavior to improve study reliability and design.
Methods: A carefully designed experiment was performed to analyze correlations between participants' reservation wages, socioeconomic attributes, and study compensations; sensitivity analyses were conducted for further guidance.
Key Findings: Participant reservation wages, socioeconomic attributes, perceptions of general behavior and motivations, and implications of study design decisions.
Citations: 3
-
Authors: J Li, E Huusko, NN Ahooie, M Kuutila
Year: 2025
Published in: ... Journal of Human ..., 2025 - Taylor & Francis
Institution: University of Oulu
Research Area: Social Media Credibility, Human-Computer Interaction (HCI) in Social Media, Crowdsourcing Research
Discipline: Human-Computer Interaction (HCI)
Credtwi, a browser plugin for assessing tweet credibility, revealed that perceived Twitter credibility declines with use and author verification status heavily influences perceived credibility.
Methods: A browser plugin was used for crowdsourced credibility assessment through participant questionnaires during a week-long field study.
Key Findings: Perceptions of online tweet credibility, factors affecting tweet credibility (e.g., verification status, bio), variations in credibility assessments across genders.
DOI: 10.1080/10447318.2025.2480885
Citations: 2
Sample Size: 150
-
Authors: Y Ba, MV Mancenido, EK Chiou, R Pan
Year: 2025
Published in: Behavior Research Methods, 2025 - Springer
Institution: University of Delaware, National Taiwan University, University of British Columbia, Monash University
Research Area: Crowdsourcing, Data Quality, Spamming Behavior Detection, LLM Applications in Behavioral Research
Discipline: Computer Science, Artificial Intelligence, LLM
The paper introduces a systematic method to evaluate crowdsourced data quality and detect spam behaviors through variance decomposition, proposing a spammer index and credibility metrics to improve consistency and reliability in labeling tasks.
Methods: Variance decomposition, Markov chain models, and generalized random effects models were used to assess annotator consistency and credibility; metrics were applied to both simulated and real-world data from two crowdsourcing platforms.
Key Findings: Quality of crowdsourced data, spammer behaviors, annotators’ consistency, and credibility.
Citations: 2
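The paper's credibility metrics rest on variance decomposition and random-effects models; a much simpler consensus-agreement proxy can illustrate the underlying idea of scoring annotators against the crowd. Everything below (names, the majority-vote baseline) is an assumption for illustration, not the authors' spammer index.

```python
# Illustrative proxy for annotator credibility: fraction of an annotator's
# labels that agree with each item's majority label. Low scores suggest
# random or spam-like labeling. Not the paper's variance-decomposition index.
from collections import Counter, defaultdict

def consensus_agreement(labels):
    """labels: list of (annotator, item, label) triples.

    Returns a dict mapping each annotator to the fraction of their labels
    that match the per-item majority label.
    """
    by_item = defaultdict(list)
    for _, item, lab in labels:
        by_item[item].append(lab)
    majority = {item: Counter(labs).most_common(1)[0][0]
                for item, labs in by_item.items()}
    hits, totals = Counter(), Counter()
    for ann, item, lab in labels:
        totals[ann] += 1
        hits[ann] += (lab == majority[item])
    return {ann: hits[ann] / totals[ann] for ann in totals}
```

An annotator who labels against the consensus on every item scores 0.0, while consistent annotators score near 1.0; thresholding such a score is one simple way to flag candidates for review.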
-
Authors: D O'Connell, A Bautista
Year: 2025
Published in: ... Student Journal of ..., 2025 - journals.library.columbia.edu
Institution: University of Houston, Webster University
Research Area: Crowdsourcing Research Methodology, Human-Computer Interaction (HCI)
Discipline: Computational Social Science, Behavioral Research
Prolific outperforms MTurk in participant data quality and affordability for online survey-based research.
Methods: Data from participants recruited via MTurk and Prolific were analyzed for cost, attention measures, participation duration, and internal consistency.
Key Findings: Comparison of data quality and cost-effectiveness between MTurk and Prolific for online survey recruitment.
Citations: 1
Sample Size: 699
-
Authors: JS Michel, G Sawhney, GP Watson
Year: 2025
Published in: How to Conduct and ..., 2025 - elgaronline.com
Institution: Auburn University
Research Area: Crowdsourcing, Research Methods, Social Science
Discipline: Social Science
Crowdsourcing is a versatile tool that leverages collective intelligence for efficient task completion, with applications across fields including decentralized finance, blockchain technologies, and Industrial-Organizational (I-O) Psychology research and practice.
Methods: The paper discusses the theoretical and practical applications of crowdsourcing in various domains, referencing prior work and examples such as Wikipedia, crowdfunding platforms, and blockchain networks.
Key Findings: The applications and impact of crowdsourcing in different fields, particularly its role in Industrial-Organizational Psychology for data collection and analysis.
Citations: 1
-
Authors: DT Esch, N Mylonopoulos, V Theoharakis
Year: 2025
Published in: Behavior Research Methods, 2025 - Springer
Institution: University of Cologne, University of Piraeus, Aristotle University of Thessaloniki
Research Area: Crowdsourcing Behavioral Research, Mobile Data Collection
Discipline: Behavioral Research
Mobile-based responses via platforms like Pollfish are comparable in quality to computer-based ones from MTurk and Prolific, though attentiveness varies significantly across platforms and is influenced by factors such as incentives, distractions, and System 1 thinking.
Methods: Conducted two studies distributing the same survey across MTurk, Prolific, Pollfish, and Qualtrics panels to compare data quality and analyze attentiveness scores.
Key Findings: Attentiveness, device usage (mobile vs. computer), and factors influencing data quality such as incentives, respondent activity, distractions, and survey familiarity.
Citations: 1
-
Authors: S Liu, Z Cai, H Wang, Z Ma, X Li
Year: 2025
Published in: arXiv preprint arXiv:2505.19134, 2025 - arxiv.org
Institution: Meta, Imperial College London
Research Area: Artificial Intelligence, Crowdsourcing, LLM
Discipline: Artificial Intelligence
The paper develops a principal-agent model to incentivize high-quality human annotations using golden questions and identifies criteria for these questions to effectively monitor annotators' performance.
Methods: The authors use a principal-agent model with maximum likelihood estimators (MLE) and hypothesis testing to design incentive-compatible systems for annotators. Golden questions of high certainty and similar format to normal data were selected and validated through experiments.
Key Findings: The effectiveness of golden questions for incentivizing and monitoring high-quality human annotations in preference data.
DOI: 10.48550/arXiv.2505.19134
Citations: 1
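The golden-question monitoring idea can be sketched as a one-sided exact binomial test on an annotator's accuracy over interleaved golden questions. The paper's MLE and hypothesis-testing design is more elaborate; the function name, accuracy assumption, and significance threshold below are illustrative choices, not the authors' parameters.

```python
# Illustrative sketch: flag an annotator when their golden-question accuracy
# is implausibly low for a diligent worker. Assumes a diligent annotator
# answers each golden question correctly with probability p_good.
from math import comb

def golden_question_alarm(correct, total, p_good=0.9, alpha=0.05):
    """Exact one-sided binomial test.

    Returns True when observing <= `correct` hits out of `total` golden
    questions would occur with probability < alpha under accuracy p_good,
    i.e. the annotator is likely not exerting effort.
    """
    p_value = sum(comb(total, k) * p_good**k * (1 - p_good)**(total - k)
                  for k in range(correct + 1))
    return p_value < alpha
```

Getting 5 of 20 golden questions right triggers the alarm under these assumptions, while 19 of 20 does not; the golden questions themselves must be high-certainty and indistinguishable in format from normal items, as the paper's selection criteria require.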
-
Authors: C Heath, JM Williams, D Leightley
Year: 2025
Published in: JMIR mHealth and ..., 2025 - mhealth.jmir.org
Institution: Swansea University, King's College London, Reykjavík University
Research Area: mHealth Interventions, Crowdsourcing, Social Media Recruitment, Mental Health Research (PTSD, Harmful Gambling)
Discipline: Digital Health, Mental Health Research
Social media and online platforms such as Facebook and Prolific were effective for recruiting military veterans with PTSD or harmful gambling into a digital mHealth intervention pilot study, but recruitment and retention remained challenging.
Methods: Multiple recruitment strategies were used, including paid and unpaid advertisements on Facebook, Prolific, direct mailing, event hosting with veterans' charities, snowball sampling, and incentives.
Key Findings: The effectiveness of different recruitment strategies for enrolling military veterans with PTSD or harmful gambling into a digital intervention study.
Sample Size: 79
-
Authors: G Allen, U Gadiraju
Year: 2025
Published in: Proceedings of the 4th Annual Symposium on ..., 2025 - dl.acm.org
Institution: TU Delft
Research Area: Gesture Recognition, Crowdsourcing, Input Modalities in HCI
Discipline: Human-Computer Interaction (HCI)
Switching input modalities in microtask crowdsourcing does not affect worker accuracy or perceived cognitive load but influences task completion time; ergonomically informed gestures can integrate effectively without impacting worker experiences.
Methods: A between-subjects study was conducted across 16 experimental conditions with varying input modality sequences to assess impacts on task outcomes and worker experiences.
Key Findings: Effect of switching input modalities on task completion time, accuracy, and perceived cognitive load among crowd workers.
DOI: 10.1145/3729176.3729184
Sample Size: 717
-
Authors: F Joessel, S Denkinger, PE Joessel, CS Green
Year: 2025
Published in: Acta Psychologica, 2025 - Elsevier
Institution: Max Planck Institute, University of Potsdam, University of Maryland, University of Zurich, University of Arizona
Research Area: Online cognitive training, Automated psychological studies, Crowdsourcing, behavioral research
Discipline: Psychology
The study introduces a fully online method for conducting cognitive training experiments using Prolific, significantly reducing resource demands while achieving robust results and diverse participant recruitment.
Methods: Participants were recruited via Prolific, assigned to groups using a pseudo-randomized procedure, and completed a 12-hour remote cognitive training study with pre- and post-test assessments monitored via custom dashboards.
Key Findings: Impact of a 12-hour cognitive training intervention on participants' cognitive functions, conducted in a remote and automated manner.
-
Authors: DA Albert, D Smilek
Year: 2024
Published in: Scientific Reports, 2023 - nature.com
Institution: University of Waterloo
Research Area: Crowdsourcing Research Methods, Behavioral Science, Human-Computer Interaction (HCI)
Discipline: Psychological Science
Prolific participants exhibited lower levels of attentional disengagement compared to MTurk participants, with risk conditions and platform traits influencing task performance and disengagement.
Methods: Participants from Prolific and MTurk completed an attention task with varying error risk levels (high vs. low), and attentional disengagement was measured using task performance, self-reported mind wandering, and multitasking.
Key Findings: Attentional disengagement through task performance, mind wandering, and multitasking under different risk conditions across two recruitment platforms (Prolific and MTurk).
Citations: 150
Sample Size: 80
-
Authors: S Du, MT Babalola, P D'cruz, E Dóci
Year: 2024
Published in: Journal of Business ..., 2024 - Springer
Institution: Nottingham University Business School, University of Reading, Oxford Brookes University, University of Portsmouth
Research Area: Crowdsourcing Ethics, Social Sciences, Organizational Behavior
Discipline: Social Science
The paper explores the ethical, societal, and global implications of using crowdsourcing platforms for research, emphasizing the need for fair compensation, transparency, and consideration of global disparities between the Global North and South.
Methods: The paper provides a conceptual analysis and critique of crowdsourcing research practices, focusing on ethical and societal considerations.
Key Findings: Ethical, societal, and global implications of crowdsourcing research practices, including data quality, reporting transparency, fair remuneration, and the role of global disparities.
Citations: 24
-
Authors: C Clemmow, I van der Vegt, B Rottweiler
Year: 2024
Published in: ... and political violence, 2024 - Taylor & Francis
Institution: University College London
Research Area: Crowdsourcing for Violent Extremism Research
Discipline: Computational Social Science
Citations: 12
-
Authors: E Christoforou, G Demartini
Year: 2024
Published in: Proceedings of the ..., 2024 - ojs.aaai.org
Institution: University of Sheffield, University of Southampton
Research Area: Crowdsourcing, Generative AI, Web and Social Media Research, LLM
Discipline: Artificial Intelligence
DOI: https://doi.org/10.1609/icwsm.v18i1.31452
Citations: 10