This page lists 40 peer-reviewed papers (2024–2025) in the discipline of Computational Social Science from the Prolific Citations Library, a curated collection of academic research powered by high-quality human data from Prolific.
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
Users of LLMs often overestimate response accuracy, especially when explanations are longer; adjusting explanation style to align with the model's internal confidence narrows the calibration and discrimination gaps, improving trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
-
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2024
Published in: Nature ..., 2024 - nature.com
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments with varied base rates of misinformation, audio sources, question framings, and media modalities were conducted.
Key Findings: Human accuracy in discerning real political speeches from deepfakes across media formats and contextual variables.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2215
-
Authors: H Bai, JG Voelkel, S Muldowney, JC Eichstaedt
Year: 2025
Published in: Nature ..., 2025 - nature.com
Institution: Stanford University
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science
LLM-generated messages persuade humans on policy issues about as effectively as human-crafted messages, though the perceived mechanisms of persuasion differ.
Methods: Three pre-registered experiments were conducted comparing the persuasive effectiveness of LLM-generated and human-generated messages on policy attitudes, using control conditions with neutral messages.
Key Findings: Influence of LLM-generated messages on participants' policy attitudes and perceived characteristics of the message authors.
Citations: 37
Sample Size: 4829
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 - pnas.org
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling language models yields diminishing returns for generating persuasive political messages: after controlling for task-completion features such as coherence and relevance, larger models provide minimal gains over smaller ones.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: Y Ding, J You, TK Machulla, J Jacobs, P Sen
Year: 2022
Published in: Proceedings of the ..., 2022 - dl.acm.org
Institution: University of California Irvine, University of Florida, State University of New York at Buffalo, University of Waterloo, Virginia Tech
Research Area: Computational Social Science, Human-Computer Interaction (HCI), Sentiment Analysis
Discipline: Computational Social Science
Demographic differences among annotators significantly affect sentiment dataset labels, causing up to a 4.5% accuracy difference in sentiment prediction models.
Methods: Crowdsourced annotations from >1000 workers combined with demographic data; analysis of multimodal sentiment datasets and evaluation using machine learning models.
Key Findings: Impact of annotator demographics on sentiment labeling and its effect on model predictions.
DOI: https://doi.org/10.1145/3555632
Citations: 28
Sample Size: 1000
-
Authors: M Alizadeh, E Hoes, F Gilardi
Year: 2023
Published in: Scientific Reports, 2023 - nature.com
Institution: Department of Marketing, University of Amsterdam, Department of Social Sciences, Università Degli Studi di Milano, Department of Political Science and International Relations, Università Degli Studi di Milano
Research Area: Social media, Misinformation, Computational Social Science
Discipline: Computational Social Science
Token-based incentives for social media engagement increase the sharing of misinformation, but implementing penalties for objectionable content can reduce this trend without fully eliminating it.
Methods: Survey experiment analyzing the impact of hypothetical token rewards and penalties on user willingness to share different types of news content.
Key Findings: Effect of token-based incentives and penalties on user engagement and the willingness to share misinformation.
DOI: https://doi.org/10.1038/s41598-023-40716-2
Citations: 20
-
Authors: K Hackenburg, BM Tappin, L Hewitt, E Saunders
Year: 2025
Published in: Science, 2025 - science.org
Institution: London School of Economics and Political Science, Stony Brook University
Research Area: Political Persuasion with Conversational AI, LLM, Factual Accuracy in AI Systems
Discipline: Political Science, Computational Social Science
This Science paper shows that conversational AI chatbots can systematically shift political opinions at scale, and that techniques such as post-training and targeted prompting make them substantially more persuasive; that gain in persuasiveness, however, comes at the cost of reduced factual accuracy in what the AI says.
Citations: 12
-
Authors: AG Møller, DM Romero, D Jurgens
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: University of Copenhagen, University of Michigan, Pioneer Centre for AI
Research Area: Generative AI, Social Media, Human-Computer Interaction (HCI)
Discipline: Computational Social Science
Generative AI tools on social media increase user engagement and content volume but reduce perceived quality and authenticity in discussions, highlighting challenges for ethical integration.
Methods: Controlled experiment with participants assigned to small discussion groups under distinct AI-assisted treatment conditions including chat assistance, conversation starters, feedback on comment drafts, and reply suggestions.
Key Findings: Impact of generative AI tools on user behavior, engagement, content volume, perceived quality, and authenticity in social media interactions.
DOI: https://doi.org/10.48550/arXiv.2506.14295
Citations: 9
Sample Size: 680
-
Authors: D Guilbeault, S Delecourt, BS Desikan
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Stanford University, University of California Berkeley, University of Oxford
Research Area: AI Bias, Media Representation, Social Science
Discipline: Computational Social Science, Artificial Intelligence
The study highlights age-related gender bias in online media and language models, showing women are portrayed as younger than men, especially in high-status occupations, and explores how algorithms amplify these biases.
Methods: Analysis of 1.4 million images and videos from online sources and nine language models, followed by a pre-registered experiment involving participants to evaluate biases in internet content and algorithms.
Key Findings: Age and gender bias in occupational depiction across online platforms and language models, as well as its influence on beliefs and hiring preferences.
Citations: 4
Sample Size: 459
-
Authors: A Meythaler
Year: 2025
Published in: 2025 - scholarspace.manoa.hawaii.edu
Institution: University of Potsdam, Weizenbaum Institute
Research Area: Social Media, Anxiety, Qualitative Research, Computational Social Science
Discipline: Psychological Science, Computational Social Science
The study identifies six categories of social media content—negative news, incivility, social comparison content, political content, misinformation, and depictions of dangerous behavior—as triggers for anxiety among users.
Methods: A qualitative study was conducted using interviews or focus groups with 249 social media users to explore the effects of different content types on anxiety.
Key Findings: The role of specific social media content categories in inducing feelings of anxiety.
DOI: https://doi.org/10.24251/HICSS.2025.334
Citations: 4
Sample Size: 249
-
Authors: B Aksoy, S Nevo
Year: 2025
Published in: Participant Behavior and Motivations (March 21 ..., 2025 - papers.ssrn.com
Institution: Rensselaer Polytechnic Institute
Research Area: Crowdsourcing Research, Participant Behavior
Discipline: Computational Social Science
Research on Prolific shows that participant compensation significantly affects sample selection and can introduce bias; the study offers insights into participant motivations and behavior to improve study reliability and design.
Methods: An experiment analyzed correlations between participants' reservation wages, socioeconomic attributes, and study compensation; sensitivity analyses were conducted for further guidance.
Key Findings: Participant reservation wages, socioeconomic attributes, perceptions of general behavior and motivations, and implications of study design decisions.
Citations: 3
-
Authors: G Lima, N Grgić-Hlača, M Langer, Y Zou
Year: 2025
Published in: Proceedings of the 2025 CHI ..., 2025 - dl.acm.org
Institution: University of Maryland, Max Planck Institute, Stanford University, Cornell University
Research Area: Algorithmic Fairness, Systemic Injustice, Social Perception of AI, Algorithmic Discrimination
Discipline: Computational Social Science
The study examines how contextualizing algorithms within systemic injustice impacts perceptions of algorithmic discrimination, finding disparate effects based on participant group identity and revealing unintended consequences of such contextualization.
Methods: 2x3 between-participants experiment using the hiring context as a case-study; examined the influence of systemic injustice information and algorithms' bias perpetuation on lay perceptions.
Key Findings: Impact of systemic injustice framing and explanation of algorithmic bias perpetuation on participants' views of algorithmic fairness and discrimination.
DOI: https://doi.org/10.1145/3706598.3713536
Citations: 2
Sample Size: 716
-
Authors: T Hu, N Collier
Year: 2025
Published in: arXiv preprint arXiv:2503.03335, 2025 - arxiv.org
Institution: University of Cambridge
Research Area: Affective Computing, Natural Language Processing, Computational Social Science
Discipline: Computational Social Science
The iNews dataset is a multimodal resource for studying personalized affective responses to news, improving modeling accuracy by incorporating annotator persona metadata.
Methods: 292 demographically diverse UK participants annotated 2,899 Facebook news posts with multidimensional labels (e.g., emotions, valence, arousal), combined with comprehensive participant persona data.
Key Findings: Modeled personalized affective responses to news through annotations capturing valence, arousal, emotions, and persona metadata.
Citations: 2
Sample Size: 292
-
Authors: D O'Connell, A Bautista
Year: 2025
Published in: ... Student Journal of ..., 2025 - journals.library.columbia.edu
Institution: University of Houston, Webster University
Research Area: Crowdsourcing Research Methodology, Human-Computer Interaction (HCI)
Discipline: Computational Social Science, Behavioral Research
Prolific outperforms MTurk in participant data quality and affordability for online survey-based research.
Methods: Data from participants recruited via MTurk and Prolific were analyzed for cost, attention measures, participation duration, and internal consistency.
Key Findings: Comparison of data quality and cost-effectiveness between MTurk and Prolific for online survey recruitment.
Citations: 1
Sample Size: 699
-
Authors: C Qian, V Tsai, M Behr, N Hussein, L Laugier, N Thain, L Dixon
Year: 2025
Published in: arXiv
Institution: Google, Google DeepMind, EPFL
Research Area: Human-AI Interaction, Social Experiments, Platform Design
Discipline: Computational Social Science
Deliberate Lab is an open-source platform for real-time, multi-user experiments with humans and AI (LLM) agents. Developed by DeepMind researchers, it supports synchronous interaction and custom experimental stages, and integrates with platforms like Prolific for streamlined participant recruitment and payment. The system has been used in over 600 experiments with more than 9,000 participants.
Citations: 1
-
Authors: Y Zhang, J Pang, Z Zhu, Y Liu
Year: 2025
Published in: arXiv preprint arXiv:2506.06991, 2025 - arxiv.org
Institution: Rutgers University, University of California Santa Cruz
Research Area: Artificial Intelligence, Computational Social Science
Discipline: Computational Social Science
The paper proposes a training-free scoring mechanism using peer prediction to detect and mitigate LLM-assisted cheating in crowdsourced annotation tasks, with theoretical guarantees and empirical validation.
Methods: A peer prediction-based mechanism quantifies correlations between worker answers while conditioning on LLM-generated labels, without requiring ground truth or high-dimensional training data.
Key Findings: Detection of LLM-assisted low-effort cheating in crowdsourced annotation tasks, focusing on theoretical effectiveness and empirical robustness.
DOI: https://doi.org/10.48550/arXiv.2506.06991
Citations: 1
-
Authors: A Qian, R Shaw, L Dabbish, J Suh, H Shen
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Carnegie Mellon University, University of Pittsburgh, University of Utah, Yale School of Medicine, Yale University
Research Area: Responsible AI, Content Moderation, Risk Disclosure, Worker Well-being in Human-Computer Interaction (HCI)
Discipline: Computational Social Science, Human-Computer Interaction (HCI)
The paper examines how task designers approach well-being risk disclosure in Responsible AI (RAI) content work, highlighting a need for better frameworks to communicate such risks effectively.
Methods: Interviews were conducted with 23 task designers from academic and industry sectors to gather insights on risk recognition, interpretation, and communication practices.
Key Findings: How task designers recognize, interpret, and communicate well-being risks in RAI content work.
Citations: 1
Sample Size: 23
-
Authors: C Qian, AT Parisi, C Bouleau, V Tsai
Year: 2025
Published in: Proceedings of the ..., 2025 - aclanthology.org
Institution: Google, Google DeepMind
Research Area: Human-AI Alignment, Collective Reasoning, Social Biases, LLM Simulation of Human Behavior, AI Bias
Discipline: Natural Language Processing, Artificial Intelligence, Computational Social Science
This study examines human-AI alignment in collective reasoning using an empirical framework, demonstrating how LLMs either mirror or mask human biases depending on context, cues, and model-specific inductive biases.
Methods: The study uses the Lost at Sea social psychology task in a large-scale online experiment, simulating LLM groups conditioned on human decision-making data across varying conditions of visible or pseudonymous demographics.
Key Findings: Alignment of LLM behavior with human social reasoning, focusing on collective decision-making and biases in group interactions.
Citations: 1
Sample Size: 748
-
Authors: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones
Year: 2025
Published in: arXiv
Institution: University of Zurich
Research Area: AI Ethics, AI Bias, News Literacy, Critical Thinking, Computational Social Science
Discipline: Computational Social Science
The study explores leveraging inherent biases in AI to enhance critical thinking in news consumption, proposing strategies such as bias awareness, personalization, and gradual introduction of diverse perspectives.
Methods: Qualitative user study investigating user responses to personalized AI-driven propaganda detection tools.
Key Findings: The effectiveness of AI bias-based strategies in improving critical thinking and news readers’ engagement with diverse perspectives.
-
Authors: HR Kirk, M Bartolo, A Whitefield, P Röttger
Year: 2024
Published in: Advances in ..., 2024 - proceedings.neurips.cc
Institution: Meta, Cohere, AWS AI Labs, Contextual AI, Factored AI, University of Oxford, Bocconi University, Meedan, Hugging Face, University College London, ML Commons, University of Pennsylvania
Research Area: LLM Alignment, Human Feedback, Multicultural Studies
Discipline: Artificial Intelligence, Computational Social Science
The PRISM Alignment Dataset presents a large-scale, culturally diverse human feedback dataset linking sociodemographic profiles of 1,500 participants from 75 countries to their contextual preferences and fine-grained ratings in 8,011 live conversations with 21 LLMs. This enables analysis of how subjective values vary across people and cultures in LLM alignment data.
DOI: https://doi.org/10.52202/079017-3342
Citations: 204