Explore 40 peer-reviewed papers in Social (2025–2026). Academic research using Prolific for high-quality human data collection.
This page lists 40 peer-reviewed papers from the Social discipline in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: H Mohseni, T Kujala, J Silvennoinen
Year: 2026
Published in: SPRINGER
Institution: University of Jyväskylä
Research Area: Migration studies, Social indicators, Psychometrics, Quantitative social science methods
Discipline: Social sciences
Developed and validated a multidimensional place-belongingness scale to assess immigrants' sense of belonging to geographic locations, identifying four factors: feeling at home, accepted, empowered, and secure.
Methods: Survey data from 270 immigrants worldwide analyzed using exploratory factor analysis.
Key Findings: The subjective sense of place-belongingness decomposes into four factors: feeling at home, feeling accepted, feeling empowered, and feeling secure.
Sample Size: 270
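The scale above was validated with exploratory factor analysis on 270 survey responses; the sketch below illustrates that style of analysis on synthetic Likert-type data (the item count, loadings, and variable names are illustrative assumptions, not the authors' materials or code):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for 270 respondents answering 12 scale items,
# generated from 4 latent factors (home, accepted, empowered, secure).
n, n_items, n_factors = 270, 12, 4
loadings = rng.normal(0, 1, (n_factors, n_items))
latent = rng.normal(0, 1, (n, n_factors))
items = latent @ loadings + rng.normal(0, 0.5, (n, n_items))

# Fit a 4-factor model with varimax rotation for interpretable loadings.
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
scores = fa.fit_transform(items)      # per-respondent factor scores, (270, 4)
print(scores.shape, fa.components_.shape)
```

In practice the factor count is chosen from the data (e.g. scree plots or parallel analysis) rather than fixed in advance as it is here.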
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
LLM explanations often lead users to overestimate response accuracy, especially when explanations are longer; adjusting explanation style and length to align with the model's internal confidence narrows the calibration and discrimination gaps, supporting better-calibrated trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
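The calibration and discrimination gaps in the findings above can be illustrated numerically; the definitions below are simplified assumptions for the sketch (mean-confidence differences on invented data), not the paper's exact formulas:

```python
import numpy as np

# Hypothetical per-question data: whether the model answered correctly,
# the model's stated confidence, and the user's confidence in the answer.
correct    = np.array([1, 1, 1, 0, 0, 1, 0, 1])
model_conf = np.array([0.90, 0.80, 0.70, 0.40, 0.30, 0.85, 0.35, 0.75])
human_conf = np.array([0.95, 0.90, 0.90, 0.80, 0.70, 0.90, 0.75, 0.85])

# Calibration gap: how much further the user's mean confidence sits from
# the model's empirical accuracy than the model's own confidence does.
acc = correct.mean()
calibration_gap = abs(human_conf.mean() - acc) - abs(model_conf.mean() - acc)

# Discrimination: mean confidence on correct minus incorrect answers;
# the gap is how much better the model's confidence separates the two.
def discrimination(conf):
    return conf[correct == 1].mean() - conf[correct == 0].mean()

discrimination_gap = discrimination(model_conf) - discrimination(human_conf)
print(calibration_gap, discrimination_gap)
```

Here the user is overconfident relative to the model's actual accuracy (a positive calibration gap) and worse at telling right answers from wrong ones (a positive discrimination gap), matching the pattern the entry describes.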
-
Authors: G Beknazar-Yuzbashev, R Jiménez-Durán, J McCrosky
Year: 2025
Published in: 2025 - econstor.eu
Institution: Mozilla Foundation, Columbia University, Bocconi University, Stanford University, University of Warwick
Research Area: Social Media, User Engagement, Toxicity
Discipline: Social Science
Reducing exposure to toxic content on social media lowers user engagement but also decreases the toxicity of user-generated content, highlighting a trade-off for platforms between reduced toxicity and increased engagement.
Methods: Pre-registered browser extension field experiment on Facebook, Twitter, and YouTube to randomly hide toxic content for six weeks; supplemented with a survey experiment.
Key Findings: Impact of reduced exposure to toxic content on advertising impressions, time spent, engagement, and user-generated content toxicity; explored curiosity and alignment between engagement and welfare.
Citations: 76
-
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2025
Published in: Nature ..., 2024 - nature.com
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments with varied base rates of misinformation, audio sources, question framings, and media modalities were conducted.
Key Findings: Human accuracy in discerning real political speeches from deepfakes across media formats and contextual variables.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2215
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2025
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: 10.1145/3551624.3555306
Citations: 62
-
Authors: H Bai, JG Voelkel, S Muldowney, JC Eichstaedt
Year: 2025
Published in: Nature ..., 2025 - nature.com
Institution: Stanford University
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science
LLM-generated messages can persuade humans on policy issues about as effectively as human-crafted messages, though the two are perceived as persuading through different mechanisms.
Methods: Three pre-registered experiments were conducted comparing the persuasive effectiveness of LLM-generated and human-generated messages on policy attitudes, using control conditions with neutral messages.
Key Findings: Influence of LLM-generated messages on participants' policy attitudes and perceived characteristics of the message authors.
Citations: 37
Sample Size: 4829
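Experiments like the three described above typically estimate persuasion as the difference in post-treatment policy attitudes between a message arm and a neutral-message control arm; a minimal sketch on simulated data (the scores, group sizes, and effect size are invented, not the authors' data or code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated 0-100 policy-support scores: a control arm reading a neutral
# message and a treatment arm reading a persuasive (e.g. LLM-written) one.
control   = rng.normal(50, 15, 500).clip(0, 100)
treatment = rng.normal(54, 15, 500).clip(0, 100)   # assumed +4pt true effect

effect = treatment.mean() - control.mean()         # treatment-effect estimate
t, p = stats.ttest_ind(treatment, control)         # two-sample t-test
print(f"effect = {effect:.2f} points, p = {p:.4f}")
```

Pre-registered studies of this kind fix the outcome, arms, and test before data collection, so the comparison above is decided in advance rather than chosen after seeing results.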
-
Authors: T Zhang, A Koutsoumpis, JK Oostrom
Year: 2025
Published in: IEEE Transactions ..., 2024 - ieeexplore.ieee.org
Institution: Southeast University, Vrije Universiteit, Tilburg University
Research Area: LLM Personality Assessment, Human-AI Interaction, LLM
Discipline: Human-AI Interaction, Social Science, Humanities
LLMs like GPT-3.5 and GPT-4 can rival or outperform task-specific AI models in assessing personality traits from asynchronous video interviews, but show uneven performance, low reliability, and potential biases, warranting cautious use in high-stakes scenarios.
Methods: The study evaluated GPT-3.5 and GPT-4 performance in assessing personality traits and interview performance using simulated AVI responses, comparing them with ratings from task-specific AI and human annotators.
Key Findings: Validity, reliability, fairness, and rating patterns of LLMs (GPT-3.5 and GPT-4) in personality assessment from asynchronous video interviews.
Citations: 31
Sample Size: 685
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 - pnas.org
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling language model sizes leads to diminishing returns in generating persuasive political messages, with larger models providing minimal gains compared to smaller ones after controlling for task completion metrics like coherence and relevance.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
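The diminishing-returns pattern reported above is what a logarithmic relationship between model size and persuasiveness would produce: each tenfold increase in parameters buys a progressively smaller absolute gain. A toy illustration (all numbers invented for the sketch, not taken from the paper):

```python
import numpy as np

# Hypothetical (parameter count, persuasion effect in percentage points).
params  = np.array([1e8, 1e9, 1e10, 1e11, 1e12])
effects = np.array([4.0, 5.1, 5.9, 6.2, 6.3])      # invented numbers

# Fit effect ~ a + b * log10(params); a log curve flattens at scale.
b, a = np.polyfit(np.log10(params), effects, 1)
gains = np.diff(effects)                           # gain per 10x scale-up
print(f"slope per 10x: {b:.2f} pts; successive gains: {gains}")
```

The shrinking entries of `gains` are the diminishing returns: under this toy curve, scaling from 100B to 1T parameters buys far less persuasiveness than scaling from 100M to 1B did.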
-
Authors: J Mundel, J Yang
Year: 2025
Published in: Journal of Interactive Advertising, 2021 - Taylor & Francis
Institution: Arizona State University, Loyola University
Research Area: Consumer Behavior, Social Media Marketing, COVID-19 Studies
Discipline: Marketing, Social Sciences
Brands with strong perceived fit between their products and COVID-19 messaging showed higher consumer engagement and positive attitudes, while those with lower perceived fit faced negative evaluations due to perceptions of opportunism.
Methods: Analyzed consumer responses to Instagram ads using perceived brand-social issue fit as a determinant of ad evaluations, brand attitudes, and engagement intentions.
Key Findings: Consumer responses, ad evaluations, brand attitudes, engagement intentions, and perceived brand opportunism based on fit between product type and COVID-19 messaging.
DOI: https://doi.org/10.1080/15252019.2021.1958274
Citations: 29
-
Authors: Y Ding, J You, TK Machulla, J Jacobs, P Sen
Year: 2025
Published in: Proceedings of the ..., 2022 - dl.acm.org
Institution: University of California Irvine, University of Florida, State University of New York at Buffalo, University of Waterloo, Virginia Tech
Research Area: Computational Social Science, Human-Computer Interaction (HCI), Sentiment Analysis
Discipline: Computational Social Science
Demographic differences among annotators significantly affect sentiment dataset labels, causing up to a 4.5% accuracy difference in sentiment prediction models.
Methods: Crowdsourced annotations from >1000 workers combined with demographic data; analysis of multimodal sentiment datasets and evaluation using machine learning models.
Key Findings: Impact of annotator demographics on sentiment labeling and its effect on model predictions.
DOI: https://doi.org/10.1145/3555632
Citations: 28
Sample Size: 1000
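An accuracy difference like the one above can arise purely from whose labels a model is evaluated against: aggregating annotations within different demographic groups yields different "gold" labels. A toy sketch of that mechanism (synthetic labels and hypothetical group names, not the study's data):

```python
import numpy as np

# Synthetic sentiment annotations (0=negative, 1=positive) for 6 items,
# from two hypothetical annotator groups with 3 annotators each.
group_a = np.array([[1, 1, 0, 0, 1, 1],
                    [1, 0, 0, 0, 1, 1],
                    [1, 1, 0, 1, 1, 1]])
group_b = np.array([[0, 1, 0, 0, 1, 0],
                    [1, 1, 0, 0, 0, 0],
                    [0, 1, 1, 0, 1, 0]])

# Majority vote within each group yields two competing "gold" labels.
gold_a = (group_a.mean(axis=0) > 0.5).astype(int)
gold_b = (group_b.mean(axis=0) > 0.5).astype(int)

preds = np.array([1, 1, 0, 0, 1, 1])   # some model's predictions
acc_a = (preds == gold_a).mean()       # accuracy against group A's gold
acc_b = (preds == gold_b).mean()       # accuracy against group B's gold
print(acc_a - acc_b)
```

The same predictions score differently against each group's majority labels, which is why the composition of an annotator pool can shift reported model accuracy.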
-
Authors: S Zhang, J Xu, AJ Alvero
Year: 2025
Published in: Sociological Methods & Research, 2025 - journals.sagepub.com
Institution: University of Maryland, Indiana University, University of Minnesota Duluth
Research Area: Sociological Methods, Generative AI, Survey Methodology
Discipline: Sociology, Social Science
The study finds that 34% of research participants use generative AI tools such as large language models (LLMs) to help write open-ended survey responses, producing answers that are more homogeneous and more positive, which can mask real social variation and threaten data validity.
Methods: The study conducted an original survey on a popular online platform and simulated comparisons between human-written responses from pre-ChatGPT studies and LLM-generated responses.
Key Findings: Use of LLMs by survey participants, differences in text homogeneity, positivity, and masking of social variation in open-ended survey responses.
Citations: 26
-
Authors: M Alizadeh, E Hoes, F Gilardi
Year: 2025
Published in: Scientific Reports, 2023 - nature.com
Institution: Department of Marketing, University of Amsterdam, Department of Social Sciences, Università Degli Studi di Milano, Department of Political Science and International Relations, Università Degli Studi di Milano
Research Area: Social media, Misinformation, Computational Social Science
Discipline: Computational Social Science
Token-based incentives for social media engagement increase the sharing of misinformation, but implementing penalties for objectionable content can reduce this trend without fully eliminating it.
Methods: Survey experiment analyzing the impact of hypothetical token rewards and penalties on user willingness to share different types of news content.
Key Findings: Effect of token-based incentives and penalties on user engagement and the willingness to share misinformation.
DOI: https://doi.org/10.1038/s41598-023-40716-2
Citations: 20
-
Authors: P. Schoenegger, F. Salvi, J. Liu, X. Nan, R. Debnath, B. Fasolo, E. Leivada, G. Recchia, F. Günther, A. Zarifhonarvar, J. Kwon, Z. Ul Islam, M. Dehnert, D. Y. H. Lee, M. G. Reinecke, D. G. Kamper, M. Kobaş, A. Sandford, J. Kgomo, L. Hewitt, S. Kapoor, K. Oktar, E. E. Kucuk, B. Feng, C. R. Jones, I. Gainsburg, S. Olschewski, N. Heinzelmann, F. Cruz, B. M. Tappin, T. Ma, P. S. Park, R. Onyonka, A. Hjorth, P. Slattery, Q. Zeng, L. Finke, I. Grossmann, A. Salatiello, E. Karger
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: London School of Economics and Political Science, University of Cambridge, University College London, Massachusetts Institute of Technology, University of Oxford, Modulo Research, Stanford University, Federal Reserve Bank of Chicago, ETH Zürich, University of Johannesburg
Research Area: Computation and Language
Discipline: Social Science, Artificial Intelligence
This paper compares a frontier LLM (Claude Sonnet 3.5) against incentivized human persuaders in a conversational quiz setting, finding that the AI out-persuades even human persuaders who earn real-money bonuses tied to their performance.
Citations: 16
-
Authors: K Hackenburg, BM Tappin, L Hewitt, E Saunders
Year: 2025
Published in: Science, 2025 - science.org
Institution: London School of Economics and Political Science, Stony Brook University
Research Area: Political Persuasion with Conversational AI, LLM, Factual Accuracy in AI Systems
Discipline: Political Science, Computational Social Science
This Science paper shows that conversational AI chatbots can systematically influence political opinions at scale, and that techniques like post-training and prompting make them far more persuasive—but that increased persuasion is tied to reduced factual accuracy in what the AI says.
Citations: 12
-
Authors: R Mulcahy, R Barnes
Year: 2025
Published in: Australasian ..., 2025 - journals.sagepub.com
Institution: University of the Sunshine Coast
Research Area: Social Media, Misinformation, Influencer Marketing
Discipline: Social Science
The paper investigates how misinformation shared by social media influencers goes viral and how it shapes perceived deception, parasocial interaction, and sharing intent, highlighting the role of cognitive appraisals and user comments.
Methods: Three online experimental studies grounded in social influence theory and cognitive appraisal theory (CAT), analyzing user behavior in response to influencer posts with varying levels of virality and comment types.
Key Findings: Virality of posts, perceived deception, parasocial interaction, sharing intent, and effects of user comments (critical vs. supportive).
Citations: 9
-
Authors: AG Møller, DM Romero, D Jurgens
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: University of Copenhagen, University of Michigan, Pioneer Centre for AI
Research Area: Generative AI, Social Media, Human-Computer Interaction (HCI)
Discipline: Computational Social Science
Generative AI tools on social media increase user engagement and content volume but reduce perceived quality and authenticity in discussions, highlighting challenges for ethical integration.
Methods: Controlled experiment with participants assigned to small discussion groups under distinct AI-assisted treatment conditions including chat assistance, conversation starters, feedback on comment drafts, and reply suggestions.
Key Findings: Impact of generative AI tools on user behavior, engagement, content volume, perceived quality, and authenticity in social media interactions.
DOI: https://doi.org/10.48550/arXiv.2506.14295
Citations: 9
Sample Size: 680
-
Authors: Z Chen, J Kalla, Q Le, S Nakamura-Sakai
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Unknown
Research Area: Artificial Intelligence and Social Science, Persuasion Studies, Political Persuasion, LLM Chatbots, Democratic Societies
Discipline: Artificial Intelligence, Social Science
The study evaluates the cost-effectiveness and persuasive risks of Large Language Model (LLM) chatbots in political contexts, finding that while LLM chatbots are as persuasive as campaign ads conditional on exposure, their large-scale influence is currently limited by scalability and cost barriers.
Methods: Two survey experiments combined with real-world simulation exercises to measure the persuasiveness of LLM chatbots compared to traditional campaign tactics, focusing on both exposure and acceptance phases of persuasion.
Key Findings: Short- and long-term persuasive effects of LLMs, cost-effectiveness of LLM-based persuasion ($48-$74 per persuaded voter), and scalability compared to traditional campaign approaches.
Citations: 7
Sample Size: 10417
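The $48-$74 cost-per-persuaded-voter figure above is a ratio of campaign cost to voters actually moved; a back-of-the-envelope sketch (all inputs are hypothetical assumptions, only the form of the ratio comes from the entry):

```python
# Hypothetical campaign: cost to serve chatbot conversations versus the
# number of exposed voters those conversations actually persuade.
conversations   = 100_000
cost_per_chat   = 0.30        # assumed API + delivery cost per chat (USD)
persuasion_rate = 0.005       # assumed 0.5% of exposed voters persuaded

total_cost = conversations * cost_per_chat
persuaded  = conversations * persuasion_rate
cost_per_persuaded_voter = total_cost / persuaded
print(f"${cost_per_persuaded_voter:.2f} per persuaded voter")  # $60.00
```

Even a cheap per-conversation cost becomes expensive per persuaded voter once a small persuasion rate is in the denominator, which is the scalability barrier the entry describes.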
-
Authors: N Aldahoul, H Ibrahim, M Varvello, A Kaufman
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Delft University of Technology, University of Pennsylvania, New York University, King Abdullah University of Science and Technology, Massachusetts Institute of Technology, University of Texas at Austin
Research Area: Artificial Intelligence, Computers and Society, Political Science
Discipline: Artificial Intelligence, Social Science
The study finds that Large Language Models (LLMs) exhibit extreme political views on specific topics despite appearing ideologically moderate overall, and demonstrate a persuasive influence on users' political preferences even in informational contexts.
Methods: Compared 31 LLMs' political biases against benchmarks (legislators, judges, representative voter samples) and conducted a randomized experiment to measure their persuasive impact in informational interactions.
Key Findings: Ideological consistency, political extremity, and persuasive effects of LLMs in information-seeking contexts.
Citations: 7
Sample Size: 31 (LLMs evaluated)
-
Authors: S Carney, I Riveros, S Tully
Year: 2025
Published in: Available at SSRN 4988760, 2025 - papers.ssrn.com
Institution: University of Southern California
Research Area: Consumer Engagement with AI Disclosures, Social Media Marketing, Social Psychology
Discipline: Social Science
AI-generated content disclosures on social media reduce consumer engagement primarily due to a decrease in parasocial connections, as users perceive creators to exert less effort; signaling greater effort can mitigate this effect.
Methods: Analysis of TikTok engagement data following AIGC disclosure implementation, supplemented by six preregistered experiments.
Key Findings: Impact of AIGC disclosures on consumer engagement and the mediating role of parasocial connections.
Citations: 6
-
Authors: K Miazek, K Bocian
Year: 2025
Published in: Computers in Human Behavior Reports, 2025 - Elsevier
Institution: SWPS University
Research Area: Moral and Fairness Judgments of AI, Human Behavior, Egocentrism
Discipline: Social Science, Artificial Intelligence
The study found that egocentric biases influence fairness judgments, favoring decisions beneficial to self-interest, and that this bias is weaker for AI compared to human agents due to reduced perceived mind and liking for AI.
Methods: Three experiments with manipulated self-interest conditions analyzed perceptions of fairness and morality in decisions made by AI versus human agents using Prolific US samples.
Key Findings: Fairness and moral judgments in financial decision-making by AI and human agents, moderated by self-interest and social perceptions.
DOI: https://doi.org/10.1016/j.chbr.2025.100719
Citations: 6
Sample Size: 1880