Discover 7 peer-reviewed studies in AI Social Science (2022–2026). Explore research findings powered by Prolific's diverse participant panel.
This page lists 7 peer-reviewed papers in the research area of AI Social Science in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: C Yuan, B Ma, Z Zhang, B Prenkaj, F Kreuter, G Kasneci
Year: 2026
Published in: arXiv preprint arXiv:2601.08634, 2026
Institution: Munich Center for Machine Learning, LMU Munich, Technical University of Munich
Research Area: Artificial Intelligence, AI Ethics, AI Alignment, Political Science, Computational Social Science
Discipline: Computer Science, Natural Language Processing (NLP)
This paper examines how large language models’ (LLMs) political outputs shift when the models are explicitly primed with different moral values. Rather than simply assigning personas (e.g., “pretend to be liberal”), the authors condition models to endorse or reject specific moral values (e.g., utilitarianism, fairness, authority), then measure how those moral primes move the models’ positions in...
DOI: https://doi.org/10.48550/arXiv.2601.08634
-
Authors: D Guilbeault, S Delecourt, BS Desikan
Year: 2025
Published in: Nature, 2025
Institution: Stanford University, University of California Berkeley, University of Oxford
Research Area: AI Bias, Media Representation, Social Science
Discipline: Computational Social Science, Artificial Intelligence
The study highlights age-related gender bias in online media and language models, showing women are portrayed as younger than men, especially in high-status occupations, and explores how algorithms amplify these biases.
Methods: Analysis of 1.4 million images and videos from online sources and nine language models, followed by a pre-registered experiment involving participants to evaluate biases in internet content and algorithms.
Key Findings: Women are consistently depicted as younger than men in occupational imagery across online platforms and language models, especially in high-status occupations, and this bias shapes viewers' beliefs and hiring preferences.
Citations: 4
Sample Size: 459
-
Authors: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones
Year: 2025
Published in: ArXiv
Institution: University of Zurich
Research Area: AI Ethics, AI Bias, News Literacy, Critical Thinking, Computational Social Science
Discipline: Computational Social Science
The study explores leveraging inherent biases in AI to enhance critical thinking in news consumption, proposing strategies such as bias awareness, personalization, and gradual introduction of diverse perspectives.
Methods: Qualitative user study investigating user responses to personalized AI-driven propaganda detection tools.
Key Findings: AI bias-based strategies, such as bias awareness, personalization, and gradual introduction of diverse perspectives, can improve critical thinking and news readers' engagement with diverse viewpoints.
-
Authors: B Lebrun, S Temtsin, A Vonasch
Year: 2024
Published in: Frontiers in Robotics and AI, 2024
Institution: University of Lausanne, University of California Berkeley, University of Massachusetts Amherst, Arizona State University
Research Area: AI in Social Science Research, Survey Methodology, Data Quality
Discipline: Artificial Intelligence
The study examines the integrity of online questionnaire responses and concludes that humans can identify AI-generated text with 76% accuracy, but current AI detection systems are ineffective, raising concerns about data quality in online surveys.
Methods: Human participants and automatic AI detection systems were tested on their ability to differentiate AI-generated text from human-generated text in the context of online questionnaires.
Key Findings: Humans identified AI-generated questionnaire responses with 76% accuracy, while automatic AI detection systems performed poorly, raising concerns about data quality in online surveys.
DOI: https://doi.org/10.3389/frobt.2023.1277635
Citations: 26
-
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Year: 2024
Published in: arXiv preprint arXiv ..., 2024
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
The persuasiveness of messages generated by large language models follows a log scaling law, with diminishing returns as model size increases; task completion appears to be the primary driver of this capability.
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: The persuasiveness of LLM-generated political messages increases with model size following a log scaling law, with diminishing returns at larger scales.
Citations: 17
Sample Size: 25,982
-
Authors: L Munn, L Magee
Year: 2024
Published in: ArXiv
Institution: University of Illinois Urbana Champaign, University of Queensland
Research Area: AI and Economic Futures, Cybernetics, Social Science
Discipline: Artificial Intelligence, Economics
-
Authors: X Qin, M Huang, J Ding
Year: 2022
Published in: Available at SSRN 4922861, 2024
Institution: Peking University, Tsinghua University, Nankai University
Research Area: AI Social Science, LLM Simulation of Human Behavior, AI Simulation
Discipline: Social Science Research
DOI: https://ssrn.com/abstract=4922861
Citations: 17