Discover 7 peer-reviewed studies in Social Bias (2023–2025), with research findings powered by Prolific's diverse participant panel.
This page lists the 7 peer-reviewed papers in the Social Bias research area of the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: C Qian, AT Parisi, C Bouleau, V Tsai
Year: 2025
Published in: Proceedings of the ... (ACL Anthology)
Institution: Google, Google DeepMind
Research Area: Human-AI Alignment, Collective Reasoning, Social Biases, LLM Simulation of Human Behavior, AI Bias
Discipline: Natural Language Processing, Artificial Intelligence, Computational Social Science
This study examines human-AI alignment in collective reasoning using an empirical framework, demonstrating how LLMs either mirror or mask human biases depending on context, cues, and model-specific inductive biases.
Methods: The study uses the Lost at Sea social psychology task in a large-scale online experiment, simulating LLM groups conditioned on human decision-making data across varying conditions of visible or pseudonymous demographics.
Key Findings: Alignment of LLM behavior with human social reasoning, focusing on collective decision-making and biases in group interactions.
Citations: 1
Sample Size: 748
-
Authors: M Zhuang, E Deschrijver, R Ramsey, O Turel
Year: 2025
Published in: Scientific Reports
Institution: Monash University, The University of Melbourne, KU Leuven, California State University Fullerton
Research Area: Human-AI Interaction, Social Bias, Decision-Making
Discipline: Social Science, Human-AI Interaction
The study found that humans exhibit similar discriminatory behavior toward both AI and human agents, with resource allocation being influenced more by decision alignment than the recipient's identity.
Methods: A preregistered experiment was conducted where participants distributed resources between themselves and either human or AI agents based on dot estimation decisions.
Key Findings: Discriminatory behavior and resource allocation preferences toward AI and human agents as influenced by decision congruency.
DOI: https://doi.org/10.1038/s41598-025-94631-9
Sample Size: 500
-
Authors: T Davidson
Year: 2025
Published in: Nature Human Behaviour
Institution: University of Oxford, Davidson College
Research Area: Hate Speech Evaluation, Multimodal LLMs, Social Bias, Computational Law, AI Bias, AI Evaluation
Discipline: Artificial Intelligence
The study demonstrates that larger multimodal large language models (MLLMs) can align closely with human judgement in context-sensitive hate speech evaluations, though they still exhibit biases and limitations.
Methods: Conjoint experiments where simulated social media posts varying in attributes like slur usage and user demographics were evaluated by MLLMs and compared to human judgements.
Key Findings: The capacity of MLLMs to evaluate hate speech in a context-sensitive manner and their alignment with human judgement, while assessing biases and responsiveness to contextual cues.
Sample Size: 1854
-
Authors: D Guilbeault, S Delecourt, T Hull, BS Desikan, M Chu
Year: 2024
Published in: Nature
Institution: University of California, Berkeley, Institute for Public Policy Research, Columbia University, University of Southern California
Research Area: Gender Bias, Computational Social Science, Online Media, AI Bias
Discipline: Computational Social Science
Online images significantly amplify gender bias compared to text, with biases in visual content impacting societal beliefs about gender roles.
Methods: Analyzed 3,495 social categories using over one million images from platforms like Google, Wikipedia, and IMDb, compared visual content to billions of words from the same platforms, and conducted a preregistered national experiment to assess the psychological impact on participants' beliefs.
Key Findings: The prevalence and psychological impact of gender bias in online images compared to text, including gender associations and representation disparities.
DOI: https://doi.org/10.1038/s41586-024-07068-x
Citations: 72
Sample Size: 3495
-
Authors: Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé III
Year: 2024
Published in: arXiv
Institution: Microsoft Research, University of Maryland
Research Area: Multilingual Bias, Social Science, LLM, AI Bias
Discipline: Artificial Intelligence, Social Science, Large Language Models
-
Authors: Yi-Cheng Lin, Wei-Chih Chen, Hung-yi Lee
Year: 2024
Published in: arXiv
Institution: National Taiwan University
Research Area: Speech LLM, Social Bias, Evaluation
Discipline: Artificial Intelligence
-
Authors: Matthew J.A. Craig, Mina Choi
Year: 2023
Published in: ScienceDirect
Institution: Kent State University, Sejong University
Research Area: Media Studies, Social Psychology, Cognitive Bias, AI and Communication, AI Bias
Discipline: Social Science, Media Studies, Human-AI Interaction