Discover 19 peer-reviewed studies in AI Bias (2022–2026). Explore research findings powered by Prolific's diverse participant panel.
This page lists 19 peer-reviewed papers in the research area of AI Bias in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: M Raj, JM Berg, R Seamans
Year: 2026
Published in: Journal of Experimental Psychology …, 2026 - psycnet.apa.org
Institution: New York University, University of Michigan, Wharton
Research Area: Disclosure psychology, Biases in human–machine evaluation, AI Biases
Discipline: Experimental psychology
This paper sits at the intersection of experimental psychology, social cognition, and consumer judgment. It examines how AI disclosure triggers a persistent authenticity-based bias against creative work, revealing a robust form of algorithmic aversion in symbolic and expressive domains.
DOI: https://doi.org/10.1037/xge0001889
-
Authors: T Kosch, R Welsch, L Chuang, A Schmidt
Year: 2023
Published in: ACM Transactions on ..., 2023 - dl.acm.org
Institution: Aalto University
Research Area: User Expectations, HCI Research Bias, Artificial Intelligence, AI Bias
Discipline: Human-Computer Interaction (HCI)
The belief in receiving adaptive AI support positively impacts user performance, demonstrating a placebo effect in Human-Computer Interaction.
Methods: Two experiments where participants completed word puzzles under conditions with or without supposed AI support; in reality, no AI assistance was provided.
Key Findings: Impact of perceived AI support on user expectations and task performance.
DOI: https://doi.org/10.1145/3529225
Citations: 149
Sample Size: 469
-
Authors: U Messer
Year: 2025
Published in: Computers in Human Behavior: Artificial Humans, 2025 - Elsevier
Institution: Universität der Bundeswehr München
Research Area: Political Bias in Generative AI, Human-AI Interaction, Affective Computing, AI Bias
Discipline: Computer Science, Human-AI Interaction
People's acceptance and reliance on Generative AI (GAI) increase when they perceive alignment between their political orientation and the bias of GAI-generated content, leading to expanded trust in sensitive applications.
Methods: Three experiments analyzing behavioral reactions to politically biased content generated by GAI, including the impact of perceived alignment on acceptance and trust.
Key Findings: Participants' acceptance, reliance, and trust in GAI based on perceived alignment between political bias of GAI-generated content and their own political beliefs.
DOI: https://doi.org/10.1016/j.chbah.2024.100108
Citations: 24
Sample Size: 513
-
Authors: D Guilbeault, S Delecourt, BS Desikan
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Stanford University, University of California Berkeley, University of Oxford
Research Area: AI Bias, Media Representation, Social Science
Discipline: Computational Social Science, Artificial Intelligence
The study highlights age-related gender bias in online media and language models, showing women are portrayed as younger than men, especially in high-status occupations, and explores how algorithms amplify these biases.
Methods: Analysis of 1.4 million images and videos from online sources and nine language models, followed by a pre-registered experiment involving participants to evaluate biases in internet content and algorithms.
Key Findings: Age and gender bias in occupational depiction across online platforms and language models, as well as its influence on beliefs and hiring preferences.
Citations: 4
Sample Size: 459
-
Authors: C Qian, AT Parisi, C Bouleau, V Tsai
Year: 2025
Published in: Proceedings of the ..., 2025 - aclanthology.org
Institution: Google, Google DeepMind
Research Area: Human-AI Alignment, Collective Reasoning, Social Biases, LLM Simulation of Human Behavior, AI Bias
Discipline: Natural Language Processing, Artificial Intelligence, Computational Social Science
This study examines human-AI alignment in collective reasoning using an empirical framework, demonstrating how LLMs either mirror or mask human biases depending on context, cues, and model-specific inductive biases.
Methods: The study uses the Lost at Sea social psychology task in a large-scale online experiment, simulating LLM groups conditioned on human decision-making data across varying conditions of visible or pseudonymous demographics.
Key Findings: Alignment of LLM behavior with human social reasoning, focusing on collective decision-making and biases in group interactions.
Citations: 1
Sample Size: 748
-
Authors: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones
Year: 2025
Published in: arXiv
Institution: University of Zurich
Research Area: AI Ethics, AI Bias, News Literacy, Critical Thinking, Computational Social Science
Discipline: Computational Social Science
The study explores leveraging inherent biases in AI to enhance critical thinking in news consumption, proposing strategies such as bias awareness, personalization, and gradual introduction of diverse perspectives.
Methods: Qualitative user study investigating user responses to personalized AI-driven propaganda detection tools.
Key Findings: The effectiveness of AI bias-based strategies in improving critical thinking and news readers’ engagement with diverse perspectives.
-
Authors: M Zhuang, E Deschrijver, R Ramsey, O Turel
Year: 2025
Published in: Scientific Reports, 2025 - nature.com
Institution: Monash University, The University of Melbourne, KU Leuven, California State University Fullerton
Research Area: Human-AI Interaction, Social Bias, Decision-Making
Discipline: Social Science, Human-AI Interaction
The study found that humans exhibit similar discriminatory behavior toward both AI and human agents, with resource allocation being influenced more by decision alignment than the recipient's identity.
Methods: A preregistered experiment was conducted where participants distributed resources between themselves and either human or AI agents based on dot estimation decisions.
Key Findings: Discriminatory behavior and resource allocation preferences toward AI and human agents as influenced by decision congruency.
DOI: https://doi.org/10.1038/s41598-025-94631-9
Sample Size: 500
-
Authors: T Davidson
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: University of Oxford, Davidson College
Research Area: Hate Speech Evaluation, Multimodal LLMs, Social Bias, Computational Law, AI Bias, AI Evaluation
Discipline: Artificial Intelligence
The study demonstrates that larger multimodal large language models (MLLMs) can align closely with human judgement in context-sensitive hate speech evaluations, though they still exhibit biases and limitations.
Methods: Conjoint experiments where simulated social media posts varying in attributes like slur usage and user demographics were evaluated by MLLMs and compared to human judgements.
Key Findings: The capacity of MLLMs to evaluate hate speech in a context-sensitive manner and their alignment with human judgement, while assessing biases and responsiveness to contextual cues.
Sample Size: 1854
-
Authors: D Guilbeault, S Delecourt, T Hull, BS Desikan, M Chu
Year: 2024
Published in: Nature, 2024 - nature.com
Institution: University of California Berkeley, Institute For Public Policy Research, Columbia University, University of Southern California
Research Area: Gender Bias, Computational Social Science, Online Media, AI Bias
Discipline: Computational Social Science
Online images significantly amplify gender bias compared to text, with biases in visual content impacting societal beliefs about gender roles.
Methods: Analyzed 3,495 social categories using over one million images from platforms like Google, Wikipedia, and IMDb, compared visual content to billions of words from the same platforms, and conducted a preregistered national experiment to assess the psychological impact on participants' beliefs.
Key Findings: The prevalence and psychological impact of gender bias in online images compared to text, including gender associations and representation disparities.
DOI: https://doi.org/10.1038/s41586-024-07068-x
Citations: 72
Sample Size: 3495
-
Authors: T Eloundou, A Beutel, DG Robinson
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: OpenAI, Google DeepMind, Google, University of Oxford
Research Area: Fairness in LLM, AI Bias, AI Ethics
Discipline: Artificial Intelligence, Social Science
The paper introduces a counterfactual approach to evaluate 'first-person fairness' in chatbots, demonstrating that reinforcement learning can mitigate biases based on demographics across extensive chatbot interactions.
Methods: The study uses a Language Model as a Research Assistant (LMRA) to quantitatively and qualitatively assess biases based on demographics across millions of chatbot interactions, covering 66 tasks in 9 domains and involving two genders and four races. Bias evaluations are corroborated by independent...
Key Findings: Demographic biases in chatbot responses, including harmful stereotypes and response differences by gender and race, across diverse tasks and domains.
DOI: https://doi.org/10.48550/arXiv.2410.19803
Citations: 33
Sample Size: 6000000
-
Authors: V Cheung, M Maier, F Lieder
Year: 2024
Published in: PsyArXiv preprint, 2024 - files.osf.io
Institution: University College London
Research Area: AI Ethics, Moral Decision-Making, Cognitive Biases in LLMs, AI Bias
Discipline: Artificial Intelligence, Ethics
Citations: 11
-
Authors: N Meister
Year: 2024
Published in: arXiv
Institution: Stanford University
Research Area: Distributional Alignment of LLMs, LLM Benchmarking, AI Robustness, AI Fairness, AI Bias
Discipline: Artificial Intelligence
-
Authors: A Bashkirova, D Krpan
Year: 2024
Published in: ScienceDirect
Institution: London School of Economics and Political Science
Research Area: AI-assisted Decision Making, Confirmation Bias, Professional Trust, Psychology, AI Bias
Discipline: Behavioral Science, Psychology
-
Authors: Md. Khairul Islam, Andrew Wang, Tianhao Wang, Yangfeng Ji, Judy Fox, Jieyu Zhao
Year: 2024
Published in: arXiv
Institution: University of Virginia
Research Area: Differential Privacy, Bias Mitigation, LLM, Natural Language Processing (NLP), AI Bias
Discipline: Artificial Intelligence, Natural Language Processing
-
Authors: C Ravulu, R Sarabu, M Suryadevara
Year: 2024
Published in: ... Conference on AI x ..., 2024 - ieeexplore.ieee.org
Institution: International Institute of Information Technology, University of California Santa Cruz, University of South Carolina Aiken
Research Area: Reinforcement Learning from Human Feedback (RLHF), Bias Mitigation, LLM, AI Bias
Discipline: Artificial Intelligence
Link: https://ieeexplore.ieee.org/abstract/document/10990073/
-
Authors: Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé III
Year: 2024
Published in: arXiv
Institution: Microsoft Research, University of Maryland
Research Area: Multilingual Bias, Social Science, LLM, AI Bias
Discipline: Artificial Intelligence, Social Science, Large Language Models
-
Authors: Eunhae Lee, Pat Pataranutaporn, Judith Amores, Pattie Maes
Year: 2024
Published in: arXiv
Institution: Massachusetts Institute of Technology, Microsoft Research, MIT Media Lab
Research Area: Human-AI Interaction, Cognitive Biases, Psychological Factors in AI Adoption, Trust in AI, AI Credibility
Discipline: Psychology, Artificial Intelligence
-
Authors: Matthew J.A. Craig, Mina Choi
Year: 2023
Published in: ScienceDirect
Institution: Kent State University, Sejong University
Research Area: Media Studies, Social Psychology, Cognitive Bias, AI and Communication, AI Bias
Discipline: Social Science, Media Studies, Human-AI Interaction
-
Authors: B Liefooghe, M Oliveira, LM Leisten, E Hoogers
Year: 2022
Published in: PsyArXiv preprint, 2022 - research-portal.uu.nl
Institution: Utrecht University
Research Area: Trust in AI, Synthetic Media, Perceived Bias, AI Bias
Discipline: Artificial Intelligence, Psychology
DOI: https://doi.org/10.31234/osf.io/te2ju
Citations: 5