Discover 5 peer-reviewed studies in Fairness in AI (2022–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 5 peer-reviewed papers in the Fairness in AI research area of the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2022
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience shape perceptions of fairness in algorithmic decision-making, using bail decisions as a case study, and highlights the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: 10.1145/3551624.3555306
Citations: 62
-
Authors: G Lima, N Grgić-Hlača, M Langer, Y Zou
Year: 2025
Published in: Proceedings of the 2025 CHI ..., 2025 - dl.acm.org
Institution: University of Maryland, Max Planck Institute, Stanford University, Cornell University
Research Area: Algorithmic Fairness, Systemic Injustice, Social Perception of AI, Algorithmic Discrimination
Discipline: Computational Social Science
The study examines how framing algorithms within the context of systemic injustice affects perceptions of algorithmic discrimination, finding that effects differ by participant group identity and that such contextualization can have unintended consequences.
Methods: 2x3 between-participants experiment using the hiring context as a case-study; examined the influence of systemic injustice information and algorithms' bias perpetuation on lay perceptions.
Key Findings: Impact of systemic injustice framing and explanation of algorithmic bias perpetuation on participants' views of algorithmic fairness and discrimination.
DOI: 10.1145/3706598.3713536
Citations: 2
Sample Size: 716
-
Authors: T Eloundou, A Beutel, DG Robinson
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: OpenAI, Google DeepMind, Google, University of Oxford
Research Area: Fairness in LLM, AI Bias, AI Ethics
Discipline: Artificial Intelligence, Social Science
The paper introduces a counterfactual approach to evaluating 'first-person fairness' in chatbots, and demonstrates that reinforcement learning can mitigate demographic biases across large volumes of chatbot interactions.
Methods: The study uses a Language Model as a Research Assistant (LMRA) to quantitatively and qualitatively assess biases based on demographics across millions of chatbot interactions, covering 66 tasks in 9 domains and involving two genders and four races. Bias evaluations are corroborated by independent...
Key Findings: Demographic biases in chatbot responses, including harmful stereotypes and response differences by gender and race, across diverse tasks and domains.
DOI: 10.48550/arXiv.2410.19803
Citations: 33
Sample Size: 6,000,000
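The counterfactual evaluation idea described above can be illustrated with a minimal sketch: build prompt pairs that differ only in a demographic-coded user name, then check whether a model's responses diverge. This is not the paper's implementation; `fake_chatbot` is a stand-in for a real LLM call, and the template and names are illustrative assumptions.

```python
# Minimal sketch of a counterfactual 'first-person fairness' check:
# two prompts identical except for the user's name, compared for
# differing model responses. All names/functions here are illustrative.

def make_counterfactual_pair(template, name_a, name_b):
    """Return two prompts that differ only in the inserted name."""
    return template.format(name=name_a), template.format(name=name_b)

def fake_chatbot(prompt):
    # Placeholder model: in a real evaluation this would be an LLM call.
    # Here it just reports the prompt length as a mock "response".
    return f"response ({len(prompt)} chars)"

def response_differs(template, name_a, name_b, model=fake_chatbot):
    """True if the model answers the two counterfactual prompts differently."""
    prompt_a, prompt_b = make_counterfactual_pair(template, name_a, name_b)
    return model(prompt_a) != model(prompt_b)

template = "My name is {name}. Suggest a career path for me."
print(response_differs(template, "Aisha", "John"))
```

In the paper's setup, a separate language model (the LMRA) rates the paired responses for quality and stereotype differences rather than comparing raw strings, but the name-swap structure is the same.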
-
Authors: M Lerner, F Dorner, E Ash, N Goel
Year: 2024
Published in: ... of the 62nd Annual Meeting of ..., 2024 - aclanthology.org
Institution: ETH Zürich, University of Oxford
Research Area: Fairness in AI, Content Moderation, Human-AI Alignment
Discipline: Computational Social Science
Citations: 5
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2023
Published in: Proceedings of the AAAI Conference on ..., 2023 - ojs.aaai.org
Institution: Massachusetts Institute of Technology, Yale University
Research Area: AI Ethics, Human-AI Interaction, Fairness in Machine Learning Training
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence Ethics
DOI: 10.1609/hcomp.v11i1.27556
Citations: 6