Authors: G Lima, N Grgić-Hlača, M Langer, Y Zou
Year: 2025
Published in: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (ACM)
Institution: University of Maryland, Max Planck Institute, Stanford University, Cornell University
Research Area: Algorithmic Fairness, Systemic Injustice, Social Perception of AI, Algorithmic Discrimination
Discipline: Computational Social Science
The study examines how contextualizing algorithms within systemic injustice shapes perceptions of algorithmic discrimination, finding that effects differ by participants' group identity and revealing unintended consequences of such contextualization.
Methods: 2×3 between-participants experiment using hiring as a case study; examined how information about systemic injustice and about algorithms' perpetuation of bias influences lay perceptions.
Key Findings: Framing algorithms within systemic injustice and explaining their perpetuation of bias shaped participants' views of algorithmic fairness and discrimination, with effects varying by group identity.
DOI: 10.1145/3706598.3713536
Citations: 2
Sample Size: 716
Authors: Y Yin, N Jia, CJ Wakslak
Year: 2024
Published in: Proceedings of the National Academy of Sciences (PNAS)
Institution: University of Southern California
Research Area: Human-AI Interaction, Social Perception of AI, Media Effects
Discipline: Social Sciences
AI-generated responses made recipients feel more heard and provided better emotional support than human responses, but labeling a response as AI-generated diminished this effect.
Methods: Experiment with a follow-up study comparing recipients' reactions to AI- versus human-generated responses and assessing the efficacy of each as emotional support.
Key Findings: Measured how heard recipients felt, accuracy of emotion detection, and third-party ratings of emotional support quality; AI responses outperformed human responses on these measures unless labeled as AI-generated.
DOI: 10.1073/pnas.2319112121
Citations: 201