Discover 25 peer-reviewed studies in Decision Making (2023–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 25 peer-reviewed papers in the research area of Decision Making in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2022
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: https://doi.org/10.1145/3551624.3555306
Citations: 62
-
Authors: JY Bo, S Wan, A Anderson
Year: 2025
Published in: Proceedings of the 2025 CHI Conference ..., 2025 - dl.acm.org
Institution: University of Toronto
Research Area: Appropriate Reliance on LLMs, Human-Computer Interaction (HCI), AI-Assisted Decision Making
Discipline: Human-Computer Interaction (HCI)
This study examines appropriate reliance on large language models (LLMs) in AI-assisted decision making, investigating how users calibrate their trust in and reliance on LLM responses.
Citations: 25
-
Authors: JQ Zhu, JC Peterson, B Enke, TL Griffiths
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: Princeton University, Boston University, Harvard University
Research Area: Strategic decision-making, Machine learning, Computational Cognitive Science
Discipline: Artificial Intelligence
This study used deep neural networks to analyze human strategic decision-making, predicting choices more accurately than existing theories and uncovering the context-dependent nature of reasoning and decision-making in complex games.
Methods: Deep neural networks trained on data from procedurally generated matrix games with over 2,400 variations; models were modified for interpretability.
Key Findings: Human choices and reasoning in initial play of two-player matrix games, focusing on strategic decision-making and response to game complexity.
DOI: https://doi.org/10.1038/s41562-025-02230-5
Citations: 16
Sample Size: 90000
-
Authors: S de Jong, V Paananen, B Tag, N van Berkel
Year: 2025
Published in: Proceedings of the ACM on ..., 2025 - dl.acm.org
Institution: Monash University, Aalborg University
Research Area: Cognitive Forcing, Human-AI Interaction, AI Explainability (XAI), Decision-Making in AI Systems
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
Partial explanations encourage critical thinking and reduce user overreliance on incorrect AI suggestions, with performance varying based on individual need for cognition and task difficulty.
Methods: Two experiments were conducted: (1) participants identified shortest paths in weighted graphs, and (2) participants corrected spelling and grammar errors in text, with AI suggestions accompanied by no, partial, or full explanations.
Key Findings: Effectiveness of partial explanations in reducing overreliance on incorrect AI suggestions, and interaction of explanation type with task difficulty and user need for cognition.
DOI: https://doi.org/10.1145/3710946
Citations: 14
Sample Size: 474
-
Authors: P Spitzer, K Morrison, V Turri, M Feng, A Perer
Year: 2025
Published in: ACM Transactions on ..., 2025 - dl.acm.org
Institution: Carnegie Mellon University, Karlsruhe Institute of Technology, University of Bayreuth
Research Area: Explainable AI (XAI), AI-Assisted Decision-Making, Human-AI Collaboration
Discipline: Artificial Intelligence
The study highlights how imperfect explainable AI (XAI), along with human cognitive styles, affects reliance on AI and the performance of human–AI teams, providing design guidelines for better collaboration systems.
Methods: The researchers conducted a study with 136 participants, analyzing the effects of explanation imperfections and cognitive styles on AI-assisted decision-making and human–AI collaboration.
Key Findings: The impact of incorrect explanations and explanation modalities on human reliance, decision-making, and human–AI team performance, as well as the role of cognitive styles.
Citations: 2
Sample Size: 136
-
Authors: J Zhou, R Aloufi, N van Zalk
Year: 2025
Published in: 38th International BCS Human ..., 2025 - scienceopen.com
Institution: Not available
Research Area: High-Stakes Decision-Making, Explainable AI, User Trust, Human-Centered AI, Interaction Design
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
This study explores how human collaboration and communication dynamics vary when interacting with an AI chatbot versus a human partner in a high-stakes decision-making task.
Methods: One-way between-subjects design using the NASA Moon Survival Task to compare behaviors, linguistic coordination, and perceptions in interactions with AI or human partners.
Key Findings: Collaboration processes, communicative dynamics, outcomes, retrospective interaction experience, partner perception, and linguistic coordination, with user profiling for AI benefit variations.
DOI: https://doi.org/10.14236/ewic/BCSHCI2025.52
Citations: 1
-
Authors: M Zhuang, E Deschrijver, R Ramsey, O Turel
Year: 2025
Published in: Scientific Reports, 2025 - nature.com
Institution: Monash University, The University of Melbourne, KU Leuven, California State University Fullerton
Research Area: Human-AI Interaction, Social Bias, Decision-Making
Discipline: Social Science, Human-AI Interaction
The study found that humans exhibit similar discriminatory behavior toward both AI and human agents, with resource allocation being influenced more by decision alignment than the recipient's identity.
Methods: A preregistered experiment was conducted where participants distributed resources between themselves and either human or AI agents based on dot estimation decisions.
Key Findings: Discriminatory behavior and resource allocation preferences toward AI and human agents as influenced by decision congruency.
DOI: https://doi.org/10.1038/s41598-025-94631-9
Sample Size: 500
-
Authors: Mukund Telukunta, Venkata Sriram Siddhardh Nadendla, Morgan Stuart, Casey Canfield
Year: 2025
Published in: ArXiv
Institution: Missouri University of Science and Technology, United Network for Organ Sharing
Research Area: Algorithmic Fairness, Healthcare AI, Decision-Making
Discipline: Artificial Intelligence
The study investigates fairness in regression-based predictive models for kidney transplantation, introducing three group fairness notions and eliciting social preferences for fairness criteria; crowd feedback revealed bias against certain age groups but fairness toward gender and race groups.
Methods: Three novel fairness notions (independence, separation, sufficiency) were introduced alongside crowd feedback analysis through a Mixed-Logit discrete choice model.
Key Findings: Fairness in regression-based predictive analytics regarding group fairness criteria across social dimensions such as age, gender, and race.
Sample Size: 85
-
Authors: K Vodrahalli, R Daneshjou, T Gerstenberg
Year: 2022
Published in: Proceedings of the 2022 ..., 2022 - dl.acm.org
Institution: Stanford University, Massachusetts Institute of Technology
Research Area: Trust in AI, Human-AI Interaction, Decision Making
Discipline: Human-AI Interaction, Decision Science
Humans' trust in AI advice is influenced by their beliefs about AI performance, and once they accept AI advice, they treat it similarly to advice from human peers.
Methods: Crowdworkers participated in several experimental settings evaluating how they respond to AI versus human suggestions; user behavior was characterized with a proposed activation-integration model.
Key Findings: The influence of AI advice compared to human advice on decision-making and the behavioral factors affecting the use of such advice.
DOI: https://doi.org/10.1145/3514094.3534150
Citations: 99
Sample Size: 1100
-
Authors: V Robbemond, O Inel, U Gadiraju
Year: 2022
Published in: ... of the 30th ACM Conference on ..., 2022 - dl.acm.org
Institution: Delft University of Technology
Research Area: Explanation Modality in AI-assisted Decision Making
Discipline: Human-Computer Interaction (HCI)
The study explores the role of explanation modalities in AI-assisted credibility assessment tasks, finding that combined modalities (text and/or audio with graphics) enhance accuracy, trust, and usability compared to single-modality approaches.
Methods: A between-subjects experiment was conducted with six explanation modalities to evaluate their influence on user performance, trust, and usability in credibility assessments.
Key Findings: The effects of different explanation modalities on decision accuracy, system trust, and usability in an AI-assisted credibility assessment system.
DOI: https://doi.org/10.1145/3503252.3531311
Citations: 47
Sample Size: 375
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2024
Published in: Proceedings of the National Academy of ..., 2024 - pnas.org
Institution: Massachusetts Institute of Technology, Yale University, Washington University in St. Louis
Research Area: AI Ethics, Behavioral Economics, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
People alter their behavior when they know their actions will train AI, leading to unintentional habits and biased training data for AI systems.
Methods: Five studies were conducted using the ultimatum game; participants decided on monetary splits proposed by either humans or AI, with some informed that their decisions would train the AI.
Key Findings: Behavioral changes in participants when training AI, persistence of these changes over time, and implications for AI training bias.
DOI: https://doi.org/10.1073/pnas.2408731121
Citations: 13
-
Authors: V Cheung, M Maier, F Lieder
Year: 2024
Published in: PsyArXiv preprint, 2024 - files.osf.io
Institution: University College London
Research Area: AI Ethics, Moral Decision-Making, Cognitive Biases in LLMs, AI Bias
Discipline: Artificial Intelligence, Ethics
Citations: 11
-
Authors: Z Li, M Yin
Year: 2024
Published in: Advances in Neural Information Processing ..., 2024 - proceedings.neurips.cc
Institution: Purdue University
Research Area: Human Behavior Modeling, Explainable AI, Decision Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
DOI: https://doi.org/10.52202/079017-0163
Citations: 7
-
Authors: P Schoenegger, P Park, E Karger, P Tetlock
Year: 2024
Published in: ArXiv
Institution: Federal Reserve Bank of Chicago, London School of Economics and Political Science, Massachusetts Institute of Technology, University of Pennsylvania
Research Area: LLM Assistants, Human Forecasting, Predictive Modeling, AI-Augmented Decision Making, LLM
Discipline: Artificial Intelligence, Behavioral Science
-
Authors: G Hay, Beatrice Korwisi, Norman Lahme-Hütig, Winfried Rief, Antonia Barke
Year: 2024
Published in: Wiley
Institution: Marburg University, Münster School of Business, University of Duisburg-Essen
Research Area: ICD-11 Chronic Pain Classification, Clinical Diagnosis, Algorithmic Decision-Making
Discipline: Health, Medicine, Artificial Intelligence
-
Authors: P Hemmer, M Schemmer, N Kühl, M Vössing, G Satzger
Year: 2024
Published in: ArXiv
Institution: Karlsruhe Institute of Technology
Research Area: Human-AI Collaboration, Explainable AI (XAI), Complementarity in Decision Making
Discipline: Human-Computer Interaction (HCI)
-
Authors: A Bashkirova, D Krpan
Year: 2024
Published in: ScienceDirect
Institution: London School of Economics and Political Science
Research Area: AI-assisted Decision Making, Confirmation Bias, Professional Trust, Psychology, AI Bias
Discipline: Behavioral Science, Psychology
-
Authors: JC Cresswell, Y Sui, B Kumar, N Vouitsis
Year: 2024
Published in: ArXiv
Institution: Layer6
Research Area: Human-AI Decision Making, Conformal Prediction, Trust in AI
Discipline: Artificial Intelligence
-
Authors: Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, Urs Fischbacher
Year: 2024
Published in: ArXiv
Institution: University of Konstanz
Research Area: Generative AI and Human Decision-Making, Behavioral Economics
Discipline: Artificial Intelligence, Behavioral Economics
-
Authors: H Vasconcelos, M Jörke
Year: 2023
Published in: Proceedings of the ..., 2023 - dl.acm.org
Institution: Stanford University, University of Washington
Research Area: Human-AI Interaction, Explainable AI (XAI), Decision-Making
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
DOI: https://doi.org/10.1145/3579605
Citations: 405