Discover 14 peer-reviewed studies in Trust in AI (2021–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 14 peer-reviewed papers in the research area of Trust in AI in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
Users often overestimate the accuracy of LLM responses, especially when explanations are longer; aligning explanation style with the model's internal confidence narrows the calibration and discrimination gaps, improving trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
-
Authors: C Chen, Z Cui
Year: 2025
Published in: Journal of Medical Internet Research, 2025 - jmir.org
Institution: Medical College of Wisconsin
Research Area: Trust in AI, AI-assisted diagnosis, Health communication, Healthcare human-AI interaction
Discipline: Digital Health, Human-Computer Interaction (HCI), Behavioral Science
Patients place more trust in, and are more likely to seek help from, doctors who explicitly avoid AI-assisted diagnosis than in those who use AI extensively or moderately, highlighting a strong aversion to AI in healthcare settings.
Methods: A randomized, web-based 4-group survey experiment was conducted with controls for sociodemographic factors and analysis using regression, mediation, and moderation techniques.
Key Findings: Trust in and intention to seek medical help from health care professionals using AI-assisted diagnosis versus those avoiding AI, and the influence of demographic, social, and experiential factors.
DOI: https://doi.org/10.2196/66083
Citations: 4
Sample Size: 1762
-
Authors: KO Alberts, AD Castel
Year: 2025
Published in: Experimental Aging Research, 2025 - Taylor & Francis
Institution: University of California Los Angeles
Research Area: Cognitive Aging, Associative Memory, Trustworthiness of Artificial Faces, Human-AI Interaction, Psychology, Trust in AI
Discipline: Psychology, Psychobiology, Aging Research
Older adults rate artificial faces as being as trustworthy as real faces, whereas young adults rate artificial faces as less trustworthy; older adults also show no difference in memory accuracy between face types.
Methods: Participants viewed real and artificial faces associated with scam or neutral conditions, then rated trustworthiness and were tested on associative memory.
Key Findings: Associative memory and perceived trustworthiness of real and artificial faces across young and older adults.
Citations: 1
-
Authors: P Cooper, A Lim, J Irons, M McGrath, H Jarvis
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Microsoft Research, Massachusetts Institute of Technology, University of Washington
Research Area: Human-AI Interaction, Trust in AI
Discipline: Human-Computer Interaction (HCI)
Trust in AI dynamically influences users' reliance on AI advice during a deepfake detection task, with no significant impact observed from the timing of AI advice delivery.
Methods: Researchers conducted an online study with participants performing a deepfake detection task, comparing performance across conditions where AI advice was provided either concurrently with decisions or after an initial evaluation. Computational modeling was used to analyze trust dynamics.
Key Findings: Impact of AI advice and its timing on task performance, and the dynamic role of user trust in AI based on expectations of its ability.
DOI: https://doi.org/10.1145/3706599.3719870
Citations: 1
-
Authors: A Klingbeil, C Grützner, P Schreck
Year: 2024
Published in: Computers in Human Behavior, 2024 - Elsevier
Institution: University of Hohenheim
Research Area: Trust in AI, Overreliance on AI, Human-AI Interaction
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence, Behavioral Science
The study found that individuals tend to over-rely on AI-generated advice in uncertain situations, often to their own detriment and that of third parties, even when contextual information or their own judgment contradicts the advice.
Methods: A domain-independent, incentivized, interactive behavioral experiment was conducted to analyze user behavior in decision-making scenarios involving AI advice.
Key Findings: Extent and impact of user reliance on AI advice, including its effects on decision efficiency and outcomes for themselves and others.
DOI: https://doi.org/10.1016/j.chb.2024.108352
Citations: 247
-
Authors: K Vodrahalli, R Daneshjou, T Gerstenberg
Year: 2022
Published in: Proceedings of the 2022 ..., 2022 - dl.acm.org
Institution: Stanford University, Massachusetts Institute of Technology
Research Area: Trust in AI, Human-AI Interaction, Decision Making
Discipline: Human-AI Interaction, Decision Science
Humans' trust in AI advice is influenced by their beliefs about AI performance, and once they accept AI advice, they treat it similarly to advice from human peers.
Methods: Crowdworkers completed several experiments comparing responses to AI versus human suggestions; user behavior was characterized with a proposed activation-integration model.
Key Findings: The influence of AI advice compared to human advice on decision-making and the behavioral factors affecting the use of such advice.
DOI: https://doi.org/10.1145/3514094.3534150
Citations: 99
Sample Size: 1100
-
Authors: JC Cresswell, Y Sui, B Kumar, N Vouitsis
Year: 2024
Published in: arXiv
Institution: Layer6
Research Area: Human-AI Decision Making, Conformal Prediction, Trust in AI
Discipline: Artificial Intelligence
-
Authors: Eunhae Lee, Pat Pataranutaporn, Judith Amores, Pattie Maes
Year: 2024
Published in: arXiv
Institution: Massachusetts Institute of Technology, Microsoft Research, MIT Media Lab
Research Area: Human-AI Interaction, Cognitive Biases, Psychological Factors in AI Adoption, Trust in AI, AI Credibility
Discipline: Psychology, Artificial Intelligence
-
Authors: B Liefooghe, M Oliveira, LM Leisten
Year: 2023
Published in: Collabra ..., 2023 - online.ucpress.edu
Institution: Ghent University, ISCTE-IUL, CIS-IUL, Universidade de Lisboa
Research Area: AI Ethics, Trust in AI, Social Evaluation of AI, AI Evaluation
Discipline: Social Science, AI Ethics
DOI: https://doi.org/10.1525/collabra.73066
Citations: 14
-
Authors: A Küper, N Krämer
Year: 2024
Published in: International Journal of Human-Computer ..., 2025 - Taylor & Francis
Institution: Universitätsklinikum Giessen und Marburg
Research Area: Trust in AI, Psychological Factors, Human-Computer Interaction (HCI)
Discipline: Psychological Science, Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1080/10447318.2024.2348216
Citations: 54
-
Authors: B Liefooghe, M Oliveira, LM Leisten, E Hoogers
Year: 2022
Published in: 2022 - research-portal.uu.nl
Institution: Utrecht University
Research Area: Trust in AI, Synthetic Media, Perceived Bias, AI Bias
Discipline: Artificial Intelligence, Psychology
DOI: https://doi.org/10.31234/osf.io/te2ju
Citations: 5
-
Authors: M Van der Biest, S Verschooren, F Verbruggen
Year: 2022
Published in: PLOS ONE, 2025 - journals.plos.org
Institution: Ghent University
Research Area: Cognitive Psychology, Decision-Making, Trust in AI, Deep Fakes
Discipline: Psychology, Cognitive Science
Citations: 2
-
Authors: C Woodcock, B Mittelstadt, D Busbridge
Year: 2021
Published in: Journal of Medical Internet ..., 2021 - jmir.org
Institution: Oxford University, Alan Turing Institute, University of Edinburgh
Research Area: Health Informatics, Explainable AI (XAI), Trust in AI, Digital Health
Discipline: Digital Health
DOI: https://doi.org/10.2196/29386
Citations: 52
-
Authors: C Arnold, LZ Xu, K Saffarizadeh
Year: 2021
Published in: Behaviour & Information ..., 2025 - Taylor & Francis
Institution: Northwestern Mutual Data Science Institute, Marquette University
Research Area: Generative AI, Crowdfunding, Trust in AI, Human-Computer Interaction (HCI), Behavioral Science
Discipline: Human-Computer Interaction (HCI), Behavioral Science