Discover 20 peer-reviewed studies on Trust (2021–2025), with research findings powered by Prolific's diverse participant panel.
This page lists 20 peer-reviewed papers in the research area of Trust from the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
LLM-generated explanations often lead users to overestimate response accuracy, especially when explanations are longer; adjusting explanation style to align with the model's internal confidence narrows the calibration and discrimination gaps, improving trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: A calibration gap between human and model confidence, a discrimination gap in distinguishing correct from incorrect answers, and effects of explanation style and length on user trust.
Citations: 100
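The two gap measures named in this entry can be sketched in plain Python. This is a minimal illustration with toy data, not the authors' implementation; all variable names and numbers are hypothetical.

```python
# Toy sketch of the calibration gap and discrimination gap described
# above (illustrative data, not the authors' code).

def mean(xs):
    return sum(xs) / len(xs)

def calibration_gap(human_conf, model_conf):
    """Difference between mean human confidence in the model's answers
    and the model's own mean stated confidence."""
    return mean(human_conf) - mean(model_conf)

def discrimination_gap(conf, correct):
    """How much higher confidence is for correct answers than for
    incorrect ones; larger means better discrimination."""
    right = [c for c, ok in zip(conf, correct) if ok]
    wrong = [c for c, ok in zip(conf, correct) if not ok]
    return mean(right) - mean(wrong)

# Toy example: humans are overconfident relative to the model.
human = [0.9, 0.8, 0.85, 0.7]
model = [0.6, 0.7, 0.55, 0.5]
correct = [True, True, False, False]

cal = calibration_gap(human, model)      # positive => overestimation
dis = discrimination_gap(human, correct)
```

A positive calibration gap on real data would indicate the overestimation effect the study reports; shortening or confidence-aligning the explanations would be expected to shrink it.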
-
Authors: S Shekar, P Pataranutaporn, C Sarabu, GA Cecchi
Year: 2025
Published in: NEJM AI, 2025 - ai.nejm.org
Institution: MIT Media Lab, IBM Research, Stanford University, Massachusetts Institute of Technology
Research Area: AI Ethics, Healthcare, Patient Trust, Medical Misinformation
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI), AI Ethics
This paper reports a study by MIT researchers showing that patients trust AI-generated medical advice even when that advice is incorrect, raising concerns about misinformation in healthcare.
Citations: 19
-
Authors: C Chen, Z Cui
Year: 2025
Published in: Journal of Medical Internet Research, 2025 - jmir.org
Institution: Medical College of Wisconsin
Research Area: Trust in AI, AI-assisted diagnosis, Health communication, Healthcare human-AI interaction
Discipline: Digital Health, Human-Computer Interaction (HCI), Behavioral Science
Patients trust, and are more likely to seek help from, doctors who explicitly avoid AI-assisted diagnosis over those who use AI extensively or moderately, highlighting a strong aversion to AI in healthcare settings.
Methods: A randomized, web-based 4-group survey experiment was conducted with controls for sociodemographic factors and analysis using regression, mediation, and moderation techniques.
Key Findings: Trust in and intention to seek medical help from health care professionals using AI-assisted diagnosis versus those avoiding AI, and the influence of demographic, social, and experiential factors.
DOI: https://doi.org/10.2196/66083
Citations: 4
Sample Size: 1762
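The regression-plus-mediation analysis named in the Methods line can be illustrated with a minimal Baron–Kenny-style decomposition in stdlib Python. This is a sketch on toy data: the mediator and all values are hypothetical, and the authors' actual models additionally control for sociodemographic factors.

```python
# Sketch of a simple mediation decomposition:
#   X (AI-use condition) -> M (hypothetical mediator) -> Y (trust).
# Toy data; illustrative only, not the authors' analysis code.

def slope(x, y):
    """OLS slope for a single predictor."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def ols2(x, m, y):
    """OLS slopes (b_x, b_m) for y ~ x + m via the normal equations."""
    mx, mm, my = sum(x) / len(x), sum(m) / len(m), sum(y) / len(y)
    sxx = sum((a - mx) ** 2 for a in x)
    smm = sum((a - mm) ** 2 for a in m)
    sxm = sum((a - mx) * (b - mm) for a, b in zip(x, m))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    det = sxx * smm - sxm ** 2
    return (sxy * smm - smy * sxm) / det, (smy * sxx - sxy * sxm) / det

x = [0, 0, 0, 1, 1, 1, 0, 1]   # 0 = doctor avoids AI, 1 = doctor uses AI
m = [5, 4, 5, 2, 3, 2, 4, 3]   # hypothetical mediator rating
y = [6, 5, 6, 3, 3, 2, 5, 4]   # trust rating

total = slope(x, y)             # total effect c
a = slope(x, m)                 # path a: X -> M
b_direct, b = ols2(x, m, y)     # c' (direct effect) and path b: M -> Y
indirect = a * b                # mediated effect a*b
# For OLS, the decomposition total = direct + indirect holds exactly.
```

The negative total effect in the toy data mirrors the study's direction (AI use lowers trust), and the `indirect` term is the share of that effect carried by the mediator.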
-
Authors: KO Alberts, AD Castel
Year: 2025
Published in: Experimental Aging Research, 2025 - Taylor & Francis
Institution: University of California Los Angeles
Research Area: Cognitive Aging, Associative Memory, Trustworthiness of Artificial Faces, Human-AI Interaction, Psychology, Trust in AI
Discipline: Psychology, Psychobiology, Aging Research
Older adults perceive artificial faces as equally trustworthy as real faces, whereas young adults rate artificial faces as less trustworthy; older adults also show no difference in memory accuracy between face types.
Methods: Participants viewed real and artificial faces associated with scam or neutral conditions, then rated trustworthiness and were tested on associative memory.
Key Findings: Associative memory and perceived trustworthiness of real and artificial faces across young and older adults.
Citations: 1
-
Authors: J Zhou, R Aloufi, N van Zalk
Year: 2025
Published in: 38th International BCS Human ..., 2025 - scienceopen.com
Institution: Not available
Research Area: High-Stakes Decision-Making, Explainable AI, User Trust, Human-Centered AI, Interaction Design
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
This study explores how human collaboration and communication dynamics vary when interacting with an AI chatbot versus a human partner in a high-stakes decision-making task.
Methods: One-way between-subjects design using the NASA Moon Survival Task to compare behaviors, linguistic coordination, and perceptions in interactions with AI or human partners.
Key Findings: Collaboration processes, communicative dynamics, outcomes, retrospective interaction experience, partner perception, and linguistic coordination, with user profiling for AI benefit variations.
DOI: https://doi.org/10.14236/ewic/BCSHCI2025.52
Citations: 1
-
Authors: H Shakeeb, C Conrad
Year: 2025
Published in: 2025 - aisel.aisnet.org
Institution: Dalhousie University
Research Area: AI, Political Communication, Media Trustworthiness, Cognitive Science, Autonomous Applications
Discipline: Artificial Intelligence, Cognitive Science
AI-generated audio in political communication is perceived as more trustworthy than image or video formats, but lower realism leads to skepticism.
Methods: An online experiment with participants assessing AI-generated political content in audio, video, and image formats; data analyzed using linear mixed effects analysis and NLP.
Key Findings: Impact of AI-generated media formats on trust and willingness to follow political recommendations, considering realism levels.
Citations: 1
Sample Size: 150
-
Authors: P Cooper, A Lim, J Irons, M McGrath, H Jarvis
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Microsoft Research, Massachusetts Institute of Technology, University of Washington
Research Area: Human-AI Interaction, Trust in AI
Discipline: Human-Computer Interaction (HCI)
Trust in AI dynamically influences users' reliance on AI advice during a deepfake detection task, with no significant impact observed from the timing of AI advice delivery.
Methods: Researchers conducted an online study with participants performing a deepfake detection task, comparing performance across conditions where AI advice was provided either concurrently with decisions or after an initial evaluation. Computational modeling was used to analyze trust dynamics.
Key Findings: Impact of AI advice and its timing on task performance, and the dynamic role of user trust in AI based on expectations of its ability.
DOI: https://doi.org/10.1145/3706599.3719870
Citations: 1
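The "computational modeling of trust dynamics" in the Methods line can be sketched as a simple trial-by-trial update rule: trust rises when AI advice proves correct and falls when it does not. This is a toy model for illustration only, not the paper's model; the update rule and learning rate are assumptions.

```python
# Toy dynamic-trust model (illustrative only; not the paper's model).
# Trust moves toward 1 after correct AI advice and toward 0 after
# incorrect advice, at a fixed learning rate.

def update_trust(trust, advice_correct, rate=0.2):
    """One-step trust update toward the observed outcome."""
    target = 1.0 if advice_correct else 0.0
    return trust + rate * (target - trust)

def simulate(outcomes, trust=0.5):
    """Return the trust trajectory over a sequence of advice outcomes."""
    path = [trust]
    for ok in outcomes:
        trust = update_trust(trust, ok)
        path.append(trust)
    return path

# Two correct advice events, one error, then another correct one.
path = simulate([True, True, False, True])
```

In a model like this, reliance on the next piece of AI advice would be read off the current trust level, which is how expectations about the AI's ability feed back into behavior.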
-
Authors: N Tyulina, Y Yu, TA Emmanouil, SI Levitan
Year: 2025
Published in: Proceedings of the 7th ACM ..., 2025 - dl.acm.org
Institution: University of Cambridge, University of Bath, University of Edinburgh, New York University
Research Area: Human-AI Interaction, Trust and Perception, Nonverbal Communication
Discipline: Applied Linguistics
Trust judgments are primarily influenced by auditory cues in both humans and multimodal models, though subtle differences in modality weighting exist between them.
Methods: Behavioral experiment with trust ratings of bimodal stimuli across four trust congruence conditions, combined with a multimodal model trained using HuBERT and ResNet-50 with late fusion, analyzed using Permutation Feature Importance (PFI).
Key Findings: The construction of trust from visual and auditory signals in both humans and multimodal models, focusing on modality dominance and feature weighting.
Sample Size: 150
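The Permutation Feature Importance (PFI) analysis named in the Methods line can be sketched in a few lines of stdlib Python. This is illustrative only: the paper applies PFI to a HuBERT + ResNet-50 late-fusion model, whereas here a dummy rule stands in for the model and a deterministic cyclic shift stands in for the usual repeated random shuffles.

```python
# Sketch of Permutation Feature Importance across modalities
# (toy model and data; not the paper's implementation).

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def column_importance(model, rows, labels, col):
    """Accuracy drop after breaking the link between one feature
    column and the labels (cyclic shift instead of a random shuffle)."""
    base = accuracy(model, rows, labels)
    vals = [r[col] for r in rows]
    vals = vals[1:] + vals[:1]            # shift the column by one row
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, vals):
        r[col] = v
    return base - accuracy(model, permuted, labels)

# Dummy classifier that relies only on the "audio" feature (column 0),
# mimicking the auditory dominance the study reports.
model = lambda r: r[0] > 0.5
rows = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.7), (0.1, 0.3)]  # (audio, visual)
labels = [True, False, True, False]

audio_imp = column_importance(model, rows, labels, col=0)   # large drop
visual_imp = column_importance(model, rows, labels, col=1)  # no drop
```

A large importance for the audio column and a near-zero importance for the visual column is exactly the "modality dominance" pattern PFI is used to detect.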
-
Authors: A Klingbeil, C Grützner, P Schreck
Year: 2024
Published in: Computers in Human Behavior, 2024 - Elsevier
Institution: University of Hohenheim
Research Area: Trust in AI, Overreliance on AI, Human-AI Interaction
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence, Behavioral Science
The study found that individuals tend to over-rely on AI-generated advice in uncertain situations, often to their own detriment and that of third parties, even when the advice contradicts contextual information or their own judgment.
Methods: A domain-independent, incentivized, interactive behavioral experiment was conducted to analyze user behavior in decision-making scenarios involving AI advice.
Key Findings: Extent and impact of user reliance on AI advice, including its effects on decision efficiency and outcomes for themselves and others.
DOI: https://doi.org/10.1016/j.chb.2024.108352
Citations: 247
-
Authors: K Vodrahalli, R Daneshjou, T Gerstenberg
Year: 2022
Published in: Proceedings of the 2022 ..., 2022 - dl.acm.org
Institution: Stanford University, Massachusetts Institute of Technology
Research Area: Trust in AI, Human-AI Interaction, Decision Making
Discipline: Human-AI Interaction, Decision Science
Humans' trust in AI advice is influenced by their beliefs about AI performance, and once they accept AI advice, they treat it similarly to advice from human peers.
Methods: Crowdworkers participated in several experimental settings to evaluate how participants respond to AI versus human suggestions and characterize user behavior with a proposed activation-integration model.
Key Findings: The influence of AI advice compared to human advice on decision-making and the behavioral factors affecting the use of such advice.
DOI: https://doi.org/10.1145/3514094.3534150
Citations: 99
Sample Size: 1100
-
Authors: A Bashkirova, D Krpan
Year: 2024
Published in: ScienceDirect
Institution: London School of Economics and Political Science
Research Area: AI-assisted Decision Making, Confirmation Bias, Professional Trust, Psychology, AI Bias
Discipline: Behavioral Science, Psychology
-
Authors: JC Cresswell, Y Sui, B Kumar, N Vouitsis
Year: 2024
Published in: arXiv
Institution: Layer6
Research Area: Human-AI Decision Making, Conformal Prediction, Trust in AI
Discipline: Artificial Intelligence
-
Authors: HC Gordon, T Stafford, K Dommett
Year: 2024
Published in: ... of the Annual Meeting of the ..., 2024 - escholarship.org
Institution: University of California, Irvine, University of New York, Buffalo, University of Bath
Research Area: Political Advertising, Trust, Political Communication, Transparency
Discipline: Political Science, Communication
-
Authors: Eunhae Lee, Pat Pataranutaporn, Judith Amores, Pattie Maes
Year: 2024
Published in: arXiv
Institution: Massachusetts Institute of Technology, Microsoft Research, MIT Media Lab
Research Area: Human-AI Interaction, Cognitive Biases, Psychological Factors in AI Adoption, Trust in AI, AI Credibility
Discipline: Psychology, Artificial Intelligence
-
Authors: B Liefooghe, M Oliveira, LM Leisten
Year: 2023
Published in: Collabra ..., 2023 - online.ucpress.edu
Institution: Ghent University, ISCTE-IUL, CIS-IUL, Universidade de Lisboa
Research Area: AI Ethics, Trust in AI, Social Evaluation of AI, AI Evaluation
Discipline: Social Science, AI Ethics
DOI: https://doi.org/10.1525/collabra.73066
Citations: 14
-
Authors: A Küper, N Krämer
Year: 2022
Published in: International Journal of Human–Computer ..., 2025 - Taylor & Francis
Institution: Universitätsklinikum Giessen und Marburg
Research Area: Trust in AI, Psychological Factors, Human-Computer Interaction (HCI)
Discipline: Psychological Science, Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1080/10447318.2024.2348216
Citations: 54
-
Authors: B Liefooghe, M Oliveira, LM Leisten, E Hoogers
Year: 2022
Published in: 2022 - research-portal.uu.nl
Institution: Utrecht University
Research Area: Trust in AI, Synthetic Media, Perceived Bias, AI Bias
Discipline: Artificial Intelligence, Psychology
DOI: https://doi.org/10.31234/osf.io/te2ju
Citations: 5
-
Authors: M Van der Biest, S Verschooren, F Verbruggen
Year: 2022
Published in: Plos one, 2025 - journals.plos.org
Institution: Ghent University
Research Area: Cognitive Psychology, Decision-Making, Trust in AI, Deep Fakes
Discipline: Psychology, Cognitive Science
Citations: 2
-
Authors: C Woodcock, B Mittelstadt, D Busbridge
Year: 2021
Published in: Journal of Medical Internet ..., 2021 - jmir.org
Institution: Oxford University, Alan Turing Institute, University of Edinburgh
Research Area: Health Informatics, Explainable AI (XAI), Trust in AI, Digital Health
Discipline: Digital Health
DOI: https://doi.org/10.2196/29386
Citations: 52
-
Authors: C Arnold, LZ Xu, K Saffarizadeh
Year: 2021
Published in: Behaviour & Information ..., 2025 - Taylor & Francis
Institution: Northwestern Mutual Data Science Institute, Marquette University
Research Area: Generative AI, Crowdfunding, Trust in AI, Human-Computer Interaction (HCI), Behavioral Science
Discipline: Human-Computer Interaction (HCI), Behavioral Science