Authors: C Chen, Z Cui
Year: 2025
Published in: Journal of Medical Internet Research, 2025 - jmir.org
Institution: Medical College of Wisconsin
Research Area: Trust in AI, AI-assisted diagnosis, Health communication, Healthcare human-AI interaction
Discipline: Digital Health, Human-Computer Interaction (HCI), Behavioral Science
Patients trust, and are more likely to seek help from, doctors who explicitly avoid AI-assisted diagnosis than doctors who use AI moderately or extensively, highlighting a strong aversion to AI in health care settings.
Methods: A randomized, web-based 4-group survey experiment was conducted with controls for sociodemographic factors and analysis using regression, mediation, and moderation techniques.
Key Findings: Trust in and intention to seek medical help were lower for health care professionals described as using AI-assisted diagnosis than for those avoiding AI; these effects varied with demographic, social, and experiential factors.
DOI: https://doi.org/10.2196/66083
Citations: 4
Sample Size: 1762
Authors: Mukund Telukunta, Venkata Sriram Siddhardh Nadendla, Morgan Stuart, Casey Canfield
Year: 2025
Published in: ArXiv
Institution: Missouri University of Science and Technology, United Network for Organ Sharing
Research Area: Algorithmic Fairness, Healthcare AI, Decision-Making
Discipline: Artificial Intelligence
The study investigates fairness in regression-based predictive models for kidney transplantation, introducing three group fairness notions and eliciting social preferences among fairness criteria; the models exhibited bias against age groups but were fair with respect to gender and race.
Methods: Three novel group fairness notions (independence, separation, and sufficiency) were introduced, and crowd feedback was analyzed with a mixed-logit discrete choice model.
Key Findings: Group fairness of regression-based predictive analytics varied across social dimensions: bias was detected against age groups, while gender and race groups were treated fairly.
Sample Size: 85