Browse 4 peer-reviewed papers from IBM Research spanning AI Ethics and Healthcare (2024–2025). This page lists papers from researchers at IBM Research in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: S Shekar, P Pataranutaporn, C Sarabu, GA Cecchi
Year: 2025
Published in: NEJM AI, 2025
Institution: MIT Media Lab, IBM Research, Stanford University, Massachusetts Institute of Technology
Research Area: AI Ethics, Healthcare, Patient Trust, Medical Misinformation
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI), AI Ethics
This study finds that patients tend to trust AI-generated medical advice even when that advice is incorrect, raising concerns about misinformation in healthcare.
Citations: 19
-
Authors: Z Ashktorab, A Buccella, J D'Cruz, Z Fowler, A Gill, KY Leung, PD Magnus, J Richards
Year: 2025
Published in: arXiv preprint arXiv:2507.02745, 2025
Institution: IBM Research, University at Albany
Research Area: Human–AI interaction, AI systems evaluation, UX, User Experience
Discipline: Computer Science, Human–Computer Interaction (HCI)
In a preregistered study with 162 participants, people generally preferred explanatory apologies from LLM chatbots over rote or purely empathic ones, though in biased-error scenarios empathic apologies were sometimes favored, highlighting the complexity of designing chatbot apologies that effectively repair trust.
DOI: https://doi.org/10.48550/arXiv.2507.02745
Citations: 1
-
Authors: K Grosse, N Ebert
Year: 2025
Published in: arXiv
Institution: IBM Research, ZHAW
Research Area: Security and privacy risks, LLM, human–AI interaction, AI Safety
Discipline: Computer Science
A survey of 3,270 UK adults reveals significant security and privacy risks in the use of AI conversational agents: a third of respondents engage in risky behaviors that could enable attacks, and many are unaware of how their data are used or how to opt out.
Methods: Representative survey conducted via Prolific platform targeting UK adults, focusing on usage behaviors of AI conversational agents.
Key Findings: Patterns of user behavior related to security and privacy risks, including data sanitization practices, attempts to jailbreak AI models, and limited awareness of data usage policies.
Sample Size: 3270
-
Authors: M Svanberg, W Li, M Fleming, B Goehring, N Thompson
Year: 2024
Published in: SSRN
Institution: IBM Research, Massachusetts Institute of Technology, The Productivity Institute, CSAIL
Research Area: Computer Vision, Economics of IT, Deep Learning Systems
Discipline: Artificial Intelligence