Discover 4 peer-reviewed studies in Human–AI Interaction (HCI) (2024–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 4 peer-reviewed papers in the research area of Human–AI Interaction (HCI) in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: B Katz, N Abdelgawad, D Friedberg, P Roberts, S Misra
Year: 2025
Published in: Innovation in Aging
Institution: Virginia Tech
Research Area: Human–AI Interaction (HCI), Technology Perception
Discipline: Behavioral Science
Age significantly influences perceptions of generative AI tools, with older individuals perceiving more benefits and fewer risks compared to younger individuals; thinking dispositions also play a role.
Methods: A nationally representative survey of US adults, conducted via the Prolific platform, using AI-relevant scales measuring attitudes, perceived risks and benefits, frequency of use, expertise, and literacy.
Key Findings: Perceptions of generative AI risk and utility vary with demographic factors, industry type, thinking dispositions, and attitudes toward generative AI tools.
Citations: 1
Sample Size: 500
-
Authors: J Beck
Year: 2025
Published in: edoc.ub.uni-muenchen.de
Institution: Ludwig-Maximilians-Universität München, University of Bayreuth
Research Area: Annotation Quality, Human-AI Collaboration, Behavioral Science, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
The study empirically evaluates annotation bias, proposes strategies to reduce its impact, and explores the use of large language models in automated and hybrid annotation workflows.
Methods: Empirical assessments and experimental evaluations involving annotation workflows and large language models.
Key Findings: The extent of annotation bias, its effect on annotation quality, and the effectiveness of hybrid workflows integrating human input and AI models.
-
Authors: R Zhang, C Flathmann, G Musick, B Schelble
Year: 2024
Published in: ACM Transactions on ...
Institution: North Carolina State University, University of North Carolina at Charlotte, University of Georgia, University of Michigan
Research Area: Explainable AI (XAI), Human-AI Teaming, Human-Computer Interaction (HCI)
Discipline: Robotics, Artificial Intelligence
Explored how AI explanations impact human trust and team effectiveness in human-AI teams, finding that explanations increase trust when AI disobeys orders but reduce trust when AI lies, with individual characteristics influencing these perceptions.
Methods: Conducted an online experiment analyzing participant responses to scenarios where AI explained its actions within a teamwork context, comparing trust in AI versus human teammates.
Key Findings: Impact of AI explanations on human trust, team effectiveness, and how these vary with teammate identity (human or AI) and participant characteristics (e.g., gender, ethical framework).
DOI: https://doi.org/10.1145/3635474
Citations: 28
Sample Size: 156
-
Authors: F Shahid, M Dittgen
Year: 2024
Published in: arXiv
Institution: Cornell University
Research Area: Human-AI Collaboration, Constructive Discourse, Online Communication, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence