Explore 40 peer-reviewed papers in Human-AI Interaction (2025–2026). Academic research using Prolific for high-quality human data collection.
This page lists 40 peer-reviewed papers in the discipline of Human-AI Interaction from the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: X Yang, N Xi, J Hamari
Year: 2026
Published in: 2026 - scholarspace.manoa.hawaii.edu
Institution: Tampere University
Research Area: NFT, Gamification, Virtual economies
Discipline: Human–Computer Interaction (HCI), Consumer behavior, Behavioral psychology
Using a Prolific survey of 805 people, the paper shows that Big Five personality traits predict why people “love vs. hate” NFT art—with agreeableness and conscientiousness linked to higher perceived value across most dimensions, and neuroticism linked to more skepticism (especially about transparency).
-
Authors: T Kosch, R Welsch, L Chuang, A Schmidt
Year: 2025
Published in: ACM Transactions on ..., 2023 - dl.acm.org
Institution: Aalto University
Research Area: User Expectations, HCI Research Bias, Artificial Intelligence, AI Bias
Discipline: Human-Computer Interaction (HCI)
The belief in receiving adaptive AI support positively impacts user performance, demonstrating a placebo effect in Human-Computer Interaction.
Methods: Two experiments where participants completed word puzzles under conditions with or without supposed AI support; in reality, no AI assistance was provided.
Key Findings: Impact of perceived AI support on user expectations and task performance.
DOI: https://doi.org/10.1145/3529225
Citations: 149
Sample Size: 469
-
Authors: SSY Kim, JW Vaughan, QV Liao, T Lombrozo
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Wake Forest University, University of Illinois at Urbana-Champaign, Princeton University, University of California Berkeley
Research Area: Appropriate Reliance on LLMs, Explainable AI, Human-AI Interaction, Cognitive Psychology
Discipline: Cognitive Psychology, Artificial Intelligence, Human-Computer Interaction (HCI)
The study examines factors that influence users' reliance on LLM responses, finding that explanations increase reliance, while the presence of sources and inconsistencies within explanations reduce reliance on incorrect responses.
Methods: Think-aloud study followed by a pre-registered, controlled experiment to assess the impact of explanations, sources, and inconsistencies in LLM responses on user reliance.
Key Findings: Users' reliance on LLM responses, accuracy, and the influence of explanations, inconsistencies, and sources on these measures.
DOI: https://doi.org/10.1145/3706598.3714020
Citations: 38
Sample Size: 308
-
Authors: C Diebel, M Goutier, M Adam, A Benlian
Year: 2025
Published in: Business & Information Systems ..., 2025 - Springer
Institution: Technical University of Darmstadt, University of Goettingen
Research Area: Human-AI Collaboration, System Satisfaction, User Competence
Discipline: Information Systems, Human-Computer Interaction (HCI), Artificial Intelligence
Proactive AI-based agent assistance decreases users' competence-based self-esteem and system satisfaction, especially for users with higher AI knowledge.
Methods: Vignette-based online experiment using self-determination theory as the framework to evaluate user responses to proactive vs. reactive AI assistance.
Key Findings: Impact of proactive vs. reactive AI help on users' competence-based self-esteem and system satisfaction, moderated by users' AI knowledge levels.
DOI: https://doi.org/10.1007/s12599-024-00918-y
Citations: 32
-
Authors: T Zhang, A Koutsoumpis, JK Oostrom
Year: 2025
Published in: IEEE Transactions ..., 2024 - ieeexplore.ieee.org
Institution: Southeast University, Vrije Universiteit, Tilburg University
Research Area: LLM Personality Assessment, Human-AI Interaction, LLM
Discipline: Human-AI Interaction, Social Science, Humanities
LLMs like GPT-3.5 and GPT-4 can rival or outperform task-specific AI models in assessing personality traits from asynchronous video interviews, but show uneven performance, low reliability, and potential biases, warranting cautious use in high-stakes scenarios.
Methods: The study evaluated GPT-3.5 and GPT-4 performance in assessing personality traits and interview performance using simulated AVI responses, comparing them with ratings from task-specific AI and human annotators.
Key Findings: Validity, reliability, fairness, and rating patterns of LLMs (GPT-3.5 and GPT-4) in personality assessment from asynchronous video interviews.
Citations: 31
Sample Size: 685
-
Authors: M Riveiro, S Thill
Year: 2025
Published in: Proceedings of the 30th ACM Conference on User ..., 2022 - dl.acm.org
Institution: Linköping University, University of Skövde
Research Area: Explainable AI, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Users prefer factual explanations when AI outputs match expectations and mechanistic explanations when outputs deviate, with preferences influenced by response format (multiple-choice vs free text).
Methods: Participants were presented with scenarios involving an automated text classifier and asked to express their preference for explanations either through multiple-choice or free text responses.
Key Findings: User-desired content of AI explanations based on whether system behaviour aligns or deviates from expectations.
DOI: 10.1145/3503252.3531306
Citations: 30
-
Authors: L Ibrahim, C Akbulut, R Elasmar, C Rastogi, M Kahng, MR Morris, KR McKee, V Rieser, M Shanahan, L Weidinger
Year: 2025
Published in: arXiv preprint arXiv:2502.07077, 2025 - arxiv.org
Institution: Google DeepMind, Google, University of Oxford
Research Area: Multimodal conversational AI, conversational AI, Evaluation methodology, benchmarking
Discipline: Computer Science, Natural Language Processing (NLP), Human–Computer Interaction (HCI)
The paper evaluates anthropomorphic behaviors in SOTA LLMs through a multi-turn methodology, showing that such behaviors, including empathy and relationship-building, predominantly emerge after multiple interactions and influence user perceptions.
Methods: Multi-turn evaluation of 14 anthropomorphic behaviors using simulations of user interactions, validated by a large-scale human subject study.
Key Findings: Anthropomorphic behaviors in large language models, including relationship-building and pronoun usage, and their perception by users.
Citations: 26
Sample Size: 1101
-
Authors: JY Bo, S Wan, A Anderson
Year: 2025
Published in: Proceedings of the 2025 CHI Conference ..., 2025 - dl.acm.org
Institution: University of Toronto
Research Area: Appropriate reliance on LLM, Human-Computer Interaction (HCI), AI-assisted decision making.
Discipline: Human-Computer Interaction (HCI)
This paper studies appropriate reliance in AI-assisted decision-making, examining how users decide when to rely on LLM responses.
Citations: 25
-
Authors: U Messer
Year: 2025
Published in: Computers in Human Behavior: Artificial Humans, 2025 - Elsevier
Institution: Universität der Bundeswehr München
Research Area: Political Bias in Generative AI, Human-AI Interaction, Affective Computing, AI Bias
Discipline: Computer Science, Human-AI Interaction
People's acceptance and reliance on Generative AI (GAI) increase when they perceive alignment between their political orientation and the bias of GAI-generated content, leading to expanded trust in sensitive applications.
Methods: Three experiments analyzing behavioral reactions to politically biased content generated by GAI, including the impact of perceived alignment on acceptance and trust.
Key Findings: Participants' acceptance, reliance, and trust in GAI based on perceived alignment between political bias of GAI-generated content and their own political beliefs.
DOI: https://doi.org/10.1016/j.chbah.2024.100108
Citations: 24
Sample Size: 513
-
Authors: S Shekar, P Pataranutaporn, C Sarabu, GA Cecchi
Year: 2025
Published in: NEJM AI, 2025 - ai.nejm.org
Institution: MIT Media Lab, IBM Research, Stanford University, Massachusetts Institute of Technology
Research Area: AI Ethics, Healthcare, Patient Trust, Medical Misinformation
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI), AI Ethics
This study by MIT researchers details patient trust in AI-generated medical advice, even when that advice is incorrect, raising concerns about misinformation in healthcare.
Citations: 19
-
Authors: K Zhou, JD Hwang, X Ren, N Dziri
Year: 2025
Published in: Proceedings of the ..., 2025 - aclanthology.org
Institution: Stanford University, University of Southern California, Carnegie Mellon University, Allen Institute for AI
Research Area: Human-LM Reliance, Interaction-Centered Framework, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
The study introduces Rel-A.I., an interaction-centered evaluation approach to measure human reliance on LLM responses, revealing that politeness and interaction context significantly influence user reliance.
Methods: Nine user studies were conducted, analyzing user reliance influenced by LLM communication features such as politeness and context through participant interaction experiments.
Key Findings: The degree of human reliance on LLM responses based on communication style (e.g., politeness) and interaction context (e.g., knowledge domain, prior interactions).
Citations: 18
Sample Size: 450
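Interaction-centered reliance metrics of this kind generally reduce to the share of trials on which the participant's final answer follows the model's suggestion. A minimal sketch with hypothetical field names (not Rel-A.I.'s actual code):

```python
def reliance_rate(trials):
    """Fraction of trials where the user's final answer followed the LLM's suggestion."""
    followed = sum(1 for t in trials if t["final_answer"] == t["llm_suggestion"])
    return followed / len(trials)

# Hypothetical trial records (field names are illustrative, not the paper's schema)
trials = [
    {"llm_suggestion": "Paris", "final_answer": "Paris"},   # followed
    {"llm_suggestion": "1921", "final_answer": "1919"},     # overrode
    {"llm_suggestion": "Au", "final_answer": "Au"},         # followed
]
print(reliance_rate(trials))  # → 0.6666666666666666
```

In the paper's framing, this rate is then compared across communication styles (e.g. polite vs. neutral phrasing) and interaction contexts to quantify how each shifts reliance.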
-
Authors: S de Jong, V Paananen, B Tag
Year: 2025
Published in: Proceedings of the ACM on ..., 2025 - dl.acm.org
Institution: Aalborg University (Niels van Berkel); Monash University (Sander de Jong, Ville Paananen, Benjamin Tag)
Research Area: Cognitive Forcing, Human-AI Interaction, AI Explainability (XAI), Decision-Making in AI Systems.
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
Partial explanations encourage critical thinking and reduce user overreliance on incorrect AI suggestions, with performance varying based on individual need for cognition and task difficulty.
Methods: Two experiments were conducted: (1) participants identified shortest paths in weighted graphs, and (2) participants corrected spelling and grammar errors in text, with AI suggestions accompanied by no, partial, or full explanations.
Key Findings: Effectiveness of partial explanations in reducing overreliance on incorrect AI suggestions, and interaction of explanation type with task difficulty and user need for cognition.
DOI: https://doi.org/10.1145/3710946
Citations: 14
Sample Size: 474
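Overreliance in studies like this one is commonly operationalized as the share of incorrect AI suggestions a participant nevertheless accepts. A minimal sketch under that assumption (hypothetical record layout, not the paper's code):

```python
def overreliance_rate(trials):
    """Share of incorrect AI suggestions the participant accepted anyway.

    Each trial carries the AI's suggestion, the ground-truth answer, and the
    participant's final answer (field names are illustrative).
    """
    wrong = [t for t in trials if t["ai"] != t["truth"]]
    if not wrong:
        return 0.0
    accepted = sum(1 for t in wrong if t["user"] == t["ai"])
    return accepted / len(wrong)

trials = [
    {"ai": "B", "truth": "A", "user": "B"},  # accepted a wrong suggestion
    {"ai": "C", "truth": "C", "user": "C"},  # AI was right
    {"ai": "D", "truth": "A", "user": "A"},  # overrode a wrong suggestion
]
print(overreliance_rate(trials))  # → 0.5
```

Comparing this rate across the no/partial/full explanation conditions is then what shows whether partial explanations reduce overreliance.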
-
Authors: H Ju, S Aral
Year: 2025
Published in: arXiv preprint arXiv:2503.18238, 2025 - arxiv.org
Institution: Johns Hopkins Carey Business School, MIT Sloan School of Management
Research Area: Human-AI Collaboration, Teamwork, Organizational Productivity
Discipline: Human-AI Interaction
Collaboration with AI agents increases productivity, reshapes communication patterns, and improves text quality while human teams excel in image quality; AI requires fine-tuning for multimodal workflows.
Methods: Large-scale randomized controlled trials using Pairit platform with human-human and human-AI teams performing collaborative marketing tasks.
Key Findings: Productivity, communication patterns, workflow processes, ad quality (text and image), and ad performance metrics.
DOI: https://doi.org/10.48550/arXiv.2503.18238
Citations: 14
Sample Size: 2310
-
Authors: P Spitzer, J Holstein, K Morrison
Year: 2025
Published in: ... Journal of Human ..., 2025 - Taylor & Francis
Institution: Karlsruhe Institute of Technology, Carnegie Mellon University, University of Bayreuth
Research Area: Human-AI Collaboration, Explainable AI (XAI)
Discipline: Human-Computer Interaction (HCI)
Incorrect explanations in AI-assisted decision-making lead to a misinformation effect, negatively impacting human reasoning, procedural knowledge, and collaboration performance.
Methods: A study on human-AI collaboration involving AI-supported decision-making paired with explainable AI (XAI) to assess the effects of incorrect explanations.
Key Findings: Impact of incorrect explanations on human reasoning strategies, procedural knowledge, and team performance in human-AI collaboration.
Citations: 13
Sample Size: 160
-
Authors: J Li, M Kuutila, E Huusko, N Kariyakarawana
Year: 2025
Published in: Proceedings of the 15th ..., 2023 - dl.acm.org
Institution: University of Oulu
Research Area: Social Media Credibility, Crowdsourcing, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Credibility of short-form health-related social media posts is influenced by factors such as author profession and post engagement metrics; the authors encourage experts to participate actively in correcting information online.
Methods: Crowdsourced online credibility assessment using health-themed social media posts with varied content features deployed across three platforms; quantitative and qualitative data collection.
Key Findings: Credibility factors like author profession, engagement metrics (likes/shares), and personal strategies influencing perceived trustworthiness of social media posts.
DOI: 10.1145/3605390.3605406
Citations: 11
-
Authors: S Kankham, JR Hou
Year: 2025
Published in: International Journal of Human-Computer ..., 2025 - Taylor & Francis
Institution: National Cheng Kung University
Research Area: Social Media and Misinformation Countermeasures in HCI
Discipline: Human-Computer Interaction (HCI)
The study found that integrated counter-rumor features, such as community notes and related articles, reduce users' intentions to believe and spread social media rumors; community notes worked better for 'wish' rumors, while related articles were more effective for 'dread' rumors.
Methods: Conducted an experimental study evaluating the effects of community notes and related articles on online users' intentions to believe and spread two types of rumor tweets: wish and dread rumors.
Key Findings: Online users' intentions to believe and spread rumors on social media with and without integrated counter-rumor features (community notes and related articles).
DOI: https://doi.org/10.1080/10447318.2024.2400389
Citations: 11
Sample Size: 201
-
Authors: S Lambiase, G Catolino, F Palomba, F Ferrucci, D Russo
Year: 2025
Published in: ACM Transactions on Software Engineering and Methodology, 2025 - dl.acm.org
Institution: University of Salerno, Aalborg University
Research Area: Technology Adoption, Software Engineering Practices, Socio-Technical Research
Discipline: Computer Science, Software Engineering, Human–Computer Interaction (HCI)
The study uses survey data from software professionals and Partial Least Squares Structural Equation Modeling (PLS-SEM) to measure the role of cultural values relative to established predictors like performance expectancy and habitual use in LLM adoption.
Citations: 11
-
Authors: J Beck, S Eckman, C Kern, F Kreuter
Year: 2025
Published in: arXiv preprint arXiv:2509.08514, 2025 - arxiv.org
Institution: National Institutes of Health, National Center for Biotechnology Information
Research Area: Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Human attitudes toward AI strongly influence performance in collaborative tasks, with skeptics showing better error detection and accuracy, while automation favorability increases overreliance on AI suggestions.
Methods: Randomized experiment with a controlled annotation task manipulating AI suggestion quality, task burden, and performance-based financial incentives; collected demographic, attitudinal, and behavioral data.
Key Findings: Impact of AI suggestion quality, task burden, and financial incentives on participant performance metrics (accuracy, correction activity, overcorrection, undercorrection); influence of demographic and psychological characteristics on performance.
Citations: 4
Sample Size: 2784
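The correction metrics such annotation experiments report (accuracy, overcorrection, undercorrection) can be computed per participant roughly as follows; the field names are hypothetical, not the study's schema:

```python
def correction_metrics(items):
    """Error-handling metrics for an AI-assisted annotation task.

    Each item holds the AI's suggested label, the gold label, and the
    participant's final label (illustrative field names).
    """
    over = under = correct = 0
    for it in items:
        ai_wrong = it["suggestion"] != it["gold"]
        changed = it["final"] != it["suggestion"]
        if ai_wrong and not changed:
            under += 1   # kept a bad suggestion (undercorrection)
        elif not ai_wrong and changed:
            over += 1    # "fixed" a suggestion that was already right (overcorrection)
        if it["final"] == it["gold"]:
            correct += 1
    n = len(items)
    return {"accuracy": correct / n, "overcorrection": over / n, "undercorrection": under / n}

items = [
    {"suggestion": "cat", "gold": "cat", "final": "cat"},  # accepted a good suggestion
    {"suggestion": "dog", "gold": "cat", "final": "dog"},  # undercorrection
    {"suggestion": "cat", "gold": "cat", "final": "dog"},  # overcorrection
    {"suggestion": "dog", "gold": "cat", "final": "cat"},  # corrected a bad suggestion
]
print(correction_metrics(items))  # accuracy 0.5, over 0.25, under 0.25
```

The study's finding then amounts to skeptics showing lower undercorrection (better error detection), while automation-favorable participants show higher undercorrection (overreliance).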
-
Authors: C Chen, Z Cui
Year: 2025
Published in: Journal of Medical Internet Research, 2025 - jmir.org
Institution: Medical College of Wisconsin
Research Area: Trust in AI, AI-assisted diagnosis, Health communication, Healthcare human-AI interaction
Discipline: Digital Health, Human-Computer Interaction (HCI), Behavioral Science
Patients trust and are more likely to seek help from doctors explicitly avoiding AI-assisted diagnosis rather than those using extensive or moderate AI, highlighting a strong aversion to AI in healthcare settings.
Methods: A randomized, web-based 4-group survey experiment was conducted with controls for sociodemographic factors and analysis using regression, mediation, and moderation techniques.
Key Findings: Trust in and intention to seek medical help from health care professionals using AI-assisted diagnosis versus those avoiding AI, and the influence of demographic, social, and experiential factors.
DOI: https://doi.org/10.2196/66083
Citations: 4
Sample Size: 1762
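Mediation analyses of the kind this study runs estimate an indirect effect as the product of the a-path (treatment → mediator) and b-path (mediator → outcome, controlling for treatment) regression coefficients. A pure-Python sketch with made-up numbers (not the study's data or code):

```python
def ols(rows, y):
    """Ordinary least squares via the normal equations, solved by Gaussian
    elimination. rows: list of predictor rows (including an intercept column)."""
    cols = list(zip(*rows))
    k = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(k)] for i in range(k)]
    b = [sum(ci * yi for ci, yi in zip(cols[i], y)) for i in range(k)]
    for p in range(k):                      # forward elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):          # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy mediation X -> M -> Y (hypothetical numbers, constructed so that
# Y = 0.1 + 0.5*X + 2*M holds exactly)
X = [0, 0, 1, 1, 0, 1, 1, 0]
M = [0.1, 0.0, 0.9, 1.1, 0.2, 1.0, 0.8, 0.3]
Y = [0.1 + 0.5 * x + 2 * m for x, m in zip(X, M)]

a_path = ols([[1, x] for x in X], M)[1]                    # M regressed on X
b_path = ols([[1, x, m] for x, m in zip(X, M)], Y)[2]      # Y regressed on X and M
print("indirect effect a*b =", a_path * b_path)            # ≈ 0.8 * 2.0 = 1.6
```

Real analyses would add standard errors (e.g. via bootstrap) and, for moderation, interaction terms; this only illustrates the core coefficient product.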
-
Authors: L Luettgau, HR Kirk, K Hackenburg, J Bergs, H Davidson, H Ogden, D Siddarth, S Huang
Year: 2025
Published in: arXiv preprint, 2025 - arxiv.org
Institution: AI Security Institute, AI Policy Directorate, Collective Intelligence Project, Anthropic
Research Area: Experimental evaluation, RCT, Survey Research
Discipline: Computer Science, Human–Computer Interaction (HCI)
Conversational AI is as effective as self-directed internet searches in increasing political knowledge, reducing misinformation beliefs, and promoting accuracy among users in the UK during the 2024 election period.
Methods: A national survey (N=2,499) measured conversational AI usage for political information-seeking, followed by a series of randomised controlled trials (N=2,858) comparing conversational AI to self-directed internet search in improving political knowledge.
Key Findings: Extent of conversational AI usage for political knowledge-seeking in the UK and its efficacy in enhancing political knowledge and reducing misinformation compared to traditional internet searches.
Citations: 3
Sample Size: 5357
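The unadjusted estimate in a two-arm RCT like these is simply the difference in mean outcomes between arms. A minimal sketch with hypothetical knowledge scores (not the study's data):

```python
import math

def ate(treated, control):
    """Difference-in-means estimate of a two-arm RCT's average treatment effect,
    with a large-sample Welch-style standard error (unequal variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    effect = mean(treated) - mean(control)
    se = math.sqrt(var(treated) / len(treated) + var(control) / len(control))
    return effect, se

# Hypothetical political-knowledge scores (0-10) per arm
ai_arm = [6, 7, 5, 8, 6, 7]          # conversational AI condition
search_arm = [5, 6, 5, 7, 6, 5]      # self-directed internet search condition
effect, se = ate(ai_arm, search_arm)
print(effect, se)
```

A near-zero effect relative to its standard error is what would correspond to the paper's "as effective as search" conclusion; the real analysis would also adjust for covariates and pre-registration choices.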