Browse 4 peer-reviewed papers from the University of Georgia spanning Social and Emotional Human-AI Interaction, Psychosocial Effects of AI Chatbot Use (2023–2025). Research powered by Prolific's high-quality participant data.
This page lists 4 peer-reviewed papers from researchers at the University of Georgia in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: Mohit Chandra, Javier Hernandez, Gonzalo Ramos, Mahsa Ershadi, Ananya Bhattacharjee, Judith Amores, Ebele Okoli, Ann Paradiso, Shahed Warreth, Jina Suh
Year: 2025
Published in: arXiv
Institution: Georgia Institute of Technology, Microsoft Research, University of Toronto, Microsoft
Research Area: Social and Emotional Human-AI Interaction, Psychosocial Effects of AI Chatbot Use
Discipline: Social Science
This study found that active use of conversational AI tools significantly increased perceived attachment to AI, perceived AI empathy, and comfort in seeking emotional support from AI, and it suggested that, with proper safeguards, such tools have the potential to improve social and emotional interactions.
Methods: Participants were divided into two groups: an active-use group that engaged with conversational AI tools (AU, n=89) and a baseline group of regular AI and internet users (BU, n=60). Emotional and social interaction measures were tracked over five weeks.
Key Findings: Increased perceived attachment towards AI, perceived AI empathy, and comfort in using AI for emotional support, stress management, and discussion of personal topics.
Sample Size: 149
-
Authors: R Zhang, C Flathmann, G Musick, B Schelble
Year: 2024
Published in: ACM Transactions on ...
Institution: North Carolina State University, University of North Carolina at Charlotte, University of Georgia, University of Michigan
Research Area: Explainable AI (XAI), Human-AI Teaming, Human-Computer Interaction (HCI)
Discipline: Robotics, Artificial Intelligence
This study explored how AI explanations affect human trust and team effectiveness in human-AI teams, finding that explanations increase trust when the AI disobeys orders but reduce it when the AI lies, with individual characteristics shaping these perceptions.
Methods: Conducted an online experiment analyzing participant responses to scenarios where AI explained its actions within a teamwork context, comparing trust in AI versus human teammates.
Key Findings: Effects of AI explanations on human trust and team effectiveness, and how these vary with teammate identity (human or AI) and participant characteristics (e.g., gender, ethical framework).
DOI: https://doi.org/10.1145/3635474
Citations: 28
Sample Size: 156
-
Authors: Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias, Victor Crespo
Year: 2024
Published in: Nature
Institution: Duke University, ETH Zurich, Georgia State University
Research Area: Moral Responsibility, Agency in AI, Human-AI Moral Interaction
Discipline: Artificial Intelligence Ethics
-
Authors: J Dai, X Pan, R Sun, J Ji, X Xu, M Liu, Y Wang
Year: 2023
Published in: arXiv preprint arXiv ...
Institution: Cornell University, Georgia Institute of Technology
Research Area: Reinforcement Learning from Human Feedback (RLHF), Safe AI, Reinforcement Learning
Discipline: Artificial Intelligence, Machine Learning
DOI: https://doi.org/10.48550/arXiv.2310.12773
Citations: 598