Authors: L Ibrahim, C Akbulut, R Elasmar, C Rastogi, M Kahng, MR Morris, KR McKee, V Rieser, M Shanahan, L Weidinger
Year: 2025
Published in: arXiv preprint arXiv:2502.07077 (2025)
Institution: Google DeepMind, Google, University of Oxford
Research Area: Multimodal conversational AI, evaluation methodology, benchmarking
Discipline: Computer Science, Natural Language Processing (NLP), Human–Computer Interaction (HCI)
The paper evaluates anthropomorphic behaviors in state-of-the-art LLMs using a multi-turn methodology, showing that behaviors such as empathy and relationship-building predominantly emerge over multiple interactions and influence user perceptions.
Methods: Multi-turn evaluation of 14 anthropomorphic behaviors using simulations of user interactions, validated by a large-scale human subject study.
Key Findings: Anthropomorphic behaviors in large language models, including relationship-building and pronoun usage, emerge predominantly across multiple conversational turns and shape how users perceive the system.
Citations: 26
Sample Size: 1101
Authors: KR McKee
Year: 2024
Published in: IEEE Transactions on Technology and Society (2024)
Institution: University of Queensland
Research Area: AI Ethics, Human-Computer Interaction (HCI), Research Practice Transparency
Discipline: AI Ethics, Human-Computer Interaction (HCI)
The paper identifies ethical and transparency gaps in AI research involving human participants and proposes guidelines to address these issues, drawing from adjacent fields like psychology and human-computer interaction while recognizing unique challenges in AI contexts.
Methods: Analyzed normative practices by reviewing AI research publications and compared them with ethical standards in adjacent fields such as psychology and HCI.
Key Findings: Gaps in ethical practices, including ethics review, informed consent, and participant compensation, along with contextual considerations specific to AI research involving human participants.
URL: https://ieeexplore.ieee.org/abstract/document/10664609/
Citations: 17