Browse 16 peer-reviewed papers from the University of Washington spanning Human-AI Interaction and AI Ethics (2020–2025). Research powered by Prolific's high-quality participant data.
This page lists 16 peer-reviewed papers from researchers at the University of Washington in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: T Mendel, N Singh, DM Mann, B Wiesenfeld
Year: 2025
Published in: Journal of medical ..., 2025 - jmir.org
Institution: The City University of New York, George Washington University, New York University
Research Area: LLMs in Digital Health, Health Queries, User Attitudes
Discipline: Digital Health
Laypeople primarily use search engines over large language models (LLMs) for health queries, perceiving LLMs as less useful but less biased and more human-like while exhibiting no significant difference in trust or ease of use.
Methods: A screening survey followed by logistic regression analysis and a follow-up survey; comparisons were performed using ANOVA, Tukey post hoc tests, and paired-sample Wilcoxon tests.
Key Findings: Demographics and behaviors of LLM and search engine users for health queries, perceived usefulness, ease of use, trustworthiness, bias, and anthropomorphism.
Citations: 21
Sample Size: 2002
-
Authors: M Chung
Year: 2023
Published in: Internet Research, 2023 - emerald.com
Institution: University of Washington, Emory University
Research Area: Algorithmic Knowledge, Misinformation Countermeasures, Comparative Media Studies, Information Science
Discipline: Information Science
The study examines how algorithmic knowledge influences attitudes and actions against misinformation, revealing that perceptions of media influence on self and others predict corrective actions and support for regulation differently across four countries.
Methods: Four national surveys were conducted in the USA, UK, South Korea, and Mexico, with data analyzed through multigroup structural equation modeling (SEM).
Key Findings: Algorithmic knowledge, perceived influence of misinformation on self and others, intention to correct misinformation, support for regulation and content moderation.
DOI: https://doi.org/10.1108/INTR-07-2022-0578
Citations: 14
Sample Size: 5432
-
Authors: J van Grunsven, N Jacobs, BA Kamphorst, M Honauer
Year: 2025
Published in: ACM Journal on ..., 2025 - dl.acm.org
Institution: University of Texas, Microsoft Research, Google DeepMind, Google, University of Washington, World Economic Forum
Research Area: Ethics and Governance of Computing Research, focused on Responsible Computing, Social Science Research, Artificial Intelligence.
Discipline: Ethics, Governance of Computing Research, AI Ethics
The paper emphasizes the importance of accounting for human vulnerability in the design and analysis of digital technologies, proposing concepts like 'Intimate Computing' to empower individuals in managing their technology-mediated vulnerabilities.
Methods: The study reviews and synthesizes existing literature and frameworks addressing vulnerability in human-technology interactions, including concepts like 'Intimate Computing' and 'Person-Machine Teaming'.
Key Findings: Human vulnerability in the context of digitally mediated interactions and the role of computing frameworks in addressing it.
Citations: 2
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2025
Published in: Proceedings of the 2025 ACM Conference ..., 2025 - dl.acm.org
Institution: Washington University in St. Louis, National Cheng Kung University
Research Area: Human-AI Interaction, Cognitive Science, Behavioral Research in AI Training
Discipline: Human-Computer Interaction (HCI), Behavioral Science
Participants tend to rely on intuition (fast thinking) rather than deliberation (slow thinking) when training AI agents in the ultimatum game, impacting human-AI collaboration system design.
Methods: Participants trained an AI agent in the ultimatum game to analyze whether their training decisions aligned more with intuitive or deliberative cognitive processes.
Key Findings: The cognitive processes (fast vs. slow thinking) underlying human decision-making during AI training.
DOI: https://doi.org/10.1145/3715275.3732177
Citations: 1
-
Authors: P Cooper, A Lim, J Irons, M McGrath, H Jarvis
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Microsoft Research, Massachusetts Institute of Technology, University of Washington
Research Area: Human-AI Interaction, Trust in AI
Discipline: Human-Computer Interaction (HCI)
Trust in AI dynamically influences users' reliance on AI advice during a deepfake detection task, with no significant impact observed from the timing of AI advice delivery.
Methods: Researchers conducted an online study with participants performing a deepfake detection task, comparing performance across conditions where AI advice was provided either concurrently with decisions or after an initial evaluation. Computational modeling was used to analyze trust dynamics.
Key Findings: Impact of AI advice and its timing on task performance, and the dynamic role of user trust in AI based on expectations of its ability.
DOI: https://doi.org/10.1145/3706599.3719870
Citations: 1
-
Authors: N Haduong
Year: 2025
Published in: 2025 - search.proquest.com
Institution: University of Washington
Research Area: Human-AI Interaction and Perception
Discipline: Human-Computer Interaction (HCI)
The research focuses on developing methodologies to bridge the gap between controlled laboratory studies and real-world human-AI perceptions and interactions, promoting task immersion and intrinsic motivation to model realistic behaviors.
Methods: Used task immersion techniques, domain-specific recruitment, error taxonomy development, and CPS-TaskForge environment generator for systematic study of collaborative problem solving and AI-assisted decision-making.
Key Findings: Human perceptions of AI in collaborative problem solving, understanding risks in AI-assisted decision making, and user behavior under performance pressure with AI advice.
-
Authors: Paresh Chaudhary, Yancheng Liang, Daphne Chen, Simon S. Du, Natasha Jaques
Year: 2025
Published in: arXiv
Institution: University of Washington
Research Area: Human-AI Coordination, Zero-Shot Coordination, Adversarial Training, Generative Models
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI)
The paper introduces GOAT, a novel framework combining pretrained generative models and adversarial training to improve human-AI coordination, achieving state-of-the-art performance on the Overcooked benchmark with real human partners.
Methods: The study utilized a frozen pretrained generative model to simulate cooperative agent policies and applied adversarial training to dynamically generate challenging human-AI interaction scenarios for training.
Key Findings: The effectiveness of GOAT in generalizing human-AI coordination strategies and its performance on the Overcooked benchmark.
-
Authors: L Ma, J Qin, X Xu, Y Tan
Year: 2025
Published in: arXiv preprint arXiv:2509.14436, 2025 - arxiv.org
Institution: University of North Carolina Charlotte, University of Science and Technology of China, University of Washington
Research Area: LLM behavior, Algorithmic content preference, Human–AI interaction
Discipline: Computer Science, Information Retrieval, Artificial Intelligence
This paper studies how generative search engines built on large language models (LLMs), such as Google's AI Overviews, select and cite web content. It shows that these engines prefer content that is more predictable and semantically coherent for the model, and that LLM-based content polishing can increase the diversity and usefulness of AI summaries for users.
DOI: https://doi.org/10.48550/arXiv.2509.14436
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2024
Published in: Proceedings of the National Academy of ..., 2024 - pnas.org
Institution: Massachusetts Institute of Technology, Yale University, Washington University in St. Louis
Research Area: AI Ethics, Behavioral Economics, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
People alter their behavior when they know their actions will train AI, leading to unintentional habits and biased training data for AI systems.
Methods: Five studies were conducted using the ultimatum game; participants were tasked with deciding on monetary splits proposed by either humans or AI, with some informed that their decisions would train the AI.
Key Findings: Behavioral changes in participants when training AI, persistence of these changes over time, and implications for AI training bias.
DOI: https://doi.org/10.1073/pnas.2408731121
Citations: 13
-
Authors: Andrew Konya, Aviv Ovadya, Kevin Feng, Quan Ze Chen, Lisa Schirch, Colin Irwin
Year: 2024
Published in: arXiv
Institution: AI & Democracy Foundation, Remesh, University of Washington
Research Area: AI Alignment, Public Will, Expert Intelligence, Rule-based Reward
Discipline: Artificial Intelligence, Computational Social Science
-
Authors: Jing-Jing Li, Valentina Pyatkin, Max Kleiman-Weiner, Liwei Jiang, Nouha Dziri, Anne G. E. Collins, Jana Schaich Borg, Maarten Sap, Yejin Choi, Sydney Levine
Year: 2024
Published in: arXiv
Institution: Allen Institute for AI, Duke University, University of California Berkeley, University of Washington
Research Area: LLM Safety Moderation, Interpretable AI (XAI), LLM Alignment, Steerable AI
Discipline: Artificial Intelligence
-
Authors: Quan Ze Chen, K.J. Kevin Feng, Chan Young Park, Amy X. Zhang
Year: 2024
Published in: arXiv
Institution: University of Washington
Research Area: In-Context Learning, Computational Linguistics, Natural Language Processing
Discipline: Computer Science, Computational Linguistics, Natural Language Processing
-
Authors: H Vasconcelos, M Jörke
Year: 2023
Published in: Proceedings of the ..., 2023 - dl.acm.org
Institution: Stanford University, University of Washington
Research Area: Human-AI Interaction, Explainable AI (XAI), Decision-Making
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
DOI: https://doi.org/10.1145/3579605
Citations: 405
-
Authors: S Corvite, K Roemmich, TI Rosenberg
Year: 2023
Published in: Proceedings of the ACM ..., 2023 - dl.acm.org
Institution: University of Washington
Research Area: AI Ethics
Discipline: Artificial Intelligence Ethics
DOI: https://doi.org/10.1145/3579600
Citations: 79
-
Authors: J Tang, E Birrell, A Lerner
Year: 2022
Published in: ... symposium on usable privacy and security ..., 2022 - usenix.org
Institution: University of California Berkeley, George Washington University, Stanford University
Research Area: Online privacy and security surveys, External validity, Replication studies, Human-Computer Interaction (HCI) in security
Discipline: Computer Science, Behavioral Science
Citations: 164
-
Authors: J Hanson, M Wei, S Veys, M Kugler
Year: 2020
Published in: Proceedings of the ..., 2020 - dl.acm.org
Institution: University of Chicago, University of Washington
Research Area: Privacy, Crowdwork, Hyper-Personalization in Advertising
Discipline: Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1145/3313831.3376415
Citations: 40