Explore 3 peer-reviewed studies by CJ Ho in Human-AI Interaction and AI Ethics (2023–2025). Discover research powered by Prolific's participant panel.
This page lists 3 peer-reviewed papers authored or co-authored by CJ Ho in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2025
Published in: Proceedings of the 2025 ACM Conference ... (dl.acm.org)
Institution: Washington University in St. Louis, National Cheng Kung University
Research Area: Human-AI Interaction, Cognitive Science, Behavioral Research in AI Training
Discipline: Human-Computer Interaction (HCI), Behavioral Science
Participants tend to rely on intuition (fast thinking) rather than deliberation (slow thinking) when training AI agents in the ultimatum game, with implications for the design of human-AI collaboration systems.
Methods: Participants trained an AI agent in the ultimatum game to analyze whether their training decisions aligned more with intuitive or deliberative cognitive processes.
Key Findings: Participants' training decisions aligned more closely with fast, intuitive thinking than with slow, deliberative thinking, revealing the cognitive processes underlying human decision-making during AI training.
DOI: https://doi.org/10.1145/3715275.3732177
Citations: 1
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2024
Published in: Proceedings of the National Academy of ... (pnas.org)
Institution: Massachusetts Institute of Technology, Yale University, Washington University in St. Louis
Research Area: AI Ethics, Behavioral Economics, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
People alter their behavior when they know their actions will train AI, leading to unintentional habits and biased training data for AI systems.
Methods: Five studies were conducted using the ultimatum game; participants decided on monetary splits proposed by either humans or AI, with some informed that their decisions would be used to train the AI.
Key Findings: Behavioral changes in participants when training AI, persistence of these changes over time, and implications for AI training bias.
DOI: https://doi.org/10.1073/pnas.2408731121
Citations: 13
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2023
Published in: Proceedings of the AAAI Conference on ... (ojs.aaai.org)
Institution: Massachusetts Institute of Technology, Yale University
Research Area: AI Ethics, Human-AI Interaction, Fairness in Machine Learning Training
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence Ethics
DOI: https://doi.org/10.1609/hcomp.v11i1.27556
Citations: 6