Discover 12 peer-reviewed studies in AI Systems (2022–2026). Explore research findings powered by Prolific's diverse participant panel.
This page lists 12 peer-reviewed papers in the research area of AI Systems in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: L Dai, Z Wang, L Chen, J Jin
Year: 2026
Published in: 2026 - scholarspace.manoa.hawaii.edu
Institution: Shanghai International Studies University
Research Area: Socio-Economic Impacts of AI, Algorithmic Systems
Discipline: Computer Science, Artificial Intelligence
AI errors lead to broader negative generalizations about other AI systems than human errors do, largely due to perceptions of AI as inflexible and unable to learn from mistakes.
Methods: Conducted four one-factor experiments across distinct contexts to compare human responses to AI errors and human errors.
Key Findings: Generalization of error perceptions from one AI system to others, and psychological mechanisms driving this process.
-
Authors: S de Jong, V Paananen, B Tag
Year: 2025
Published in: Proceedings of the ACM on ..., 2025 - dl.acm.org
Institution: Aalborg University (Niels van Berkel); Monash University (Sander de Jong, Ville Paananen, Benjamin Tag)
Research Area: Cognitive Forcing, Human-AI Interaction, AI Explainability (XAI), Decision-Making in AI Systems.
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
Partial explanations encourage critical thinking and reduce user overreliance on incorrect AI suggestions, with performance varying based on individual need for cognition and task difficulty.
Methods: Two experiments were conducted: (1) participants identified shortest paths in weighted graphs, and (2) participants corrected spelling and grammar errors in text, with AI suggestions accompanied by no, partial, or full explanations.
Key Findings: Effectiveness of partial explanations in reducing overreliance on incorrect AI suggestions, and interaction of explanation type with task difficulty and user need for cognition.
DOI: https://doi.org/10.1145/3710946
Citations: 14
Sample Size: 474
-
Authors: A Dahlgren Lindström, L Methnani, L Krause
Year: 2025
Published in: Ethics and Information ..., 2025 - Springer
Institution: Umeå University, Vrije Universiteit Amsterdam
Research Area: AI Alignment, AI Safety, Reinforcement Learning from Human Feedback (RLHF), Sociotechnical Systems
Discipline: Artificial Intelligence, Ethics
The paper critiques AI alignment efforts using RLHF and RLAIF, highlighting theoretical and practical limitations in meeting the goals of helpfulness, harmlessness, and honesty, and advocates for a broader sociotechnical approach to AI safety and ethics.
Methods: Sociotechnical critique of RLHF techniques with an analysis of theoretical frameworks and practical implementations.
Key Findings: The alignment of AI systems with human values and the efficacy of RLHF techniques in achieving the HHH principle (helpfulness, harmlessness, honesty).
DOI: https://doi.org/10.1007/s10676-025-09837-2
Citations: 14
-
Authors: K Hackenburg, BM Tappin, L Hewitt, E Saunders
Year: 2025
Published in: Science, 2025 - science.org
Institution: London School of Economics and Political Science, Stony Brook University
Research Area: Political Persuasion with Conversational AI, Large Language Models (LLMs), Factual Accuracy in AI Systems
Discipline: Political Science, Computational Social Science
This Science paper shows that conversational AI chatbots can systematically influence political opinions at scale, and that techniques like post-training and prompting make them far more persuasive—but that increased persuasion is tied to reduced factual accuracy in what the AI says.
Citations: 12
-
Authors: Z Ashktorab, A Buccella, J D'Cruz, Z Fowler, A Gill, KY Leung, PD Magnus, J Richards
Year: 2025
Published in: arXiv preprint arXiv:2507.02745, 2025 - arxiv.org
Institution: IBM Research, University at Albany
Research Area: Human–AI interaction, AI systems evaluation, UX, User Experience
Discipline: Computer Science, Human–Computer Interaction (HCI)
In a preregistered study with 162 participants, people generally prefer explanatory apologies from LLM chatbots over rote or purely empathic ones—though in biased error scenarios empathic apologies are sometimes favored—highlighting the complexity of designing chatbot apologies that effectively repair trust.
DOI: https://doi.org/10.48550/arXiv.2507.02745
Citations: 1
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2024
Published in: Proceedings of the National Academy of ..., 2024 - pnas.org
Institution: Massachusetts Institute of Technology, Yale University, Washington University in St. Louis
Research Area: AI Ethics, Behavioral Economics, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
People alter their behavior when they know their actions will train AI, leading to unintentional habits and biased training data for AI systems.
Methods: Five studies were conducted using the ultimatum game; participants decided whether to accept monetary splits proposed by either humans or AI, with some participants informed that their decisions would be used to train the AI.
Key Findings: Behavioral changes in participants when training AI, persistence of these changes over time, and implications for AI training bias.
DOI: https://doi.org/10.1073/pnas.2408731121
Citations: 13
-
Authors: N Jabagi, AM Croteau, L Audebrand, J Marsan
Year: 2024
Published in: 2024 - aisel.aisnet.org
Institution: Université Laval (Faculté des sciences de l'administration); Concordia University (John Molson School of Business)
Research Area: Algorithmic management, Perceived fairness, Platform work, Information Systems
Discipline: Information Systems, Social Science
URL: https://aisel.aisnet.org/hicss-57/cl/ai_and_future_work/5
Citations: 8
-
Authors: Z Li, M Yin
Year: 2024
Published in: Advances in Neural Information Processing ..., 2024 - proceedings.neurips.cc
Institution: Purdue University
Research Area: Human Behavior Modeling, Explainable AI, Decision Making in AI systems.
Discipline: Artificial Intelligence, Behavioral Science
DOI: https://doi.org/10.52202/079017-0163
Citations: 7
-
Authors: E Becks, V Matkovic, T Weis
Year: 2024
Published in: 2025 IEEE International Conference on ..., 2025 - computer.org
Institution: University of Stuttgart, University of Applied Sciences Offenburg, University of Hohenheim
Research Area: Crowdsourced Online Studies, Human-Computer Interaction (HCI) in AI Systems, Behavioral Research Methodology
Discipline: Human-Computer Interaction (HCI)
-
Authors: Yizhe Zhang, Yucheng Jin, Li Chen, Ting Yang
Year: 2024
Published in: arXiv
Institution: Department of Computer Science, Hong Kong Baptist University; Duke Kunshan University
Research Area: User Experience (UX), Conversational AI, Recommender Systems
Discipline: Computer Science
-
Authors: G He
Year: 2023
Published in: repository.tudelft.nl
Institution: Delft University of Technology
Research Area: Human-AI Collaboration, AI Systems, Appropriate Reliance on AI Systems, Artificial Intelligence, Computer Science
Discipline: Artificial Intelligence, Computer Science
-
Authors: T Van Nuenen, J Such, M Cote
Year: 2022
Published in: Proceedings of the ACM on human ..., 2022 - dl.acm.org
Institution: University of Surrey, King’s College London, Tilburg University, University of Amsterdam
Research Area: Intersectional Fairness, Automated Systems, Social Computing
Discipline: Computational Social Science
Citations: 23