This page lists 21 peer-reviewed papers in the research area of Human Behavior from the Prolific Citations Library, a curated collection of studies powered by high-quality human data collected via Prolific.
-
Authors: F Salvi, M Horta Ribeiro, R Gallotti, R West
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: EPFL, Fondazione Bruno Kessler, Princeton University
Research Area: Conversational Persuasion by LLMs, Human-Computer Interaction (HCI), Behavioral Science, LLM
Discipline: Behavioral Science
When equipped with personalized arguments, GPT-4 is more persuasive in debates than human opponents, prevailing in 64.4% of AI-human comparisons.
Methods: Preregistered controlled study involving multiround debates with random assignment to conditions focusing on AI-human comparisons, personalization, and opinion strength.
Key Findings: Effectiveness of persuasion by GPT-4, especially when using personalized arguments, compared to humans in debates.
Citations: 65
Sample Size: 900
-
Authors: K Miazek, K Bocian
Year: 2025
Published in: Computers in Human Behavior Reports, 2025 - Elsevier
Institution: SWPS University
Research Area: Moral and Fairness Judgments of AI, Human Behavior, Egocentrism
Discipline: Social Science, Artificial Intelligence
Egocentric bias shapes fairness judgments, favoring decisions that serve self-interest; this bias is weaker for AI than for human agents, owing to the reduced perceived mind of, and liking for, AI.
Methods: Three experiments with manipulated self-interest conditions analyzed perceptions of fairness and morality in decisions made by AI versus human agents using Prolific US samples.
Key Findings: Fairness and moral judgments in financial decision-making by AI and human agents, moderated by self-interest and social perceptions.
DOI: https://doi.org/10.1016/j.chbr.2025.100719
Citations: 6
Sample Size: 1880
-
Authors: H Lin, G Czarnek, B Lewis, JP White, AJ Berinsky
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Massachusetts Institute of Technology
Research Area: Political Persuasion, Human-AI Dialogue, Electoral Behavior
Discipline: Political Science, Artificial Intelligence
AI-driven dialogues can significantly shift voter attitudes and candidate preferences, with persuasion effects exceeding those of traditional political advertisements. Inaccuracies, however, were more prevalent in content generated by AI models advocating for right-wing candidates.
Methods: Pre-registered experiments where participants interacted with AI advocating for one of two candidates or a ballot measure, examining persuasion strategies and effects across three elections.
Key Findings: Influence of AI-generated dialogues on voter attitudes and preferences, including analysis of persuasion strategies and accuracy of presented information.
Citations: 3
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2025
Published in: Proceedings of the 2025 ACM Conference ..., 2025 - dl.acm.org
Institution: Washington University in St. Louis, National Cheng Kung University
Research Area: Human-AI Interaction, Cognitive Science, Behavioral Research in AI Training
Discipline: Human-Computer Interaction (HCI), Behavioral Science
Participants tend to rely on intuition (fast thinking) rather than deliberation (slow thinking) when training AI agents in the ultimatum game, impacting human-AI collaboration system design.
Methods: Participants trained an AI agent in the ultimatum game to analyze whether their training decisions aligned more with intuitive or deliberative cognitive processes.
Key Findings: The cognitive processes (fast vs. slow thinking) underlying human decision-making during AI training.
DOI: https://doi.org/10.1145/3715275.3732177
Citations: 1
-
Authors: C Qian, AT Parisi, C Bouleau, V Tsai
Year: 2025
Published in: Proceedings of the ..., 2025 - aclanthology.org
Institution: Google, Google DeepMind
Research Area: Human-AI Alignment, Collective Reasoning, Social Biases, LLM Simulation of Human Behavior, AI Bias
Discipline: Natural Language Processing, Artificial Intelligence, Computational Social Science
This study examines human-AI alignment in collective reasoning using an empirical framework, demonstrating how LLMs either mirror or mask human biases depending on context, cues, and model-specific inductive biases.
Methods: The study uses the Lost at Sea social psychology task in a large-scale online experiment, simulating LLM groups conditioned on human decision-making data across varying conditions of visible or pseudonymous demographics.
Key Findings: Alignment of LLM behavior with human social reasoning, focusing on collective decision-making and biases in group interactions.
Citations: 1
Sample Size: 748
-
Authors: J Beck
Year: 2025
Published in: 2025 - edoc.ub.uni-muenchen.de
Institution: Ludwig-Maximilians-Universität München, University of Bayreuth
Research Area: Annotation Quality, Human-AI Collaboration, Behavioral Science, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
The study empirically evaluates annotation bias, proposes strategies to reduce its impact, and explores the use of large language models in automated and hybrid annotation workflows.
Methods: Empirical assessments and experimental evaluations involving annotation workflows and large language models.
Key Findings: Annotation bias, annotation quality, and the effectiveness of hybrid workflows integrating human input and AI models.
-
Authors: Z Chen, J Chan
Year: 2024
Published in: Management Science, 2024 - pubsonline.informs.org
Institution: University of Texas Dallas
Research Area: Human-AI Interaction, Creative Work, Behavioral Science
Discipline: Social Science
Using large language models (LLMs) as sounding boards improves ad content quality for nonexpert users, while using LLMs as ghostwriters can negatively impact expert users due to anchoring effects.
Methods: An experiment comparing ad copy creation with and without LLM assistance, focusing on two collaboration modalities: ghostwriting and sounding board approaches. Ad performance was measured via social media click rates, supported by textual analysis.
Key Findings: Effectiveness of LLM collaboration modalities (ghostwriting vs. sounding board) on ad quality and business outcomes for expert and nonexpert users.
DOI: https://doi.org/10.1287/mnsc.2023.03014
Citations: 180
-
Authors: HR Kirk, I Gabriel, C Summerfield, B Vidgen
Year: 2024
Published in: Humanities and Social ..., 2025 - nature.com
Institution: Oxford Internet Institute, University of Oxford
Research Area: Socioaffective Alignment in Human-AI Relationships, AI Ethics, Behavioral Science
Discipline: Artificial Intelligence, Behavioral Science
As interactions with AI shift from transactional exchanges to sustained engagement, the paper argues for socioaffective alignment in human-AI relationships: AI systems should support human psychological needs rather than exploit them.
Methods: Conceptual analysis of socioaffective dynamics in human-AI interactions, framed through psychological theories and principles.
Key Findings: Exploration of how AI systems impact socioaffective relationships, psychological needs, autonomy, companionship, and human well-being.
DOI: https://doi.org/10.1057/s41599-025-04532-5
Citations: 59
-
Authors: Y Gao, D Lee, G Burtch, S Fazelpour
Year: 2024
Published in: arXiv preprint arXiv:2410.19599, 2024 - arxiv.org
Institution: Boston University, Northeastern University
Research Area: LLMs as Human Surrogates, Social Science Research Methods, Human Behavior Simulation
Discipline: Economics, Artificial Intelligence, Social Science
LLMs fail to accurately replicate human behavior in the 11-20 money request game, cautioning against their use as surrogates for human cognition in social science research.
Methods: The study evaluates the reasoning depth of various advanced LLMs through their performance on the 11-20 money request game, analyzing failure points related to input language, roles, and safeguarding.
Key Findings: The ability of LLMs to replicate human-like behavior and reasoning distribution in the context of social science simulations.
Citations: 25
-
Authors: Z Li, M Yin
Year: 2024
Published in: Advances in Neural Information Processing ..., 2024 - proceedings.neurips.cc
Institution: Purdue University
Research Area: Human Behavior Modeling, Explainable AI, Decision Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
DOI: https://doi.org/10.52202/079017-0163
Citations: 7
-
Authors: F Dvorak, R Stumpf, S Fehrler, U Fischbacher
Year: 2024
Published in: arXiv
Institution: University of Konstanz
Research Area: Generative AI and Human Decision-Making, Behavioral Economics
Discipline: Artificial Intelligence, Behavioral Economics
-
Authors: E Becks, V Matkovic, T Weis
Year: 2024
Published in: 2025 IEEE International Conference on ..., 2025 - computer.org
Institution: University of Stuttgart, University of Applied Sciences Offenburg, University of Hohenheim
Research Area: Crowdsourced Online Studies, Human-Computer Interaction (HCI) in AI Systems, Behavioral Research Methodology
Discipline: Human-Computer Interaction (HCI)
-
Authors: P Pataranutaporn, R Liu, E Finn, P Maes
Year: 2023
Published in: Nature Machine Intelligence, 2023 - nature.com
Institution: University of California Irvine
Research Area: Human-AI Interaction, Behavioral Science
Discipline: Human-Computer Interaction (HCI), Artificial Intelligence
DOI: https://doi.org/10.1038/s42256-023-00720-7
Citations: 180
-
Authors: G Gui, O Toubia
Year: 2023
Published in: arXiv preprint arXiv:2312.15524, 2023 - arxiv.org
Institution: University of Southern California, Columbia Business School
Research Area: LLMs and Causal Inference in Human Behavior Simulation, LLM
Discipline: Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Econometrics (econ.EM), Applications (stat.AP)
Citations: 76
-
Authors: T Prike, LH Butler, UKH Ecker
Year: 2023
Published in: Scientific Reports, 2024 - nature.com
Institution: University of Western Australia, University of Exeter, University of Cambridge
Research Area: Social Science, Misinformation, Human Behavior, Media Studies
Discipline: Social Science
DOI: https://doi.org/10.1038/s41598-024-57560-7
Citations: 45
-
Authors: J Tomczak, A Gordon, J Adams, JS Pickering
Year: 2023
Published in: Frontiers in Human ..., 2023 - frontiersin.org
Institution: Prolific, University of Leeds, Gorilla
Research Area: Online Research Protocols, Human Neuroscience, Behavioral Research
Discipline: Human Neuroscience
Citations: 31
-
Authors: K Vodrahalli, T Gerstenberg
Year: 2022
Published in: Advances in Neural ..., 2022 - proceedings.neurips.cc
Institution: Columbia University, Princeton University, Intel, Stanford University, Massachusetts Institute of Technology
Research Area: Human-AI Collaboration, Human Behavior Modeling, Decision Making
Discipline: Artificial Intelligence
Citations: 70
-
Authors: X Qin, M Huang, J Ding
Year: 2022
Published in: Available at SSRN 4922861, 2024 - papers.ssrn.com
Institution: Peking University, Tsinghua University, Nankai University
Research Area: AI Social Science, LLM Simulation of Human Behavior, AI Simulation
Discipline: Social Science Research
URL: https://ssrn.com/abstract=4922861
Citations: 17
-
Authors: T Matsuura, AA Hasegawa, M Akiyama
Year: 2021
Published in: Proceedings of the 2021 ..., 2021 - dl.acm.org
Institution: Waseda University
Research Area: Phishing Studies, Human-Computer Interaction (HCI), Behavioral Research Methods
Discipline: Human-Computer Interaction (HCI)
Citations: 25
-
Authors: S Trott
Year: 2021
Published in: Open Mind, 2024 - direct.mit.edu
Institution: Stanford University, Microsoft Research
Research Area: LLMs in Social Science Research, Crowdworking, Human Behavior Simulation
Discipline: Artificial Intelligence, Social Science, Information Systems
Citations: 22