Discover 9 peer-reviewed studies in LLM Behavior (2021–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 9 peer-reviewed papers in the research area of LLM Behavior in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: F Salvi, M Horta Ribeiro, R Gallotti, R West
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: EPFL, Fondazione Bruno Kessler, Princeton University
Research Area: Conversational Persuasion by LLMs, Human-Computer Interaction (HCI), Behavioral Science, LLM
Discipline: Behavioral Science
GPT-4 can use personalized arguments to be more persuasive in debates, outperforming humans in 64.4% of AI-human comparisons when personalization is applied.
Methods: Preregistered controlled study involving multiround debates with random assignment to conditions focusing on AI-human comparisons, personalization, and opinion strength.
Key Findings: Effectiveness of persuasion by GPT-4, especially when using personalized arguments, compared to humans in debates.
Citations: 65
Sample Size: 900
-
Authors: TS Behrend, RN Landers
Year: 2025
Published in: Journal of Business and Psychology, 2025 - Springer
Institution: University of Nebraska-Lincoln, University of Minnesota
Research Area: LLM in Behavioral Science Research, AI-Assisted Research Methodology
Discipline: Behavioral Science, Psychology, Artificial Intelligence
The paper proposes a framework with five use cases for integrating large language models into survey and experimental research, introduces the Qualtrics-AI Link (QUAIL) tool, and highlights technical and ethical considerations for using LLMs effectively and validly.
Methods: The paper outlines a decision-making framework for five potential uses of LLMs in survey and experimental design, introduces software (QUAIL) for integrating LLM knowledge into Qualtrics, and details technical steps such as prompt engineering, model testing, and validity monitoring.
Key Findings: Applications, implementation strategies, and ethical considerations of large language models in psychological research material development.
DOI: https://doi.org/10.1007/s10869-025-10035-6
Citations: 6
-
Authors: Y Ba, MV Mancenido, EK Chiou, R Pan
Year: 2025
Published in: Behavior Research Methods, 2025 - Springer
Institution: University of Delaware, National Taiwan University, University of British Columbia, Monash University
Research Area: Crowdsourcing, Data Quality, Spamming Behavior Detection, LLM Applications in Behavioral Research
Discipline: Computer Science, Artificial Intelligence, LLM
The paper introduces a systematic method to evaluate crowdsourced data quality and detect spam behaviors through variance decomposition, proposing a spammer index and credibility metrics to improve consistency and reliability in labeling tasks.
Methods: Variance decomposition, Markov chain models, and generalized random effects models were used to assess annotator consistency and credibility; metrics were applied to both simulated and real-world data from two crowdsourcing platforms.
Key Findings: Quality of crowdsourced data, spammer behaviors, annotators’ consistency, and credibility.
Citations: 2
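The spammer index in this paper is built on variance decomposition of annotation matrices. As an illustrative sketch only, not the authors' exact formulation, one way to capture the same intuition is to score each annotator by how little of their rating variance is shared with the leave-one-out consensus of the other annotators (function name and scoring rule below are assumptions for illustration):

```python
import numpy as np

def spam_scores(labels: np.ndarray) -> np.ndarray:
    """labels: (n_annotators, n_items) matrix of numeric ratings.
    Returns a per-annotator score in [0, 1]; higher = more spam-like.
    Score = 1 - squared correlation between an annotator's ratings and
    the leave-one-out consensus (mean of all other annotators)."""
    n, _ = labels.shape
    scores = np.empty(n)
    total = labels.sum(axis=0)
    for i in range(n):
        consensus = (total - labels[i]) / (n - 1)  # leave-one-out mean
        if np.std(labels[i]) == 0 or np.std(consensus) == 0:
            scores[i] = 1.0  # constant rater carries no item signal
        else:
            r = np.corrcoef(labels[i], consensus)[0, 1]
            scores[i] = 1.0 - r ** 2
    return scores
```

An annotator who rates every item identically (a classic spamming pattern) scores 1.0, while annotators who track item-level differences score near 0.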
-
Authors: C Qian, AT Parisi, C Bouleau, V Tsai
Year: 2025
Published in: Proceedings of the ..., 2025 - aclanthology.org
Institution: Google, Google DeepMind
Research Area: Human-AI Alignment, Collective Reasoning, Social Biases, LLM Simulation of Human Behavior, AI Bias
Discipline: Natural Language Processing, Artificial Intelligence, Computational Social Science
This study examines human-AI alignment in collective reasoning using an empirical framework, demonstrating how LLMs either mirror or mask human biases depending on context, cues, and model-specific inductive biases.
Methods: The study uses the Lost at Sea social psychology task in a large-scale online experiment, simulating LLM groups conditioned on human decision-making data across varying conditions of visible or pseudonymous demographics.
Key Findings: Alignment of LLM behavior with human social reasoning, focusing on collective decision-making and biases in group interactions.
Citations: 1
Sample Size: 748
-
Authors: L Ma, J Qin, X Xu, Y Tan
Year: 2025
Published in: arXiv preprint arXiv:2509.14436, 2025 - arxiv.org
Institution: University of North Carolina Charlotte, University of Science and Technology of China, University of Washington
Research Area: LLM behavior, Algorithmic content preference, Human–AI interaction
Discipline: Computer Science, Information Retrieval, Artificial Intelligence
This paper studies how generative search engines built on large language models (LLMs), such as Google's AI Overviews, select and cite web content. It shows that these engines favor content that is more predictable and semantically coherent for the model, and that LLM-based content polishing can increase the diversity and usefulness of AI summaries for users.
DOI: https://doi.org/10.48550/arXiv.2509.14436
-
Authors: Y Gao, D Lee, G Burtch, S Fazelpour
Year: 2024
Published in: arXiv preprint arXiv:2410.19599, 2024 - arxiv.org
Institution: Boston University, Northeastern University
Research Area: LLMs as Human Surrogates, Social Science Research Methods, Human Behavior Simulation
Discipline: Economics, Artificial Intelligence, Social Science
LLMs fail to accurately replicate human behavior in the 11-20 money request game, cautioning against their use as surrogates for human cognition in social science research.
Methods: The study evaluates the reasoning depth of various advanced LLMs through their performance on the 11-20 money request game, analyzing failure points related to input language, roles, and safeguarding.
Key Findings: The ability of LLMs to replicate human-like behavior and reasoning distribution in the context of social science simulations.
Citations: 25
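The 11-20 money request game used in this study has simple rules: each player requests an integer amount between 11 and 20 and receives it, and a player whose request is exactly one less than the opponent's earns a bonus of 20. A minimal sketch of the payoff rule and the level-k best-response chain it induces (a standard description of the game, not the paper's analysis code):

```python
def payoff(own: int, other: int, bonus: int = 20) -> int:
    """Payoff in the 11-20 money request game: you receive your own
    request, plus a bonus for undercutting the opponent by exactly 1."""
    if not (11 <= own <= 20 and 11 <= other <= 20):
        raise ValueError("requests must be between 11 and 20")
    return own + (bonus if own == other - 1 else 0)

def level_k_request(k: int) -> int:
    """Level-0 asks for the maximum (20); each higher level of reasoning
    best-responds by undercutting the previous level by 1, floored at 11."""
    return max(11, 20 - k)
```

Because human players' requests cluster at specific reasoning depths (mostly levels 1-3), comparing an LLM's request distribution against this chain is what makes the game a sharp probe of human-like reasoning.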
-
Authors: G Gui, O Toubia
Year: 2023
Published in: arXiv preprint arXiv:2312.15524, 2023 - arxiv.org
Institution: University of Southern California, Columbia Business School
Research Area: LLMs and Causal Inference in Human Behavior Simulation, LLM
Discipline: Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Econometrics (econ.EM), Applications (stat.AP)
Citations: 76
-
Authors: X Qin, M Huang, J Ding
Year: 2022
Published in: Available at SSRN 4922861, 2024 - papers.ssrn.com
Institution: Peking University, Tsinghua University, Nankai University
Research Area: AI Social Science, LLM Simulation of Human Behavior, AI Simulation
Discipline: Social Science Research
DOI: https://ssrn.com/abstract=4922861
Citations: 17
-
Authors: S Trott
Year: 2021
Published in: Open Mind, 2024 - direct.mit.edu
Institution: Stanford University, Microsoft Research
Research Area: LLMs in Social Science Research, Crowdworking, Human Behavior Simulation
Discipline: Artificial Intelligence, Social Science, Information Systems
Citations: 22