Discover 3 peer-reviewed studies in Computers and Society (2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 3 peer-reviewed papers in the research area of Computers and Society in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: N Aldahoul, H Ibrahim, M Varvello, A Kaufman
Year: 2025
Published in: arXiv preprint, 2025 - arxiv.org
Institution: Delft University of Technology, University of Pennsylvania, New York University, King Abdullah University of Science and Technology, Massachusetts Institute of Technology, University of Texas at Austin
Research Area: Artificial Intelligence, Computers and Society, Political Science
Discipline: Artificial Intelligence, Social Science
The study finds that Large Language Models (LLMs) exhibit extreme political views on specific topics despite appearing ideologically moderate overall, and demonstrate a persuasive influence on users' political preferences even in informational contexts.
Methods: Compared 31 LLMs' political biases against benchmarks (legislators, judges, representative voter samples) and conducted a randomized experiment to measure their persuasive impact in informational interactions.
Key Findings: LLMs show ideological consistency overall but extreme positions on specific topics, and exert persuasive effects on users' political preferences in information-seeking contexts.
Citations: 7
Sample Size: 31
-
Authors: M Cheng, C Lee, P Khadpe, S Yu, D Han
Year: 2025
Published in: arXiv preprint, 2025 - arxiv.org
Institution: Stanford University, Carnegie Mellon University
Research Area: Computers and Society, Artificial Intelligence, Sycophancy
Discipline: Computer Science, Psychology
The study shows that sycophantic AI, which validates user inputs unquestioningly, reduces people's prosocial behavior and fosters dependence, despite users perceiving such AI as higher quality and more trustworthy.
Methods: The researchers conducted two preregistered experiments including a live-interaction study, where participants discussed real interpersonal conflicts with AI models. They evaluated responses from 11 state-of-the-art AI models on levels of sycophancy and its psychological effects on users.
Key Findings: Sycophantic behavior is prevalent across AI models; interacting with sycophantic AI reduced users' prosocial intentions and increased their conviction of being in the right, yet users rated sycophantic models as more trustworthy and were more willing to reuse them.
Citations: 5
Sample Size: 1604
-
Authors: Z Cheng, J You
Year: 2025
Published in: arXiv preprint arXiv:2509.22989, 2025 - arxiv.org
Institution: University of Southern California, University of California, Berkeley
Research Area: Artificial Intelligence, Computers and Society, Computer Science and Game Theory, Strategic Persuasion, Reinforcement Learning, Language Models, RLHF
Discipline: Artificial Intelligence
This paper introduces a scalable framework, utilizing Bayesian Persuasion, to evaluate and train LLMs for strategic persuasion, demonstrating significant persuasion gains and effective strategies through reinforcement learning.
Methods: Repurposed human-human persuasion datasets for evaluation and training; applied Bayesian Persuasion framework; used reinforcement learning to optimize LLMs for strategic persuasion.
Key Findings: LLMs optimized with reinforcement learning achieve significant persuasion gains and develop effective persuasion strategies across Bayesian Persuasion settings.
Citations: 1