Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts
Authors: N Aldahoul, H Ibrahim, M Varvello, A Kaufman
Published: 2025
Publication: arXiv preprint arXiv ..., 2025 - arxiv.org
The study finds that Large Language Models (LLMs) exhibit extreme political views on specific topics despite appearing ideologically moderate overall, and demonstrate a persuasive influence on users' political preferences even in informational contexts.
Methods: Compared 31 LLMs' political biases against benchmarks (legislators, judges, representative voter samples) and conducted a randomized experiment to measure their persuasive impact in informational interactions.
Key Findings: Ideological inconsistency, political extremity, and persuasive effects of LLMs in information-seeking contexts.
Institution: Delft University of Technology, University of Pennsylvania, New York University, King Abdullah University of Science and Technology, Massachusetts Institute of Technology, University of Texas at Austin
Research Area: Artificial Intelligence, Computers and Society, Political Science
Discipline: Artificial Intelligence, Social Science
Sample Size: 31 LLMs evaluated
Citations: 7