Browse 21 peer-reviewed papers from the University of Oxford spanning LLM and political persuasion research (2021–2025). Research powered by Prolific's high-quality participant data.
This page lists 21 peer-reviewed papers from researchers at the University of Oxford in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: K Hackenburg, L Ibrahim, BM Tappin, M Tsakiris
Year: 2025
Published in: AI & SOCIETY, 2025 - Springer
Institution: Oxford Internet Institute, University of Oxford
Research Area: Political Communication and Persuasion, LLM
Discipline: Political Science, Artificial Intelligence
GPT-4-generated persuasive messages rivaled those written by human persuasion experts on polarized U.S. political issues, suggesting AI tools may have significant implications for political campaigns and democracy.
Methods: Pre-registered experiment where GPT-4 generated partisan role-playing persuasive messages, which were compared to those from human persuasion experts.
Key Findings: Persuasive impact of GPT-4-generated messages versus human expert messages on U.S. political issues.
Citations: 35
Sample Size: 4955
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the National Academy of Sciences, 2025 - pnas.org
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling language model size yields diminishing returns in generating persuasive political messages: larger models provide only minimal gains over smaller ones once task completion metrics such as coherence and relevance are accounted for.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: L Ibrahim, C Akbulut, R Elasmar, C Rastogi, M Kahng, MR Morris, KR McKee, V Rieser, M Shanahan, L Weidinger
Year: 2025
Published in: arXiv preprint arXiv:2502.07077, 2025 - arxiv.org
Institution: Google DeepMind, Google, University of Oxford
Research Area: Multimodal Conversational AI, Evaluation Methodology, Benchmarking
Discipline: Computer Science, Natural Language Processing (NLP), Human–Computer Interaction (HCI)
The paper evaluates anthropomorphic behaviors in state-of-the-art LLMs through a multi-turn methodology, showing that behaviors such as empathy and relationship-building predominantly emerge over multiple interaction turns and shape user perceptions.
Methods: Multi-turn evaluation of 14 anthropomorphic behaviors using simulations of user interactions, validated by a large-scale human subject study.
Key Findings: Anthropomorphic behaviors in large language models, including relationship-building and pronoun usage, and their perception by users.
Citations: 26
Sample Size: 1101
-
Authors: P. Schoenegger, F. Salvi, J. Liu, X. Nan, R. Debnath, B. Fasolo, E. Leivada, G. Recchia, F. Günther, A. Zarifhonarvar, J. Kwon, Z. Ul Islam, M. Dehnert, D. Y. H. Lee, M. G. Reinecke, D. G. Kamper, M. Kobaş, A. Sandford, J. Kgomo, L. Hewitt, S. Kapoor, K. Oktar, E. E. Kucuk, B. Feng, C. R. Jones, I. Gainsburg, S. Olschewski, N. Heinzelmann, F. Cruz, B. M. Tappin, T. Ma, P. S. Park, R. Onyonka, A. Hjorth, P. Slattery, Q. Zeng, L. Finke, I. Grossmann, A. Salatiello, E. Karger
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: London School of Economics and Political Science, University of Cambridge, University College London, Massachusetts Institute of Technology, University of Oxford, Modulo Research, Stanford University, Federal Reserve Bank of Chicago, ETH Zürich, University of Johannesburg
Research Area: Computation and Language
Discipline: Social Science, Artificial Intelligence
This paper compares a frontier LLM (Claude 3.5 Sonnet) against incentivized human persuaders in a conversational quiz setting, finding that the AI's persuasion capabilities surpass those of human persuaders even when the humans receive real-money bonuses tied to their performance.
Citations: 16
-
Authors: D Guilbeault, S Delecourt, BS Desikan
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Stanford University, University of California Berkeley, University of Oxford
Research Area: AI Bias, Media Representation, Social Science
Discipline: Computational Social Science, Artificial Intelligence
The study highlights age-related gender bias in online media and language models, showing women are portrayed as younger than men, especially in high-status occupations, and explores how algorithms amplify these biases.
Methods: Analysis of 1.4 million images and videos from online sources and nine language models, followed by a pre-registered experiment involving participants to evaluate biases in internet content and algorithms.
Key Findings: Age and gender bias in occupational depiction across online platforms and language models, as well as its influence on beliefs and hiring preferences.
Citations: 4
Sample Size: 459
-
Authors: T Davidson
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: University of Oxford, Davidson College
Research Area: Hate Speech Evaluation, Multimodal LLMs, Social Bias, Computational Law, AI Bias, AI Evaluation
Discipline: Artificial Intelligence
The study demonstrates that larger multimodal large language models (MLLMs) can align closely with human judgement in context-sensitive hate speech evaluations, though they still exhibit biases and limitations.
Methods: Conjoint experiments where simulated social media posts varying in attributes like slur usage and user demographics were evaluated by MLLMs and compared to human judgements.
Key Findings: The capacity of MLLMs to evaluate hate speech in a context-sensitive manner and their alignment with human judgement, while assessing biases and responsiveness to contextual cues.
Sample Size: 1854
-
Authors: HR Kirk, M Bartolo, A Whitefield, P Rottger
Year: 2024
Published in: Advances in Neural Information Processing Systems, 2024 - proceedings.neurips.cc
Institution: Meta, Cohere, AWS AI Labs, Contextual AI, Factored AI, University of Oxford, Bocconi University, Meedan, Hugging Face, University College London, ML Commons, University of Pennsylvania
Research Area: LLM Alignment, Human Feedback, Multicultural Studies
Discipline: Artificial Intelligence, Computational Social Science
The PRISM Alignment Dataset presents a large-scale, culturally diverse human feedback dataset linking sociodemographic profiles of 1,500 participants from 75 countries to their contextual preferences and fine‑grained ratings in 8,011 live conversations with 21 LLMs. This enables analysis of how subjective values vary across people and cultures in LLM alignment data.
DOI: https://doi.org/10.52202/079017-3342
Citations: 204
-
Authors: HR Kirk, I Gabriel, C Summerfield, B Vidgen
Year: 2024
Published in: Humanities and Social Sciences Communications, 2025 - nature.com
Institution: Oxford Internet Institute, University of Oxford
Research Area: Socioaffective Alignment in Human-AI Relationships, AI Ethics, Behavioral Science
Discipline: Artificial Intelligence, Behavioral Science
The paper emphasizes the need for socioaffective alignment in human-AI relationships to ensure AI systems support human psychological needs rather than exploit them, as interactions with AI transition from transactional to sustained engagement.
Methods: Conceptual analysis of socioaffective dynamics in human-AI interactions, framed through psychological theories and principles.
Key Findings: Exploration of how AI systems impact socioaffective relationships, psychological needs, autonomy, companionship, and human well-being.
DOI: https://doi.org/10.1057/s41599-025-04532-5
Citations: 59
-
Authors: T Eloundou, A Beutel, DG Robinson
Year: 2024
Published in: arXiv preprint arXiv:2410.19803, 2024 - arxiv.org
Institution: OpenAI, Google DeepMind, Google, University of Oxford
Research Area: Fairness in LLM, AI Bias, AI Ethics
Discipline: Artificial Intelligence, Social Science
The paper introduces a counterfactual approach to evaluate 'first-person fairness' in chatbots, demonstrating that reinforcement learning can mitigate biases based on demographics across extensive chatbot interactions.
Methods: The study uses a Language Model as a Research Assistant (LMRA) to quantitatively and qualitatively assess biases based on demographics across millions of chatbot interactions, covering 66 tasks in 9 domains and involving two genders and four races. Bias evaluations are corroborated by independent...
Key Findings: Demographic biases in chatbot responses, including harmful stereotypes and response differences by gender and race, across diverse tasks and domains.
DOI: https://doi.org/10.48550/arXiv.2410.19803
Citations: 33
Sample Size: 6000000
-
Authors: S Du, MT Babalola, P D'cruz, E Dóci
Year: 2024
Published in: Journal of Business ..., 2024 - Springer
Institution: Nottingham University Business School, University of Reading, Oxford Brookes University, University of Portsmouth
Research Area: Crowdsourcing Ethics, Social Sciences, Organizational Behavior
Discipline: Social Science
The paper explores the ethical, societal, and global implications of using crowdsourcing platforms for research, emphasizing the need for fair compensation, transparency, and consideration of global disparities between the Global North and South.
Methods: The paper provides a conceptual analysis and critique of crowdsourcing research practices, focusing on ethical and societal considerations.
Key Findings: Ethical, societal, and global implications of crowdsourcing research practices, including data quality, reporting transparency, fair remuneration, and the role of global disparities.
Citations: 24
-
Authors: K Hackenburg, BM Tappin, P Röttger, S Hale
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: University of Oxford, The Alan Turing Institute, Royal Holloway, University of London, Bocconi University, Meedan
Research Area: LLM scaling laws, Political Persuasion, LLM, AI Social Science
Discipline: Political Science, Artificial Intelligence
Persuasiveness of messages generated by large language models follows a log scaling law with diminishing returns as model size increases, and task completion appears to primarily drive this capability (an illustrative sketch of such a log fit follows this entry).
Methods: Generated 720 persuasive messages on 10 U.S. political issues using 24 language models of varying sizes; evaluated persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasiveness of large language model-generated political messages across different model sizes.
Citations: 17
Sample Size: 25982
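The log scaling relationship described in this entry can be illustrated with a brief curve fit. The sketch below is not the authors' code: the function name, the placeholder model sizes, and the effect values are all hypothetical, and it only assumes the functional form persuasion(N) ≈ a + b·log(N) for parameter count N.

# Minimal illustrative sketch (not the authors' code) of a logarithmic scaling fit:
# persuasion(N) = a + b * log(N), where N is the model's parameter count.
# All data values below are hypothetical placeholders, not results from the paper.
import numpy as np
from scipy.optimize import curve_fit

def log_scaling(n_params, a, b):
    # Persuasive effect modeled as a logarithmic function of model size.
    return a + b * np.log(n_params)

# Hypothetical (parameter count, mean persuasive effect in percentage points) pairs.
sizes = np.array([1e8, 1e9, 7e9, 7e10, 1e12])
effects = np.array([2.1, 3.0, 3.6, 4.4, 5.2])

(a, b), _ = curve_fit(log_scaling, sizes, effects)
print(f"fitted curve: persuasion ~ {a:.2f} + {b:.2f} * ln(N)")

Under this functional form, each tenfold increase in parameter count adds roughly b·ln(10) percentage points of persuasive effect, which is one way to read the diminishing returns the paper reports.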
-
Authors: M Lerner, F Dorner, E Ash, N Goel
Year: 2024
Published in: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 2024 - aclanthology.org
Institution: ETH Zürich, University of Oxford
Research Area: Fairness in AI, Content Moderation, Human-AI Alignment
Discipline: Computational Social Science
Citations: 5
-
Authors: G Newlands, C Lutz
Year: 2024
Published in: ScienceDirect
Institution: BI Norwegian Business School, University of Oxford
Research Area: Occupational Sociology, Labor Economics
Discipline: Sociology
-
Authors: HR Kirk, C Osborne
Year: 2024
Published in: arXiv
Institution: The Alan Turing Institute, Oxford Internet Institute, University of Oxford
Research Area: Computational Social Science, AI Community Analysis, Hugging Face Hub Activity
Discipline: Computational Social Science
-
Authors: Macken Murphy, Caroline A. Phillips, Khandis R. Blake
Year: 2024
Published in: ScienceDirect
Institution: University of Melbourne, University of Oxford
Research Area: Evolutionary Psychology, Human Sexuality, Infidelity Studies
Discipline: Psychology
-
Authors: K Hackenburg, H Margetts
Year: 2023
Published in: Proceedings of the National Academy of Sciences, 2024 - pnas.org
Institution: University of Oxford, The Alan Turing Institute
Research Area: Political Persuasion, LLM, Political Science
Discipline: Political Science
DOI: https://doi.org/10.1073/pnas.2403116121
Citations: 153
-
Authors: HR Kirk, B Vidgen, P Röttger, SA Hale
Year: 2023
Published in: arXiv preprint arXiv:2303.05453, 2023 - arxiv.org
Institution: The Alan Turing Institute, University of Oxford, Imperial College London, King's College London, Google DeepMind
Research Area: Large Language Model Alignment, Safety, Personalization Risks
Discipline: Artificial Intelligence
DOI: https://doi.org/10.48550/arXiv.2303.05453
Citations: 146
-
Authors: C Velasco, F Barbosa Escobar, C Spence, JS Olier
Year: 2023
Published in: ScienceDirect
Institution: Aarhus University, BI Norwegian Business School, Tilburg University, University of Copenhagen, University of Oxford
Research Area: Food Science
Discipline: Experimental Psychology, Food Science
Citations: 29
-
Authors: HP Cowley, M Natter, K Gray-Roncal, RE Rhodes
Year: 2022
Published in: Scientific Reports, 2022 - nature.com
Institution: Johns Hopkins University Applied Physics Laboratory, University of Oxford, Johns Hopkins University
Research Area: Human-AI Interaction, Machine Learning Evaluation, AI Evaluation
Discipline: Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1038/s41598-022-08078-3
Citations: 34
-
Authors: N Gupta, L Rigotti, A Wilson
Year: 2021
Published in: arXiv preprint arXiv:2107.05064, 2021 - arxiv.org
Institution: University of Cambridge, University of Verona, University of Oxford, University of Pittsburgh
Research Area: Experimental Design, Research Methodology, Inferential Statistics
Discipline: Social Science Research Methods
Citations: 104