Authors: C Yuan, B Ma, Z Zhang, B Prenkaj, F Kreuter, G Kasneci
Year: 2026
Published in: arXiv preprint arXiv:2601.08634, 2026
Institution: Munich Center for Machine Learning, LMU Munich, Technical University of Munich
Research Area: Artificial Intelligence, AI Ethics, AI Alignment, Political Science, Computational Social Science
Discipline: Computer Science, Natural Language Processing (NLP)
This paper examines how large language models’ (LLMs) political outputs shift when the models are explicitly primed with different moral values. Rather than merely assigning personas (e.g., “pretend to be liberal”), the authors condition models to endorse or reject specific moral values (e.g., utilitarianism, fairness, authority), then measure how those moral primes move the models’ positions in...
DOI: https://doi.org/10.48550/arXiv.2601.08634
Authors: J Beck, S Eckman, C Kern, F Kreuter
Year: 2025
Published in: arXiv preprint arXiv:2509.08514, 2025
Institution: National Institutes of Health, National Center for Biotechnology Information
Research Area: Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Human attitudes toward AI strongly influence performance in collaborative tasks: skeptics show better error detection and higher accuracy, while favorability toward automation increases overreliance on AI suggestions.
Methods: Randomized experiment with a controlled annotation task manipulating AI suggestion quality, task burden, and performance-based financial incentives; collected demographic, attitudinal, and behavioral data.
Key Findings: Impact of AI suggestion quality, task burden, and financial incentives on participant performance metrics (accuracy, correction activity, overcorrection, undercorrection); influence of demographic and psychological characteristics on performance.
Citations: 4
Sample Size: 2,784