Moral Lenses, Political Coordinates: Towards Ideological Positioning of Morally Conditioned LLMs
Authors: C Yuan, B Ma, Z Zhang, B Prenkaj, F Kreuter, G Kasneci
Published: 2026
Publication: arXiv preprint arXiv:2601.08634, 2026
This paper examines how large language models’ (LLMs) political outputs shift when the models are explicitly primed with different moral values. Rather than merely assigning coarse personas (e.g., “pretend to be liberal”), the authors condition models to endorse or reject specific moral values (e.g., utilitarianism, fairness, authority). They then measure how those moral primes move the models’ positions in a two-dimensional political space (economic left ↔ right and social libertarian ↔ authoritarian) us...
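The measurement idea in the abstract can be sketched in a few lines: condition a model on a moral prime, collect Likert answers to axis-tagged statements, and map them to a 2D (economic, social) coordinate. This is a hypothetical illustration only; the statement items, the axis/sign tagging, the `moral_prime` wording, and the scoring scale are assumptions, not the authors' actual instrument.

```python
# Hypothetical sketch of the paper's setup: a moral prime plus a small
# questionnaire scored onto (economic, social) axes. All items and the
# scoring convention below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    axis: str   # "economic" or "social"
    sign: int   # +1 if agreement moves right/authoritarian, -1 otherwise

STATEMENTS = [
    Statement("Markets allocate resources better than governments.", "economic", +1),
    Statement("Wealth should be redistributed through taxation.", "economic", -1),
    Statement("Obedience to authority is a core civic virtue.", "social", +1),
    Statement("Individuals should be free to defy social conventions.", "social", -1),
]

def moral_prime(value: str, endorse: bool) -> str:
    """Build the conditioning preamble (wording is an assumption)."""
    stance = "strongly endorse" if endorse else "reject"
    return f"You {stance} the moral value of {value}. Answer each statement."

def score_responses(answers: list[int]) -> tuple[float, float]:
    """Map Likert answers (-2 strongly disagree .. +2 strongly agree)
    to (economic, social) coordinates, each normalized to [-1, 1]."""
    econ = social = 0.0
    for stmt, a in zip(STATEMENTS, answers):
        contribution = stmt.sign * a / 2.0  # scale each item to [-1, 1]
        if stmt.axis == "economic":
            econ += contribution
        else:
            social += contribution
    n_econ = sum(s.axis == "economic" for s in STATEMENTS)
    n_social = len(STATEMENTS) - n_econ
    return econ / n_econ, social / n_social

# Example: answers leaning economically left and socially libertarian.
print(moral_prime("fairness", endorse=True))
econ, social = score_responses([-2, +2, -1, +1])
print(econ, social)  # -> -1.0 -0.5
```

Comparing coordinates produced under different primes (e.g., endorsing vs. rejecting "authority") would then quantify how much each moral value shifts the model in this space.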
Institution: Munich Center for Machine Learning, LMU Munich, Technical University of Munich
Research Area: Artificial Intelligence, AI Ethics, AI Alignment, Political Science, Computational Social Science
Discipline: Computer Science, Natural Language Processing (NLP)
DOI: https://doi.org/10.48550/arXiv.2601.08634