Browse 6 peer-reviewed papers from the University of Zurich spanning Generative AI and Disinformation (2023–2025). Research powered by Prolific's high-quality participant data.
This page lists 6 peer-reviewed papers from researchers at the University of Zurich in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: M Wack, DA Parry
Year: 2025
Published in: International Journal of Communication, 2025
Institution: University of Zurich
Research Area: Generative AI, Disinformation, Political Communication, Ethnic Targeting
Discipline: Communication, Artificial Intelligence, Political Science
The study finds that AI-generated political ads with coethnic avatars are more effective at mobilizing voter support and reducing skepticism, even when labeled as synthetic, with AI literacy playing a key role in identifying such content.
Methods: Survey experiment targeting voter responses to AI-generated political ads with varied presenter ethnicities, including analysis of AI literacy versus digital literacy.
Key Findings: Coethnic AI-generated avatars mobilize voter support more effectively than out-group avatars, even when labeled as synthetic; AI literacy, rather than general digital literacy, predicts the ability to identify synthetic content.
Citations: 1
-
Authors: L Hölbling, S Maier, S Feuerriegel
Year: 2025
Published in: Scientific Reports, 2025
Institution: University of Lausanne, University of Zurich, University of St. Gallen
Research Area: LLMs in Persuasion, Meta-Analysis, Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence
Large language models (LLMs) demonstrate similar persuasive performance to humans overall, but their effectiveness varies widely based on contextual factors such as model type, conversation design, and domain.
Methods: Systematic review and meta-analysis using Hedges' g to compute standardized effect sizes, with exploratory moderator analyses and publication bias checks (Egger's test, trim-and-fill analysis).
Key Findings: LLMs persuade about as effectively as humans on average, but their effectiveness varies widely with model type, conversation design, and domain.
Sample Size: 17,422
-
Authors: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones
Year: 2025
Published in: arXiv preprint
Institution: University of Zurich
Research Area: AI Ethics, AI Bias, News Literacy, Critical Thinking, Computational Social Science
Discipline: Computational Social Science
The study explores leveraging inherent biases in AI to enhance critical thinking in news consumption, proposing strategies such as bias awareness, personalization, and gradual introduction of diverse perspectives.
Methods: Qualitative user study investigating user responses to personalized AI-driven propaganda detection tools.
Key Findings: The effectiveness of AI bias-based strategies in improving critical thinking and news readers’ engagement with diverse perspectives.
-
Authors: F Joessel, S Denkinger, PE Joessel, CS Green
Year: 2025
Published in: Acta Psychologica, 2025
Institution: Max Planck Institute, University of Potsdam, University of Maryland, University of Zurich, University of Arizona
Research Area: Online cognitive training, Automated psychological studies, Crowdsourcing, behavioral research
Discipline: Psychology
The study introduces a fully online method for conducting cognitive training experiments using Prolific, significantly reducing resource demands while achieving robust results and diverse participant recruitment.
Methods: Participants were recruited via Prolific, assigned to groups using a pseudo-randomized procedure, and completed a 12-hour remote cognitive training study with pre- and post-test assessments monitored via custom dashboards.
Key Findings: A fully remote, automated 12-hour cognitive training intervention with pre- and post-test assessments is feasible, producing robust results while substantially reducing resource demands.
-
Authors: Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias, Victor Crespo
Year: 2024
Published in: Nature
Institution: Duke University, ETH Zurich, Georgia State University
Research Area: Moral Responsibility, Agency in AI, Human-AI Moral Interaction
Discipline: Artificial Intelligence Ethics
-
Authors: S Casper, X Davies, C Shi, TK Gilbert
Year: 2023
Published in: arXiv preprint arXiv:2307.15217, 2023
Institution: Columbia University, Cornell Tech, Apollo Research, ETH Zurich, UC Berkeley, University of Sussex, Independent
Research Area: Reinforcement Learning from Human Feedback (RLHF), Alignment, LLM Limitations
Discipline: Artificial Intelligence
DOI: https://doi.org/10.48550/arXiv.2307.15217
Citations: 848