Browse 9 peer-reviewed papers (2021–2025) from researchers at Max Planck Institute, spanning topics from algorithmic fairness to visual cognition. This page is part of the Prolific Citations Library, a curated collection of research powered by high-quality human participant data from Prolific.
-
Authors: LM Schulze Buschoff, E Akata, M Bethge
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: Max Planck Institute
Research Area: Visual Cognition, Multimodal Large Language Models (MLLMs), Vision-Language Models (VLMs)
Discipline: Cognitive Science, Artificial Intelligence, Computer Vision
Vision-based large language models show proficiency in visual data interpretation but fall short in human-like abilities for causal reasoning, intuitive physics, and social cognition.
Methods: Controlled experiments evaluating model performance on tasks related to intuitive physics, causal reasoning, and intuitive psychology using visual processing benchmarks.
Key Findings: Model capabilities in understanding physical interactions, causal relationships, and social preferences.
DOI: https://doi.org/10.1038/s42256-024-00963-y
Citations: 70
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2022
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: https://doi.org/10.1145/3551624.3555306
Citations: 62
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 - pnas.org
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling language model sizes leads to diminishing returns in generating persuasive political messages, with larger models providing minimal gains compared to smaller ones after controlling for task completion metrics like coherence and relevance.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: L Muttenthaler, K Greff, F Born, B Spitzer, S Kornblith
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Google DeepMind, Machine Learning Group at Technische Universität Berlin, BIFOLD (Berlin Institute for the Foundations of Learning and Data), Max Planck Institute
Research Area: Cognitive Alignment, Computer Vision, Multi-level Conceptual Knowledge
Discipline: Artificial Intelligence, Cognitive Science
This paper presents a method for aligning machine vision model representations with human visual similarity judgments across different abstraction levels, improving how well models reflect human perceptual and conceptual organization and enhancing generalization and uncertainty prediction.
Citations: 11
-
Authors: G Lima, N Grgić-Hlača, M Langer, Y Zou
Year: 2025
Published in: Proceedings of the 2025 CHI ..., 2025 - dl.acm.org
Institution: University of Maryland, Max Planck Institute, Stanford University, Cornell University
Research Area: Algorithmic Fairness, Systemic Injustice, Social Perception of AI, Algorithmic Discrimination
Discipline: Computational Social Science
The study examines how contextualizing algorithms within systemic injustice impacts perceptions of algorithmic discrimination, finding disparate effects based on participant group identity and revealing unintended consequences of such contextualization.
Methods: 2x3 between-participants experiment using the hiring context as a case-study; examined the influence of systemic injustice information and algorithms' bias perpetuation on lay perceptions.
Key Findings: Impact of systemic injustice framing and explanation of algorithmic bias perpetuation on participants' views of algorithmic fairness and discrimination.
DOI: https://doi.org/10.1145/3706598.3713536
Citations: 2
Sample Size: 716
-
Authors: F Joessel, S Denkinger, PE Joessel, CS Green
Year: 2025
Published in: Acta Psychologica, 2025 - Elsevier
Institution: Max Planck Institute, University of Potsdam, University of Maryland, University of Zurich, University of Arizona
Research Area: Online cognitive training, Automated psychological studies, Crowdsourcing, behavioral research
Discipline: Psychology
The study introduces a fully online method for conducting cognitive training experiments using Prolific, significantly reducing resource demands while achieving robust results and diverse participant recruitment.
Methods: Participants were recruited via Prolific, assigned to groups using a pseudo-randomized procedure, and completed a 12-hour remote cognitive training study with pre- and post-test assessments monitored via custom dashboards.
Key Findings: Impact of a 12-hour cognitive training intervention on participants' cognitive functions, conducted in a remote and automated manner.
-
Authors: A von Schenk, V Klockmann
Year: 2024
Published in: ... on Psychological Science, 2025 - journals.sagepub.com
Institution: Max Planck Institute
Research Area: Social Preferences, Behavioral Economics, Human-Machine Interaction
Discipline: Behavioral Science
Humans exhibit stronger social preferences toward machines when they know machine payoffs benefit a human recipient, but weaker preferences when payoff information is absent, suggesting that belief formation is self-serving.
Methods: Conducted an online experiment with participants and follow-up surveys to compare the impact of different implementations of machine payoffs and information transparency on social preferences.
Key Findings: Social preferences and reciprocity behaviors toward machines with varying payoff structures and transparency about the beneficiaries.
DOI: https://doi.org/10.1177/17456916231194949
Citations: 40
Sample Size: 1198
-
Authors: Z Qiu, W Liu, H Feng, Z Liu, T Xiao
Year: 2024
Published in: arXiv
Institution: Massachusetts Institute of Technology, Max Planck Institute, University of Cambridge
Research Area: Computational cognition, LLM evaluation, Program synthesis, Multimodal reasoning
Discipline: Artificial Intelligence
Introduces SGP-Bench, a benchmark testing whether LLMs can answer semantic and spatial questions about images purely from graphics programs (SVG/CAD), effectively probing "visual imagination without vision." The authors show that current LLMs struggle, sometimes performing near chance, even on images that are trivial for humans, but demonstrate that Symbolic Instruction Tuning (SIT) can meaningfully improve this ability.
-
Authors: FM Farke, DG Balash, M Golla, M Dürmuth
Year: 2021
Published in: 30th USENIX Security ..., 2021 - usenix.org
Institution: Ruhr University Bochum, Max Planck Institute, University of Michigan, CISPA Helmholtz Center for Information Security
Research Area: Human-Computer Interaction (HCI), Privacy, User Security
Discipline: Human-Computer Interaction (HCI), Computer Security and Privacy, Social Science Research
Citations: 80