Browse 4 peer-reviewed papers from the University of Bayreuth spanning Human-AI Collaboration and Explainable AI (XAI) (2021–2025). Research powered by Prolific's high-quality participant data.
This page lists 4 peer-reviewed papers from researchers at the University of Bayreuth in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: P Spitzer, J Holstein, K Morrison
Year: 2025
Published in: ... Journal of Human ..., 2025 - Taylor & Francis
Institution: Karlsruhe Institute of Technology, Carnegie Mellon University, University of Bayreuth
Research Area: Human-AI Collaboration, Explainable AI (XAI)
Discipline: Human-Computer Interaction (HCI)
Incorrect explanations in AI-assisted decision-making lead to a misinformation effect, negatively impacting human reasoning, procedural knowledge, and collaboration performance.
Methods: An experimental study of AI-supported decision-making paired with explainable AI (XAI), assessing the effects of incorrect explanations on human-AI collaboration.
Key Findings: Incorrect explanations degrade human reasoning strategies, procedural knowledge, and team performance in human-AI collaboration.
Citations: 13
Sample Size: 160
-
Authors: P Spitzer, K Morrison, V Turri, M Feng, A Perer
Year: 2025
Published in: ACM Transactions on ..., 2025 - dl.acm.org
Institution: Carnegie Mellon University, Karlsruhe Institute of Technology, University of Bayreuth
Research Area: Explainable AI (XAI), AI-Assisted Decision-Making, Human-AI Collaboration
Discipline: Artificial Intelligence
The study highlights how imperfect explainable AI (XAI), along with human cognitive styles, affects reliance on AI and the performance of human–AI teams, providing design guidelines for better collaboration systems.
Methods: The researchers conducted a study with 136 participants, analyzing the effects of explanation imperfections and cognitive styles on AI-assisted decision-making and human–AI collaboration.
Key Findings: Incorrect explanations and explanation modalities shape human reliance, decision-making, and human-AI team performance, with cognitive styles moderating these effects.
Citations: 2
Sample Size: 136
-
Authors: J Beck
Year: 2025
Published in: 2025 - edoc.ub.uni-muenchen.de
Institution: Ludwig-Maximilians-Universität München, University of Bayreuth
Research Area: Annotation Quality, Human-AI Collaboration, Behavioral Science, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
The study empirically evaluates annotation bias, proposes strategies to reduce its impact, and explores the use of large language models in automated and hybrid annotation workflows.
Methods: Empirical assessments and experimental evaluations involving annotation workflows and large language models.
Key Findings: Annotation bias measurably affects annotation quality, and hybrid workflows integrating human input with AI models can mitigate its impact.
-
Authors: A Goswami, A Ak, W Hauser
Year: 2021
Published in: 2021 IEEE 23rd ..., 2021 - ieeexplore.ieee.org
Institution: University of Göttingen, University of Bayreuth
Research Area: Crowdsourcing, Subjective Evaluation, Tone Mapping, AI Evaluation
Discipline: Human-Computer Interaction (HCI)
Citations: 7