Once One Fails, All Are Suspect: Understanding Error Generalization in AI
Authors: L Dai, Z Wang, L Chen, J Jin
Published: 2026
Publication: scholarspace.manoa.hawaii.edu
AI errors lead to broader negative generalizations about other AI systems than human errors do, largely due to perceptions that AI is inflexible and unable to learn from mistakes.
Methods: Four single-factor experiments conducted in distinct contexts, comparing human responses to AI errors versus human errors.
Key Findings: Perceptions of error generalize from one AI system to others; the study identifies the psychological mechanisms driving this process.
Limitations: The exact experimental contexts, the diversity of settings, and participant demographics are not detailed.
Institution: Shanghai International Studies University
Research Area: Socio-Economic Impacts of AI, Algorithmic Systems
Discipline: Computer Science, Artificial Intelligence