Human detection of political speech deepfakes across transcripts, audio, and video
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Published: 2024
Publication: Nature Communications
Humans are better at detecting political speech deepfakes from audio-visual cues than from transcripts alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments with varied base rates of misinformation, audio sources, question framings, and media modalities were conducted.
Key Findings: Human accuracy in discerning real political speeches from deepfakes is higher for audio and video than for transcripts alone; base rates of misinformation showed no significant effect on discernment.
Limitations: Generalizability to broader populations and real-world viewing contexts is not addressed.
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Sample Size: 2215 participants
Citations: 63
DOI: https://doi.org/10.1038/s41467-024-51998-z