Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2024
Published in: Nature Communications, 2024 (nature.com)
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Summary: Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments that varied the base rate of misinformation, audio source, question framing, and media modality.
Key Findings: Accuracy in discerning real political speeches from deepfakes increased with richer media modalities (audio and video outperformed transcripts alone), and state-of-the-art text-to-speech audio made fabricated speeches harder to identify.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2,215 participants