Browse 3 peer-reviewed papers from researchers at Northwestern University spanning Deepfakes and Media Forensics, part of the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2024
Published in: Nature ..., 2024 (nature.com)
Institution: Northwestern University; Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments varying base rates of misinformation, audio sources, question framings, and media modalities.
Key Findings: Human accuracy in discerning real political speeches from deepfakes depends on media format and contextual variables; participants performed better with audio-visual cues than with text alone.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2215
-
Authors: S Dandekar, S Deshmukh, F Chiu, WB Knox
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 (arxiv.org)
Institution: University of California, Davis; Northwestern University
Research Area: Reinforcement Learning from Human Feedback (RLHF), Human-AI Interaction, AI Theory
Discipline: Artificial Intelligence, Social Science
The paper investigates how human beliefs about agent capabilities influence preferences in RLHF, proposing a model to minimize the mismatch between beliefs and idealized agent capabilities, ultimately improving policy performance.
Methods: Human studies and synthetic experiments to model and test the impact of belief mismatches on human preferences and RLHF effectiveness.
Key Findings: Human beliefs about agent capabilities shape the preferences they provide, and modeling this belief mismatch improves the performance of RLHF policies.
DOI: https://doi.org/10.48550/arXiv.2506.01692
-
Authors: C Arnold, LZ Xu, K Saffarizadeh
Year: 2021
Published in: Behaviour & Information ..., 2025 (Taylor & Francis)
Institution: Northwestern Mutual Data Science Institute; Marquette University
Research Area: Generative AI, Crowdfunding, Trust in AI, Human-Computer Interaction (HCI), Behavioral Science
Discipline: Human-Computer Interaction (HCI), Behavioral Science