This page lists 10 peer-reviewed studies (2022–2025) in the research area of Human Perception from the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific's diverse participant panel.
-
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2024
Published in: Nature Communications, 2024
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments with varied base rates of misinformation, audio sources, question framings, and media modalities were conducted.
Key Findings: Human accuracy in discerning real political speeches from deepfakes across media formats and contextual variables.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2215
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2022
Published in: Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2022
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: 10.1145/3551624.3555306
Citations: 62
-
Authors: B Katz, N Abdelgawad, D Friedberg, P Roberts, S Misra
Year: 2025
Published in: Innovation in Aging, 2025
Institution: Virginia Tech
Research Area: Human–AI Interaction (HCI), Technology Perception
Discipline: Behavioral Science
Age significantly influences perceptions of generative AI tools, with older individuals perceiving more benefits and fewer risks compared to younger individuals; thinking dispositions also play a role.
Methods: A nationally representative survey of US adults conducted via the Prolific platform using various AI-relevant scales, including attitudes, risks, benefits, frequency of use, expertise, and literacy assessments.
Key Findings: Demographic factors, industry types, thinking dispositions, and attitudes toward generative AI tools, including risk and utility perceptions.
Citations: 1
Sample Size: 500
-
Authors: N Tyulina, Y Yu, TA Emmanouil, SI Levitan
Year: 2025
Published in: Proceedings of the 7th ACM ..., 2025
Institution: University of Cambridge, University of Bath, University of Edinburgh, New York University
Research Area: Human-AI Interaction, Trust and Perception, Nonverbal Communication
Discipline: Applied Linguistics
Trust judgments are primarily influenced by auditory cues in both humans and multimodal models, though subtle differences in modality weighting exist between them.
Methods: Behavioral experiment with trust ratings of bimodal stimuli across four trust congruence conditions, combined with a multimodal model trained using HuBERT and ResNet-50 with late fusion, analyzed using Permutation Feature Importance (PFI).
Key Findings: The construction of trust from visual and auditory signals in both humans and multimodal models, focusing on modality dominance and feature weighting.
Sample Size: 150
-
Authors: Jiaqi Zhu, Andras Molnar
Year: 2025
Published in: arXiv
Institution: University of Michigan
Research Area: Social Psychology, Human-AI Interaction, Generative AI Impact on Social Perception
Discipline: Social Science, Social Psychology, Human-Computer Interaction (HCI)
Impressions of written messages are overly positive when recipients are unaware of potential Generative AI (GenAI) use, but negative when GenAI use is explicitly disclosed.
Methods: A pre-registered large-scale online experiment leveraged Prolific participants to assess social impressions in diverse communication contexts, with varying levels of sender disclosure regarding GenAI use.
Key Findings: The influence of known or uncertain GenAI use on recipients' social impressions of message senders across different personal and professional contexts.
Sample Size: 647
-
Authors: N Haduong
Year: 2025
Published in: ProQuest Dissertations & Theses, 2025
Institution: University of Washington
Research Area: Human-AI Interaction and Perception
Discipline: Human-Computer Interaction (HCI)
The research focuses on developing methodologies to bridge the gap between controlled laboratory studies and real-world human-AI perceptions and interactions, promoting task immersion and intrinsic motivation to model realistic behaviors.
Methods: Used task immersion techniques, domain-specific recruitment, error taxonomy development, and CPS-TaskForge environment generator for systematic study of collaborative problem solving and AI-assisted decision-making.
Key Findings: Human perceptions of AI in collaborative problem solving, understanding risks in AI-assisted decision making, and user behavior under performance pressure with AI advice.
-
Authors: Y Yin, N Jia, CJ Wakslak
Year: 2024
Published in: Proceedings of the National Academy of Sciences (PNAS), 2024
Institution: University of Southern California
Research Area: Human-AI Interaction, Social Perception of AI, Media Effects
Discipline: Social Sciences
AI-generated responses make recipients feel more heard than human responses and provide more effective emotional support, but labeling a response as AI-generated diminishes this effect.
Methods: Experiment and follow-up study to assess recipient reactions to AI vs. human-generated responses and determine emotional support efficacy.
Key Findings: The degree to which recipients feel heard, emotion detection accuracy, and third-party ratings of emotional support quality.
DOI: https://doi.org/10.1073/pnas.2319112121
Citations: 201
-
Authors: JD Lomas, W van der Maden, S Bandyopadhyay
Year: 2024
Published in: Advanced Design ..., 2024 (Elsevier)
Institution: Delft University of Technology, Playpower Labs, Hong Kong Polytechnic University, Utrecht University
Research Area: AI Alignment, Affective Computing, Emotional Expression in Generative AI, Human Perception of AI Emotions
Discipline: Affective Computing, Artificial Intelligence, Human-Computer Interaction (HCI)
This study evaluates how well generative AI systems (such as DALL·E 2/3 and Stable Diffusion) generate emotionally expressive content that aligns with how humans perceive those emotions. Model performance varies by emotion type and by model, with implications for designing more emotionally aligned AI.
DOI: https://doi.org/10.1016/j.ijadr.2024.10.002
Citations: 5
-
Authors: Mahjabin Nahar, Haeseung Seo, Eun-Ju Lee, Aiping Xiong, Dongwon Lee
Year: 2024
Published in: arXiv
Institution: Pennsylvania State University, Seoul National University
Research Area: LLM Hallucinations, Human Perception, Warning Effects in HCI
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI)
-
Authors: J Choi, MM Chao
Year: 2024
Published in: Personality and Social Psychology Bulletin, 2024
Research Area: Personality and Social Psychology, Human-AI Interaction, Fairness Perception
Discipline: Social Science, Artificial Intelligence, Psychology
DOI: https://doi.org/10.1177/01461672241288338
Citations: 5