Discover 14 peer-reviewed studies in Generative AI (2021–2025). Explore research findings powered by Prolific's diverse participant panel.
This page lists 14 peer-reviewed papers in the research area of Generative AI in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: S Zhang, J Xu, AJ Alvero
Year: 2025
Published in: Sociological Methods & Research, 2025 - journals.sagepub.com
Institution: University of Maryland, Indiana University, University of Minnesota Duluth
Research Area: Sociological Methods, Generative AI, Survey Methodology
Discipline: Sociology, Social Science
The study finds that 34% of research participants use generative AI tools such as large language models (LLMs) to assist with open-ended survey responses, producing answers that are more homogeneous and more positive, which could undermine data validity by masking social variation.
Methods: The study conducted an original survey on a popular online platform and simulated comparisons between human-written responses from pre-ChatGPT studies and LLM-generated responses.
Key Findings: Use of LLMs by survey participants, differences in text homogeneity, positivity, and masking of social variation in open-ended survey responses.
Citations: 26
-
Authors: U Messer
Year: 2025
Published in: Computers in Human Behavior: Artificial Humans, 2025 - Elsevier
Institution: Universität der Bundeswehr München
Research Area: Political Bias in Generative AI, Human-AI Interaction, Affective Computing, AI Bias
Discipline: Computer Science, Human-AI Interaction
People's acceptance and reliance on Generative AI (GAI) increase when they perceive alignment between their political orientation and the bias of GAI-generated content, leading to expanded trust in sensitive applications.
Methods: Three experiments analyzing behavioral reactions to politically biased content generated by GAI, including the impact of perceived alignment on acceptance and trust.
Key Findings: Participants' acceptance, reliance, and trust in GAI based on perceived alignment between political bias of GAI-generated content and their own political beliefs.
DOI: https://doi.org/10.1016/j.chbah.2024.100108
Citations: 24
Sample Size: 513
-
Authors: AG Møller, DM Romero, D Jurgens
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: University of Copenhagen, University of Michigan, Pioneer Centre for AI
Research Area: Generative AI, Social Media, Human-Computer Interaction (HCI)
Discipline: Computational Social Science
Generative AI tools on social media increase user engagement and content volume but reduce perceived quality and authenticity in discussions, highlighting challenges for ethical integration.
Methods: Controlled experiment with participants assigned to small discussion groups under distinct AI-assisted treatment conditions including chat assistance, conversation starters, feedback on comment drafts, and reply suggestions.
Key Findings: Impact of generative AI tools on user behavior, engagement, content volume, perceived quality, and authenticity in social media interactions.
DOI: https://doi.org/10.48550/arXiv.2506.14295
Citations: 9
Sample Size: 680
-
Authors: M Wack, DA Parry
Year: 2025
Published in: International Journal of Communication, 2025 - ijoc.org
Institution: University of Zurich
Research Area: Generative AI, Disinformation, Political Communication, Ethnic Targeting
Discipline: Communication, Artificial Intelligence, Political Science
The study finds that AI-generated political ads with coethnic avatars are more effective at mobilizing voter support and reducing skepticism, even when labeled as synthetic, with AI literacy playing a key role in identifying such content.
Methods: Survey experiment targeting voter responses to AI-generated political ads with varied presenter ethnicities, including analysis of AI literacy versus digital literacy.
Key Findings: Effectiveness of coethnic versus out-group ethnic AI-generated avatars in mobilizing voter support and the role of AI literacy in detecting synthetic content.
Citations: 1
-
Authors: Jiaqi Zhu, Andras Molnar
Year: 2025
Published in: ArXiv
Institution: University of Michigan
Research Area: Social Psychology, Human-AI Interaction, Generative AI Impact on Social Perception
Discipline: Social Science, Social Psychology, Human-Computer Interaction (HCI)
Recipients form overly positive impressions of written messages when they are unaware of potential Generative AI (GenAI) use, but negative impressions when GenAI use is explicitly disclosed.
Methods: A pre-registered large-scale online experiment leveraged Prolific participants to assess social impressions in diverse communication contexts, with varying levels of sender disclosure regarding GenAI use.
Key Findings: The influence of known or uncertain GenAI use on recipients' social impressions of message senders across different personal and professional contexts.
Sample Size: 647
-
Authors: SC Matz, JD Teeny, SS Vaid, H Peters, GM Harari
Year: 2024
Published in: Scientific Reports, 2024 - nature.com
Institution: Stanford University
Research Area: Personalized Persuasion, Generative AI, Political Influence
Discipline: Artificial Intelligence
Generative AI, specifically large language models like ChatGPT, effectively scale personalized persuasion by matching messages to psychological profiles, demonstrating increased influence across domains and profiles.
Methods: Four studies (with seven sub-studies) tested personalized persuasive messaging crafted by ChatGPT against non-personalized messages across various psychological and domain-specific dimensions.
Key Findings: Effectiveness of personalized persuasive messages crafted by generative AI in different domains, targeting psychological profiles such as personality traits, political ideology, and moral foundations.
Citations: 368
Sample Size: 1788
-
Authors: JD Brüns, M Meißner
Year: 2024
Published in: Journal of Retailing and Consumer Services, 2024 - Elsevier
Institution: Copenhagen Business School, University of Southern Denmark
Research Area: Generative AI in Social Media Marketing, Brand Authenticity, Consumer Services
Discipline: Marketing
Using generative artificial intelligence (GenAI) for social media content creation diminishes perceived brand authenticity, leading to negative follower reactions unless GenAI is used to assist humans rather than replace them.
Methods: Three experimental studies investigating consumer perceptions and reactions toward brand disclosure of GenAI usage in content creation.
Key Findings: Followers' attitudinal and behavioral reactions, mediated by perceptions of brand authenticity.
DOI: https://doi.org/10.1016/j.jretconser.2024.103790
Citations: 235
-
Authors: A Simchon, M Edwards, S Lewandowsky
Year: 2024
Published in: PNAS nexus, 2024 - academic.oup.com
Institution: University of Bristol
Research Area: Political Microtargeting, Generative AI, Political Science, Psychological and Cognitive Sciences
Discipline: Political Science, Psychology
The study highlights the effectiveness and scalability of using generative AI to microtarget personalized political advertisements based on personality traits, raising ethical and policy concerns.
Methods: Four studies were conducted, including experiments (studies 1a and 1b) on the effectiveness of personality-tailored ads and feasibility assessments (studies 2a and 2b) of automatic generation and validation of these ads using generative AI and personality inference.
Key Findings: Effectiveness of personality-based microtargeted political ads and the scalability of their generation using generative AI tools.
Citations: 172
-
Authors: AYJ Ha, J Passananti, R Bhaskar, S Shan
Year: 2024
Published in: Proceedings of the ..., 2024 - dl.acm.org
Institution: University of California Santa Barbara, The University of Chicago, Institute of Education, University College London
Research Area: Human-Computer Interaction (HCI), Generative AI, Digital Forensics
Discipline: Human-Computer Interaction (HCI), Generative AI, Digital Forensics
The paper investigates the effectiveness of different approaches, including both human and automated detectors, in distinguishing human art from AI-generated images, finding that a combination of methods offers the best performance despite persistent weaknesses.
Methods: Comparison of human art across 7 styles with AI-generated images from 5 generative models, assessed using 5 automated detectors and 3 human groups (crowdworkers, professional artists, expert artists).
Key Findings: Detection accuracy and robustness of human and automated methods in identifying AI-generated images under benign and adversarial conditions.
DOI: https://doi.org/10.1145/3658644.3670306
Citations: 52
Sample Size: 3993
-
Authors: E Christoforou, G Demartini
Year: 2024
Published in: Proceedings of the ..., 2024 - ojs.aaai.org
Institution: University of Sheffield, University of Southampton
Research Area: Crowdsourcing, Generative AI, Web and Social Media Research, LLM
Discipline: Artificial Intelligence
DOI: https://doi.org/10.1609/icwsm.v18i1.31452
Citations: 10
-
Authors: JD Lomas, W van der Maden, S Bandyopadhyay
Year: 2024
Published in: Advanced Design ..., 2024 - Elsevier
Institution: Delft University of Technology, Playpower Labs, Hong Kong Polytechnic University, Utrecht University
Research Area: AI Alignment, Affective Computing, Emotional Expression in Generative AI, Human Perception of AI Emotions
Discipline: Affective Computing, Artificial Intelligence, Human-Computer Interaction (HCI)
This study evaluates how well generative AI systems (such as DALL·E 2/3 and Stable Diffusion) generate emotionally expressive content that aligns with how humans perceive those emotions, finding that performance varies by emotion type and by model, with implications for designing more emotionally aligned AI.
DOI: https://doi.org/10.1016/j.ijadr.2024.10.002
Citations: 5
-
Authors: E Jahani, B Manning, J Zhang, H TuYe, M Alsobay, C Nicolaides, S Suri, D Holtz
Year: 2024
Published in: ArXiv
Institution: Massachusetts Institute of Technology, Microsoft Research, Stanford University, University of California Berkeley, University of Cyprus, University of Maryland
Research Area: Human-AI Interaction, Generative AI, Prompt Engineering
Discipline: Artificial Intelligence, Human-AI Interaction, Generative AI
-
Authors: Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, Urs Fischbacher
Year: 2024
Published in: ArXiv
Institution: University of Konstanz
Research Area: Generative AI and Human Decision-Making, Behavioral Economics
Discipline: Artificial Intelligence, Behavioral Economics
-
Authors: C Arnold, LZ Xu, K Saffarizadeh
Year: 2021
Published in: Behaviour & Information ..., 2025 - Taylor & Francis
Institution: Northwestern Mutual Data Science Institute, Marquette University
Research Area: Generative AI, Crowdfunding, Trust in AI, Human-Computer Interaction (HCI), Behavioral Science
Discipline: Human-Computer Interaction (HCI), Behavioral Science