This page lists 105 peer-reviewed papers classified as Experiment in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: JW Berge, LIO Berge, W Chiu, I Kolstad
Year: 2026
Published in: The Journal of Development Studies, 2026 (Taylor & Francis)
Institution: Norwegian School of Economics
An effective altruism information treatment showed no effect on donations to advocacy organisations but increased support for internationally focused NGOs, particularly among donors with less universalistic preferences and low trust in NGOs.
Methods: Randomised online discrete choice experiment with pre-registration and incentivised donation measurement.
Key Findings: Impact of effective altruism information treatment on donor behaviour and NGO support preferences.
DOI: https://doi.org/10.1080/00220388.2025.2601583
-
Authors: L Dai, Z Wang, L Chen, J Jin
Year: 2026
Published in: scholarspace.manoa.hawaii.edu, 2026
Institution: Shanghai International Studies University
Research Area: Socio-Economic Impacts of AI, Algorithmic Systems
Discipline: Computer Science, Artificial Intelligence
AI errors lead to broader negative generalizations about other AI systems compared to human errors, largely due to perceptions of AI's inflexibility and inability to learn from mistakes.
Methods: Conducted four one-factor experiments across distinct contexts to compare human responses to AI errors and human errors.
Key Findings: Generalization of error perceptions from one AI system to others, and psychological mechanisms driving this process.
-
Authors: S Assecondi
Year: 2026
Published in: Springer
Institution: University of Trento
Research Area: Geropsychology, Cognitive intervention research, Psycholinguistics, Neuropsychology
Discipline: Psychology, Cognitive Science
Working memory training shows modest improvements in reading comprehension for younger adults but not older adults, highlighting the need for ecologically valid measures in cognitive training programs.
Methods: Participants underwent a 5-day cognitive training program targeting visuo-spatial working memory to evaluate effects on reading comprehension as a proxy for everyday functions.
Key Findings: The relationship between visuo-spatial working memory improvements and reading comprehension performance across age groups.
Sample Size: 175
-
Authors: T Kosch, R Welsch, L Chuang, A Schmidt
Year: 2025
Published in: ACM Transactions on ..., 2023 (dl.acm.org)
Institution: Aalto University
Research Area: User Expectations, HCI Research Bias, Artificial Intelligence, AI Bias
Discipline: Human-Computer Interaction (HCI)
Merely believing one is receiving adaptive AI support improves user performance, demonstrating a placebo effect in Human-Computer Interaction.
Methods: Two experiments where participants completed word puzzles under conditions with or without supposed AI support; in reality, no AI assistance was provided.
Key Findings: Impact of perceived AI support on user expectations and task performance.
DOI: https://doi.org/10.1145/3529225
Citations: 149
Sample Size: 469
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 (nature.com)
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
LLMs often lead users to overestimate response accuracy, especially when explanations are longer; adjusting explanation style to align with the model's internal confidence narrows the calibration and discrimination gaps, supporting better-calibrated trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
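To make the calibration-gap measure concrete, here is a minimal illustrative sketch: mean stated confidence minus actual accuracy. The function name and data are assumptions for illustration, not taken from the paper.

```python
def calibration_gap(confidences, correct):
    """Mean stated confidence minus fraction of correct answers.

    A positive gap indicates overconfidence; a negative gap,
    underconfidence. Illustrative only; not the paper's exact code.
    """
    assert len(confidences) == len(correct)
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Example: users report high confidence but are right only half the time.
gap = calibration_gap([0.9, 0.8, 0.95, 0.85], [1, 0, 1, 0])
print(round(gap, 3))  # → 0.375, i.e. overconfidence
```

The same arithmetic applied per answer category (correct vs. incorrect) yields the discrimination gap the summary mentions.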
-
Authors: G Beknazar-Yuzbashev, R Jiménez-Durán, J McCrosky
Year: 2025
Published in: econstor.eu, 2025
Institution: Mozilla Foundation, Columbia University, Bocconi University, Stanford University, University of Warwick
Research Area: Social Media, User Engagement, Toxicity
Discipline: Social Science
Reducing exposure to toxic content on social media lowers user engagement but also decreases the toxicity of user-generated content, highlighting a trade-off for platforms between reduced toxicity and increased engagement.
Methods: Pre-registered browser extension field experiment on Facebook, Twitter, and YouTube to randomly hide toxic content for six weeks; supplemented with a survey experiment.
Key Findings: Impact of reduced exposure to toxic content on advertising impressions, time spent, engagement, and user-generated content toxicity; explored curiosity and alignment between engagement and welfare.
Citations: 76
-
Authors: F Salvi, M Horta Ribeiro, R Gallotti, R West
Year: 2025
Published in: Nature Human Behaviour, 2025 (nature.com)
Institution: EPFL, Fondazione Bruno Kessler, Princeton University
Research Area: Conversational Persuasion of LLM, Human-Computer Interaction (HCI), Behavioral Science, LLM
Discipline: Behavioral Science
GPT-4 can use personalized arguments to be more persuasive in debates, outperforming humans in 64.4% of AI-human comparisons when personalization is applied.
Methods: Preregistered controlled study involving multiround debates with random assignment to conditions focusing on AI-human comparisons, personalization, and opinion strength.
Key Findings: Effectiveness of persuasion by GPT-4, especially when using personalized arguments, compared to humans in debates.
Citations: 65
Sample Size: 900
-
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2025
Published in: Nature ..., 2024 (nature.com)
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments with varied base rates of misinformation, audio sources, question framings, and media modalities were conducted.
Key Findings: Human accuracy in discerning real political speeches from deepfakes across media formats and contextual variables.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2215
-
Authors: SSY Kim, JW Vaughan, QV Liao, T Lombrozo
Year: 2025
Published in: Proceedings of the ..., 2025 (dl.acm.org)
Institution: Wake Forest University, University of Illinois at Urbana-Champaign, Princeton University, University of California Berkeley
Research Area: Appropriate Reliance on LLMs, Explainable AI, Human-AI Interaction, Cognitive Psychology
Discipline: Cognitive Psychology, Artificial Intelligence, Human-Computer Interaction (HCI)
The study examines factors that influence users' reliance on LLM responses, finding that explanations increase reliance, while sources and inconsistent explanations reduce reliance on incorrect responses.
Methods: Think-aloud study followed by a pre-registered, controlled experiment to assess the impact of explanations, sources, and inconsistencies in LLM responses on user reliance.
Key Findings: Users' reliance on LLM responses, accuracy, and the influence of explanations, inconsistencies, and sources on these measures.
DOI: https://doi.org/10.1145/3706598.3714020
Citations: 38
Sample Size: 308
-
Authors: H Bai, JG Voelkel, S Muldowney, JC Eichstaedt
Year: 2025
Published in: Nature ..., 2025 (nature.com)
Institution: Stanford University
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science
LLM-generated messages can effectively persuade humans on policy issues similarly to human-crafted messages, with differences in perceived persuasion mechanisms.
Methods: Three pre-registered experiments were conducted comparing the persuasive effectiveness of LLM-generated and human-generated messages on policy attitudes, using control conditions with neutral messages.
Key Findings: Influence of LLM-generated messages on participants' policy attitudes and perceived characteristics of the message authors.
Citations: 37
Sample Size: 4829
-
Authors: K Hackenburg, L Ibrahim, BM Tappin, M Tsakiris
Year: 2025
Published in: AI & Society, 2025 (Springer)
Institution: Oxford Internet Institute, University of Oxford
Research Area: Political Communication and Persuasion, LLM
Discipline: Political Science, Artificial Intelligence
GPT-4's ability to generate persuasive messages rivaled human experts on polarized US political issues, suggesting AI tools may have significant implications for political campaigns and democracy.
Methods: Pre-registered experiment where GPT-4 generated partisan role-playing persuasive messages, which were compared to those from human persuasion experts.
Key Findings: Persuasive impact of GPT-4-generated messages versus human expert messages on U.S. political issues.
Citations: 35
Sample Size: 4955
-
Authors: C Diebel, M Goutier, M Adam, A Benlian
Year: 2025
Published in: Business & Information Systems ..., 2025 (Springer)
Institution: Technical University of Darmstadt, University of Goettingen
Research Area: Human-AI Collaboration, System Satisfaction, User Competence
Discipline: Information Systems, Human-Computer Interaction (HCI), Artificial Intelligence
Proactive AI-based agent assistance decreases users' competence-based self-esteem and system satisfaction, especially for users with higher AI knowledge.
Methods: Vignette-based online experiment using self-determination theory as the framework to evaluate user responses to proactive vs. reactive AI assistance.
Key Findings: Impact of proactive vs. reactive AI help on users' competence-based self-esteem and system satisfaction, moderated by users' AI knowledge levels.
DOI: https://doi.org/10.1007/s12599-024-00918-y
Citations: 32
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 (pnas.org)
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling language model size yields diminishing returns in generating persuasive political messages: after controlling for task-completion metrics such as coherence and relevance, larger models provide minimal gains over smaller ones.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: M Riveiro, S Thill
Year: 2025
Published in: Proceedings of the 30th ACM Conference on User ..., 2022 (dl.acm.org)
Institution: Linköping University, University of Skövde
Research Area: Explainable AI, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Users prefer factual explanations when AI outputs match expectations and mechanistic explanations when outputs deviate, with preferences influenced by response format (multiple-choice vs free text).
Methods: Participants were presented with scenarios involving an automated text classifier and asked to express their preference for explanations either through multiple-choice or free text responses.
Key Findings: User-desired content of AI explanations based on whether system behaviour aligns or deviates from expectations.
DOI: https://doi.org/10.1145/3503252.3531306
Citations: 30
-
Authors: A Misra, TD Dinh, SY Ewe
Year: 2025
Published in: British Food Journal, 2024 (Emerald)
Institution: Monash University
Research Area: Consumer Behavior, Social Media Marketing
Discipline: Marketing
The study found that food influencers' follower counts and content type significantly shape consumer behavior on social media, highlighting their role in effective marketing strategies for the food industry.
Methods: Quantitative analysis examining the relationship between influencers' follower counts, content type, and consumer reactions using social media data.
Key Findings: Influencer follower count, type of content communicated by influencers, consumer behavior influenced by these variables.
DOI: https://doi.org/10.1108/BFJ-01-2024-0096
Citations: 30
-
Authors: J Mundel, J Yang
Year: 2025
Published in: Journal of Interactive Advertising, 2021 (Taylor & Francis)
Institution: Arizona State University, Loyola University
Research Area: Consumer Behavior, Social Media Marketing, COVID-19 Studies
Discipline: Marketing, Social Sciences
Brands with strong perceived fit between their products and COVID-19 messaging showed higher consumer engagement and positive attitudes, while those with lower perceived fit faced negative evaluations due to perceptions of opportunism.
Methods: Analyzed consumer responses to Instagram ads using perceived brand-social issue fit as a determinant of ad evaluations, brand attitudes, and engagement intentions.
Key Findings: Consumer responses, ad evaluations, brand attitudes, engagement intentions, and perceived brand opportunism based on fit between product type and COVID-19 messaging.
DOI: https://doi.org/10.1080/15252019.2021.1958274
Citations: 29
-
Authors: Y Ding, J You, TK Machulla, J Jacobs, P Sen
Year: 2025
Published in: Proceedings of the ..., 2022 (dl.acm.org)
Institution: University of California Irvine, University of Florida, State University of New York at Buffalo, University of Waterloo, Virginia Tech
Research Area: Computational Social Science, Human-Computer Interaction (HCI), Sentiment Analysis
Discipline: Computational Social Science
Demographic differences among annotators significantly affect sentiment dataset labels, causing up to a 4.5% accuracy difference in sentiment prediction models.
Methods: Crowdsourced annotations from >1000 workers combined with demographic data; analysis of multimodal sentiment datasets and evaluation using machine learning models.
Key Findings: Impact of annotator demographics on sentiment labeling and its effect on model predictions.
DOI: https://doi.org/10.1145/3555632
Citations: 28
Sample Size: 1000
-
Authors: U Messer
Year: 2025
Published in: Computers in Human Behavior: Artificial Humans, 2025 (Elsevier)
Institution: Universität der Bundeswehr München
Research Area: Political Bias in Generative AI, Human-AI Interaction, Affective Computing, AI Bias
Discipline: Computer Science, Human-AI Interaction
People's acceptance of and reliance on generative AI (GAI) increase when they perceive alignment between their own political orientation and the bias of GAI-generated content, extending trust even into sensitive applications.
Methods: Three experiments analyzing behavioral reactions to politically biased content generated by GAI, including the impact of perceived alignment on acceptance and trust.
Key Findings: Participants' acceptance, reliance, and trust in GAI based on perceived alignment between political bias of GAI-generated content and their own political beliefs.
DOI: https://doi.org/10.1016/j.chbah.2024.100108
Citations: 24
Sample Size: 513
-
Authors: F Sun, N Li, K Wang, L Goette
Year: 2025
Published in: arXiv preprint arXiv:2505.02151, 2025 (arxiv.org)
Institution: HKU Business School
Research Area: LLM Overconfidence and Human Bias Amplification, Bias, LLM
Discipline: Artificial Intelligence, Behavioral Science
Large language models (LLMs) exhibit overconfidence and amplify human bias, especially in cases where their certainty declines; incorporating LLM input doubles overconfidence in human decision making despite improving accuracy.
Methods: Algorithmically constructed reasoning problems with known ground truths were used to evaluate LLMs' confidence; comparisons were drawn with human performance using similar experimental protocols.
Key Findings: LLM confidence levels, correctness probabilities, comparison of bias between LLMs and humans, and effects of LLM input on human decision making.
Citations: 21
-
Authors: JQ Zhu, JC Peterson, B Enke, TL Griffiths
Year: 2025
Published in: Nature Human Behaviour, 2025 (nature.com)
Institution: Princeton University, Boston University, Harvard University
Research Area: Strategic decision-making, Machine learning, Computational Cognitive Science
Discipline: Artificial Intelligence
This study used deep neural networks to analyze human strategic decision-making, predicting choices more accurately than existing theories and uncovering the context-dependent nature of reasoning and decision-making in complex games.
Methods: Deep neural networks trained on data from procedurally generated matrix games with over 2,400 variations; models were modified for interpretability.
Key Findings: Human choices and reasoning in initial play of two-player matrix games, focusing on strategic decision-making and response to game complexity.
DOI: https://doi.org/10.1038/s41562-025-02230-5
Citations: 16
Sample Size: 90000
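To illustrate the kind of stimulus this study describes, here is a minimal sketch of a procedurally generated two-player matrix game and one simple benchmark strategy (best response to a uniformly mixing opponent). All names, payoff ranges, and the benchmark choice are illustrative assumptions, not the paper's actual method.

```python
import random

# Hypothetical 3x3 two-player matrix game with random payoffs.
# payoffs[r][c] = (row player's payoff, column player's payoff)
random.seed(0)
rows, cols = 3, 3
payoffs = [[(random.randint(0, 9), random.randint(0, 9)) for _ in range(cols)]
           for _ in range(rows)]

def best_response_to_uniform(payoffs):
    """Row action maximizing expected payoff if the column player
    mixes uniformly over columns (a simple benchmark for initial play)."""
    expected = [sum(cell[0] for cell in row) / len(row) for row in payoffs]
    return max(range(len(expected)), key=expected.__getitem__)

print(best_response_to_uniform(payoffs))  # index of the benchmark row action
```

Comparing human first-round choices against benchmarks like this one is a common way to quantify how far initial play departs from simple strategic reasoning.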