Browse 40 peer-reviewed papers from VU University spanning LLMs and Artificial Intelligence (2025–2026). Research powered by Prolific's high-quality participant data.
This page lists 40 peer-reviewed papers from researchers at VU University in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: K Rudnicki, O Borowiecki, K Poels, B Beersma
Year: 2026
Published in: Evolution and Human …, 2026 - Elsevier
Institution: University of Antwerp, University of Bialystok, VU University, Emory University
Research Area: Personality psychology, Social cognition, Cognitive neuroscience
Discipline: Evolutionary psychology, human behavioral ecology
In a preregistered study, psychopathy (more than the other Dark Triad traits) is linked to worse cognitive empathy and greater dehumanization, and this empathy–psychopathy link is especially strong among people who are worse at detecting agency in others.
-
Authors: H Zhu, J Chen, N Liu
Year: 2026
Published in: International Journal of Hospitality Management, 2026 - Elsevier
Institution: Sun Yat-Sen University
Research Area: Leadership studies, Organizational psychology, Hospitality research, Attachment theory
Discipline: Organizational Behavior, Management
Leader secure-base support improves hospitality employees’ service performance by boosting work engagement, but this benefit is weakened when employees experience high role ambiguity or role conflict.
-
Authors: C Yuan, B Ma, Z Zhang, B Prenkaj, F Kreuter, G Kasneci
Year: 2026
Published in: arXiv preprint arXiv:2601.08634, 2026 - arxiv.org
Institution: Munich Center for Machine Learning, LMU Munich, Technical University of Munich
Research Area: Artificial Intelligence, AI Ethics, AI Alignment, Political Science, Computational Social Science
Discipline: Computer Science, Natural Language Processing (NLP)
This paper examines how large language models’ (LLMs) political outputs shift when they are explicitly primed with different moral values. Instead of just assigning fake personas (like “pretend to be liberal”), the authors condition models to endorse or reject specific moral values (e.g., utilitarianism, fairness, authority). They then measure how those moral primes move the models’ positions in... A prompt-level sketch of this priming setup follows below.
DOI: https://doi.org/10.48550/arXiv.2601.08634
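A minimal sketch of the moral-priming setup described above, assuming an OpenAI-style chat API; the model name, prompt wording, and survey item are illustrative stand-ins, not the authors' materials:

```python
# Minimal sketch of moral-value priming: condition the model to endorse or
# reject a moral value, then probe a political attitude item. The prompts,
# model name, and survey item are illustrative, not the paper's materials.
from openai import OpenAI

client = OpenAI()

ITEM = ("On a scale of 1 (strongly oppose) to 7 (strongly support), "
        "how do you rate a universal carbon tax? Answer with a single number.")

def probe(value: str, stance: str) -> str:
    """Ask the political item under a moral-value prime."""
    system = f"You {stance} the moral value of {value} and let it guide all of your judgments."
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": ITEM},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content

# Compare the model's stated position under opposing primes.
for value in ["fairness", "authority"]:
    for stance in ["endorse", "reject"]:
        print(value, stance, "->", probe(value, stance))
```

Temperature 0 makes the contrast between primes easier to read at a glance, at the cost of ignoring sampling variability.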
-
Authors: L Dai, Z Wang, L Chen, J Jin
Year: 2026
Published in: 2026 - scholarspace.manoa.hawaii.edu
Institution: Shanghai International Studies University
Research Area: Socio-Economic Impacts of AI, Algorithmic Systems
Discipline: Computer Science, Artificial Intelligence
AI errors lead to broader negative generalizations about other AI systems compared to human errors, largely due to perceptions of AI's inflexibility and inability to learn from mistakes.
Methods: Conducted four one-factor experiments across distinct contexts to compare human responses to AI errors and human errors.
Key Findings: Generalization of error perceptions from one AI system to others, and psychological mechanisms driving this process.
-
Authors: J He, C Calluso, C Donato, R Thouvarecq
Year: 2026
Published in: Journal of Retailing and …, 2026 - Elsevier
Institution: Luiss University, Roma Tre University, Univ Rouen Normandie, Le Mans Université
Research Area: Message framing, Psychological reactance, Self-image traits
Discipline: Consumer behavior
This paper looks at why some people get annoyed and push back (“psychological reactance”) when online grocery sites show healthy-eating PSAs, especially when the PSA is framed as a warning (“If you don’t eat well, you’ll suffer”) versus a benefit (“If you eat well, you’ll gain”).
-
Authors: M Raj, JM Berg, R Seamans
Year: 2026
Published in: Journal of Experimental Psychology …, 2026 - psycnet.apa.org
Institution: New York University, University of Michigan, Wharton
Research Area: Disclosure psychology, Biases in human–machine evaluation, AI Biases
Discipline: Experimental psychology
Sitting at the intersection of experimental psychology, social cognition, and consumer judgment, this paper shows how disclosing the use of AI triggers a persistent authenticity-based bias against creative work, revealing a robust form of algorithmic aversion in symbolic and expressive domains.
DOI: https://doi.org/10.1037/xge0001889
-
Authors: X Yang, N Xi, J Hamari
Year: 2026
Published in: 2026 - scholarspace.manoa.hawaii.edu
Institution: Tampere University
Research Area: NFT, Gamification, Virtual economies
Discipline: Human–Computer Interaction (HCI), Consumer behavior, Behavioral psychology
Using a Prolific survey of 805 people, the paper shows that Big Five personality traits predict why people “love vs. hate” NFT art—with agreeableness and conscientiousness linked to higher perceived value across most dimensions, and neuroticism linked to more skepticism (especially about transparency).
-
Authors: H Mohseni, T Kujala, J Silvennoinen
Year: 2026
Published in: Springer
Institution: University of Jyväskylä
Research Area: Migration studies, Social indicators, Psychometrics, Quantitative social science methods
Discipline: Social sciences
Developed and validated a multidimensional place-belongingness scale to assess immigrants' sense of belonging to geographic locations, identifying four factors: feeling at home, accepted, empowered, and secure.
Methods: Survey data from 270 immigrants worldwide analyzed using exploratory factor analysis (see the sketch below).
Key Findings: The subjective sense of place-belongingness decomposes into four factors: feeling at home, feeling accepted, feeling empowered, and feeling secure.
Sample Size: 270
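A minimal sketch of the exploratory-factor-analysis step from the Methods line, assuming item responses in a pandas DataFrame and the factor_analyzer package; the file name and the oblique rotation are assumptions, and the four-factor solution simply mirrors the summary:

```python
# Minimal EFA sketch: extract a four-factor solution from scale items.
# The CSV path, column layout, and rotation choice are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("belongingness_items.csv")  # one column per scale item

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(items)

# Inspect loadings to label the factors
# (e.g., at home / accepted / empowered / secure).
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))
print("Proportional variance explained:", fa.get_factor_variance()[1].round(2))
```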
-
Authors: S Assecondi
Year: 2026
Published in: Springer
Institution: University of Trento
Research Area: Geropsychology, Cognitive intervention research, Psycholinguistics, Neuropsychology
Discipline: Psychology, Cognitive Science
Working memory training shows modest improvements in reading comprehension for younger adults but not older adults, highlighting the need for ecologically valid measures in cognitive training programs.
Methods: Participants underwent a 5-day cognitive training program targeting visuo-spatial working memory to evaluate effects on reading comprehension as a proxy for everyday functions.
Key Findings: The relationship between visuo-spatial working memory improvements and reading comprehension performance across age groups.
Sample Size: 175
-
Authors: T Kosch, R Welsch, L Chuang, A Schmidt
Year: 2025
Published in: ACM Transactions on ..., 2023 - dl.acm.org
Institution: Aalto University
Research Area: User Expectations, HCI Research Bias, Artificial Intelligence, AI Bias
Discipline: Human-Computer Interaction (HCI)
Merely believing that adaptive AI support is present improves user performance, demonstrating a placebo effect in Human-Computer Interaction.
Methods: Two experiments where participants completed word puzzles under conditions with or without supposed AI support; in reality, no AI assistance was provided.
Key Findings: Impact of perceived AI support on user expectations and task performance.
DOI: https://doi.org/10.1145/3529225
Citations: 149
Sample Size: 469
-
Authors: S Chaudhari, P Aggarwal, V Murahari
Year: 2025
Published in: ACM Computing ..., 2025 - dl.acm.org
Institution: University of Massachusetts Amherst, Carnegie Mellon University, Princeton University
Research Area: Reinforcement Learning from Human Feedback (RLHF), LLM
Discipline: Artificial Intelligence
The paper critically analyzes reinforcement learning from human feedback (RLHF) for large language models (LLMs), emphasizing the importance and limitations of reward models in improving human-aligned AI systems.
Methods: Analyzed RLHF frameworks through reinforcement learning principles; conducted a categorical literature review to identify modeling challenges, assumptions, and framework limitations.
Key Findings: RLHF's fundamentals, the central role of reward models, implications of design choices in RLHF training algorithms, and underlying issues such as generalization error, model misspecification, and feedback sparsity (a generic reward-model sketch follows below).
Citations: 117
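To make the reward model's role concrete: reward models in RLHF are commonly trained with a pairwise Bradley-Terry preference loss that pushes chosen responses above rejected ones. A generic PyTorch sketch with toy shapes and random stand-in embeddings, not the survey's own code:

```python
# Generic sketch of reward-model training at the heart of RLHF: fit a scalar
# reward so chosen responses score above rejected ones (Bradley-Terry loss).
# The tiny MLP stands in for a reward head on top of an LLM; shapes are toy.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def reward_loss(chosen_emb, rejected_emb):
    r_chosen = reward_model(chosen_emb)
    r_rejected = reward_model(rejected_emb)
    # -log sigmoid(r_chosen - r_rejected): pushes chosen above rejected.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# One toy update on random stand-in embeddings of a preference pair.
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)
loss = reward_loss(chosen, rejected)
opt.zero_grad(); loss.backward(); opt.step()
print(f"pairwise loss: {loss.item():.3f}")
```

Generalization error and misspecification, two of the issues the survey flags, enter exactly here: the scalar reward is only ever fit on pairs like these.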
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
LLM explanations often lead users to overestimate response accuracy, especially when explanations are longer; adjusting explanation style to align with the model's internal confidence narrows both the calibration gap and the discrimination gap, improving trust calibration in AI-assisted decision making (a toy computation of both gaps follows below).
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
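The two gaps from the Key Findings can be computed directly given per-question human confidence, model confidence, and correctness. A toy sketch on made-up data, using mean |confidence - correctness| as a crude calibration proxy (a binned expected-calibration-error would be the more standard choice) and AUC for discrimination:

```python
# Toy sketch of the two gaps: compare how well human vs. model confidence is
# calibrated to correctness, and how well each discriminates correct from
# incorrect answers (AUC). All numbers below are made up.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
correct = rng.integers(0, 2, 500)  # 1 = model answered correctly
model_conf = np.clip(0.6 * correct + 0.3 + 0.1 * rng.random(500), 0, 1)
human_conf = np.clip(0.2 * correct + 0.6 + 0.2 * rng.random(500), 0, 1)

def calibration_error(conf, correct):
    """Crude proxy: mean absolute gap between confidence and correctness."""
    return float(np.abs(conf - correct).mean())

calibration_gap = (calibration_error(human_conf, correct)
                   - calibration_error(model_conf, correct))
discrimination_gap = (roc_auc_score(correct, model_conf)
                      - roc_auc_score(correct, human_conf))
print(f"calibration gap: {calibration_gap:.3f}, "
      f"discrimination gap: {discrimination_gap:.3f}")
```

Positive values of both gaps correspond to the paper's headline result: humans trusting the model's answers are less well calibrated, and less discriminating, than the model's own confidence.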
-
Authors: G Beknazar-Yuzbashev, R Jiménez-Durán, J McCrosky
Year: 2025
Published in: 2025 - econstor.eu
Institution: Mozilla Foundation, Columbia University, Bocconi University, Stanford University, University of Warwick
Research Area: Social Media, User Engagement, Toxicity
Discipline: Social Science
Reducing exposure to toxic content on social media lowers user engagement but also decreases the toxicity of the content users themselves produce, highlighting a trade-off for platforms between maximizing engagement and curbing toxicity (the core filtering step is sketched below).
Methods: Pre-registered browser extension field experiment on Facebook, Twitter, and YouTube to randomly hide toxic content for six weeks; supplemented with a survey experiment.
Key Findings: Impact of reduced exposure to toxic content on advertising impressions, time spent, engagement, and user-generated content toxicity; explored curiosity and alignment between engagement and welfare.
Citations: 76
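The intervention at the heart of the field experiment, hiding posts whose predicted toxicity crosses a threshold, reduces to a few lines. In this sketch the open-source Detoxify classifier stands in for whatever scorer the extension actually used; the posts and the threshold are illustrative:

```python
# Sketch of the experiment's core intervention: score each post for toxicity
# and hide those above a threshold. Detoxify is a stand-in scorer; the posts
# and threshold are illustrative.
from detoxify import Detoxify

THRESHOLD = 0.8
model = Detoxify("original")

posts = [
    "Have a great day everyone!",
    "You are a worthless idiot.",
]

scores = model.predict(posts)["toxicity"]
feed = [p for p, s in zip(posts, scores) if s < THRESHOLD]
hidden = sum(1 for s in scores if s >= THRESHOLD)
print(f"showing {len(feed)} posts, hid {hidden} as toxic")
```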
-
Authors: F Salvi, M Horta Ribeiro, R Gallotti, R West
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: EPFL, Fondazione Bruno Kessler, Princeton University
Research Area: Conversational Persuasion of LLM, Human-Computer Interaction (HCI), Behavioral Science, LLM
Discipline: Behavioral Science
GPT-4 can use personalized arguments to be more persuasive in debates, outperforming humans in 64.4% of AI-human comparisons when personalization is applied.
Methods: Preregistered controlled study involving multiround debates with random assignment to conditions focusing on AI-human comparisons, personalization, and opinion strength.
Key Findings: Effectiveness of persuasion by GPT-4, especially when using personalized arguments, compared to humans in debates.
Citations: 65
Sample Size: 900
-
Authors: M Groh, A Sankaranarayanan, N Singh, DY Kim
Year: 2025
Published in: Nature ..., 2024 - nature.com
Institution: Northwestern University, Massachusetts Institute of Technology
Research Area: Deepfakes, Media Forensics, Human Perception of AI-Generated Content, Political Communication
Discipline: Computational Social Science
Humans are better at detecting deepfake political speeches using audio-visual cues than relying on text alone; state-of-the-art text-to-speech audio makes deepfakes harder to discern.
Methods: Five pre-registered randomized experiments with varied base rates of misinformation, audio sources, question framings, and media modalities were conducted.
Key Findings: Human accuracy in discerning real political speeches from deepfakes across media formats and contextual variables.
DOI: https://doi.org/10.1038/s41467-024-51998-z
Citations: 63
Sample Size: 2215
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2025
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Impact of demographics (age, education, gender, race, political views) and personal experience on perceptions of fairness of algorithmic feature use in bail decisions.
DOI: https://doi.org/10.1145/3551624.3555306
Citations: 62
-
Authors: K Dalal, D Koceja, G Hussein, J Xu, Y Zhao, Y Song, S Han, KC Cheung, J Kautz, C Guestrin, T Hashimoto, S Koyejo, Y Choi, Y Sun, X Wang
Year: 2025
Published in: arXiv
Institution: Nvidia, Stanford University, UT Austin, University of California Berkeley, University of California San Diego
Research Area: Video Generation, Diffusion Models, Test-Time Training
Discipline: Computer Science
The paper introduces Test-Time Training (TTT) layers into Transformers to generate coherent one-minute videos from text storyboards, outperforming baselines in storytelling coherence but facing efficiency and artifact challenges (a conceptual toy of a TTT layer follows below).
Methods: Experimentation with Test-Time Training layers embedded in pre-trained Transformer models, evaluated using a dataset curated from Tom and Jerry cartoons and compared against Mamba 2, Gated DeltaNet, and sliding-window attention layers.
Key Findings: Effectiveness of video generation methods in creating coherent multi-scene stories in one-minute videos.
Citations: 52
Sample Size: 100
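A conceptual toy of the TTT idea: the layer's hidden state is itself a small linear model whose weights take a gradient step on a self-supervised reconstruction loss at every token, so the layer keeps "training" at inference time. This caricatures TTT-Linear in spirit only and is not the paper's implementation:

```python
# Conceptual toy of a Test-Time Training (TTT) layer: the hidden "state" is a
# weight matrix W, updated at every token by one gradient step on a
# self-supervised reconstruction loss, then used to produce the output.
import torch

def ttt_linear(tokens: torch.Tensor, lr: float = 0.1) -> torch.Tensor:
    """tokens: (seq_len, dim). Returns outputs of the same shape."""
    dim = tokens.shape[1]
    W = torch.zeros(dim, dim)            # hidden state = a linear model
    outputs = []
    for x in tokens:
        # Inner-loop update: one gradient step on ||W x - x||^2
        # (the factor of 2 in the gradient is folded into lr).
        err = W @ x - x
        W = W - lr * torch.outer(err, x)
        outputs.append(W @ x)            # output from the updated state
    return torch.stack(outputs)

out = ttt_linear(torch.randn(16, 8))
print(out.shape)  # torch.Size([16, 8])
```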
-
Authors: SSY Kim, JW Vaughan, QV Liao, T Lombrozo
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Wake Forest University, University of Illinois at Urbana-Champaign, Princeton University, University of California Berkeley
Research Area: Appropriate Reliance on LLMs, Explainable AI, Human-AI Interaction, Cognitive Psychology
Discipline: Cognitive Psychology, Artificial Intelligence, Human-Computer Interaction (HCI)
The study examines factors that influence users' reliance on LLM responses, finding that explanations increase reliance, while sources and inconsistencies within explanations reduce reliance on incorrect responses.
Methods: Think-aloud study followed by a pre-registered, controlled experiment to assess the impact of explanations, sources, and inconsistencies in LLM responses on user reliance.
Key Findings: Users' reliance on LLM responses, accuracy, and the influence of explanations, inconsistencies, and sources on these measures.
DOI: https://doi.org/10.1145/3706598.3714020
Citations: 38
Sample Size: 308
-
Authors: H Bai, JG Voelkel, S Muldowney, JC Eichstaedt
Year: 2025
Published in: Nature ..., 2025 - nature.com
Institution: Stanford University
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science
LLM-generated messages can persuade humans on policy issues about as effectively as human-crafted messages, though the two differ in their perceived persuasion mechanisms.
Methods: Three pre-registered experiments were conducted comparing the persuasive effectiveness of LLM-generated and human-generated messages on policy attitudes, using control conditions with neutral messages.
Key Findings: Influence of LLM-generated messages on participants' policy attitudes and perceived characteristics of the message authors.
Citations: 37
Sample Size: 4829
-
Authors: K Hackenburg, L Ibrahim, BM Tappin, M Tsakiris
Year: 2025
Published in: AI & SOCIETY, 2025 - Springer
Institution: Oxford Internet Institute, University of Oxford
Research Area: Political Communication and Persuasion, LLM
Discipline: Political Science, Artificial Intelligence
GPT-4's ability to generate persuasive messages rivaled that of human experts on polarized US political issues, suggesting AI tools may have significant implications for political campaigns and democracy.
Methods: Pre-registered experiment where GPT-4 generated partisan role-playing persuasive messages, which were compared to those from human persuasion experts.
Key Findings: Persuasive impact of GPT-4-generated messages versus human expert messages on U.S. political issues.
Citations: 35
Sample Size: 4955