Browse 12 peer-reviewed papers from the University of California, Berkeley spanning Human-AI Interaction and LLMs (2022–2025). This page lists those papers as they appear in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: K Dalal, D Koceja, G Hussein, J Xu, Y Zhao, Y Song, S Han, KC Cheung, J Kautz, C Guestrin, T Hashimoto, S Koyejo, Y Choi, Y Sun, X Wang
Year: 2025
Published in: ArXiv
Institution: Nvidia, Stanford University, UT Austin, University of California Berkeley, University of California San Diego
Research Area: Video Generation, Diffusion Models, Test-Time Training
Discipline: Computer Science
The paper introduces Test-Time Training (TTT) layers into Transformers to generate coherent one-minute videos from text storyboards, outperforming baselines in storytelling coherence but facing efficiency and artifact challenges.
Methods: Experimentation with Test-Time Training layers embedded in pre-trained Transformer models, evaluated using a dataset curated from Tom and Jerry cartoons and compared against Mamba 2, Gated DeltaNet, and sliding-window attention layers.
Key Findings: Effectiveness of video generation methods in creating coherent multi-scene stories in one-minute videos.
Citations: 52
Sample Size: 100
-
Authors: SSY Kim, JW Vaughan, QV Liao, T Lombrozo
Year: 2025
Published in: Proceedings of the ..., 2025 - dl.acm.org
Institution: Wake Forest University, University of Illinois at Urbana-Champaign, Princeton University, University of California Berkeley
Research Area: Appropriate Reliance on LLMs, Explainable AI, Human-AI Interaction, Cognitive Psychology
Discipline: Cognitive Psychology, Artificial Intelligence, Human-Computer Interaction (HCI)
The study examines factors that influence users' reliance on LLM responses, finding that explanations increase reliance, while the presence of sources and inconsistencies across explanations reduce reliance on incorrect responses.
Methods: Think-aloud study followed by a pre-registered, controlled experiment to assess the impact of explanations, sources, and inconsistencies in LLM responses on user reliance.
Key Findings: Users' reliance on LLM responses, accuracy, and the influence of explanations, inconsistencies, and sources on these measures.
DOI: https://doi.org/10.1145/3706598.3714020
Citations: 38
Sample Size: 308
-
Authors: K Hackenburg, BM Tappin, P Röttger, SA Hale
Year: 2025
Published in: Proceedings of the ..., 2025 - pnas.org
Institution: University of California Berkeley, University of Cambridge, University of Oxford, Max Planck Institute
Research Area: Political Persuasion, LLM
Discipline: Computational Social Science, Political Science
Scaling language model sizes leads to diminishing returns in generating persuasive political messages, with larger models providing minimal gains compared to smaller ones after controlling for task completion metrics like coherence and relevance.
Methods: Generated 720 political messages using 24 LLMs of varying sizes and tested their persuasiveness through a large-scale randomized survey experiment.
Key Findings: Persuasive capability of language models across different sizes in generating political messages.
Citations: 31
Sample Size: 25982
-
Authors: D Guilbeault, S Delecourt, BS Desikan
Year: 2025
Published in: Nature, 2025 - nature.com
Institution: Stanford University, University of California Berkeley, University of Oxford
Research Area: AI Bias, Media Representation, Social Science
Discipline: Computational Social Science, Artificial Intelligence
The study highlights age-related gender bias in online media and language models, showing women are portrayed as younger than men, especially in high-status occupations, and explores how algorithms amplify these biases.
Methods: Analysis of 1.4 million images and videos from online sources and nine language models, followed by a pre-registered experiment involving participants to evaluate biases in internet content and algorithms.
Key Findings: Age and gender bias in occupational depiction across online platforms and language models, as well as its influence on beliefs and hiring preferences.
Citations: 4
Sample Size: 459
-
Authors: Z Cheng, J You
Year: 2025
Published in: arXiv preprint arXiv:2509.22989, 2025 - arxiv.org
Institution: University of Southern California, University of California Berkeley
Research Area: Artificial Intelligence, Computers and Society, Computer Science and Game Theory, Strategic Persuasion, Reinforcement Learning, Language Models, LLM, RLHF
Discipline: Artificial Intelligence
This paper introduces a scalable framework, utilizing Bayesian Persuasion, to evaluate and train LLMs for strategic persuasion, demonstrating significant persuasion gains and effective strategies through reinforcement learning.
Methods: Repurposed human-human persuasion datasets for evaluation and training; applied Bayesian Persuasion framework; used reinforcement learning to optimize LLMs for strategic persuasion.
Key Findings: The persuasive capabilities and strategies of large language models (LLMs) in various settings.
Citations: 1
-
Authors: D Guilbeault, S Delecourt, T Hull, BS Desikan, M Chu
Year: 2024
Published in: Nature, 2024 - nature.com
Institution: University of California Berkeley, Institute For Public Policy Research, Columbia University, University of Southern California Los Angeles
Research Area: Gender Bias, Computational Social Science, Online Media, AI Bias
Discipline: Computational Social Science
Online images significantly amplify gender bias compared to text, with biases in visual content impacting societal beliefs about gender roles.
Methods: Analyzed 3,495 social categories using over one million images from platforms like Google, Wikipedia, and IMDb, compared visual content to billions of words from the same platforms, and conducted a preregistered national experiment to assess the psychological impact on participants' beliefs.
Key Findings: The prevalence and psychological impact of gender bias in online images compared to text, including gender associations and representation disparities.
DOI: https://doi.org/10.1038/s41586-024-07068-x
Citations: 72
Sample Size: 3495
-
Authors: B Lebrun, S Temtsin, A Vonasch
Year: 2024
Published in: Frontiers in Robotics and ..., 2024 - frontiersin.org
Institution: University of Lausanne, University of California Berkeley, University of Massachusetts Amherst, Arizona State University
Research Area: AI in Social Science Research, Survey Methodology, Data Quality
Discipline: Artificial Intelligence
The study examines the integrity of online questionnaire responses, concluding that humans can identify AI-generated text with 76% accuracy, whereas current automatic AI-detection systems are ineffective, raising concerns about data quality in online surveys.
Methods: Human participants and automatic AI detection systems were tested on their ability to differentiate AI-generated text from human-generated text in the context of online questionnaires.
Key Findings: The study measured the ability of humans and AI detection tools to correctly identify whether text was generated by a human or an AI system for online questionnaire responses.
DOI: https://doi.org/10.3389/frobt.2023.1277635
Citations: 26
-
Authors: E Jahani, B Manning, J Zhang, H TuYe, M Alsobay, C Nicolaides, S Suri, D Holtz
Year: 2024
Published in: ArXiv
Institution: Massachusetts Institute of Technology, Microsoft Research, Stanford University, University of California Berkeley, University of Cyprus, University of Maryland
Research Area: Human-AI Interaction, Generative AI, Prompt Engineering
Discipline: Artificial Intelligence
-
Authors: Jing-Jing Li, Valentina Pyatkin, Max Kleiman-Weiner, Liwei Jiang, Nouha Dziri, Anne G. E. Collins, Jana Schaich Borg, Maarten Sap, Yejin Choi, Sydney Levine
Year: 2024
Published in: ArXiv
Institution: Allen Institute for AI, Duke University, University of California Berkeley, University of Washington
Research Area: LLM Safety Moderation, Interpretable AI (XAI), LLM Alignment, Steerable AI
Discipline: Artificial Intelligence
-
Authors: Yunzhi Zhang, Zizhang Li, Matt Zhou, Shangzhe Wu, Jiajun Wu
Year: 2024
Published in: ArXiv
Institution: Stanford University, University of California Berkeley
Research Area: Artificial Intelligence, Computer Vision, Multimodal Reasoning
Discipline: Artificial Intelligence
-
Authors: J Barua, S Subramanian, K Yin, A Suhr
Year: 2024
Published in: ArXiv
Institution: University of California Berkeley
Research Area: Natural Language Processing (NLP), Machine Translation, Lexical Semantics
Discipline: Natural Language Processing
-
Authors: J Tang, E Birrell, A Lerner
Year: 2022
Published in: ... Symposium on Usable Privacy and Security ..., 2022 - usenix.org
Institution: University of California Berkeley, George Washington University, Stanford University
Research Area: Online privacy and security surveys, External validity, Replication studies, Human-Computer Interaction (HCI) in security
Discipline: Computer Science, Behavioral Science
Citations: 164