This page lists 3 peer-reviewed papers authored or co-authored by J Xu in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: K Dalal, D Koceja, G Hussein, J Xu, Y Zhao, Y Song, S Han, KC Cheung, J Kautz, C Guestrin, T Hashimoto, S Koyejo, Y Choi, Y Sun, X Wang
Year: 2025
Published in: arXiv (preprint)
Institution: NVIDIA, Stanford University, UT Austin, University of California Berkeley, University of California San Diego
Research Area: Video Generation, Diffusion Models, Test-Time Training
Discipline: Computer Science
The paper introduces Test-Time Training (TTT) layers into Transformers to generate coherent one-minute videos from text storyboards, outperforming baselines in storytelling coherence but facing efficiency and artifact challenges.
Methods: Test-Time Training layers were embedded in a pre-trained Transformer, trained and evaluated on a dataset curated from Tom and Jerry cartoons, and compared against Mamba 2, Gated DeltaNet, and sliding-window attention layers.
Key Findings: TTT layers generated one-minute, multi-scene videos with greater storytelling coherence than the baseline sequence-modeling layers.
Citations: 52
Sample Size: 100
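The core idea behind the TTT layers described above is that the layer's hidden state is itself a small model whose weights are updated by gradient descent on a self-supervised loss as each token arrives. The following is a minimal illustrative sketch of that mechanism only, not the paper's implementation: the linear hidden state, the learning rate, and the stand-in corruption rule (`x * 0.5`) are all assumptions chosen for brevity.

```python
import numpy as np

def ttt_layer(tokens: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Illustrative sketch of a Test-Time Training (TTT) layer.

    The hidden state is a linear model W, updated by one gradient step
    per token on a self-supervised reconstruction loss, then used to
    produce that token's output.
    """
    d = tokens.shape[1]
    W = np.zeros((d, d))              # hidden state = weights of a small model
    outputs = []
    for x in tokens:                  # process the sequence token by token
        x_in = x * 0.5                # stand-in for a learned corrupted view
        # gradient of L = 0.5 * ||W @ x_in - x||^2 with respect to W
        grad = np.outer(W @ x_in - x, x_in)
        W -= lr * grad                # test-time gradient step updates the state
        outputs.append(W @ x_in)      # output uses the freshly updated state
    return np.stack(outputs)
```

Because the state is updated inside the forward pass, later tokens benefit from everything the layer has "learned" from earlier ones, which is what gives such layers long-context coherence.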
-
Authors: S Zhang, J Xu, AJ Alvero
Year: 2025
Published in: Sociological Methods & Research
Institution: University of Maryland, Indiana University, University of Minnesota Duluth
Research Area: Sociological Methods, Generative AI, Survey Methodology
Discipline: Sociology, Social Science
The study finds that 34% of research participants use generative AI tools, such as large language models (LLMs), to help write open-ended survey responses. AI-assisted answers are more homogeneous and more positive, which can mask real social variation and undermine data validity.
Methods: The study conducted an original survey on a popular online platform and simulated comparisons between human-written responses from pre-ChatGPT studies and LLM-generated responses.
Key Findings: A substantial share of survey participants use LLMs; LLM-assisted responses are more homogeneous and more positive than human-written ones, masking social variation in open-ended survey data.
Citations: 26
-
Authors: J Xu, L Han, S Sadiq, G Demartini
Year: 2024
Published in: Proceedings of the International AAAI Conference on Web and Social Media (ICWSM)
Institution: University of Lausanne, EPFL, University of Southampton, University of Queensland
Research Area: Crowdsourcing, Misinformation Assessment, LLM
Discipline: Artificial Intelligence
DOI: https://doi.org/10.1609/icwsm.v18i1.31417
Citations: 6