This page lists 7 peer-reviewed papers from researchers at the University of Maryland in the Prolific Citations Library, a curated collection of research spanning Generative AI and LLMs (2024–2025), powered by high-quality human participant data from Prolific.
-
Authors: S Zhang, J Xu, AJ Alvero
Year: 2025
Published in: Sociological Methods & Research
Institution: University of Maryland, Indiana University, University of Minnesota Duluth
Research Area: Sociological Methods, Generative AI, Survey Methodology
Discipline: Sociology, Social Science
The study finds that 34% of research participants use generative AI tools like large language models (LLMs) to assist with open-ended survey responses, leading to more homogeneity and positivity in their answers, which could impact data validity by masking social variations.
Methods: The study conducted an original survey on a popular online platform and simulated comparisons between human-written responses from pre-ChatGPT studies and LLM-generated responses.
Key Findings: LLM use by survey participants increases homogeneity and positivity in open-ended responses and masks social variation.
Citations: 26
-
Authors: G Lima, N Grgić-Hlača, M Langer, Y Zou
Year: 2025
Published in: Proceedings of the 2025 CHI ...
Institution: University of Maryland, Max Planck Institute, Stanford University, Cornell University
Research Area: Algorithmic Fairness, Systemic Injustice, Social Perception of AI, Algorithmic Discrimination
Discipline: Computational Social Science
The study examines how contextualizing algorithms within systemic injustice impacts perceptions of algorithmic discrimination, finding disparate effects based on participant group identity and revealing unintended consequences of such contextualization.
Methods: A 2×3 between-participants experiment using hiring as a case study; examined how systemic-injustice information and explanations of algorithms' bias perpetuation influence lay perceptions.
Key Findings: Impact of systemic injustice framing and explanation of algorithmic bias perpetuation on participants' views of algorithmic fairness and discrimination.
DOI: 10.1145/3706598.3713536
Citations: 2
Sample Size: 716
-
Authors: F Joessel, S Denkinger, PE Joessel, CS Green
Year: 2025
Published in: Acta Psychologica
Institution: Max Planck Institute, University of Potsdam, University of Maryland, University of Zurich, University of Arizona
Research Area: Online cognitive training, Automated psychological studies, Crowdsourcing, behavioral research
Discipline: Psychology
The study introduces a fully online method for conducting cognitive training experiments using Prolific, significantly reducing resource demands while achieving robust results and diverse participant recruitment.
Methods: Participants were recruited via Prolific, assigned to groups using a pseudo-randomized procedure, and completed a 12-hour remote cognitive training study with pre- and post-test assessments monitored via custom dashboards.
Key Findings: Impact of a 12-hour cognitive training intervention on participants' cognitive functions, conducted in a remote and automated manner.
-
Authors: E Jahani, B Manning, J Zhang, H TuYe, M Alsobay, C Nicolaides, S Suri, D Holtz
Year: 2024
Published in: arXiv
Institution: Massachusetts Institute of Technology, Microsoft Research, Stanford University, University of California Berkeley, University of Cyprus, University of Maryland
Research Area: Human-AI Interaction, Generative AI, Prompt Engineering
Discipline: Artificial Intelligence (Human-AI Interaction, Generative AI)
-
Authors: Dayeon Ki, Marine Carpuat
Year: 2024
Published in: arXiv
Institution: University of Maryland
Research Area: Machine Translation Post-Editing, LLM
Discipline: Computer Science
-
Authors: Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé III
Year: 2024
Published in: arXiv
Institution: Microsoft Research, University of Maryland
Research Area: Multilingual Bias, Social Science, LLM, AI Bias
Discipline: Artificial Intelligence, Social Science, Large Language Models
-
Authors: Jacob Beck, Stephanie Eckman, Bolei Ma, Rob Chew, Frauke Kreuter
Year: 2024
Published in: ACL Anthology
Institution: University of Maryland
Research Area: Annotation Sensitivity, Order Effects, Natural Language Processing, Social Science in AI
Discipline: Natural Language Processing (NLP), Computational Social Science