Browse 12 peer-reviewed papers from New York University spanning Human-Computer Interaction (HCI) and disclosure psychology (2020–2026). Research powered by Prolific's high-quality participant data.
This page lists 12 peer-reviewed papers from researchers at New York University in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: M Raj, JM Berg, R Seamans
Year: 2026
Published in: Journal of Experimental Psychology …, 2026 - psycnet.apa.org
Institution: New York University, University of Michigan, Wharton
Research Area: Disclosure psychology, Biases in human–machine evaluation, AI Biases
Discipline: Experimental psychology
This paper sits at the intersection of experimental psychology, social cognition, and consumer judgment. It examines how AI disclosure triggers a persistent, authenticity-based bias against creative work, revealing a robust form of algorithmic aversion in symbolic and expressive domains.
DOI: https://doi.org/10.1037/xge0001889
-
Authors: Y Ding, J You, TK Machulla, J Jacobs, P Sen
Year: 2025
Published in: Proceedings of the ..., 2022 - dl.acm.org
Institution: University of California Irvine, University of Florida, State University of New York at Buffalo, University of Waterloo, Virginia Tech
Research Area: Computational Social Science, Human-Computer Interaction (HCI), Sentiment Analysis
Discipline: Computational Social Science
Demographic differences among annotators significantly affect sentiment dataset labels, causing up to a 4.5% accuracy difference in sentiment prediction models.
Methods: Crowdsourced annotations from >1000 workers combined with demographic data; analysis of multimodal sentiment datasets and evaluation using machine learning models.
Key Findings: Impact of annotator demographics on sentiment labeling and its effect on model predictions.
DOI: https://doi.org/10.1145/3555632
Citations: 28
Sample Size: 1000
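The methods above pair crowdsourced sentiment labels with annotator demographics. A minimal stdlib sketch of the core idea, computing a per-demographic-group majority label (groups and labels here are invented for illustration, not the paper's data):

```python
from collections import Counter

def majority_label(annotations):
    """annotations: list of (group, label) pairs; returns {group: majority label}."""
    by_group = {}
    for group, label in annotations:
        by_group.setdefault(group, []).append(label)
    # majority vote within each demographic group
    return {g: Counter(labels).most_common(1)[0][0]
            for g, labels in by_group.items()}

# Hypothetical annotations of the same item by two age groups
annos = [("18-29", "positive"), ("18-29", "positive"), ("18-29", "neutral"),
         ("60+", "neutral"), ("60+", "neutral"), ("60+", "positive")]
labels = majority_label(annos)  # → {"18-29": "positive", "60+": "neutral"}
```

When the majority label differs by group, as it does here, the "ground truth" a model trains on depends on who annotated it, which is the mechanism behind the reported accuracy gap.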
-
Authors: T Mendel, N Singh, DM Mann, B Wiesenfeld
Year: 2025
Published in: Journal of medical ..., 2025 - jmir.org
Institution: The City University of New York, George Washington University, New York University
Research Area: LLMs in Digital Health, Health Queries, User Attitudes
Discipline: Digital Health
Laypeople primarily use search engines over large language models (LLMs) for health queries, perceiving LLMs as less useful but less biased and more human-like, while reporting no significant difference in trust or ease of use.
Methods: A screening survey followed by logistic regression analysis and a follow-up survey; comparisons were performed using ANOVA, Tukey post hoc tests, and paired-sample Wilcoxon tests.
Key Findings: Demographics and behaviors of LLM and search engine users for health queries, perceived usefulness, ease of use, trustworthiness, bias, and anthropomorphism.
Citations: 21
Sample Size: 2002
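The paired-sample Wilcoxon tests mentioned above compare within-subject ratings of the two tools. A minimal stdlib sketch of the signed-rank statistic on hypothetical ratings (no tie correction or p-value; in practice scipy.stats.wilcoxon handles both):

```python
def wilcoxon_signed_rank(a, b):
    """Return (W_plus, W_minus): rank sums of positive and negative differences."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    # rank 1..n by absolute difference (ties keep input order; no tie correction)
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = float(rank)
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return w_plus, w_minus

# Hypothetical trust ratings from five respondents: search engine vs. LLM
search = [4, 5, 3, 4, 2]
llm    = [3, 5, 2, 5, 1]
w = wilcoxon_signed_rank(search, llm)  # → (7.0, 3.0)
```

The statistic is the rank sum of the smaller-magnitude side; a large imbalance between W+ and W− is what signals a systematic within-subject preference.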
-
Authors: N Aldahoul, H Ibrahim, M Varvello, A Kaufman
Year: 2025
Published in: arXiv preprint arXiv ..., 2025 - arxiv.org
Institution: Delft University of Technology, University of Pennsylvania, New York University, King Abdullah University of Science and Technology, Massachusetts Institute of Technology, University of Texas at Austin
Research Area: Artificial Intelligence, Computers and Society, Political Science
Discipline: Artificial Intelligence, Social Science
The study finds that Large Language Models (LLMs) exhibit extreme political views on specific topics despite appearing ideologically moderate overall, and demonstrate a persuasive influence on users' political preferences even in informational contexts.
Methods: Compared 31 LLMs' political biases against benchmarks (legislators, judges, representative voter samples) and conducted a randomized experiment to measure their persuasive impact in informational interactions.
Key Findings: Ideological consistency, political extremity, and persuasive effects of LLMs in information-seeking contexts.
Citations: 7
Sample Size: 31
-
Authors: N Tyulina, Y Yu, TA Emmanouil, SI Levitan
Year: 2025
Published in: Proceedings of the 7th ACM ..., 2025 - dl.acm.org
Institution: University of Cambridge, University of Bath, University of Edinburgh, New York University
Research Area: Human-AI Interaction, Trust and Perception, Nonverbal Communication
Discipline: Applied Linguistics
Trust judgments are primarily influenced by auditory cues in both humans and multimodal models, though subtle differences in modality weighting exist between them.
Methods: Behavioral experiment with trust ratings of bimodal stimuli across four trust congruence conditions, combined with a multimodal model trained using HuBERT and ResNet-50 with late fusion, analyzed using Permutation Feature Importance (PFI).
Key Findings: The construction of trust from visual and auditory signals in both humans and multimodal models, focusing on modality dominance and feature weighting.
Sample Size: 150
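The Permutation Feature Importance (PFI) analysis above measures modality dominance by shuffling one modality's features and observing the score drop. A toy stdlib sketch of the technique; the scorer here is a hypothetical stand-in, not the paper's HuBERT/ResNet-50 fusion model:

```python
import random

def permutation_importance(score_fn, X, y, feature, n_repeats=10, seed=0):
    """Mean drop in score when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = score_fn(X, y)
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]
        col = [row[feature] for row in shuffled]
        rng.shuffle(col)  # break the feature's link to the labels
        for row, v in zip(shuffled, col):
            row[feature] = v
        drops.append(baseline - score_fn(shuffled, y))
    return sum(drops) / n_repeats

# A "model" that trusts only the audio feature (index 0), ignoring visual (index 1)
def accuracy(X, y):
    preds = [1 if row[0] > 0.5 else 0 for row in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

X = [[0.9, 0.1], [0.2, 0.8], [0.8, 0.7], [0.1, 0.3]]
y = [1, 0, 1, 0]
audio_drop = permutation_importance(accuracy, X, y, feature=0)
visual_drop = permutation_importance(accuracy, X, y, feature=1)
```

Because this scorer ignores the visual feature, shuffling it costs nothing (`visual_drop == 0.0`), while shuffling audio degrades accuracy; comparing such drops across modalities is how PFI exposes auditory dominance.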
-
Authors: S Kapoor, N Gruver, M Roberts
Year: 2024
Published in: Advances in ..., 2024 - proceedings.neurips.cc
Institution: Abacus AI, University of Cambridge, New York University, Columbia University
Research Area: Uncertainty Estimation, LLM Limitations, Know-What-You-Don't-Know, Computational Cognition
Discipline: Artificial Intelligence
Fine-tuning large language models (LLMs) on a small dataset of graded examples improves their uncertainty estimates, enhancing their applicability in high-stakes scenarios and human-AI collaboration.
Methods: The researchers fine-tuned LLMs using a small dataset of graded correct and incorrect answers with LoRA (Low-Rank Adaptation) to create uncertainty estimates and conducted a user study to investigate their utility in human-AI collaboration.
Key Findings: Calibration and generalization of uncertainty estimates, performance of fine-tuning LLMs for uncertainty estimation, and human-AI interaction improvements informed by uncertainty data.
Citations: 71
Sample Size: 1000
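The calibration of uncertainty estimates mentioned in the key findings is typically judged with a metric like expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its accuracy. A stdlib sketch with invented confidences and correctness labels (not the paper's data or exact metric):

```python
def expected_calibration_error(confs, correct, n_bins=5):
    """ECE: confidence-weighted gap between stated confidence and accuracy."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confs, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # clamp c == 1.0 into the top bin
        bins[idx].append((c, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += len(b) / len(confs) * abs(avg_conf - acc)
    return ece

# Hypothetical graded answers: model confidence vs. whether it was correct
confs   = [0.9, 0.8, 0.6, 0.3, 0.2]
correct = [1,   1,   0,   0,   0]
ece = expected_calibration_error(confs, correct)
```

An ECE of zero means the model's stated confidence matches its empirical accuracy in every bin, which is the property the fine-tuned uncertainty estimates aim for.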
-
Authors: L Hewitt, A Ashokkumar, I Ghezae, R Willer
Year: 2024
Published in: Preprint, 2024 - samim.io
Institution: Stanford University, New York University
Research Area: Social Science Experiments, Large Language Model Prediction, LLM
Discipline: Computational Social Science
The study presents a framework using large language models to predict outcomes of social science field experiments, achieving 78% accuracy but facing challenges with experiments on complex social issues.
Methods: Authors used an automated framework powered by large language models to predict outcomes of 276 field experiments drawn from economics literature.
Key Findings: The prediction accuracy of large language models for outcomes of field experiments addressing various human behaviors.
Citations: 68
Sample Size: 276
-
Authors: HC Gordon, T Stafford, K Dommett
Year: 2024
Published in: ... of the Annual Meeting of the ..., 2024 - escholarship.org
Institution: University of California, Irvine, State University of New York at Buffalo, University of Bath
Research Area: Political Advertising, Trust, Political Communication, Transparency
Discipline: Political Science, Communication
-
Authors: O Raccah, P Chen, TM Gureckis, D Poeppel, VA Vo
Year: 2024
Published in: Nature
Institution: Intel Labs, New York University, Yale University
Research Area: Cognitive Psychology, Memory Research, Natural Language Processing (NLP)
Discipline: Psychology, Artificial Intelligence
-
Authors: S Rathje, C Robertson, WJ Brady
Year: 2023
Published in: Perspectives on ..., 2024 - journals.sagepub.com
Institution: New York University, University of North Carolina at Chapel Hill
Research Area: Social Media Perception, Divisive Content, Platform Amplification
Discipline: Social Science
Citations: 82
-
Authors: J Oppenlaender, K Milland, A Visuri, P Ipeirotis
Year: 2020
Published in: Proceedings of the ..., 2020 - dl.acm.org
Institution: Cornell, Aalto University, New York University
Research Area: Crowdsourcing, Creativity, Human-Computer Interaction (HCI)
Discipline: Human-Computer Interaction (HCI)
Citations: 66
-
Authors: A Bergman, A Chinco, SM Hartzmark
Year: 2020
Published in: Start-Up Guide and ..., 2020 - papers.ssrn.com
Institution: Yale School of Management, University of Miami School of Business, New York University, Columbia University
Research Area: Survey Methodology, Experimental Design, Social Science Research Methods
Discipline: Social Science Research Methods
Citations: 34