This page lists 7 peer-reviewed papers (2024–2026) in the research area of Language Models from the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific's diverse participant panel.
-
Authors: L Qiu, F Sha, K Allen, Y Kim, T Linzen, S van Steenkiste
Year: 2026
Published in: Nature …, 2026 - nature.com
Institution: Meta, Google DeepMind, Massachusetts Institute of Technology, Google Research, Google
Research Area: Probabilistic reasoning, Bayesian cognition, Neural language models, Reasoning, AI Evaluations
Discipline: Machine learning, Artificial intelligence
This paper sits at the intersection of machine learning and computational cognitive science, showing that large language models can acquire generalized probabilistic reasoning by being trained to imitate Bayesian belief updating rather than relying on prompting or heuristics.
Citations: 8
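To make the paper's core idea concrete, here is a minimal sketch of Bayesian belief updating over a discrete hypothesis space, the kind of posterior computation the summary says language models were trained to imitate. This is an illustrative example only, not the paper's actual training setup; the function name and coin-flip scenario are our own.

```python
def bayes_update(prior, likelihoods):
    """Return the posterior P(h | data) from a prior P(h) and
    per-hypothesis likelihoods P(data | h), via Bayes' rule."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)  # normalizing constant P(data)
    return [u / z for u in unnorm]

# Two hypotheses about a coin: fair (P(heads) = 0.5) or biased (P(heads) = 0.9).
prior = [0.5, 0.5]
# After observing one head, belief shifts toward the biased hypothesis.
posterior = bayes_update(prior, likelihoods=[0.5, 0.9])
```

A model that generalizes probabilistic reasoning, in this framing, is one whose outputs track such posteriors across many scenarios rather than memorizing heuristics for any single one.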
-
Authors: LM Schulze Buschoff, E Akata, M Bethge
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: Max Planck Institute
Research Area: Visual Cognition, Multimodal Large Language Models (MLLMs), Vision-Language Models (VLMs)
Discipline: Cognitive Science, Artificial Intelligence, Computer Vision
Vision-based large language models show proficiency in visual data interpretation but fall short in human-like abilities for causal reasoning, intuitive physics, and social cognition.
Methods: Controlled experiments evaluating model performance on tasks related to intuitive physics, causal reasoning, and intuitive psychology using visual processing benchmarks.
Key Findings: Models interpret visual data proficiently but fall short of human-like understanding of physical interactions, causal relationships, and social preferences.
DOI: https://doi.org/10.1038/s42256-024-00963-y
Citations: 70
-
Authors: Z Cheng, J You
Year: 2025
Published in: arXiv preprint arXiv:2509.22989, 2025 - arxiv.org
Institution: University of Southern California, University of California, Berkeley
Research Area: Artificial Intelligence, Computers and Society, Computer Science and Game Theory, Strategic Persuasion, Reinforcement Learning, Language Models, LLM, RLHF
Discipline: Artificial Intelligence
This paper introduces a scalable framework, utilizing Bayesian Persuasion, to evaluate and train LLMs for strategic persuasion, demonstrating significant persuasion gains and effective strategies through reinforcement learning.
Methods: Repurposed human-human persuasion datasets for evaluation and training; applied Bayesian Persuasion framework; used reinforcement learning to optimize LLMs for strategic persuasion.
Key Findings: Reinforcement learning yields significant persuasion gains, and LLMs develop effective persuasion strategies under the Bayesian Persuasion framework across a range of settings.
Citations: 1
-
Authors: A Li, Q Xiao, P Cao, J Tang, Y Yuan, Z Zhao
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: Beijing University, Alibaba Group
Research Area: Reinforcement Learning from AI Feedback (RLAIF), Safety and Utility of Open-domain Language Models, Open Source LLM
Discipline: Artificial Intelligence
Citations: 12
-
Authors: Thibaut Thonet, Jos Rozen, Laurent Besacier
Year: 2024
Published in: arXiv
Institution: NAVER Labs
Research Area: Long-Context Language Models, Meeting Assistant Systems, Benchmark Evaluation
Discipline: Artificial Intelligence
-
Authors: SN Pushpita, R Levy
Year: 2024
Published in: Proceedings of the 28th Conference on ..., 2024 - aclanthology.org
Institution: Massachusetts Institute of Technology
Research Area: Visual Language Models (VLMs), Psycholinguistics, Psychometric Benchmarking
Discipline: Artificial Intelligence
DOI: https://doi.org/10.18653/v1/2024.conll-1.34
-
Authors: D Testa, G Bonetta, R Bernardi
Year: 2024
Published in: Proceedings of the ..., 2025 - aclanthology.org
Institution: Università di Roma La Sapienza, Fondazione Bruno Kessler, University of Pisa
Research Area: Multimodal AI Assessment, Visual Language Models (VLMs), Video Understanding, Computational Linguistics
Discipline: Artificial Intelligence, Computational Linguistics