Browse 3 peer-reviewed papers from Google Research spanning probabilistic reasoning and Bayesian cognition (2025–2026). This page lists these papers as part of the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: L Qiu, F Sha, K Allen, Y Kim, T Linzen, S van Steenkiste
Year: 2026
Published in: Nature …, 2026 (nature.com)
Institution: Meta, Google DeepMind, Massachusetts Institute of Technology, Google Research, Google
Research Area: Probabilistic reasoning, Bayesian cognition, Neural language models, Reasoning, AI Evaluations
Discipline: Machine learning, Artificial intelligence
This paper sits at the intersection of machine learning and computational cognitive science, showing that large language models can acquire generalized probabilistic reasoning by being trained to imitate Bayesian belief updating rather than relying on prompting or heuristics.
Citations: 8
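The summary above centers on Bayesian belief updating. As a minimal illustration of that update rule only (a generic coin-bias example, not the paper's actual training setup or data), the posterior after each observation is the prior reweighted by the likelihood:

```python
# Illustrative sketch of Bayesian belief updating (not the paper's method):
# a discrete prior over coin biases is reweighted after each observed flip.

def update(prior, likelihoods):
    """One step of Bayes' rule: posterior ∝ prior × likelihood, renormalized."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

# Hypotheses: coin bias P(heads) ∈ {0.25, 0.5, 0.75}, uniform prior.
biases = [0.25, 0.5, 0.75]
belief = [1 / 3] * 3

for flip in ["H", "H", "T", "H"]:  # observed sequence
    likelihoods = [b if flip == "H" else 1 - b for b in biases]
    belief = update(belief, likelihoods)

# After a mostly-heads sequence, belief concentrates on the 0.75-bias hypothesis.
print([round(b, 3) for b in belief])
```

The paper's claim, per the summary, is that models trained to imitate this kind of sequential reweighting generalize better than models relying on prompting or heuristics.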
-
Authors: J van Grunsven, N Jacobs, BA Kamphorst, M Honauer
Year: 2025
Published in: ACM Journal on …, 2025 (dl.acm.org)
Institution: University of Texas, Microsoft Research, Google DeepMind, Google, University of Washington, World Economic Forum
Research Area: Ethics and Governance of Computing Research, Responsible Computing, Social Science Research, Artificial Intelligence
Discipline: Ethics, Governance of Computing Research, AI Ethics
The paper emphasizes the importance of accounting for human vulnerability in the design and analysis of digital technologies, proposing concepts like 'Intimate Computing' to empower individuals in managing their technology-mediated vulnerabilities.
Methods: The study reviews and synthesizes existing literature and frameworks addressing vulnerability in human-technology interactions, including concepts like 'Intimate Computing' and 'Person-Machine Teaming'.
Key Findings: Human vulnerability shapes digitally mediated interactions, and computing frameworks such as 'Intimate Computing' can help address it.
Citations: 2
-
Authors: C Rastogi, TH Teh, P Mishra, R Patel, D Wang, M Díaz, A Parrish, AM Davani, Z Ashwood
Year: 2025
Published in: arXiv preprint arXiv:2507.13383, 2025 (arxiv.org)
Institution: Google DeepMind, Google Research, Google
Research Area: AI alignment, safety evaluation, AI Safety, Multimodal evaluation, Human–AI interaction, LLM
Discipline: Computer Science, Machine Learning, Artificial Intelligence
This research introduces the DIVE dataset to enable pluralistic alignment in text-to-image models by accounting for diverse safety perspectives, revealing demographic variations in harm perception and advancing T2I model alignment strategies.
Methods: The study involved collecting feedback across 1000 prompts from demographically intersectional human raters to capture diverse safety perspectives, with an emphasis on empirical and contextual differences in harm perception.
Key Findings: Safety perceptions of text-to-image (T2I) model outputs vary across demographic groups, and these differences inform T2I alignment strategies.
Citations: 1
Sample Size: 1000