Discover 18 peer-reviewed studies in AI Evaluation (2021–2026). Explore research findings powered by Prolific's diverse participant panel.
This page lists 18 peer-reviewed papers in the research area of AI Evaluation in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: L Qiu, F Sha, K Allen, Y Kim, T Linzen, S van Steenkiste
Year: 2026
Published in: Nature …, 2026 - nature.com
Institution: Meta, Google DeepMind, Massachusetts Institute of Technology, Google Research, Google
Research Area: Probabilistic reasoning, Bayesian cognition, Neural language models, Reasoning, AI Evaluations
Discipline: Machine learning, Artificial intelligence
This paper sits at the intersection of machine learning and computational cognitive science, showing that large language models can acquire generalized probabilistic reasoning by being trained to imitate Bayesian belief updating rather than relying on prompting or heuristics.
Citations: 8
-
Authors: N Petrova, A Gordon, E Blindow
Year: 2026
Published in: OpenReview
Institution: Prolific
Research Area: Human-centered AI evaluation, Bayesian statistics, Responsible AI, AI alignment, LLM Evaluation
Discipline: Machine Learning, Artificial Intelligence
The study introduces HUMAINE, a multidimensional evaluation framework for LLMs, revealing demographic-specific preference variations and ranking google/gemini-2.5-pro as the top-performing model with a posterior probability of 95.6%.
Methods: Multi-turn naturalistic conversations analyzed using a hierarchical Bayesian Bradley-Terry-Davidson model with post-stratification to census data, stratified across 22 demographic groups.
Key Findings: Performance of 28 LLMs across five human-centric dimensions, accounting for demographic-specific preferences.
Sample Size: 23404
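The entry above fits a hierarchical Bayesian Bradley-Terry-Davidson model. As a rough illustration of the core idea only, here is a minimal plain Bradley-Terry fit via Hunter's MM algorithm; the items and win counts are invented, and the sketch omits the paper's hierarchy, tie handling, and post-stratification:

```python
def fit_bradley_terry(items, wins, iters=200):
    """Fit plain Bradley-Terry strengths with Hunter's MM updates.

    wins[(a, b)] is the number of pairwise comparisons a won against b.
    Returns a dict of positive strengths normalized to sum to len(items).
    """
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new_p = {}
        for i in items:
            won = sum(wins.get((i, j), 0) for j in items if j != i)
            denom = 0.0
            for j in items:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            new_p[i] = won / denom if denom else p[i]
        total = sum(new_p.values())
        p = {i: v * len(items) / total for i, v in new_p.items()}
    return p

# Invented example: model A wins most head-to-head comparisons.
strengths = fit_bradley_terry(
    ["A", "B", "C"],
    {("A", "B"): 8, ("B", "A"): 2,
     ("A", "C"): 9, ("C", "A"): 1,
     ("B", "C"): 7, ("C", "B"): 3},
)
```

The paper's actual model additionally includes a Davidson tie parameter, hierarchical priors over demographic groups, and post-stratification to census margins, none of which this sketch attempts.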
-
Authors: L Ibrahim, C Akbulut, R Elasmar, C Rastogi, M Kahng, MR Morris, KR McKee, V Rieser, M Shanahan, L Weidinger
Year: 2025
Published in: arXiv preprint arXiv:2502.07077, 2025 - arxiv.org
Institution: Google DeepMind, Google, University of Oxford
Research Area: Multimodal conversational AI, conversational AI, Evaluation methodology, benchmarking
Discipline: Computer Science, Natural Language Processing (NLP), Human–Computer Interaction (HCI)
The paper evaluates anthropomorphic behaviors in state-of-the-art LLMs through a multi-turn methodology, showing that behaviors such as empathy and relationship-building predominantly emerge after multiple interactions and influence user perceptions.
Methods: Multi-turn evaluation of 14 anthropomorphic behaviors using simulations of user interactions, validated by a large-scale human subject study.
Key Findings: Anthropomorphic behaviors in large language models, including relationship-building and pronoun usage, and their perception by users.
Citations: 26
Sample Size: 1101
-
Authors: L Gienapp, T Hagen, M Fröbe, M Hagen, B Stein, M Potthast, H Scells
Year: 2025
Published in: ArXiv
Institution: Bauhaus-Universität Weimar, Friedrich-Schiller-Universität Jena, Leipzig University, University of Kassel, ScaDS.AI, hessian.AI
Research Area: Crowdsourcing, RAG Evaluation, Artificial Intelligence, AI Evaluation, RAG
Discipline: Artificial Intelligence
The study investigates the feasibility of using crowdsourcing for RAG evaluation, finding that human pairwise judgments are reliable and cost-effective compared to LLM-based or automated methods.
Methods: Two complementary studies on response writing and response utility judgment using 903 human-written and 903 LLM-generated responses for 301 topics; pairwise judgments across seven utility dimensions were collected via human and LLM evaluators.
Key Findings: Human effectiveness in writing and judging responses in RAG scenarios, considering discourse styles and utility dimensions like coverage and coherence.
Citations: 4
Sample Size: 903
-
Authors: A Karamolegkou, O Eberle, P Rust, C Kauf, A Søgaard
Year: 2025
Published in: ArXiv
Institution: Aleph Alpha, Massachusetts Institute of Technology
Research Area: Adversarial Ambiguity, Language Model Evaluation, Artificial intelligence, Computation and Language, LLM, AI Evaluation, Red Teaming
Discipline: Natural Language Processing
The paper assesses language models' sensitivity to ambiguity using an adversarial dataset and finds that direct prompting poorly identifies ambiguity, while linear probes achieve high accuracy in decoding ambiguity from model representations.
Methods: An adversarial ambiguity dataset was introduced with various types of ambiguities and transformations; models were tested using direct prompts and linear probes trained on internal representations.
Key Findings: Language models' ability to detect ambiguity, including syntactic, lexical, and phonological types, as well as performance under adversarial variations.
Citations: 2
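The probing setup described above, a linear classifier trained on a model's internal representations, can be sketched with a hand-rolled logistic probe on synthetic stand-in features. Everything below, including the "representations", is fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for hidden states: "ambiguous" vs. "unambiguous" inputs
# separated along one latent direction, plus isotropic noise.
d = 16
direction = rng.normal(size=d)
X = np.vstack([rng.normal(size=(100, d)) + direction,   # ambiguous
               rng.normal(size=(100, d)) - direction])  # unambiguous
y = np.array([1] * 100 + [0] * 100)

# Linear probe: logistic regression trained by batch gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

probe_acc = (((X @ w + b) > 0) == y).mean()
```

In the paper the probe inputs are actual transformer hidden states and the labels come from the adversarial ambiguity dataset; the sketch illustrates only the probe itself, which can decode a label that direct prompting misses whenever the information is linearly present in the representation.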
-
Authors: P Schmidtová, O Dušek, S Mahamood
Year: 2025
Published in: ArXiv
Institution: Charles University, Trivago
Research Area: Summarization evaluation, Natural Language Processing, LLM-as-a-Judge, AI Evaluation
Discipline: Natural Language Processing
Simpler metrics such as word overlap correlate surprisingly well with human judgments in summarization evaluation, outperforming more complex methods in out-of-domain settings, while LLM-as-a-judge approaches remain unreliable due to annotation biases.
Methods: Human evaluation campaigns with categorical error assessment, span-level annotations, and comparison of traditional metrics, trainable models, and LLM-as-a-judge approaches.
Key Findings: Effectiveness of summarization evaluation methods and their correlation with human judgment, along with business impacts of incorrect information in generated summaries.
Citations: 1
-
Authors: Z Ashktorab, A Buccella, J D'Cruz, Z Fowler, A Gill, KY Leung, PD Magnus, J Richards
Year: 2025
Published in: arXiv preprint arXiv:2507.02745, 2025 - arxiv.org
Institution: IBM Research, University at Albany
Research Area: Human–AI interaction, AI systems evaluation, UX, User Experience
Discipline: Computer Science, Human–Computer Interaction (HCI)
In a preregistered study with 162 participants, people generally prefer explanatory apologies from LLM chatbots over rote or purely empathic ones—though in biased error scenarios empathic apologies are sometimes favored—highlighting the complexity of designing chatbot apologies that effectively repair trust.
DOI: https://doi.org/10.48550/arXiv.2507.02745
Citations: 1
-
Authors: C Rastogi, TH Teh, P Mishra, R Patel, D Wang, M Díaz, A Parrish, AM Davani, Z Ashwood
Year: 2025
Published in: arXiv preprint arXiv:2507.13383, 2025 - arxiv.org
Institution: Google DeepMind, Google Research, Google
Research Area: AI alignment, safety evaluation, AI Safety, Multimodal evaluation, Human–AI interaction, LLM
Discipline: Computer Science, Machine Learning, Artificial Intelligence
This research introduces the DIVE dataset to enable pluralistic alignment in text-to-image models by accounting for diverse safety perspectives, revealing demographic variations in harm perception and advancing T2I model alignment strategies.
Methods: The study involved collecting feedback across 1000 prompts from demographically intersectional human raters to capture diverse safety perspectives, with an emphasis on empirical and contextual differences in harm perception.
Key Findings: Safety perceptions of text-to-image (T2I) model outputs from diverse demographic viewpoints and the influence of these perspectives on alignment strategies.
Citations: 1
Sample Size: 1000
-
Authors: J Szczuka, L Mühl, P Ebner, S Dubé
Year: 2025
Published in: ArXiv
Institution: University of Duisburg-Essen
Research Area: Human-Computer Interaction (HCI), Social Psychology, Interpersonal Relationships with AI, LLM Evaluation
Discipline: Social Science
Participants rated AI-generated dating-profile responses as comparably human-like, and as conveying similar closeness and romantic interest, challenging assumptions about authenticity in online communication.
Methods: Participants evaluated 10 AI-generated responses to an interpersonal closeness task in a matchmaking scenario, without knowing the responses were AI-generated.
Key Findings: Impact of perceived response source (human vs AI) on interpersonal closeness and romantic interest; influence of perceived quality and human-likeness.
Sample Size: 307
-
Authors: Cameron R. Jones, Benjamin K. Bergen
Year: 2025
Published in: ArXiv
Institution: University of California San Diego
Research Area: Artificial Intelligence, Computational Linguistics, Turing Test, AI Evaluation
Discipline: Artificial Intelligence
GPT-4.5 passed the Turing Test, being identified as human 73% of the time, more often than the real human participants and other models, which the authors present as the first robust evidence of an AI passing a standard Turing test.
Methods: Randomised, controlled, pre-registered Turing Test where 5-minute conversations were conducted between human participants and AI systems, followed by judgments on which partner was human.
Key Findings: The ability of AI systems (ELIZA, GPT-4o, LLaMa-3.1-405B, GPT-4.5) to mimic human conversational behavior and be perceived as human.
-
Authors: T Davidson
Year: 2025
Published in: Nature Human Behaviour, 2025 - nature.com
Institution: University of Oxford, Davidson College
Research Area: Hate Speech Evaluation, Multimodal LLMs, Social Bias, Computational Law, AI Bias, AI Evaluation
Discipline: Artificial Intelligence
The study demonstrates that larger multimodal large language models (MLLMs) can align closely with human judgement in context-sensitive hate speech evaluations, though they still exhibit biases and limitations.
Methods: Conjoint experiments where simulated social media posts varying in attributes like slur usage and user demographics were evaluated by MLLMs and compared to human judgements.
Key Findings: The capacity of MLLMs to evaluate hate speech in a context-sensitive manner and their alignment with human judgement, while assessing biases and responsiveness to contextual cues.
Sample Size: 1854
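Conjoint designs like the one above are typically summarized by average marginal component effects (AMCEs): the change in the mean outcome when a single attribute is toggled while the others vary. A toy sketch with invented ratings:

```python
def amce(profiles, attr):
    """Average marginal component effect of a binary attribute:
    mean rating with the attribute minus mean rating without it."""
    with_a = [p["rating"] for p in profiles if p[attr] == 1]
    without = [p["rating"] for p in profiles if p[attr] == 0]
    return sum(with_a) / len(with_a) - sum(without) / len(without)

# Invented simulated posts: a binary "slur" attribute and a harm rating.
posts = [
    {"slur": 1, "rating": 0.9}, {"slur": 1, "rating": 0.8},
    {"slur": 0, "rating": 0.3}, {"slur": 0, "rating": 0.2},
]
effect = amce(posts, "slur")  # mean(0.9, 0.8) - mean(0.3, 0.2)
```

The attribute name, ratings, and profiles are all hypothetical; the paper's design varies several attributes (slur usage, user demographics, and more) and compares MLLM ratings against human judgements on the same profiles.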
-
Authors: M Ku, T Li, K Zhang, Y Lu, X Fu, W Zhuang
Year: 2024
Published in: arXiv preprint arXiv …, 2023 - arxiv.org
Institution: University of Waterloo, Ohio State University, University of California Santa Barbara, University of Pennsylvania
Research Area: AI alignment, Representation learning, Cognitive computational modeling, Vision foundation models evaluation, Multimodal, Vision models
Discipline: Computer Science, Artificial Intelligence, Machine Learning
This paper presents a method for aligning machine vision model representations with human visual similarity judgments across different levels of abstraction, improving how well models reflect human perceptual and conceptual organization and enhancing generalization and uncertainty prediction.
DOI: https://doi.org/10.48550/arXiv.2310.01596
Citations: 59
-
Authors: M Kuutila, C Kiili, R Kupiainen, E Huusko, J Li
Year: 2024
Published in: Computers in Human Behavior, 2024 - Elsevier
Research Area: Social Media Credibility Evaluation, Human-Computer Interaction (HCI), Cyberpsychology, AI Evaluation
Discipline: Computer Science, Human–Computer Interaction (HCI), Cyberpsychology
The study found that prior belief consistency and source expertise significantly influenced perceived credibility of health-related social media posts, while evidence quality had minimal impact. Crowdsourcing platform choice also affected credibility evaluations of inaccurate posts.
Methods: Researchers created social media posts with manipulated source characteristics, claim accuracy, and evidence quality. Participants evaluated the credibility of these posts via crowdsourcing platforms after having their prior topic beliefs assessed.
Key Findings: The perceived credibility of health-related social media posts based on source characteristics, evidence quality, prior beliefs, and the platform used for data collection.
DOI: https://doi.org/10.1016/j.chb.2023.108017
Citations: 19
Sample Size: 844
-
Authors: Daria Kryvosheieva
Year: 2024
Published in: ArXiv
Institution: Massachusetts Institute of Technology
Research Area: Natural Language Processing, AI Evaluation
Discipline: Natural Language Processing
-
Authors: Memoona Aziz, Muhammad Umair Danish, Umair Rehman, Katarina Grolinger
Year: 2024
Published in: ArXiv
Institution: IEEE
Research Area: Computer Vision, AI-Generated Images, Image Quality Evaluation
Discipline: Artificial Intelligence, Computer Science
-
Authors: B Liefooghe, M Oliveira, LM Leisten
Year: 2023
Published in: Collabra: Psychology, 2023 - online.ucpress.edu
Institution: Ghent University, ISCTE-IUL, CIS-IUL, Universidade de Lisboa
Research Area: AI Ethics, Trust in AI, Social Evaluation of AI, AI Evaluation
Discipline: Social Science, AI Ethics
DOI: https://doi.org/10.1525/collabra.73066
Citations: 14
-
Authors: HP Cowley, M Natter, K Gray-Roncal, RE Rhodes
Year: 2022
Published in: Scientific Reports, 2022 - nature.com
Institution: Johns Hopkins University Applied Physics Laboratory, University of Oxford, Johns Hopkins University
Research Area: Human-AI Interaction, Machine Learning Evaluation, AI Evaluation
Discipline: Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1038/s41598-022-08078-3
Citations: 34
-
Authors: A Goswami, A Ak, W Hauser
Year: 2021
Published in: 2021 IEEE 23rd ..., 2021 - ieeexplore.ieee.org
Institution: University of Göttingen, University of Bayreuth
Research Area: Crowdsourcing, Subjective Evaluation, Tone Mapping, AI Evaluation
Discipline: Human-Computer Interaction (HCI)
Citations: 7