Explore 5 peer-reviewed studies by HR Kirk in experimental evaluation and RCTs (2023–2025). Discover research powered by Prolific's participant panel.
This page lists 5 peer-reviewed papers authored or co-authored by HR Kirk in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: L Luettgau, HR Kirk, K Hackenburg, J Bergs, H Davidson, H Ogden, D Siddarth, S Huang
Year: 2025
Published in: arXiv
Institution: AI Security Institute, I Policy Directorate, Collective Intelligence Project, Anthropic
Research Area: Experimental evaluation, RCT, Survey Research
Discipline: Computer Science, Human–Computer Interaction (HCI)
Conversational AI is as effective as self-directed internet searches in increasing political knowledge, reducing misinformation beliefs, and promoting accuracy among users in the UK during the 2024 election period.
Methods: A national survey (N=2,499) measured conversational AI usage for political information-seeking, followed by a series of randomised controlled trials (N=2,858) comparing conversational AI to self-directed internet search in improving political knowledge.
Key Findings: Conversational AI was used by a measurable share of UK users for political information-seeking, and it proved as effective as traditional self-directed internet search in enhancing political knowledge and reducing misinformation beliefs.
Citations: 3
Sample Size: 5357
-
Authors: HR Kirk, M Bartolo, A Whitefield, P Röttger
Year: 2024
Published in: Advances in Neural Information Processing Systems (NeurIPS), 2024
Institution: Meta, Cohere, AWS AI Labs, Contextual AI, Factored AI, University of Oxford, Bocconi University, Meedan, Hugging Face, University College London, ML Commons, University of Pennsylvania
Research Area: LLM Alignment, Human Feedback, Multicultural Studies
Discipline: Artificial Intelligence, Computational Social Science
The PRISM Alignment Dataset presents a large-scale, culturally diverse human feedback dataset linking sociodemographic profiles of 1,500 participants from 75 countries to their contextual preferences and fine-grained ratings in 8,011 live conversations with 21 LLMs. This enables analysis of how subjective values vary across people and cultures in LLM alignment data.
DOI: https://doi.org/10.52202/079017-3342
Citations: 204
-
Authors: HR Kirk, I Gabriel, C Summerfield, B Vidgen
Year: 2025
Published in: Humanities and Social Sciences Communications, 2025
Institution: Oxford Internet Institute, University of Oxford
Research Area: Socioaffective Alignment in Human-AI Relationships, AI Ethics, Behavioral Science
Discipline: Artificial Intelligence, Behavioral Science
The paper emphasizes the need for socioaffective alignment in human-AI relationships to ensure AI systems support human psychological needs rather than exploit them, as interactions with AI transition from transactional to sustained engagement.
Methods: Conceptual analysis of socioaffective dynamics in human-AI interactions, framed through psychological theories and principles.
Key Findings: As interactions shift from transactional exchanges to sustained engagement, AI systems increasingly shape users' socioaffective relationships, psychological needs, autonomy, sense of companionship, and well-being.
DOI: https://doi.org/10.1057/s41599-025-04532-5
Citations: 59
-
Authors: HR Kirk, C Osborne
Year: 2024
Published in: arXiv
Institution: The Alan Turing Institute, Oxford Internet Institute, University of Oxford
Research Area: Computational Social Science, AI Community Analysis, Hugging Face Hub Activity
Discipline: Computational Social Science
-
Authors: HR Kirk, B Vidgen, P Röttger, SA Hale
Year: 2023
Published in: arXiv preprint arXiv:2303.05453, 2023
Institution: The Alan Turing Institute, University of Oxford, Imperial College London, King's College London, Google DeepMind
Research Area: Large Language Model Alignment, Safety, Personalization Risks
Discipline: Artificial Intelligence
DOI: https://doi.org/10.48550/arXiv.2303.05453
Citations: 146