Discover 26 peer-reviewed studies in AI Ethics (2023–2026). Explore research findings powered by Prolific's diverse participant panel.
This page lists 26 peer-reviewed papers in the research area of AI Ethics in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: C Yuan, B Ma, Z Zhang, B Prenkaj, F Kreuter, G Kasneci
Year: 2026
Published in: arXiv preprint arXiv:2601.08634, 2026 - arxiv.org
Institution: Munich Center for Machine Learning, LMU Munich, Technical University of Munich
Research Area: Artificial Intelligence, AI Ethics, AI Alignment, Political Science, Computational Social Science
Discipline: Computer Science, Natural Language Processing (NLP)
This paper examines how large language models’ (LLMs) political outputs shift when the models are explicitly primed with different moral values. Rather than simply assigning personas (e.g., “pretend to be liberal”), the authors condition models to endorse or reject specific moral values (e.g., utilitarianism, fairness, authority). They then measure how those moral primes move the models’ positions in...
DOI: https://doi.org/10.48550/arXiv.2601.08634
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine ..., 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
LLM explanations often lead users to overestimate response accuracy, especially when explanations are longer; adjusting explanation style to align with the model's internal confidence narrows the calibration and discrimination gaps, improving trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
-
Authors: S Shekar, P Pataranutaporn, C Sarabu, GA Cecchi
Year: 2025
Published in: NEJM AI, 2025 - ai.nejm.org
Institution: MIT Media Lab, IBM Research, Stanford University, Massachusetts Institute of Technology
Research Area: AI Ethics, Healthcare, Patient Trust, Medical Misinformation
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI), AI Ethics
This paper reports a study by MIT researchers examining patient trust in AI-generated medical advice, even when that advice is incorrect, raising concerns about misinformation in healthcare.
Citations: 19
-
Authors: G Riva, BK Wiederhold, P Cipresso
Year: 2025
Published in: ... , Behavior, and Social ..., 2025 - liebertpub.com
Institution: Università Cattolica del Sacro Cuore, University of Genova, Università degli Studi di Milano, Università di Catania
Research Area: AI Ethics, Social and Psychological Dimensions of Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence Ethics, Psychology, Sociology
The paper addresses the psychological, social, and ethical challenges of integrating AI into daily life and emphasizes the need to design AI systems that uphold human values and well-being.
Methods: The paper conducts an interdisciplinary review of existing research and literature to analyze the psychological, social, and ethical dimensions of AI deployment.
Key Findings: The impact of AI on human behavior, decision-making, and societal values.
DOI: https://doi.org/10.1089/cyber.2025.0202
Citations: 3
-
Authors: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones
Year: 2025
Published in: ArXiv
Institution: University of Zurich
Research Area: AI Ethics, AI Bias, News Literacy, Critical Thinking, Computational Social Science
Discipline: Computational Social Science
The study explores leveraging inherent biases in AI to enhance critical thinking in news consumption, proposing strategies such as bias awareness, personalization, and gradual introduction of diverse perspectives.
Methods: Qualitative user study investigating user responses to personalized AI-driven propaganda detection tools.
Key Findings: The effectiveness of AI bias-based strategies in improving critical thinking and news readers’ engagement with diverse perspectives.
-
Authors: M Reis, F Reis, W Kunde
Year: 2024
Published in: Nature Medicine, 2024 - nature.com
Institution: University of Cambridge, Julius Maximilians Universität
Research Area: AI in Healthcare, Medical Ethics, Cognitive Psychology, Human-Computer Interaction (HCI) in Medicine
Discipline: AI in Healthcare, Medical Ethics, Cognitive Psychology
The study found that medical advice labeled as being sourced from AI (or AI supervised by humans) is perceived as less reliable and empathetic compared to advice labeled as originating solely from a human physician, resulting in reduced willingness to follow such advice.
Methods: Two preregistered studies were conducted where participants were presented with identical medical advice scenarios but with manipulated labels for the advice source ('AI', 'human physician', 'human+AI').
Key Findings: Participants' perceptions of reliability, empathy, and willingness to follow medical advice based on the perceived source.
Citations: 78
Sample Size: 2280
-
Authors: L Lanz, R Briker, FH Gerpott
Year: 2024
Published in: Journal of Business Ethics, 2024 - Springer
Institution: University of Lausanne, University of Neuchâtel, University of Bern
Research Area: AI Ethics, Organizational Behavior, Supervisory Influence in the Workplace
Discipline: Business Ethics, Organizational Behavior, Artificial Intelligence Ethics
Employees are less likely to adhere to unethical instructions from AI supervisors compared to human supervisors, partly due to perceived differences in 'mind' and individual characteristics like compliance tendency and age.
Methods: The study employed four experiments using causal forest and transformer-based machine learning algorithms, as well as pre-registered experimental manipulations to evaluate employee behavior towards unethical instructions from AI and human supervisors.
Key Findings: Adherence to unethical instructions from AI versus human supervisors; mediating role of perceived mind and moderating factors like compliance tendency and age.
DOI: https://doi.org/10.1007/s10551-023-05393-1
Citations: 72
Sample Size: 1701
-
Authors: HR Kirk, I Gabriel, C Summerfield, B Vidgen
Year: 2024
Published in: Humanities and Social ..., 2025 - nature.com
Institution: Oxford Internet Institute, University of Oxford
Research Area: Socioaffective Alignment in Human-AI Relationships, AI Ethics, Behavioral Science
Discipline: Artificial Intelligence, Behavioral Science
The paper emphasizes the need for socioaffective alignment in human-AI relationships to ensure AI systems support human psychological needs rather than exploit them, as interactions with AI transition from transactional to sustained engagement.
Methods: Conceptual analysis of socioaffective dynamics in human-AI interactions, framed through psychological theories and principles.
Key Findings: Exploration of how AI systems impact socioaffective relationships, psychological needs, autonomy, companionship, and human well-being.
DOI: https://doi.org/10.1057/s41599-025-04532-5
Citations: 59
-
Authors: T Haesevoets, B Verschuere, R Van Severen
Year: 2024
Published in: Government Information ..., 2024 - Elsevier
Institution: Ghent University, KU Leuven
Research Area: Public Sector AI, Citizen Perception, AI Ethics, Transparency
Discipline: Political Science, Public Administration
Citizens in the UK prefer AI to play a supporting role in public sector decisions rather than making decisions autonomously, with greater acceptance in contexts that are less ideologically charged.
Methods: Three studies surveying UK respondents on their perceptions of AI involvement in public sector decision-making.
Key Findings: Perception of AI's role in decision-making, its legitimacy compared to human decision-makers, and suitability for various types of decisions.
DOI: https://doi.org/10.1016/j.giq.2023.101906
Citations: 54
-
Authors: T Eloundou, A Beutel, DG Robinson
Year: 2024
Published in: arXiv preprint arXiv ..., 2024 - arxiv.org
Institution: OpenAI, Google DeepMind, Google, University of Oxford
Research Area: Fairness in LLM, AI Bias, AI Ethics
Discipline: Artificial Intelligence, Social Science
The paper introduces a counterfactual approach to evaluate 'first-person fairness' in chatbots, demonstrating that reinforcement learning can mitigate biases based on demographics across extensive chatbot interactions.
Methods: The study uses a Language Model as a Research Assistant (LMRA) to quantitatively and qualitatively assess biases based on demographics across millions of chatbot interactions, covering 66 tasks in 9 domains and involving two genders and four races. Bias evaluations are corroborated by independent...
Key Findings: Demographic biases in chatbot responses, including harmful stereotypes and response differences by gender and race, across diverse tasks and domains.
DOI: https://doi.org/10.48550/arXiv.2410.19803
Citations: 33
Sample Size: 6000000
-
Authors: KR McKee
Year: 2024
Published in: IEEE Transactions on Technology and Society, 2024 - ieeexplore.ieee.org
Institution: University of Queensland
Research Area: AI Ethics, Human-Computer Interaction (HCI), Research Practice Transparency
Discipline: AI Ethics, Human-Computer Interaction (HCI)
The paper identifies ethical and transparency gaps in AI research involving human participants and proposes guidelines to address these issues, drawing from adjacent fields like psychology and human-computer interaction while recognizing unique challenges in AI contexts.
Methods: Analyzed normative practices by reviewing AI research publications and compared them with ethical standards in adjacent fields such as psychology and HCI.
Key Findings: Ethical practices including ethical reviews, informed consent, participant compensation, and contextual considerations specific to AI research.
DOI: https://ieeexplore.ieee.org/abstract/document/10664609/
Citations: 17
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2024
Published in: Proceedings of the National Academy of ..., 2024 - pnas.org
Institution: Massachusetts Institute of Technology, Yale University, Washington University in St. Louis
Research Area: AI Ethics, Behavioral Economics, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
People alter their behavior when they know their actions will train AI, leading to unintentional habits and biased training data for AI systems.
Methods: Five studies were conducted using the ultimatum game; participants decided on monetary splits proposed by either humans or AI, with some informed that their decisions would be used to train the AI.
Key Findings: Behavioral changes in participants when training AI, persistence of these changes over time, and implications for AI training bias.
DOI: https://doi.org/10.1073/pnas.2408731121
Citations: 13
-
Authors: V Cheung, M Maier, F Lieder
Year: 2024
Published in: Psyarxiv preprint, 2024 - files.osf.io
Institution: University College London
Research Area: AI Ethics, Moral Decision-Making, Cognitive Biases in LLMs, AI Bias
Discipline: Artificial Intelligence, Ethics
Citations: 11
-
Authors: M Tahaei, D Wilkinson, A Frik, M Muller
Year: 2024
Published in: Proceedings of the ..., 2024 - ojs.aaai.org
Institution: University of Cambridge, University of Bath, University of Amsterdam, Amazon
Research Area: AI Ethics, Survey Methods, AI Governance
Discipline: AI Ethics, Governance
DOI: https://doi.org/10.1609/aies.v7i1.31734
Citations: 11
-
Authors: M Gonzalez-Cabello, A Siddiq, CJ Corbett, C Hu
Year: 2024
Published in: Business Horizons, 2024 - Elsevier
Research Area: Crowdwork Fairness, Human-AI Supply Chain, Business Ethics, Management
Discipline: Business Ethics, Management
DOI: https://doi.org/10.1016/j.bushor.2024.09.003
Citations: 7
-
Authors: S Schmer-Galunder, R Wheelock, Z Jalan
Year: 2024
Published in: Proceedings of the ..., 2024 - ojs.aaai.org
Institution: Google DeepMind, Google, Accenture, Amazon
Research Area: AI Ethics and Prosocial Data Annotation
Discipline: Artificial Intelligence, Ethics, Behavioral Science
DOI: https://doi.org/10.1609/aies.v7i1.31726
Citations: 3
-
Authors: Marios Constantinides, Edyta Bogucka, Sanja Scepanovic, Daniele Quercia
Year: 2024
Published in: ArXiv
Institution: Nokia Bell Labs
Research Area: AI Ethics, Mobile and Wearable AI, Risk Assessment
Discipline: Artificial Intelligence
-
Authors: Mehdi Khamassi, Marceau Nahon, Raja Chatila
Year: 2024
Published in: ArXiv
Institution: Sorbonne University
Research Area: AI Alignment, AI Ethics, Computational Cognition
Discipline: Artificial Intelligence, Ethics, Computational Cognition
-
Authors: M Lavanchy, P Reichert, J Narayanan
Year: 2023
Published in: Journal of Business ..., 2023 - Springer
Institution: Hong Kong Polytechnic University, International Institute for Management Development
Research Area: Applicants' Fairness Perceptions, Algorithm-Driven Hiring, Business Ethics
Discipline: Business Ethics, Human-Computer Interaction (HCI)
DOI: https://doi.org/10.1007/s10551-022-05320-w
Citations: 107
-
Authors: S Corvite, K Roemmich, TI Rosenberg
Year: 2023
Published in: Proceedings of the ACM ..., 2023 - dl.acm.org
Institution: University of Washington
Research Area: AI Ethics
Discipline: Artificial Intelligence Ethics
DOI: https://doi.org/10.1145/3579600
Citations: 79