Discover 38 peer-reviewed studies in Ethics (2024–2026). Explore research findings powered by Prolific's diverse participant panel.
This page lists 38 peer-reviewed papers in the research area of Ethics in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: C Yuan, B Ma, Z Zhang, B Prenkaj, F Kreuter, G Kasneci
Year: 2026
Published in: arXiv preprint arXiv:2601.08634, 2026 - arxiv.org
Institution: Munich Center for Machine Learning, LMU Munich, Technical University of Munich
Research Area: Artificial Intelligence, AI Ethics, AI Alignment, Political Science, Computational Social Science
Discipline: Computer Science, Natural Language Processing (NLP)
This paper examines how the political outputs of large language models (LLMs) shift when the models are explicitly primed with different moral values. Rather than simply assigning personas (e.g., “pretend to be liberal”), the authors condition models to endorse or reject specific moral values (e.g., utilitarianism, fairness, authority) and then measure how those moral primes move the models’ positions in...
DOI: https://doi.org/10.48550/arXiv.2601.08634
-
Authors: M Steyvers, H Tejeda, A Kumar, C Belem
Year: 2025
Published in: Nature Machine Intelligence, 2025 - nature.com
Institution: University of California Irvine
Research Area: Computational Linguistics, Computational Social Science, AI Ethics, Trust in AI
Discipline: Computational Social Science
LLMs often lead users to overestimate response accuracy, especially with longer explanations; adjusting explanation style to align with the model's internal confidence narrows the calibration and discrimination gaps, supporting trust in AI-assisted decision making.
Methods: Conducted experiments using multiple-choice and short-answer questions to study user confidence versus model-stated confidence; varied explanation length and alignment with model internal confidence.
Key Findings: Calibration gap (human vs. model confidence), discrimination gap (ability to distinguish correct vs. incorrect answers), and effects of explanation style and length on user trust.
Citations: 100
-
Authors: T Greene, G Shmueli, S Ray
Year: 2025
Published in: Journal of the Association for Information Systems, 2023 - aisel.aisnet.org
Institution: National Tsing Hua University, Copenhagen Business School
Research Area: Information Systems Ethics, Reinforcement Learning for Personalization
Discipline: Information Systems
The paper examines the ethical risks of reinforcement learning-based personalization and proposes three research directions for IS scholars to address its societal implications and inadequacies in existing regulations.
Methods: The study presents a conceptual analysis of emergent features and societal risks associated with reinforcement learning-based personalization and proposes research directions.
Key Findings: Potential harms of reinforcement learning-based personalization, such as reduced autonomy, social and political destabilization, and mass surveillance, alongside the limitations of current data protection laws.
DOI: https://aisel.aisnet.org/jais/vol24/iss6/6
Citations: 33
-
Authors: S Shekar, P Pataranutaporn, C Sarabu, GA Cecchi
Year: 2025
Published in: NEJM AI, 2025 - ai.nejm.org
Institution: MIT Media Lab, IBM Research, Stanford University, Massachusetts Institute of Technology
Research Area: AI Ethics, Healthcare, Patient Trust, Medical Misinformation
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI), AI Ethics
This paper reports a study by MIT researchers showing that patients trust AI-generated medical advice even when that advice is incorrect, raising concerns about misinformation in healthcare.
Citations: 19
-
Authors: S Lodoen, A Orchard
Year: 2025
Published in: arXiv preprint arXiv:2505.09576, 2025 - arxiv.org
Institution: Embry–Riddle Aeronautical University, University of Waterloo
Research Area: Reinforcement Learning from Human Feedback (RLHF), Procedural Rhetoric, LLM Persuasion, Ethics
Discipline: Artificial Intelligence, AI Ethics, Social Science
The paper uses procedural rhetoric to analyze how RLHF reshapes ethical, social, and rhetorical dimensions of generative AI interactions, raising concerns about biases, hegemonic language, and human relationships.
Methods: The study conducts a theoretical and rhetorical analysis based on Ian Bogost's concept of procedural rhetoric, examining how RLHF mechanisms influence language conventions, information practices, and social expectations.
Key Findings: Ethical and rhetorical implications of RLHF-enhanced LLMs on language usage, information seeking, and interpersonal dynamics.
DOI: https://doi.org/10.48550/arXiv.2505.09576
Citations: 3
-
Authors: G Riva, BK Wiederhold, P Cipresso
Year: 2025
Published in: Cyberpsychology, Behavior, and Social Networking, 2025 - liebertpub.com
Institution: Università Cattolica del Sacro Cuore, University of Genova, Università degli Studi di Milano, Università di Catania
Research Area: AI Ethics, Social and Psychological Dimensions of Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence Ethics, Psychology, Sociology
The paper addresses the psychological, social, and ethical challenges of integrating AI into daily life and emphasizes the need to design AI systems that uphold human values and well-being.
Methods: The paper conducts an interdisciplinary review of existing research and literature to analyze the psychological, social, and ethical dimensions of AI deployment.
Key Findings: The impact of AI on human behavior, decision-making, and societal values.
DOI: https://doi.org/10.1089/cyber.2025.0202
Citations: 3
-
Authors: J van Grunsven, N Jacobs, BA Kamphorst, M Honauer
Year: 2025
Published in: ACM Journal on Responsible Computing, 2025 - dl.acm.org
Institution: University of Texas, Microsoft Research, Google DeepMind, Google, University of Washington, World Economic Forum
Research Area: Ethics and Governance of Computing Research, Responsible Computing, Social Science Research, Artificial Intelligence
Discipline: Ethics, Governance of Computing Research, AI Ethics
The paper emphasizes the importance of accounting for human vulnerability in the design and analysis of digital technologies, proposing concepts like 'Intimate Computing' to empower individuals in managing their technology-mediated vulnerabilities.
Methods: The study reviews and synthesizes existing literature and frameworks addressing vulnerability in human-technology interactions, including concepts like 'Intimate Computing' and 'Person-Machine Teaming'.
Key Findings: Human vulnerability in the context of digitally mediated interactions and the role of computing frameworks in addressing it.
Citations: 2
-
Authors: S Kwon, NL Kim
Year: 2025
Published in: International Textile and Apparel ..., 2025 - iastatedigitalpress.com
Institution: University of Minnesota
Research Area: Social Media Advertising, Consumer Perception, Information Collection Ethics in Marketing, Social Science.
Discipline: Social Science, Marketing
Consumers are more willing to disclose personal information in social media advertising when they perceive exchanged benefits, such as monetary rewards and personalized recommendations, outweigh the risks; the method of information collection (overt vs. covert) does not significantly affect this decision.
Methods: An online survey was conducted among U.S. Instagram users to assess attitudes toward benefit-risk trade-offs in personal data disclosure for advertising purposes.
Key Findings: Willingness to disclose personal information, click-through intentions, and purchase intentions based on perceived benefits and risks in social media advertisements.
DOI: https://doi.org/10.31274/itaa.18830
Citations: 1
Sample Size: 199
-
Authors: W van Zoonen, ME von Bonsdorff
Year: 2025
Published in: Human ..., 2025 - journals.sagepub.com
Institution: Wageningen University & Research, University of Twente
Research Area: Organizational Behavior, Human Resources, Technology and Ethics in the Workplace
Discipline: Social Science
The study shows that algorithmic surveillance undermines trust and fairness and increases privacy concerns among crowdworkers, shaping their compliance, alteration, or resistance behaviors, with perceived decontextualization intensifying these dynamics.
Methods: Three-wave survey data analysis of European online crowdworkers, analyzed through socio-technical systems theory and micro-level legitimacy frameworks.
Key Findings: The effects of algorithmic surveillance on trust, privacy concerns, fairness, and workers' compliance, alteration, or resistance, with a focus on the moderating role of perceived decontextualization.
Sample Size: 435
-
Authors: Liudmila Zavolokina, Kilian Sprenkamp, Zoya Katashinskaya, Daniel Gordon Jones
Year: 2025
Published in: arXiv
Institution: University of Zurich
Research Area: AI Ethics, AI Bias, News Literacy, Critical Thinking, Computational Social Science
Discipline: Computational Social Science
The study explores leveraging inherent biases in AI to enhance critical thinking in news consumption, proposing strategies such as bias awareness, personalization, and gradual introduction of diverse perspectives.
Methods: Qualitative user study investigating user responses to personalized AI-driven propaganda detection tools.
Key Findings: The effectiveness of AI bias-based strategies in improving critical thinking and news readers’ engagement with diverse perspectives.
-
Authors: M Reis, F Reis, W Kunde
Year: 2024
Published in: Nature Medicine, 2024 - nature.com
Institution: University of Cambridge, Julius Maximilians Universität
Research Area: AI in Healthcare, Medical Ethics, Cognitive Psychology, Human-Computer Interaction (HCI) in Medicine
Discipline: AI in Healthcare, Medical Ethics, Cognitive Psychology
The study found that medical advice labeled as being sourced from AI (or AI supervised by humans) is perceived as less reliable and empathetic compared to advice labeled as originating solely from a human physician, resulting in reduced willingness to follow such advice.
Methods: Two preregistered studies were conducted where participants were presented with identical medical advice scenarios but with manipulated labels for the advice source ('AI', 'human physician', 'human+AI').
Key Findings: Participants' perceptions of reliability, empathy, and willingness to follow medical advice based on the perceived source.
Citations: 78
Sample Size: 2280
-
Authors: L Lanz, R Briker, FH Gerpott
Year: 2024
Published in: Journal of Business Ethics, 2024 - Springer
Institution: University of Lausanne, University of Neuchâtel, University of Bern
Research Area: AI Ethics, Organizational Behavior, Supervisory Influence in the Workplace
Discipline: Business Ethics, Organizational Behavior, Artificial Intelligence Ethics
Employees are less likely to adhere to unethical instructions from AI supervisors compared to human supervisors, partly due to perceived differences in 'mind' and individual characteristics like compliance tendency and age.
Methods: The study employed four experiments using causal forest and transformer-based machine learning algorithms, as well as pre-registered experimental manipulations to evaluate employee behavior towards unethical instructions from AI and human supervisors.
Key Findings: Adherence to unethical instructions from AI versus human supervisors; mediating role of perceived mind and moderating factors like compliance tendency and age.
DOI: https://doi.org/10.1007/s10551-023-05393-1
Citations: 72
Sample Size: 1701
-
Authors: HR Kirk, I Gabriel, C Summerfield, B Vidgen
Year: 2024
Published in: Humanities and Social Sciences Communications, 2025 - nature.com
Institution: Oxford Internet Institute, University of Oxford
Research Area: Socioaffective Alignment in Human-AI Relationships, AI Ethics, Behavioral Science
Discipline: Artificial Intelligence, Behavioral Science
The paper emphasizes the need for socioaffective alignment in human-AI relationships to ensure AI systems support human psychological needs rather than exploit them, as interactions with AI transition from transactional to sustained engagement.
Methods: Conceptual analysis of socioaffective dynamics in human-AI interactions, framed through psychological theories and principles.
Key Findings: Exploration of how AI systems impact socioaffective relationships, psychological needs, autonomy, companionship, and human well-being.
DOI: https://doi.org/10.1057/s41599-025-04532-5
Citations: 59
-
Authors: T Haesevoets, B Verschuere, R Van Severen
Year: 2024
Published in: Government Information Quarterly, 2024 - Elsevier
Institution: Ghent University, KU Leuven
Research Area: Public Sector AI, Citizen Perception, AI Ethics, Transparency
Discipline: Political Science, Public Administration
Citizens in the UK prefer AI to play a supporting role in public sector decisions rather than making decisions autonomously, with greater acceptance in contexts that are less ideologically charged.
Methods: Three studies surveying UK respondents on their perceptions of AI involvement in public sector decision-making.
Key Findings: Perception of AI's role in decision-making, its legitimacy compared to human decision-makers, and suitability for various types of decisions.
DOI: https://doi.org/10.1016/j.giq.2023.101906
Citations: 54
-
Authors: T Eloundou, A Beutel, DG Robinson
Year: 2024
Published in: arXiv preprint arXiv:2410.19803, 2024 - arxiv.org
Institution: OpenAI, Google DeepMind, Google, University of Oxford
Research Area: Fairness in LLM, AI Bias, AI Ethics
Discipline: Artificial Intelligence, Social Science
The paper introduces a counterfactual approach to evaluate 'first-person fairness' in chatbots, demonstrating that reinforcement learning can mitigate biases based on demographics across extensive chatbot interactions.
Methods: The study uses a Language Model as a Research Assistant (LMRA) to quantitatively and qualitatively assess biases based on demographics across millions of chatbot interactions, covering 66 tasks in 9 domains and involving two genders and four races. Bias evaluations are corroborated by independent...
Key Findings: Demographic biases in chatbot responses, including harmful stereotypes and response differences by gender and race, across diverse tasks and domains.
DOI: https://doi.org/10.48550/arXiv.2410.19803
Citations: 33
Sample Size: 6000000
-
Authors: S Du, MT Babalola, P D'cruz, E Dóci
Year: 2024
Published in: Journal of Business ..., 2024 - Springer
Institution: Nottingham University Business School, University of Reading, Oxford Brookes University, University of Portsmouth
Research Area: Crowdsourcing Ethics, Social Sciences, Organizational Behavior
Discipline: Social Science
The paper explores the ethical, societal, and global implications of using crowdsourcing platforms for research, emphasizing the need for fair compensation, transparency, and consideration of global disparities between the Global North and South.
Methods: The paper provides a conceptual analysis and critique of crowdsourcing research practices, focusing on ethical and societal considerations.
Key Findings: Ethical, societal, and global implications of crowdsourcing research practices, including data quality, reporting transparency, fair remuneration, and the role of global disparities.
Citations: 24
-
Authors: KR McKee
Year: 2024
Published in: IEEE Transactions on Technology and Society, 2024 - ieeexplore.ieee.org
Institution: University of Queensland
Research Area: AI Ethics, Human-Computer Interaction (HCI), Research Practice Transparency
Discipline: AI Ethics, Human-Computer Interaction (HCI)
The paper identifies ethical and transparency gaps in AI research involving human participants and proposes guidelines to address these issues, drawing from adjacent fields like psychology and human-computer interaction while recognizing unique challenges in AI contexts.
Methods: Analyzed normative practices by reviewing AI research publications and compared them with ethical standards in adjacent fields such as psychology and HCI.
Key Findings: Ethical practices including ethical reviews, informed consent, participant compensation, and contextual considerations specific to AI research.
DOI: https://ieeexplore.ieee.org/abstract/document/10664609/
Citations: 17
-
Authors: LS Treiman, CJ Ho, W Kool
Year: 2024
Published in: Proceedings of the National Academy of Sciences, 2024 - pnas.org
Institution: Massachusetts Institute of Technology, Yale University, Washington University in St. Louis
Research Area: AI Ethics, Behavioral Economics, Decision-Making in AI Systems
Discipline: Artificial Intelligence, Behavioral Science
People alter their behavior when they know their actions will train AI, leading to unintentional habits and biased training data for AI systems.
Methods: Five studies were conducted using the ultimatum game; participants decided on monetary splits proposed by either humans or AI, and some were informed that their decisions would be used to train the AI.
Key Findings: Behavioral changes in participants when training AI, persistence of these changes over time, and implications for AI training bias.
DOI: https://doi.org/10.1073/pnas.2408731121
Citations: 13
-
Authors: V Cheung, M Maier, F Lieder
Year: 2024
Published in: PsyArXiv preprint, 2024 - files.osf.io
Institution: University College London
Research Area: AI Ethics, Moral Decision-Making, Cognitive Biases in LLMs, AI Bias
Discipline: Artificial Intelligence, Ethics
Citations: 11
-
Authors: M Tahaei, D Wilkinson, A Frik, M Muller
Year: 2024
Published in: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2024 - ojs.aaai.org
Institution: University of Cambridge, University of Bath, University of Amsterdam, Amazon
Research Area: AI Ethics, Survey Methods, AI Governance
Discipline: AI Ethics, Governance
DOI: https://doi.org/10.1609/aies.v7i1.31734
Citations: 11