Browse 13 peer-reviewed papers from the USA spanning Human-Computer Interaction (HCI) and Data Quality (2021–2025). Research powered by Prolific's high-quality participant data.
This page lists 13 peer-reviewed papers from researchers in the USA in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: N Grgić-Hlača, G Lima, A Weller
Year: 2022
Published in: Proceedings of the 2nd ..., 2022 - dl.acm.org
Institution: Max Planck Institute, École Polytechnique Fédérale de Lausanne, University of Cambridge, The Alan Turing Institute
Research Area: Algorithmic Fairness, Human Perception, Diversity in AI Decision-Making
Discipline: Social Science, Artificial Intelligence
This study examines how sociodemographic factors and personal experience influence perceptions of fairness in algorithmic decision-making, particularly in bail decisions, highlighting the importance of diverse perspectives in regulatory oversight.
Methods: Explored perceptions of procedural fairness using surveys to assess the influence of demographics and personal experiences.
Key Findings: Demographics (age, education, gender, race, political views) and personal experience shape perceptions of the fairness of algorithmic feature use in bail decisions.
DOI: https://doi.org/10.1145/3551624.3555306
Citations: 62
-
Authors: L Hölbling, S Maier, S Feuerriegel
Year: 2025
Published in: Scientific Reports, 2025 - nature.com
Institution: University of Lausanne, University of Zurich, University of St. Gallen
Research Area: LLMs in Persuasion, Meta-Analysis, Artificial Intelligence, Human-Computer Interaction (HCI)
Discipline: Artificial Intelligence
Large language models (LLMs) demonstrate similar persuasive performance to humans overall, but their effectiveness varies widely based on contextual factors such as model type, conversation design, and domain.
Methods: Systematic review and meta-analysis using Hedges' g to compute standardized effect sizes, with exploratory moderator analyses and publication bias checks (Egger's test, trim-and-fill analysis).
Key Findings: LLMs matched human persuasive performance overall, with effectiveness varying by model type, conversation design, and domain.
Sample Size: 17422
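The standardized effect size this meta-analysis relies on, Hedges' g, can be computed directly from per-group summary statistics. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp             # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # correction factor J for small samples
    return j * d
```

The correction factor J shrinks Cohen's d slightly, removing its upward bias in small samples; for large n it approaches 1 and g converges to d.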
-
Authors: Pooja S. B. Rao, Sanja Šćepanović, Ke Zhou, Edyta Paulina Bogucka, D Quercia
Year: 2025
Published in: arXiv
Institution: Nokia Bell Labs, University of Lausanne
Research Area: AI Risk Management, Model Risk Reporting, RAG Pipeline, RAG
Discipline: Artificial Intelligence
RiskRAG improves AI model risk reporting by offering pre-populated, contextualized risk reports that are preferred by developers, designers, and media professionals over standard model cards.
Methods: Developed a Retrieval Augmented Generation system based on five design requirements co-created with 16 developers, using a dataset of 450K model cards and 600 real-world incidents. Evaluated RiskRAG in preliminary and final studies with a total of 125 participants.
Key Findings: RiskRAG's pre-populated, contextualized risk reports were preferred over standard model cards and improved risk reporting and decision-making.
Sample Size: 125
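RiskRAG's actual pipeline is not reproduced here; as a rough illustration of the retrieval step in a Retrieval Augmented Generation system, the sketch below ranks incident descriptions by bag-of-words cosine similarity to a model card's text. All names and data are hypothetical stand-ins, not the paper's dataset:

```python
import math
from collections import Counter

def score(query, doc):
    """Cosine similarity over bag-of-words counts (toy retriever)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query, corpus, k=2):
    """Return the top-k incident descriptions most similar to the query text."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

# Hypothetical incident corpus; a real system would retrieve from
# a large incident database, then pass the hits to an LLM to draft the report.
incidents = [
    "facial recognition model misidentified individuals in low light",
    "chatbot produced toxic responses to adversarial prompts",
    "credit scoring model showed disparate impact across groups",
]
top = retrieve("image model misidentified faces of individuals", incidents, k=1)
```

Production systems typically replace the bag-of-words scorer with dense embeddings, but the retrieve-then-generate structure is the same.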
-
Authors: L Lanz, R Briker, FH Gerpott
Year: 2024
Published in: Journal of Business Ethics, 2024 - Springer
Institution: University of Lausanne, University of Neuchâtel, University of Bern
Research Area: AI Ethics, Organizational Behavior, Supervisory Influence in the Workplace
Discipline: Business Ethics, Organizational Behavior, Artificial Intelligence Ethics
Employees are less likely to adhere to unethical instructions from AI supervisors compared to human supervisors, partly due to perceived differences in 'mind' and individual characteristics like compliance tendency and age.
Methods: Four experiments with pre-registered manipulations compared employee responses to unethical instructions from AI versus human supervisors; causal forest and transformer-based machine learning algorithms were used in the analysis.
Key Findings: Employees adhered less to unethical instructions from AI than from human supervisors; perceived mind mediated this effect, with compliance tendency and age as moderators.
DOI: https://doi.org/10.1007/s10551-023-05393-1
Citations: 72
Sample Size: 1701
-
Authors: B Lebrun, S Temtsin, A Vonasch
Year: 2024
Published in: Frontiers in Robotics and ..., 2024 - frontiersin.org
Institution: University of Lausanne, University of California Berkeley, University of Massachusetts Amherst, Arizona State University
Research Area: AI in Social Science Research, Survey Methodology, Data Quality
Discipline: Artificial Intelligence
The study examines the integrity of online questionnaire responses and concludes that humans can identify AI-generated text with 76% accuracy, but current AI detection systems are ineffective, raising concerns about data quality in online surveys.
Methods: Human participants and automatic AI detection systems were tested on their ability to differentiate AI-generated text from human-generated text in the context of online questionnaires.
Key Findings: Human participants identified whether questionnaire responses were AI- or human-generated with 76% accuracy, while automatic AI detection tools performed poorly.
DOI: https://doi.org/10.3389/frobt.2023.1277635
Citations: 26
-
Authors: J Xu, L Han, S Sadiq, G Demartini
Year: 2024
Published in: Proceedings of the International ..., 2024 - ojs.aaai.org
Institution: University of Lausanne, EPFL, University of Southampton, University of Queensland
Research Area: Crowdsourcing, Misinformation Assessment, LLM
Discipline: Artificial Intelligence
DOI: https://doi.org/10.1609/icwsm.v18i1.31417
Citations: 6
-
Authors: A Welivita, P Pu
Year: 2024
Published in: arXiv
Institution: École Polytechnique Fédérale de Lausanne
Research Area: LLM, Empathy, Human-AI Interaction
Discipline: Artificial Intelligence, Human-Computer Interaction (HCI), Social Science
-
Authors: Mete Ismayilzada, Claire Stevenson, Lonneke van der Plas
Year: 2024
Published in: arXiv
Institution: Idiap Research Institute, University of Amsterdam, Università della Svizzera Italiana, École Polytechnique Fédérale de Lausanne
Research Area: Creative Story Generation, LLM Evaluation, Computational Creativity
Discipline: Artificial Intelligence, Natural Language Processing, Computational Creativity
-
Authors: K Uittenhove, S Jeanneret, E Vergauwe
Year: 2023
Published in: Journal of Cognition, 2023 - pmc.ncbi.nlm.nih.gov
Institution: University of Lausanne, University of Geneva, EPFL, University of Neuchâtel
Research Area: Cognitive Psychology, Research Methodology, Behavioral Research Methods, Web-based Behavioral Research
Discipline: Cognitive Research, Psychology
Citations: 83
-
Authors: Nathalie klein Selle, Barak Or, Ine Van der Cruyssen, Bruno Verschuere, Gershon Ben-Shakhar
Year: 2023
Published in: Nature
Institution: Bar-Ilan University, Hebrew University of Jerusalem, University of Amsterdam
Research Area: Concealed Information Test (CIT), Response Conflict, Reaction Time Analysis
Discipline: Psychology
-
Authors: E Peer, D Rothschild, A Gordon, E Damer
Year: 2022
Published in: Behavior Research Methods, 2022 - Springer
Institution: The Hebrew University of Jerusalem, Microsoft Research, Prolific
Research Area: Online Behavioral Research, Data Quality, Research Methods
Discipline: Computational Social Science, Behavioral Research
Citations: 2112
-
Authors: N Grgić-Hlača, C Castelluccia
Year: 2022
Published in: Proceedings of the AAAI ..., 2022 - ojs.aaai.org
Institution: École Polytechnique Fédérale de Lausanne (EPFL), Inria
Research Area: Human-Computer Interaction (HCI), Algorithmic Decision-Making, Human-AI Collaboration
Discipline: Artificial Intelligence
DOI: https://doi.org/10.1609/hcomp.v10i1.21989
Citations: 22
-
Authors: AA Arechar, DG Rand
Year: 2021
Published in: Behavior Research Methods, 2021 - Springer
Institution: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
Research Area: Online Labor Markets, Amazon Mechanical Turk (MTurk), Social Science Research during COVID-19
Discipline: Behavioral Research
Citations: 154