Explore 3 peer-reviewed studies by X Li in Artificial Intelligence and Data Annotation (2022–2025). Discover research powered by Prolific's participant panel.
This page lists 3 peer-reviewed papers authored or co-authored by X Li in the Prolific Citations Library, a curated collection of research powered by high-quality human data from Prolific.
-
Authors: MM Karim, S Khan, DH Van, X Liu, C Wang, Q Qu
Year: 2025
Published in: Future Internet, 2025 - mdpi.com
Institution: Chinese Academy of Sciences, Zhejiang University, South-Central Minzu University
Research Area: Artificial Intelligence, Data Annotation, Multi-Agent Systems
Discipline: Artificial Intelligence
The paper reviews the role of AI agents powered by large language models in addressing challenges in data annotation, focusing on architectures, workflows, real-world applications, and future research directions for improving efficiency, scalability, transparency, and bias mitigation.
Methods: Comprehensive review and analysis of AI agent architectures, workflows, applications, and evaluation methods in data annotation across multiple industries.
Key Findings: LLM-driven agents demonstrate capabilities in reasoning, adaptive learning, and collaborative annotation, with measurable impact on quality assurance, cost, scalability, and bias mitigation.
Citations: 10
-
Authors: S Liu, Z Cai, H Wang, Z Ma, X Li
Year: 2025
Published in: arXiv preprint arXiv:2505.19134, 2025 - arxiv.org
Institution: Meta, Imperial College London
Research Area: Artificial Intelligence, Crowdsourcing, LLM
Discipline: Artificial Intelligence
The paper develops a principal-agent model to incentivize high-quality human annotations using golden questions and identifies criteria for these questions to effectively monitor annotators' performance.
Methods: The authors use a principal-agent model with maximum likelihood estimation (MLE) and hypothesis testing to design an incentive-compatible monitoring system for annotators. Golden questions with high-certainty answers and a format matching normal items are selected and validated experimentally.
Key Findings: Golden questions are effective for incentivizing and monitoring high-quality human annotation of preference data, provided they are high-certainty and indistinguishable in format from ordinary items.
DOI: https://doi.org/10.48550/arXiv.2505.19134
Citations: 1
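The golden-question approach described in the Methods above can be illustrated with a minimal sketch: score an annotator on items whose correct labels are known, take the observed accuracy as the MLE, and run a one-sided binomial test of whether accuracy falls below a required level. The function names, threshold p0, and significance level alpha here are illustrative assumptions, not details from the paper.

```python
import math

def binom_p_value(successes, n, p0):
    """One-sided lower-tail p-value: P(X <= successes) under H0 that
    the annotator's true accuracy is p0, with X ~ Binomial(n, p0)."""
    return sum(math.comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes + 1))

def flag_annotator(golden_answers, annotator_answers, p0=0.9, alpha=0.05):
    """Compare an annotator's responses against known golden answers.

    Returns (mle_accuracy, p_value, flagged), where flagged is True when
    observed accuracy is significantly below the required level p0."""
    n = len(golden_answers)
    correct = sum(g == a for g, a in zip(golden_answers, annotator_answers))
    # The MLE of accuracy under a binomial model is the observed fraction correct.
    mle_accuracy = correct / n
    p_value = binom_p_value(correct, n, p0)
    return mle_accuracy, p_value, p_value < alpha

# Example: 6 of 10 golden questions correct is significantly below 90% accuracy.
acc, p, flagged = flag_annotator([1] * 10, [1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
```

In practice the paper's setting is richer (incentive design, not just detection), but this shows why golden questions must look like normal items: the test is only informative if annotators cannot tell which items are being scored.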
-
Authors: SX Li, R Halabi, R Selvarajan, M Woerner
Year: 2022
Published in: JMIR Formative ..., 2022 - formative.jmir.org
Institution: Massachusetts General Hospital, Harvard Medical School, Boston University, University of Waterloo
Research Area: Digital Health, Remote Research Methods, Recruitment and Retention Studies
Discipline: Digital Health, Research Methodology
Citations: 19