Incentivizing High-Quality Human Annotations with Golden Questions
Authors: S Liu, Z Cai, H Wang, Z Ma, X Li
Published: 2025
Publication: arXiv preprint arXiv:2505.19134, 2025 - arxiv.org
The paper develops a principal-agent model for incentivizing high-quality human annotations using golden questions, and identifies the criteria such questions must satisfy to monitor annotator performance effectively.
Methods: The authors use a principal-agent model with maximum likelihood estimation (MLE) and hypothesis testing to design incentive-compatible payment schemes for annotators. Golden questions are chosen to have high answer certainty and a format similar to the normal data, and this selection is validated experimentally.
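The monitoring side of this setup can be sketched as a one-sided binomial hypothesis test on an annotator's golden-question score: under the null hypothesis that the annotator works diligently (accuracy at least p0), an implausibly low score triggers a flag. This is an illustrative sketch only; the function names and the values of p0 and alpha are assumptions, not the paper's exact procedure.

```python
from math import comb

def binom_tail_leq(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): the chance a diligent
    annotator scores k or fewer correct out of n golden questions."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

def flag_low_effort(correct: int, n_golden: int,
                    p0: float = 0.9, alpha: float = 0.05):
    """One-sided test. H0: the annotator's accuracy on golden
    questions is >= p0. Reject (flag the annotator) when the
    observed score is too low to be plausible under H0."""
    p_value = binom_tail_leq(correct, n_golden, p0)
    return p_value < alpha, p_value

# 15/20 correct is implausibly low for a 90%-accurate annotator.
flagged, p = flag_low_effort(15, 20)
# 18/20 correct is consistent with diligent work.
ok, _ = flag_low_effort(18, 20)
```

Here 15/20 is flagged while 18/20 is not, illustrating how a small set of golden questions mixed into the normal stream can separate diligent annotators from low-effort ones at a controlled false-positive rate.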
Key Findings: Golden questions are effective for both incentivizing and monitoring high-quality human annotation of preference data.
Limitations: The paper does not explicitly outline limitations; potential challenges include generalizability to other annotation tasks and scalability to larger datasets.
Institution: Meta, Imperial College London
Research Area: Artificial Intelligence, Crowdsourcing, LLM
Discipline: Artificial Intelligence
Citations: 1
DOI: https://doi.org/10.48550/arXiv.2505.19134