Authors: L Ma, J Qin, X Xu, Y Tan
Year: 2025
Published in: arXiv preprint arXiv:2509.14436, 2025
Institution: University of North Carolina at Charlotte, University of Science and Technology of China, University of Washington
Research Area: LLM behavior, Algorithmic content preference, Human–AI interaction
Discipline: Computer Science, Information Retrieval, Artificial Intelligence
This paper studies how generative search engines built on large language models (LLMs), such as Google's AI Overviews, select and cite web content. It shows that these engines prefer content that is more predictable and semantically coherent for the model, and that LLM-based content polishing can increase the diversity and usefulness of AI summaries for users.
DOI: https://doi.org/10.48550/arXiv.2509.14436
Authors: R Zhang, C Flathmann, G Musick, B Schelble
Year: 2024
Published in: ACM Transactions on ..., 2024
Institution: North Carolina State University, University of North Carolina at Charlotte, University of Georgia, University of Michigan
Research Area: Explainable AI (XAI), Human-AI Teaming, Human-Computer Interaction (HCI)
Discipline: Robotics, Artificial Intelligence
Explored how AI explanations affect human trust and team effectiveness in human-AI teams, finding that explanations increase trust when the AI disobeys orders but reduce trust when the AI lies, with individual characteristics shaping these perceptions.
Methods: Conducted an online experiment analyzing participant responses to scenarios where AI explained its actions within a teamwork context, comparing trust in AI versus human teammates.
Key Findings: The effect of AI explanations on human trust and team effectiveness varies with teammate identity (human or AI) and participant characteristics (e.g., gender, ethical framework).
DOI: https://doi.org/10.1145/3635474
Citations: 28
Sample Size: 156