Authors: K Zhou
Year: 2025
Published via: search.proquest.com
Institution: Stanford University
Research Area: Human-Centered Natural Language Interfaces (NLI)
Discipline: Artificial Intelligence
Overview: The research examines how to design natural language interfaces for AI safely by identifying safety risks, proposing a harm-focused evaluation framework, and advocating broader consideration of underrepresented user needs.
Methods: The study combines a review of LLM safety risks, the development of a harm-based evaluation framework, and a conceptual argument for broadening NLP research toward underrepresented user needs.
Key Findings: The work identifies safety risks in LLM-mediated communication, documents behavioral effects of human-LM interaction, and highlights gaps in how NLP research addresses diverse user needs.