Carnegie Mellon University

SOK: BRIDGING RESEARCH AND PRACTICE IN LLM AGENT SECURITY

<p dir="ltr">Large Language Model (LLM) agents are rapidly transitioning from research prototypes to deployed systems, raising new and urgent security challenges. Unlike static chatbots, LLM agents interact with external tools, data, and services, creating pathways to real-world harm even during early stages of development. Existing guidance on securing agents is fragmented, creating obstacles for developers and organizations looking to build secure systems. To clarify the security landscape, we conduct a systematic review covering academic surveys, grey literature sources, and real-world case studies. We then (i) categorize the known threats to LLM agents and analyze key attack surfaces, (ii) construct a taxonomy of actionable security best practices encompassing the full LLM agent development lifecycle, highlighting gaps in the security landscape, and (iii) evaluate the adoption of these recommendations in practice. Together, these contributions establish a framework for developing comprehensive risk-mitigation strategies. Our synthesis promotes standardization, surfaces gaps in current practice, and establishes a foundation for future work toward secure LLM agents.</p>

History

Date: 2025-11-20

