Authorization for LLM Tool Schemas: Formal Model with Noninterference Guarantees [pdf] (raw.githubusercontent.com)

🤖 AI Summary
A recent research paper introduces a formal model for schema-level authorization in the Model Context Protocol (MCP), the interface through which large language models (LLMs) invoke server-side tools. The key innovation is a projection function that tailors each tool's JSON schema to the caller's authorization level, so unauthorized users never see gated capabilities in the first place. This closes an information-disclosure gap in existing systems, where advertising unauthorized tool elements can lead to unintended invocations or leak sensitive information through amplified error messages. The work applies typestate principles, familiar from the Rust programming language, to enforce a noninterference property: unauthorized users cannot even observe that gated capabilities exist, a strong defense against information leakage in multi-tenant deployments. The paper claims the approach incurs zero runtime overhead while securing capability-gated operations. Reference implementations are provided in five programming languages, demonstrating the model's broad applicability and uniform enforcement across systems.
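To make the projection idea concrete, here is a minimal sketch in TypeScript. All names (`ToolSchema`, `requiredScope`, `project`) are illustrative assumptions, not the paper's actual API; the point is only the shape of the mechanism: gated elements are dropped before serialization, so an unauthorized caller's view is indistinguishable from a deployment where the capability does not exist.

```typescript
type Scope = string;

interface PropertySpec {
  type: string;
  description?: string;
  // Hypothetical annotation: the scope required to see this field.
  requiredScope?: Scope;
}

interface ToolSchema {
  name: string;
  requiredScope?: Scope;
  properties: Record<string, PropertySpec>;
}

// Projection function: returns the schema as visible to `scopes`,
// or null if the whole tool is gated. Gated properties are silently
// removed rather than rejected, so nothing about their existence
// leaks to unauthorized callers (the noninterference intuition).
function project(schema: ToolSchema, scopes: Set<Scope>): ToolSchema | null {
  if (schema.requiredScope && !scopes.has(schema.requiredScope)) {
    return null; // tool itself is invisible at this authorization level
  }
  const visible: Record<string, PropertySpec> = {};
  for (const [key, spec] of Object.entries(schema.properties)) {
    if (!spec.requiredScope || scopes.has(spec.requiredScope)) {
      const copy = { ...spec };
      delete copy.requiredScope; // strip the annotation from client view
      visible[key] = copy;
    }
  }
  return { name: schema.name, properties: visible };
}
```

For example, a `deploy` tool whose `force` parameter carries `requiredScope: "admin"` would list only its public parameters when projected for an unprivileged caller, while an admin caller sees the full schema.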