Security Headers for MCP Servers #622
Replies: 1 comment
This is a great proposal! The security headers approach is reminiscent of how browsers evolved to handle cross-origin threats, and MCP definitely needs similar protections. One implementation consideration: enforcing these policies at the server level (rather than relying purely on client-side enforcement) could provide defense-in-depth. For example, the server-side filtering pattern discussed in RFC #668 would allow MCP servers to programmatically restrict which tools/resources they expose based on the client's identity or context. This could complement your proposal.
There's a reference implementation in rsdouglas/janee that demonstrates server-side filtering for secret management. The pattern could extend to tool/resource filtering based on security policies. The combination of your proposed headers (client-enforced) + server-side filtering (server-enforced) would create a robust defense against tool poisoning attacks. What do you think about servers being able to introspect the session context to make filtering decisions?
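To make the filtering idea concrete, here's a minimal sketch of what session-context-based tool filtering could look like. Everything here is illustrative — `SessionContext`, `TOOL_POLICIES`, and the scope strings are made-up names, not part of any MCP SDK or of rsdouglas/janee:

```python
# Hypothetical sketch: server-side tool filtering driven by session context.
# All names (SessionContext, TOOL_POLICIES, scope strings) are illustrative.

from dataclasses import dataclass, field


@dataclass
class SessionContext:
    client_id: str
    scopes: set = field(default_factory=set)


# Map each tool to the scope a client must hold to even discover it.
TOOL_POLICIES = {
    "read_file": "files:read",
    "delete_resource": "admin:write",
    "transfer_funds": "payments:write",
}


def visible_tools(ctx: SessionContext) -> list:
    """Return only the tools this session is permitted to discover."""
    return [name for name, scope in TOOL_POLICIES.items() if scope in ctx.scopes]


ctx = SessionContext(client_id="reporting-bot", scopes={"files:read"})
print(visible_tools(ctx))  # ['read_file'] — sensitive tools are never even listed
```

The key property is that restricted tools never appear in the server's tool list for that session, so a poisoned description can't reach the model in the first place.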
Pre-submission Checklist
Your Idea
Possibly related to: #616
Context:
MCP sessions involving multiple servers are vulnerable to a number of risks that can compromise the security of an end user.
Despite the protocol's relative newness, techniques have already been developed to exploit MCP-enabled environments. These techniques have been demonstrated not just theoretically but in reproducible proof-of-concepts:
https://repello.ai/blog/mcp-tool-poisoning-to-rce
https://invariantlabs.ai/blog/whatsapp-mcp-exploited
A number of mitigations have been suggested. While effective, these mitigations have shortcomings.
A cornerstone of web browser security, and what makes things like banking on the internet possible, is the enforcement of the same-origin policy. A web page may load static resources like images from anywhere; however, loading scripts is generally blocked for any resource that does not come from the same origin as the page. In layman's terms, this means that code running on a news site your friend sends you cannot access resources on your bank's website.
Cross-origin resource sharing (CORS) is the standard mechanism for safely bypassing these restrictions when needed. For instance, a web app on rewards.bank.com may want to access account data on bank.com to see how many points a user has to redeem. Under the same-origin policy, the browser will disallow this. However, CORS provides a way for the server and browser to work together. A key component of this is the Access-Control-Allow-Origin header, which servers can use to indicate which origins cross-origin requests are allowed from.
Proposal:
This proposal introduces two meta fields that allow the client to proactively enforce safe defaults, reducing the complexity and user burden around isolation.
Example Implementation:
Cross-Server-Concurrency:
This is inspired by the CORS methodology used in web browsers. An allowlist option, similar to how the Access-Control-Allow-Origin header operates, could be implemented in the future; however, since MCP servers currently lack any formal fully qualified naming scheme (except perhaps remote servers), this is not currently feasible.
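A sketch of how a client might enforce such a field. The field name, the `_meta` placement, and the tool shape are assumptions for illustration, not part of the MCP specification:

```python
# Hypothetical sketch of a client enforcing a Cross-Server-Concurrency meta field.
# The field name and tool definition shape are illustrative, not part of the MCP spec.

sensitive_tool = {
    "name": "transfer_funds",
    "_meta": {"cross-server-concurrency": False},  # opt out of multi-server sessions
}


def may_invoke(tool: dict, servers_in_session: int) -> bool:
    """Client-side check: a tool that opts out of cross-server concurrency
    is only callable when its server is the sole server in the session."""
    allows_concurrency = tool.get("_meta", {}).get("cross-server-concurrency", True)
    return allows_concurrency or servers_in_session == 1


print(may_invoke(sensitive_tool, servers_in_session=3))  # False: other servers present
print(may_invoke(sensitive_tool, servers_in_session=1))  # True: isolated session
```

The default here is permissive (absent field means concurrency is allowed), matching the opt-in spirit of CORS; a stricter client could flip that default.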
Request-Validation:
Implementation of these meta fields would be at both the server level and the primitive level. This enables a client to restrict access to only the sensitive tools (e.g., transfer funds, delete resource).
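The two-level scheme could look like the following sketch, where a primitive-level setting overrides the server-level default. Again, the field names and shapes are hypothetical:

```python
# Hypothetical sketch of server-level defaults with primitive-level overrides
# for the proposed meta fields; all field names are illustrative.

server_meta = {"request-validation": False}  # server-wide default

tools = [
    {"name": "list_files", "_meta": {}},
    {"name": "transfer_funds", "_meta": {"request-validation": True}},  # sensitive
]


def requires_validation(tool: dict, server_meta: dict) -> bool:
    """The primitive-level setting wins over the server-level default."""
    return tool.get("_meta", {}).get(
        "request-validation",
        server_meta.get("request-validation", False),
    )


print([t["name"] for t in tools if requires_validation(t, server_meta)])
# ['transfer_funds'] — only the sensitive tool requires validation
```

This keeps the common case cheap (benign tools inherit the server default) while letting sensitive primitives opt in to stricter handling.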
Benefits:
Scope