
Conversation

@ihrpr
Contributor

@ihrpr ihrpr commented Apr 25, 2025

While implementing Streamable HTTP transport in the TypeScript and now Python SDKs, I haven't seen any compelling use cases for batching.

The main use case we initially considered was allowing notifications (e.g., logging) when a tool responds with JSON instead of SSE. However, the current specification doesn't support mixing responses and notifications (because JSON-RPC doesn't), and even if it were supported, delayed notifications bundled with the final response offer limited value compared to real-time updates over SSE.

Another use case is parallel tool calling, which is quite straightforward with horizontal scaling, especially in stateless mode.

This PR proposes simplifying the spec by removing batching support in the next version.

Member

@dsp-ant dsp-ant left a comment


Thank you

@atesgoral
Contributor

atesgoral commented Jun 16, 2025

Coming very late to this, after seeing that batch support is being removed in the latest spec.

the current specification doesn't support mixing responses and notifications (because JSON-RPC doesn't)

I don't think this is true. JSON-RPC allows mixed use of responses and notifications, by simply excluding the notifications from the response array.

6 Batch

To send several Request objects at the same time, the Client MAY send an Array filled with Request objects.

The Server should respond with an Array containing the corresponding Response objects, after all of the batch
Request objects have been processed. A Response object SHOULD exist for each Request object, except that there
SHOULD NOT be any Response objects for notifications. The Server MAY process a batch rpc call as a set of
concurrent tasks, processing them in any order and with any width of parallelism.

The Response objects being returned from a batch call MAY be returned in any order within the Array. The Client
SHOULD match contexts between the set of Request objects and the resulting set of Response objects based on the id
member within each Object.
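The batch semantics quoted above can be made concrete with a short sketch. This is a minimal, hypothetical JSON-RPC 2.0 batch dispatcher (not taken from any MCP SDK); the method names are made up for illustration. The point it demonstrates is that a batch may mix requests and notifications, and notifications (messages without an `id`) are simply excluded from the response Array:

```python
# Minimal sketch of JSON-RPC 2.0 batch handling. A notification is a
# message with no "id" member; it is processed but yields no Response
# object, so the response Array contains entries only for the requests.

def handle_single(msg, methods):
    """Process one message; return a Response object, or None for notifications."""
    result = methods[msg["method"]](*msg.get("params", []))
    if "id" not in msg:  # notification: no Response object
        return None
    return {"jsonrpc": "2.0", "result": result, "id": msg["id"]}

def handle_batch(batch, methods):
    """Return an Array of Response objects, omitting notifications."""
    responses = [handle_single(m, methods) for m in batch]
    return [r for r in responses if r is not None]

# Hypothetical methods for illustration only.
methods = {
    "add": lambda a, b: a + b,
    "log": lambda text: None,  # stand-in for a logging notification handler
}

batch = [
    {"jsonrpc": "2.0", "method": "add", "params": [1, 2], "id": 1},
    {"jsonrpc": "2.0", "method": "log", "params": ["hello"]},  # notification
    {"jsonrpc": "2.0", "method": "add", "params": [3, 4], "id": 2},
]

# Two Response objects come back (ids 1 and 2); the notification yields none.
print(handle_batch(batch, methods))
```

This is the behavior atesgoral is pointing at: mixing requests and notifications in one batch is well-defined in JSON-RPC 2.0, because the response Array just omits entries for the notifications.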

@mkouba

mkouba commented Jun 19, 2025

While implementing Streamable HTTP transport in the TypeScript and now Python SDKs, I haven't seen any compelling use cases for batching.

To be honest, I don't understand how implementing a transport can tell you anything about compelling use cases. It's the users who can tell you.

Another use case is parallel tool calling, which is quite straightforward with horizontal scaling, especially in stateless mode.

What do you mean by the "stateless mode"? AFAIK the spec does not mention it anywhere.

This PR proposes simplifying the spec by removing batching support in the next version.

This change is also not backward compatible which means that an MCP client that supports 2025-03-26 might not work with an MCP server that only supports 2025-06-18.

In general, I would expect a broader discussion before such a change is merged 🤷.

@green-coder

Is there anyone opposing breaking backward compatibility in the JS world?

This change means more work for people implementing compatible MCP libraries, rather than less.

@mkouba

mkouba commented Jun 23, 2025

Is there anyone opposing breaking backward compatibility in the JS world?

I'm sorry but I don't understand this comment ;-)

This change means more work for people implementing compatible MCP libraries, rather than less.

Exactly.

@hesreallyhim
Contributor

@ihrpr @dsp-ant sorry I'm very late to the game on this, but I didn't notice it until reviewing the update notes for the new spec...

I'm not fully confident that this change is compliant with the JSON-RPC 2.0 spec. There are a few places I could point to, but the most direct argument is probably:

To send several Request objects at the same time, the Client MAY send an Array filled with Request objects.

When a rpc call is made, the Server MUST reply with a Response, except for in the case of Notifications.

Together, this entails that batches are valid rpc requests, and servers MUST reply with a response.

And the MCP Spec now says:

Messages are individual JSON-RPC requests, notifications, or responses.

and all mention of batches has been removed, including from the Schema.

Elsewhere in the JSON-RPC 2.0 spec, it states that servers SHOULD respond to batch requests with such-and-such. Part of the meaning of "SHOULD" is that

there may exist valid reasons in particular circumstances to ignore a
particular item, but the full implications must be understood and
carefully weighed before choosing a different course

In particular, the implication is that a SHOULD must be followed unless there's a good reason not to, and the reason has to do with security and/or interoperability, and not preference.

Based on this PR's message, it seems to me that the justification was mainly the lack of "compelling use cases for batching".

I think this is a deviation from JSON-RPC 2.0 requirements. That's fine as far as it goes, and it's not the main purpose of this comment, but in my opinion the Specification should be more explicit about the deviation than merely changing the language to omit batches and noting it in the changelog. For instance, in the Base Protocol overview, the Spec explicitly states:

Unlike base JSON-RPC, the ID MUST NOT be null.

Similarly, I think an explicit statement like the following would be appropriate:

Unlike base JSON-RPC, the request MUST be a single RPC request or notification, and MUST NOT be a batch request.

Also, I'm not sure what the current Spec instructs servers to do in response to a batch request if one happens to be sent in.

I'm also not clear whether the Spec allows servers to support batch requests for backwards-compatibility purposes or not. Given that batches are no longer valid types within the Schema, it doesn't seem allowed. But the question is: is this change about removing support for batches, or about removing the requirement to support batches?

I've opened a PR to make this change more explicit, as I believe that given the foundational status of JSON-RPC with respect to MCP, deviations from JSON-RPC should be explicit in the Spec (and not merely in the Key Changes). Not doing so is liable to lead to confusion or breakage in tools that are built in strict adherence to JSON-RPC.

PR here: #828

@atesgoral
Contributor

atesgoral commented Jun 24, 2025

Reposting a sentiment I shared in another medium:

Thinking about this more, an outstanding concern is that not supporting batching breaks away from JSON-RPC, and any SDK using a JSON-RPC library under the hood might run into problems turning off batching. The Ruby SDK uses the json_rpc_handler gem, which is spec-compliant and therefore supports batching. It's the first thing the transport hits before even getting into MCP concerns, and json_rpc_handler has no configuration for turning off batching, because why should it? It's just a JSON-RPC gem. I assume other SDKs will run into similar issues, unless their JSON-RPC implementations are tightly coupled with the SDKs themselves, and potentially not fully JSON-RPC-spec-compliant. I wish there had been a wider discussion before the decision to remove batching was made.

For the reasons stated in the PR description:

delayed notifications bundled with the final response offer limited value compared to real-time updates over SSE

A feature having limited value shouldn't be a reason to ban it, especially when banning it is not a net deletion of a requirement, but an addition of an artificial requirement that creates more work and mental overhead for SDK implementors and users (of having to worry about this special exception over base JSON-RPC).

quite straightforward with horizontal scaling

This is true!
