Callback mechanism for Sampling requests #186
Replies: 2 comments 4 replies
A sampling flow is essentially a ping-pong between client and server to produce detailed generations. But a sampling flow will never just 'end' as a result in the MCP server; the response needs to be handled. To make this effective, my current implementation goes like this: (a) each tool has a type (sampling or standard), and (b) to actually make the process useful, I add `_callbacks` to the `_meta` of the sampling request/response. When an MCP server receives a sampling response with a callback, it 'does stuff', which is usually the important LLM magic. This requires stuffing the `_meta` of my sampling response with `_callbacks` and schemas. That does not seem like the correct use of metadata, but we need some way, especially for sampling flows, to execute business logic in the server based on the response. A sketch of the pattern is below.
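Here is a minimal sketch of what I mean, with hypothetical names throughout: `_callbacks`, the `CALLBACKS` registry, and `handle_sampling_response` are all made up for illustration; the only parts taken from MCP itself are the `_meta` field and the general shape of a `sampling/createMessage` request.

```python
# Hypothetical sketch: routing a sampling response back to server-side
# business logic via a callback id smuggled through `_meta`.
# `_callbacks`, CALLBACKS, and handle_sampling_response are invented
# names for this illustration; only `_meta` is defined by MCP.

from typing import Any, Callable

# Registry of server-side callbacks, keyed by an id we put in `_meta`.
CALLBACKS: dict[str, Callable[[str], Any]] = {
    "summarize.v1": lambda text: {"summary": text[:200]},
}

def build_sampling_request(prompt: str, callback_id: str) -> dict[str, Any]:
    """Build a sampling/createMessage request whose `_meta` carries
    the callback id so the response can be routed later."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {"role": "user", "content": {"type": "text", "text": prompt}}
            ],
            "maxTokens": 512,
            "_meta": {"_callbacks": {"onResult": callback_id}},
        },
    }

def handle_sampling_response(request: dict[str, Any],
                             response: dict[str, Any]) -> Any:
    """Look up the callback id from the original request's `_meta`
    and run the matching business logic on the generated text."""
    callback_id = request["params"]["_meta"]["_callbacks"]["onResult"]
    text = response["result"]["content"]["text"]
    return CALLBACKS[callback_id](text)
```

The routing information rides along with the request itself, so the server only needs a callback registry rather than ad-hoc per-flow state; the downside is that `_meta` ends up carrying application semantics, which is exactly what feels wrong.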
FWIW, I encountered use cases for similar functionality frequently when using ChatGPT's Python interpreter sandbox, e.g. when doing some string processing and wanting to make use of LLM capabilities per string, from within the Python code; roughly the shape sketched below.
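A hedged sketch of that shape: the `sample` callable here is hypothetical and stands in for whatever issues the sampling request and returns the model's text; nothing in it is a real MCP SDK API.

```python
from typing import Callable

def classify_strings(strings: list[str],
                     sample: Callable[[str], str]) -> dict[str, str]:
    """Run one sampling round-trip per input string.

    `sample` is a placeholder (not a real MCP SDK call) for whatever
    sends a sampling/createMessage request to the client and returns
    the generated text.
    """
    labels: dict[str, str] = {}
    for s in strings:
        # Each response comes back to the server and still has to be
        # routed to business logic; this is the gap a callback
        # mechanism would fill.
        labels[s] = sample(f"Classify the sentiment of: {s!r}")
    return labels
```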
Discussion Topic
Should there be a mechanism for handling callbacks on Sampling flows?