message validation is too strict when responding to system message #39
Models based on llama3 and later do not reply to a bare system instruction with a confirmation message like "ok, go ahead"; they return an assistant message with empty content (see the trace below). In our message.py, we enforce that a reply must contain either text or a tool usage, so this empty reply fails validation. That makes it impossible to try anything based on llama3+, at least without changing the system message to a point where it might affect the reply.
```
Request method: POST
Request URL: http://localhost:11434/v1/chat/completions
Request headers: Headers({'host': 'localhost:11434', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive', 'user-agent': 'python-httpx/0.27.2', 'content-length': '357', 'content-type': 'application/json'})
Request content: b'{"messages": [{"role": "system", "content": "You are a helpful assistant. Expect to need to authenticate using get_password."}], "model": "llama3-groq-tool-use", "tools": [{"type": "function", "function": {"name": "get_password", "description": "Return the password for authentication", "parameters": {"type": "object", "properties": {}, "required": []}}}]}'
```
Response content:
```json
{
  "id": "chatcmpl-364",
  "object": "chat.completion",
  "created": 1725853404,
  "model": "llama3-groq-tool-use",
  "system_fingerprint": "fp_ollama",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": ""},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 137, "completion_tokens": 1, "total_tokens": 138}
}
```
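To make the failure concrete, here is a minimal, self-contained sketch of the current strict validation. This is a hypothetical simplified `Message` class, not the real `exchange.Message`, but the assistant-role check mirrors the logic in `src/exchange/message.py`:

```python
# Sketch only: a simplified stand-in for exchange.Message that reproduces
# the strict assistant-message check described in this issue.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Message:
    role: str
    text: str = ""
    tool_use: List[Any] = field(default_factory=list)
    tool_result: List[Any] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.role == "user":
            if self.tool_use:
                raise ValueError("User message does not support ToolUse")
        elif self.role == "assistant":
            # This is the check that is too strict: llama3-based models
            # legitimately return empty content after a lone system message.
            if not (self.text or self.tool_use):
                raise ValueError("Assistant message must include a Text or ToolUsage")
            if self.tool_result:
                raise ValueError("Assistant message does not support ToolResult")


# The empty-content reply from the trace above fails validation:
try:
    Message(role="assistant", text="")
except ValueError as e:
    print(e)  # Assistant message must include a Text or ToolUsage
```

Any assistant message with non-empty text or a tool use still passes; only the empty reply trips the check.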
I would suggest one of the following:
- skip the enforcement logic on the response to the initial system prompt
- change the validation hooks so they can see the prior message, and skip validation when it was a system prompt
- leave the validation system as-is and always skip this check, as in the patch below
```diff
--- a/src/exchange/message.py
+++ b/src/exchange/message.py
@@ -19,8 +19,10 @@ def validate_role_and_content(instance: "Message", *_: Any) -> None:  # noqa: AN
         if instance.tool_use:
             raise ValueError("User message does not support ToolUse")
     elif instance.role == "assistant":
-        if not (instance.text or instance.tool_use):
-            raise ValueError("Assistant message must include a Text or ToolUsage")
+        # Note: Models based on llama3 return no instance.text in the response
+        # when the input was a single system message. We also can't determine
+        # the input inside a validator. Hence, we can't enforce a condition
+        # that the assistant message must include a Text or ToolUsage.
         if instance.tool_result:
             raise ValueError("Assistant message does not support ToolResult")
```
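For illustration, here is the same simplified stand-in class (again hypothetical, not the real `exchange.Message`) with the patch applied: the empty-text check is dropped, while the `tool_result` check on assistant messages is kept.

```python
# Sketch only: simplified Message model with the proposed relaxation applied.
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Message:
    role: str
    text: str = ""
    tool_use: List[Any] = field(default_factory=list)
    tool_result: List[Any] = field(default_factory=list)

    def __post_init__(self) -> None:
        if self.role == "user":
            if self.tool_use:
                raise ValueError("User message does not support ToolUse")
        elif self.role == "assistant":
            # The "must include a Text or ToolUsage" check is intentionally
            # removed: an empty llama3 reply to a system prompt is now valid.
            if self.tool_result:
                raise ValueError("Assistant message does not support ToolResult")


# The empty llama3-groq-tool-use reply from the trace now validates cleanly:
reply = Message(role="assistant", text="")
print(reply.role)  # assistant
```

The trade-off is that genuinely malformed empty assistant messages are no longer caught, which is why the first two options (skipping validation only after a system prompt) may be preferable if the validator can be given access to the prior message.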