[Serving] Support model monitoring with list + tests #9117
Conversation
# Conflicts: # tests/serving/test_async_flow.py
Force-pushed from 4b697ef to 12f466d
royischoss left a comment:
Hey, looks good overall. I added some comments.
```python
serving_fn: mlrun.runtimes.nuclio.serving.ServingRuntime,
*,
with_training_set: bool = True,
model_runner_mode: typing.Optional[str] = None,
```
I think it's not clear here that the usage of V2ModelServer is hidden behind None. It would be better to add a flag, or to add an explicit V2ModelServer option and a ModelRunner prefix for the batch and single modes.
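A minimal sketch of what the suggested explicit option could look like. The names below (`ModelRunnerMode`, `resolve_model_mode`, and the mode strings) are hypothetical illustrations, not the actual mlrun API:

```python
import enum
import typing


class ModelRunnerMode(str, enum.Enum):
    # Hypothetical enum: make V2ModelServer an explicit option and prefix
    # the ModelRunner modes for batch and single, as the comment suggests.
    V2_MODEL_SERVER = "v2modelserver"
    MODEL_RUNNER_BATCH = "model_runner_batch"
    MODEL_RUNNER_SINGLE = "model_runner_single"


def resolve_model_mode(
    model_runner_mode: typing.Optional[str] = None,
) -> ModelRunnerMode:
    # Keep backward compatibility: None still means V2ModelServer,
    # but the mapping is now explicit instead of implicit.
    if model_runner_mode is None:
        return ModelRunnerMode.V2_MODEL_SERVER
    return ModelRunnerMode(model_runner_mode)
```

This keeps None working for existing callers while making the V2ModelServer default visible at the call site.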
```python
@pytest.mark.parametrize("with_training_set", [True, False])
@pytest.mark.parametrize("with_model_runner", [True, False])
def test_app_flow(self, with_training_set: bool, with_model_runner: bool) -> None:
@pytest.mark.parametrize(
```
I thought this would be a separate test; it can be added to test_monitoring_with_model_runner_dict_infer in class TestBasicModelMonitoring.
mlrun/serving/system_steps.py
Outdated
```python
    mm_schemas.StreamProcessingEvent.WHEN
)
# if the body is not a dict, we set it to empty dict to fields extraction
body_by_model = (
```
I think it's better to create a method that extracts the label or error fields from the body. As written, someone else may later use this body as the event data, even though we initialized it to an empty dict.
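A hedged sketch of the extraction helper the comment asks for. The function and field names are hypothetical; the point is that the helper returns only the requested fields and never replaces the caller's event body with an empty dict:

```python
import typing


def extract_fields_from_body(
    body: typing.Any,
    fields: typing.Iterable[str] = ("labels", "error"),
) -> dict:
    # Hypothetical helper: pull only the requested fields out of the body.
    # A non-dict body yields an empty result; the original event body is
    # left untouched, so later readers of the event see the real data.
    if not isinstance(body, dict):
        return {}
    return {key: body[key] for key in fields if key in body}
```

Callers then consume the returned dict for monitoring fields instead of mutating or shadowing `event.body`.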
mlrun/serving/system_steps.py
Outdated
```python
    when = event._original_timestamp
else:
    when = event._metadata.get(mm_schemas.StreamProcessingEvent.WHEN)
event_body = event.body if isinstance(event.body, dict) else {}
```
Same as above.
Update the test to use v2modelserver as model_mode instead of None.
Add a simpler test: test_monitoring_with_model_runner_batch_infer.
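A sketch of how the two requested changes could combine: parametrize model_mode with the explicit "v2modelserver" string rather than None. The `run_batch_infer` stand-in below is hypothetical; the real test would drive mlrun's batch-infer flow:

```python
import pytest

# The mode strings are illustrative; "v2modelserver" replaces the implicit None.
MODEL_MODES = ["v2modelserver", "model_runner_batch"]


def run_batch_infer(model_mode: str) -> str:
    # Hypothetical stand-in for the real batch-infer flow under test.
    return model_mode


@pytest.mark.parametrize("model_mode", MODEL_MODES)
def test_monitoring_with_model_runner_batch_infer(model_mode: str) -> None:
    assert run_batch_infer(model_mode) == model_mode
```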
royischoss left a comment:
LGTM 🥇
📝 Description
🛠️ Changes Made
✅ Checklist
🧪 Testing
Unit tests for model monitoring + helper functions.
System test for main functionality.
🔗 References
🚨 Breaking Changes?
🔍️ Additional Notes