
Conversation

@tomerm-iguazio (Contributor) commented Dec 24, 2025

📝 Description

  1. Support list and dictionary outputs for batch (as list) inputs (see the sketch below).
  2. Add model monitoring to the batching tests.
  3. Add functionality to helper functions and their tests.
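
A minimal sketch of the shapes involved (illustrative values only, not MLRun code): a batch arrives as a list of rows, and the model output may now be either a list aligned with those rows or a dictionary of named fields.

batch_input = [[1.0, 2.0], [3.0, 4.0]]  # batch sent as a list of rows

# Output as a list: one prediction per input row.
list_output = [0, 1]

# Output as a dict: named fields, each aligned with the input rows.
dict_output = {"prediction": [0, 1], "confidence": [0.92, 0.85]}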

🛠️ Changes Made


✅ Checklist

  • I updated the documentation (if applicable)
  • I have tested the changes in this PR
  • I confirmed whether my changes are covered by system tests
    • If yes, I ran all relevant system tests and ensured they passed before submitting this PR
    • I updated existing system tests and/or added new ones if needed to cover my changes
  • If I introduced a deprecation:

🧪 Testing

Unit tests for model monitoring and the helper functions.
System test for the main functionality.

🔗 References

  • Ticket link: ML-11681
  • Design docs links:
  • External links:

🚨 Breaking Changes?

  • Yes (explain below)
  • No

🔍️ Additional Notes

@tomerm-iguazio marked this pull request as ready for review on December 25, 2025 09:09
@tomerm-iguazio requested review from a team and liranbg as code owners on December 25, 2025 09:09
@tomerm-iguazio changed the title from "[Serving] Support model monitoring with list + unit test to all cases." to "[Serving] Support model monitoring with list + unit test to all cases" on Dec 25, 2025
@tomerm-iguazio changed the title from "[Serving] Support model monitoring with list + unit test to all cases" to "[Serving] Support model monitoring with list + tests" on Dec 30, 2025
@royischoss (Contributor) left a comment


Hey, looks good overall. Added some comments.

serving_fn: mlrun.runtimes.nuclio.serving.ServingRuntime,
*,
with_training_set: bool = True,
model_runner_mode: typing.Optional[str] = None,

I think it's not clear here that the V2ModelServer usage is hidden behind None. Better to add a flag, or add an explicit V2ModelServer option plus a ModelRunner prefix for the batch and single modes.
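
A rough sketch of what that could look like (the names below are made up for illustration and are not MLRun API): make the V2ModelServer case an explicit option instead of hiding it behind model_runner_mode=None.

import enum


class ServerMode(enum.Enum):
    V2_MODEL_SERVER = "v2_model_server"
    MODEL_RUNNER_SINGLE = "model_runner_single"
    MODEL_RUNNER_BATCH = "model_runner_batch"


def _deploy_serving_fn(
    serving_fn,  # mlrun.runtimes.nuclio.serving.ServingRuntime
    *,
    with_training_set: bool = True,
    server_mode: ServerMode = ServerMode.V2_MODEL_SERVER,  # explicit default instead of a hidden None
) -> None:
    ...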

@pytest.mark.parametrize("with_training_set", [True, False])
@pytest.mark.parametrize("with_model_runner", [True, False])
def test_app_flow(self, with_training_set: bool, with_model_runner: bool) -> None:
@pytest.mark.parametrize(

Thought this would be in a separate test; it can be added to test_monitoring_with_model_runner_dict_infer in class TestBasicModelMonitoring.

mm_schemas.StreamProcessingEvent.WHEN
)
# if the body is not a dict, we set it to empty dict to fields extraction
body_by_model = (

I think it would be better to create a method that extracts the label or error fields from the body. As written, someone else might later use this body as the event data even though we initialized it to an empty dict.
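
For example, a small helper along these lines (hypothetical names, not the PR's code) would keep the empty-dict fallback local to field extraction:

import typing


def _extract_body_fields(
    body: typing.Any, fields: typing.Iterable[str]
) -> dict[str, typing.Any]:
    # Tolerate non-dict bodies without exposing a fake empty body to callers.
    if not isinstance(body, dict):
        return {}
    return {field: body[field] for field in fields if field in body}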

when = event._original_timestamp
else:
when = event._metadata.get(mm_schemas.StreamProcessingEvent.WHEN)
event_body = event.body if isinstance(event.body, dict) else {}

same as above

Add a simpler test: test_monitoring_with_model_runner_batch_infer
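
A skeleton of what that test might look like (hypothetical sketch only; it does not reproduce the suite's fixtures or assertions):

class TestBasicModelMonitoring:
    def test_monitoring_with_model_runner_batch_infer(self) -> None:
        # 1. Deploy the serving function with a ModelRunner in batch mode.
        # 2. Send a list (batch) input to the model endpoint.
        # 3. Assert that monitoring events are recorded for every row in the batch.
        ...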
@royischoss (Contributor) left a comment

LGTM 🥇

@davesh0812 merged commit 8f06420 into mlrun:development on Jan 5, 2026
13 checks passed


3 participants