
Bump confluent-kafka-python version on omnibus#17266

Merged
FlorentClarret merged 4 commits into main from alopez/bump-confluent-kafka-python
Jul 3, 2023
Conversation

@alopezz
Contributor

@alopezz alopezz commented May 23, 2023

What does this PR do?

Updates the confluent-kafka-python version in the omnibus script.

Motivation

In step with the corresponding update on integrations-core: DataDog/integrations-core#14665.

Additional Notes

The image has been tested locally using ddev on this integrations-core branch that bumps the version on our side:

❯ ddev env start -a 486234852809.dkr.ecr.us-east-1.amazonaws.com/ci/datadog-agent/agent:v16364585-4a91d5b1-7-arm64 kafka_consumer py3.8-3.3-kerberos
Setting up environment `py3.8-3.3-kerberos`... success!
Updating `486234852809.dkr.ecr.us-east-1.amazonaws.com/ci/datadog-agent/agent:v16364585-4a91d5b1-7-arm64`... success!
Detecting the major version... Agent 7 detected
Writing configuration for `py3.8-3.3-kerberos`... success!
Starting the Agent... success!

To edit config file, do: ddev env edit kafka_consumer py3.8-3.3-kerberos
Config file (copied to your clipboard): /Users/florent.clarret/Library/Application Support/dd-checks-dev/envs/kafka_consumer/py3.8-3.3-kerberos/config/kafka_consumer.yaml
To reload the config file, do: ddev env reload kafka_consumer py3.8-3.3-kerberos
To run this check, do: ddev env check kafka_consumer py3.8-3.3-kerberos
To stop this check, do: ddev env stop kafka_consumer py3.8-3.3-kerberos

❯ ddev test --e2e kafka_consumer:py3.8-3.3-kerberos
Running only end-to-end tests for `kafka_consumer`
--------------------------------------------------
────────────────────────────────────────────────────────────────────────────── py3.8-3.3-kerberos ───────────────────────────────────────────────────────────────────────────────
Finished checking dependencies
cmd [1] | pytest -v --benchmark-skip tests
============================================================================== test session starts ==============================================================================
platform darwin -- Python 3.8.14, pytest-7.3.1, pluggy-1.0.0 -- /Users/florent.clarret/Library/Application Support/hatch/env/virtual/datadog-kafka-consumer/KmLHgHe8/py3.8-3.3-kerberos/bin/python
cachedir: .pytest_cache
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Users/florent.clarret/go/src/github.com/DataDog/integrations-core/kafka_consumer
plugins: memray-1.4.0, datadog-checks-dev-19.3.1, asyncio-0.21.0, ddtrace-0.53.2, flaky-3.7.0, mock-3.10.0, cov-4.1.0, benchmark-4.0.0
asyncio: mode=strict
collected 46 items / 45 deselected / 1 selected                                                                                                                                 

tests/test_e2e.py::test_e2e PASSED                                                                                                                                        [100%]

======================================================================= 1 passed, 45 deselected in 7.56s ========================================================================

Passed!

Possible Drawbacks / Trade-offs

Describe how to test/QA your changes

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change either has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a releasenote.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided if the qa/skip-qa label is not applied.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@alopezz alopezz added the changelog/no-changelog No changelog entry needed label May 23, 2023
@alopezz alopezz added this to the 7.46.0 milestone May 23, 2023
@alopezz alopezz requested a review from a team as a code owner May 23, 2023 12:11
@pr-commenter

pr-commenter Bot commented May 23, 2023

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 3c0416f3-dde1-496a-a448-fa5b33eb3e5d
Baseline: 55d237f
Comparison: 4a91d5b
Total datadog-agent CPUs: 7

Explanation

A regression test is an integrated performance test for datadog-agent in a repeatable rig, with varying configuration for datadog-agent. What follows is a statistical summary of a brief datadog-agent run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether datadog-agent performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.
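As a hedged illustration of what a "Δ mean % CI" could look like numerically (this is not the Regression Detector's actual estimation method, and the function name and sample values are invented), a two-sided 90% interval under a simple normal approximation can be sketched as:

```python
import statistics
from math import sqrt

# Illustration only -- NOT the detector's real method. A two-sided 90%
# confidence interval for "Δ mean %" from repeated per-run percentage-change
# samples, using a normal approximation (z = 1.645 for 90% two-sided).
def ci_90(delta_pct_samples):
    mean = statistics.mean(delta_pct_samples)
    sem = statistics.stdev(delta_pct_samples) / sqrt(len(delta_pct_samples))
    z = 1.645  # two-sided 90% normal critical value
    return mean - z * sem, mean + z * sem

# Made-up per-run Δ mean % samples for one experiment:
low, high = ci_90([1.0, 1.2, 0.8, 1.1, 0.9])
print(f"Δ mean % CI: [{low:+.2f}, {high:+.2f}]")  # → Δ mean % CI: [+0.88, +1.12]
```

A zero inside such an interval means the run-to-run noise is large enough that no direction of change can be asserted with 90% confidence.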

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
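The two criteria above combine into a simple decision rule. A minimal sketch (the function name and signature are illustrative, not the detector's API):

```python
# Hedged sketch of the two-criterion regression rule described above.
def is_regression(delta_pct, ci_low, ci_high, threshold_pct=5.0):
    """Flag a change when |Δ mean %| >= 5.00% AND zero lies outside
    the 90% confidence interval [ci_low, ci_high]."""
    large_enough = abs(delta_pct) >= threshold_pct
    significant = not (ci_low <= 0.0 <= ci_high)
    return large_enough and significant

# trace_agent_json from this run: large in magnitude AND significant.
print(is_regression(-5.62, -5.78, -5.46))  # → True
# tcp_dd_logs_filter_exclude: significant, but below the 5% threshold.
print(is_regression(1.73, 1.56, 1.89))     # → False
```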

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
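The "erratic" criterion (coefficient of variation greater than 0.1) can likewise be sketched; the sample values below are made up for illustration and are not real detector output:

```python
import statistics

# Hedged sketch: an experiment is erratic when its coefficient of
# variation (stdev / |mean|) over repeated runs exceeds 0.1.
def is_erratic(goal_samples, cv_threshold=0.1):
    cv = statistics.stdev(goal_samples) / abs(statistics.mean(goal_samples))
    return cv > cv_threshold

print(is_erratic([100.0, 101.0, 99.0, 100.5]))  # steady throughput → False
print(is_erratic([100.0, 130.0, 70.0, 110.0]))  # noisy throughput  → True
```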

Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%:

| experiment | goal | Δ mean % | confidence |
| --- | --- | --- | --- |
| trace_agent_json | ingress throughput | -5.62 | 100.00% |

Fine details of change detection per experiment:

| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| tcp_dd_logs_filter_exclude | ingress throughput | +1.73 | [+1.56, +1.89] | 100.00% |
| file_to_blackhole | egress throughput | +1.16 | [+0.44, +1.89] | 95.99% |
| uds_dogstatsd_to_api | ingress throughput | +0.25 | [-0.77, +1.27] | 24.56% |
| tcp_syslog_to_blackhole | ingress throughput | -0.60 | [-0.69, -0.52] | 100.00% |
| otel_to_otel_logs | ingress throughput | -0.98 | [-1.06, -0.90] | 100.00% |
| trace_agent_msgpack | ingress throughput | -1.76 | [-1.85, -1.66] | 100.00% |
| trace_agent_json | ingress throughput | -5.62 | [-5.78, -5.46] | 100.00% |

@alopezz alopezz modified the milestones: 7.46.0, Triage May 24, 2023
@alopezz alopezz marked this pull request as draft May 24, 2023 11:21
@FlorentClarret FlorentClarret modified the milestones: Triage, 7.47.0 Jun 1, 2023
@FlorentClarret FlorentClarret marked this pull request as ready for review June 1, 2023 07:16
Member

@chouetz chouetz left a comment


ok

@FlorentClarret FlorentClarret merged commit 15f9c81 into main Jul 3, 2023
@FlorentClarret FlorentClarret deleted the alopez/bump-confluent-kafka-python branch July 3, 2023 07:46
