[performance_meter] Add performance_meter tests #16842
Blueve merged 14 commits into sonic-net:master
Conversation
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
# Timing will only start after the first blocking part of operation
# is over. The op should make sure op is started correctly and ended
# correctly. If either part is unsuccessful, op should yield False and
# log the error, otherwise yielding True is expected.
In short, the op defines the procedure:
- OP - Do something
- OP - yield or return
- Start timer and success criteria checker
- OP - Do something that we want to measure
Is this statement right?
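The two-phase procedure described above could be sketched as a generator-driven op like the following. This is an illustrative sketch only, not the actual performance_meter API; `sample_op` and `run_and_time` are hypothetical names, and the real framework decides how ops are driven:

```python
import time


def sample_op(**kwargs):
    """Generator-style op yielding True/False after each phase.

    Phase 1 runs before the timer starts; phase 2 is what gets measured.
    """
    # Phase 1: blocking setup (not timed). Yield True if it succeeded.
    setup_ok = True  # stand-in for real setup work
    yield setup_ok

    # Phase 2: the operation under measurement. Yield True on success.
    work_ok = True   # stand-in for the measured work
    yield work_ok


def run_and_time(op, **kwargs):
    """Drive the op: start the timer only after the first yield succeeds."""
    gen = op(**kwargs)
    if not next(gen):              # phase 1: setup failed, nothing to measure
        return None
    start = time.monotonic()       # timer starts only after the first yield
    ok = next(gen)                 # phase 2: the measured work
    elapsed = time.monotonic() - start
    return elapsed if ok else None
```

Under this reading, the timer indeed brackets only the second phase of the op, matching the bullet list above.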
Can you also link the HLD to this PR's description?
def random_success_20_perc(duthost, **kwarg):
This is a sample implementation, right?
If a user uses this one, their test will get a ~20% passing rate.
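For context, a success-criteria function with that name would plausibly look like this minimal sketch (the `duthost` parameter is unused here and just mirrors the signature quoted above; this is a demo criterion, not something a real test should use):

```python
import random


def random_success_20_perc(duthost, **kwarg):
    # Returns True ~20% of the time, so a test using it as its
    # success criterion would pass in roughly 1 of 5 runs.
    return random.random() < 0.2
```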
# process results of all runs, so there could be a success criteria
# stats function, named with a "_stats" suffix, taking the same
# variables as its single run version, like "bgp_up_stats". It will
# take all results that passed op precheck.
The _stats is special; I saw the example below:
def random_success_20_perc_stats(passed_op_precheck, **kwarg):
Can you explain why it needs **kwarg?
And can you extend the sample method to output the actual success rate? That would make it clearer.
If we define a mean, or a p99, stuff like that, they will be passed to the function through the kwarg.
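A _stats function along those lines might look like this sketch, where **kwarg carries optional aggregate criteria such as a minimum success rate (the `min_success_rate` key is illustrative, not a documented performance_meter parameter):

```python
def random_success_20_perc_stats(passed_op_precheck, **kwarg):
    # passed_op_precheck: per-run results (e.g. True/False) from the
    # runs that passed the op precheck. kwarg can carry aggregate
    # criteria such as a minimum success rate, a mean bound, or a p99.
    if not passed_op_precheck:
        return False
    successes = sum(1 for r in passed_op_precheck if r)
    rate = successes / len(passed_op_precheck)
    print("actual success rate: {:.1%}".format(rate))
    min_rate = kwarg.get("min_success_rate", 0.0)  # illustrative kwarg
    return rate >= min_rate
```

This also shows how printing the computed rate, as requested above, makes the aggregate verdict easier to interpret.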
############################
parser.addoption("--dpu-pattern", action="store", default="all", help="dpu host name")
#################################
Make this configurable in the next PR.
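For reference, a pytest option like the one quoted above is typically registered in a conftest and exposed to tests via `request.config`. A minimal sketch, with a hypothetical `dpu_pattern` fixture and a plain helper (neither is from this PR):

```python
import pytest


def pytest_addoption(parser):
    # Mirrors the option quoted in the diff above.
    parser.addoption("--dpu-pattern", action="store", default="all",
                     help="dpu host name")


def get_dpu_pattern(config):
    # Plain helper so the option lookup is easy to reuse.
    return config.getoption("--dpu-pattern")


@pytest.fixture
def dpu_pattern(request):
    # Illustrative fixture exposing the option value to tests.
    return get_dpu_pattern(request.config)
```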
Description of PR
Add performance meter tests
hld: #15356
Summary:
Fixes # (issue)
Type of change
Back port request
Approach
What is the motivation for this PR?
To better understand the performance of certain operations on different hwskus, and to make sure there are no unexpected performance changes.
How did you do it?
Add a new test. Users can write a config file, and the code will perform the configured operations and measure their performance.
How did you verify/test it?
Ran the test on different testbeds.
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation