Add docker0's IPv6 address since it was removed when disabling IPv6 #10651
judyjoseph merged 3 commits into sonic-net:master
Conversation
The pre-commit check detected issues in the files touched by this pull request. Detailed pre-commit check results: To run the pre-commit checks locally, you can follow the steps below:
/azp run

Azure Pipelines successfully started running 1 pipeline(s).
…onic-net#10651)
* Add docker0's IPv6 address since it was removed when disabling IPv6
* Add docker0's IPv6 address since it was removed when disabling IPv6
* Add docker0's IPv6 address since it was removed when disabling IPv6
Cherry-pick PR to 202205: #10976
Regression introduced by sonic-net#10651 for dualtor. The config_reload in the fixture `dut_disable_ipv6` waits until all critical processes are up after issuing the config reload command, and it times out on dualtor because the mux container doesn't come up. The mux container is disabled by another fixture in the same file, `stopServices`. These two fixtures have no dependency on each other, so they can execute in any order; if the teardown of `dut_disable_ipv6` happens before `stopServices`, this issue is seen.
What is the motivation for this PR?
qos/test_qos_sai.py fails at teardown with "Failed: All critical services should be fully started!", a regression introduced by #10651 for dualtor.
How did you do it?
The config_reload in the fixture dut_disable_ipv6 waits until all critical processes are up after issuing the config reload command, and it times out on dualtor because the mux container doesn't come up. The mux container is disabled by another fixture in the same file, stopServices. These two fixtures have no dependency on each other, so they can execute in any order; if the teardown of dut_disable_ipv6 happens before stopServices, this issue is seen. This change ensures that the teardown of stopServices happens before dut_disable_ipv6, so that mux is no longer disabled at the time of config_reload.
How did you verify/test it?
Ran qos/test_qos_sai.py on the Arista-7260CX3 platform with dualtor topology, with the 202305 and 202311 images.
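The fix relies on fixture finalization order: pytest tears fixtures down in reverse order of setup, so if dut_disable_ipv6 is set up before stopServices (for example, by having stopServices request dut_disable_ipv6), stopServices is finalized first and mux is already re-enabled when config_reload waits for critical services. A minimal stand-alone sketch of that LIFO teardown semantics, with the real DUT operations replaced by log entries (the fixture names are from the test file; everything else here is illustrative):

```python
from contextlib import ExitStack

events = []

def setup_dut_disable_ipv6(stack):
    # Simulated fixture: disables IPv6 on setup; its teardown runs
    # config_reload, which waits for all critical services (incl. mux).
    events.append("setup: dut_disable_ipv6")
    stack.callback(lambda: events.append("teardown: dut_disable_ipv6 (config_reload)"))

def setup_stop_services(stack):
    # Simulated fixture: stops the mux container on setup and
    # re-enables it on teardown.
    events.append("setup: stopServices")
    stack.callback(lambda: events.append("teardown: stopServices (mux re-enabled)"))

with ExitStack() as stack:
    # Setting up dut_disable_ipv6 first means it is torn down last:
    # mux is already re-enabled by the time config_reload waits for it.
    setup_dut_disable_ipv6(stack)
    setup_stop_services(stack)

print(events)
```

The callbacks registered on the ExitStack run in reverse (LIFO) order, mirroring pytest's finalizer ordering: "teardown: stopServices" is logged before "teardown: dut_disable_ipv6".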
Description of PR
The IPv6 address is removed from docker0 when IPv6 is disabled, which causes the test_snmp_loopback test case to fail:
ARISTA06T1# bash snmpget -v2c -c public FC00:11::1 .1.3.6.1.2.1.1.1.0  (FC00:11::1 is the IPv6 address of the LC's Loopback0)
Timeout: No Response from FC00:11::1.
% 'snmpget -v2c -c public FC00:11::1 .1.3.6.1.2.1.1.1.0' returned error code: 1
AssertionError: Sysdescr not found in SNMP result from IP FC00:11::1/128
After enabling IPv6, an IPv6 address should be added to docker0 and a config reload is required.
Type of change
Back port request
Approach
What is the motivation for this PR?
To restore docker0's IPv6 address, which is removed when IPv6 is disabled.
How did you do it?
Add the IPv6 address to docker0.
Do a config reload.
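A rough sketch of what this looks like on the DUT. The address fd00::1/80 is a placeholder, not the actual address SONiC assigns to docker0, and the exact commands used by the PR may differ:

```shell
# Check whether docker0 lost its IPv6 address
# (no "inet6" line in the output means it is gone)
ip -6 addr show dev docker0

# Re-add an IPv6 address to docker0
# (fd00::1/80 is an illustrative ULA, not the real SONiC address)
sudo ip -6 addr add fd00::1/80 dev docker0

# Reload the configuration so all services pick up the change
sudo config reload -y
```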
How did you verify/test it?
Tested the qos and snmp suites against a multi-ASIC line card on a T2 chassis.
Any platform specific information?
Supported testbed topology if it's a new test case?
Documentation