
Add docker0's IPv6 address since it was removed when disabling IPv6#10651

Merged
judyjoseph merged 3 commits into sonic-net:master from mannytaheri:qos_sai_base-2
Dec 8, 2023

Conversation

@mannytaheri
Contributor

Description of PR

The IPv6 address is removed from docker0 when IPv6 is disabled, which causes the test_snmp_loopback test case to fail:

ARISTA06T1#bash snmpget -v2c -c public FC00:11::1 .1.3.6.1.2.1.1.1.0   (FC00:11::1 is the IPv6 address of the LC's Loopback0)
Timeout: No Response from FC00:11::1.
% 'snmpget -v2c -c public FC00:11::1 .1.3.6.1.2.1.1.1.0' returned error code: 1
AssertionError: Sysdescr not found in SNMP result from IP FC00:11::1/128

After re-enabling IPv6, the IPv6 address must be added back to docker0, and a config reload is required.
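The restore-on-teardown pattern described above can be sketched as follows. This is a minimal illustration with a stubbed `DutHost` class, not the actual sonic-mgmt host API; the command strings and the `fd00::1/80` address are placeholders (the real fixture works with the DUT's own configuration):

```python
# Minimal sketch (hypothetical DutHost stub) of the fix: after disabling
# IPv6 via sysctl flushes every IPv6 address (docker0's included), the
# teardown re-enables IPv6, re-adds docker0's address, and reloads config.
class DutHost:
    """Stand-in for the real multi-ASIC DUT host object."""
    def __init__(self):
        self.commands = []

    def shell(self, cmd):
        # Record commands instead of running them on a real DUT.
        self.commands.append(cmd)

def disable_ipv6_with_restore(duthost, docker0_addr="fd00::1/80"):
    # Setup: disabling IPv6 removes all IPv6 addresses, including docker0's.
    duthost.shell("sysctl -w net.ipv6.conf.all.disable_ipv6=1")
    yield
    # Teardown: re-enable IPv6, restore docker0's address, reload config.
    duthost.shell("sysctl -w net.ipv6.conf.all.disable_ipv6=0")
    duthost.shell("ip -6 addr add {} dev docker0".format(docker0_addr))
    duthost.shell("config reload -y")

dut = DutHost()
gen = disable_ipv6_with_restore(dut)
next(gen)            # setup: IPv6 disabled
try:
    next(gen)        # teardown: restore commands run
except StopIteration:
    pass
```

Running the sketch records the disable command first and finishes with the address restore and the config reload, mirroring the order the PR description calls for.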

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Backport request

  • 201911
  • 202012
  • 202205
  • 202305

Approach

What is the motivation for this PR?

To restore the docker0's IPv6 address which is removed when IPv6 is disabled

How did you do it?

Add the IPv6 address back to docker0, then do a config reload.

How did you verify/test it?

Tested the qos and snmp suites against a multi-ASIC line card on a T2 chassis.

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

Comment thread on tests/qos/qos_sai_base.py (outdated)
@mssonicbld
Collaborator

The pre-commit check detected issues in the files touched by this pull request.
The pre-commit check is mandatory; please fix the detected issues.

Detailed pre-commit check results:
trim trailing whitespace.................................................Passed
fix end of files.........................................................Passed
check yaml...........................................(no files to check)Skipped
check for added large files..............................................Passed
check python ast.........................................................Passed
flake8...................................................................Failed
- hook id: flake8
- exit code: 1

tests/qos/qos_sai_base.py:1667:13: E122 continuation line missing indentation or outdented

flake8...............................................(no files to check)Skipped
check conditional mark sort..........................(no files to check)Skipped

To run the pre-commit checks locally, you can follow the steps below:

  1. Ensure that the default python is python3. In the sonic-mgmt docker container, the default python is python2; you can run
    the check by activating the python3 virtual environment inside the sonic-mgmt docker container, or run it outside of the
    docker container.
  2. Ensure that the pre-commit package is installed:
pre-commit install via: sudo pip install pre-commit
  3. Go to the repository root folder.
  4. Install the pre-commit hooks:
pre-commit install
  5. Use pre-commit to check staged files:
pre-commit
  6. Alternatively, you can check committed files using:
pre-commit run --from-ref <commit_id> --to-ref <commit_id>
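For reference, the flake8 E122 failure reported above ("continuation line missing indentation or outdented") is fixed by indenting the continuation line relative to the opening statement. The snippet below is a generic illustration with a made-up function, not the actual line from tests/qos/qos_sai_base.py:

```python
# Generic illustration of flake8 E122 and its fix; some_call is hypothetical.
def some_call(a, b):
    return a + b

# Before (would trigger E122 -- the continuation line is flush-left):
# result = some_call(1,
# 2)

# After: the continuation line is indented, aligned under the open paren.
result = some_call(1,
                   2)
print(result)
```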

@judyjoseph previously approved these changes Nov 17, 2023
Comment thread on tests/qos/qos_sai_base.py (outdated)
@judyjoseph
Contributor

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@judyjoseph judyjoseph merged commit 0df004b into sonic-net:master Dec 8, 2023
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Dec 8, 2023
…onic-net#10651)

* Add docker0's IPv6 address since it was removed when disabling IPv6

* Add docker0's IPv6 address since it was removed when disabling IPv6

* Add docker0's IPv6 address since it was removed when disabling IPv6
@mssonicbld
Collaborator

Cherry-pick PR to 202205: #10976

mssonicbld pushed a commit that referenced this pull request Dec 8, 2023
…10651)

vivekverma-arista added a commit to vivekverma-arista/sonic-mgmt that referenced this pull request Mar 10, 2024
Regression introduced by sonic-net#10651 for dualtor.

The config_reload in the fixture `dut_disable_ipv6` waits until all critical processes are up after issuing the config reload command, and it times out on dualtor because the mux container does not come up. The mux container is disabled by another fixture in the same file, `stopServices`. These two fixtures have no dependency on each other, so their teardowns can run in either order; if the teardown of `dut_disable_ipv6` happens before `stopServices`, this issue is seen.
yxieca pushed a commit that referenced this pull request Mar 20, 2024
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Mar 21, 2024
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Mar 21, 2024
mssonicbld pushed a commit that referenced this pull request Mar 21, 2024
mssonicbld pushed a commit that referenced this pull request Mar 21, 2024
StormLiangMS pushed a commit that referenced this pull request May 14, 2024
What is the motivation for this PR?
qos/test_qos_sai.py fails at teardown:

failed on setup with "Failed: All critical services should be fully started!"
Regression introduced by #10651 for dualtor.

How did you do it?
The config_reload in the dut_disable_ipv6 fixture waits until all critical processes are up after issuing the config reload command, and it times out on dualtor because the mux container does not come up. The mux container is disabled by another fixture in the same file, stopServices. These two fixtures have no dependency on each other, so their teardowns can run in either order; if the teardown of dut_disable_ipv6 happens before stopServices, this issue is seen.

This change ensures that the teardown of stopServices happens before dut_disable_ipv6, so that mux is no longer disabled at the time of config_reload.

How did you verify/test it?
Ran qos/test_qos_sai.py on the Arista-7260CX3 platform with a dualtor topology, with 202305 and 202311 images.
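The ordering guarantee described above comes from last-in-first-out teardown: whatever is set up first is torn down last. The sketch below mimics that with plain context managers (the actual sonic-mgmt change may enforce the order differently, e.g. via a pytest fixture dependency); the fixture bodies here are illustrative stubs that just record events:

```python
# LIFO teardown sketch: entering dut_disable_ipv6 first means it exits
# last, so stopServices has already re-enabled mux by the time
# dut_disable_ipv6's config_reload waits for critical services.
from contextlib import contextmanager

order = []

@contextmanager
def dut_disable_ipv6():
    order.append("dut_disable_ipv6 setup")      # disables IPv6 on the DUT
    yield
    order.append("dut_disable_ipv6 teardown")   # config_reload's health wait runs here

@contextmanager
def stopServices():
    order.append("stopServices setup")          # disables the mux container
    yield
    order.append("stopServices teardown")       # re-enables mux

with dut_disable_ipv6():
    with stopServices():
        order.append("test body")

print(order)
```

With this nesting, "stopServices teardown" is always recorded before "dut_disable_ipv6 teardown", which is exactly the ordering the fix requires.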
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request May 16, 2024
…r. (sonic-net#12503)

What is the motivation for this PR?
qos/test_qos_sai.py fails at teardown

failed on setup with "Failed: All critical services should be fully started!
Regression introduced by sonic-net#10651 for dualtor.

How did you do it?
The config_reload in the fixture dut_disable_ipv6 waits until all critical processes are up after issuing config reload command and it timeouts in case of dualtor because mux container doesn't come up. Mux container is disabled by another fixture stopServices in the same file. These two fixtures have no dependency on each other hence the execution can happen in any order, so if the teardown of dut_disable_ipv6 happens before stopServices then this issue is seen.

This change ensures that the teardown of stopServices happens before dut_disable_ipv6 so that mux is no longer disabled at the time of config_reload.

How did you verify/test it?
Ran qos/test_qos_sai.py on Arista-7260CX3 platform with dualtor topology with 202305 and 202311 images.
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request May 16, 2024
…r. (sonic-net#12503)

What is the motivation for this PR?
qos/test_qos_sai.py fails at teardown

failed on setup with "Failed: All critical services should be fully started!
Regression introduced by sonic-net#10651 for dualtor.

How did you do it?
The config_reload in the fixture dut_disable_ipv6 waits until all critical processes are up after issuing config reload command and it timeouts in case of dualtor because mux container doesn't come up. Mux container is disabled by another fixture stopServices in the same file. These two fixtures have no dependency on each other hence the execution can happen in any order, so if the teardown of dut_disable_ipv6 happens before stopServices then this issue is seen.

This change ensures that the teardown of stopServices happens before dut_disable_ipv6 so that mux is no longer disabled at the time of config_reload.

How did you verify/test it?
Ran qos/test_qos_sai.py on Arista-7260CX3 platform with dualtor topology with 202305 and 202311 images.
mssonicbld pushed a commit that referenced this pull request May 16, 2024
mssonicbld pushed a commit that referenced this pull request May 16, 2024
StormLiangMS pushed a commit that referenced this pull request Jun 21, 2024
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Jun 21, 2024
mssonicbld pushed a commit that referenced this pull request Jun 21, 2024
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Aug 2, 2024
mssonicbld pushed a commit to mssonicbld/sonic-mgmt that referenced this pull request Aug 2, 2024
mssonicbld pushed a commit that referenced this pull request Aug 3, 2024
mssonicbld pushed a commit that referenced this pull request Aug 4, 2024

4 participants