Add the Wistron 6512-32R platform sku sensor data and skip unsupported test items#15347

Closed
jerrychuWis wants to merge 842 commits into sonic-net:202405 from jerrychuWis:wistron_6512_32r

Conversation

@jerrychuWis

@jerrychuWis jerrychuWis commented Nov 4, 2024

Description of PR

Add the Wistron 6512-32R platform sku sensor data for testing and skip unsupported test items

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Back port request

  • 202012
  • 202205
  • 202305
  • 202311
  • 202405

Approach

What is the motivation for this PR?

Add the Wistron 6512-32R platform sku sensor data for testing and skip unsupported test items

How did you do it?

Added the 6512-32R definition to the common sku-sensors data file and added the skip entries to the skip file.

How did you verify/test it?

Run the testbed with Wistron 6512-32R

Any platform specific information?

Specific to Wistron 6512-32R

Supported testbed topology if it's a new test case?

Documentation

ysmanman and others added 30 commits September 19, 2024 13:38
…config (sonic-net#14498)

Description of PR
Broadcom ASICs only support the RED threshold. Fix the config_wred helper to configure RED for Broadcom ASICs.

co-authorized by: [email protected]
* [dualtor] Fix flakiness of route/test_static_route.py

Fixes:
1) Adding "setup_standby_ports_on_rand_unselected_tor" fixture to setup
   ports in standby mode in case of active-active topology. This is
   needed for packets not to go out of unexpected tor and cause test
   failures.
2) Test is performing "config_reload", this can cause switchover (active
   to standby and vice versa). But rand_selected_dut should be in active
   state for traffic verification to pass, so after config_reload we
   need to toggle ports to rand_selected_dut.

* Addressing review comments.

* Reverting minor unintended change.
What is the motivation for this PR?
Current processing involves iterating through sonic logs (which can be large) from the beginning, which is unnecessary since only log lines starting from a particular timestamp are relevant.

How did you do it?
Optimize this processing by doing it in reverse and stopping after the last relevant timestamp.

How did you verify/test it?
Ran test_upgrade_path with SONiC neighbors, verified in warm-reboot.log that SSH threads no longer hang unnecessarily long due to log processing
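The reverse-scan optimization above could look roughly like the sketch below; the function name (`tail_since`) and the syslog-style timestamp format are illustrative assumptions, not the PR's actual code:

```python
from datetime import datetime

# Hypothetical sketch of the reverse-scan idea: walk the log from the end and
# stop at the first line older than the cutoff, instead of iterating the whole
# (possibly large) file from the beginning.
def tail_since(path, cutoff, fmt="%Y %b %d %H:%M:%S"):
    """Return log lines stamped at or after `cutoff`, scanning backwards."""
    stamp_len = len("2024 Oct 30 20:45:56")  # fixed-width syslog-style prefix
    with open(path) as f:
        lines = f.readlines()
    relevant = []
    for line in reversed(lines):
        try:
            stamp = datetime.strptime(line[:stamp_len], fmt)
        except ValueError:
            continue  # skip lines without a leading timestamp
        if stamp < cutoff:
            break  # everything before this point is irrelevant: stop early
        relevant.append(line)
    return list(reversed(relevant))
```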
Setup TACACS server on PTF host when renumber topo.

Why I did it
Some test failed because loganalyzer found TACACS error log.
The error log is because TACACS enabled on DUT host but not setup TACACS server on PTF host.
Those test bed are setup PTF device with renumber_topo.yml, and the setup TACACS server step missing in this file.

How I did it
Setup TACACS server on PTF device when renumber topo.
Load TACACS passkey by inv_name for renumber topo scenario.

How to verify it
Pass all test case.
The DPU-NPU data ports should not be selected for test.
Skip the test combination due to hw limitation - RM #3870562

Change-Id: I5e4c2b76cc19c17ebb396c05f7f34296239b0bf0
…onic-net#14632)

What is the motivation for this PR?
To fix sonic-net#5017

Signed-off-by: Longxiang [email protected]

How did you do it?
Send downstream traffic (DIP not learnt) to the device, verify the traffic is forwarded by the IPinIP tunnel to the peer side.

How did you verify/test it?
dualtor/test_standalone_tunnel_route.py::test_standalone_tunnel_route[active-standby] PASSED             

Signed-off-by: Longxiang <[email protected]>
…et#14654)

Nokia-7215 has low performance. After a device reboot, its critical processes may not be fully up by the time SSH is reachable. For this platform, add wait_critical_processes at the teardown stage to improve test stability.

What is the motivation for this PR?
Improve the test_reboot stability on Nokia-7215 platform.

How did you do it?
Add wait_critical_processes at teardown stage.

How did you verify/test it?
Verified on Nokia-7215 M0 testbed.
* add Cisco-8101 specific fanout deploy script

* update template

* update template

* update template
What is the motivation for this PR?
To minimize cross-module dependencies, we move some shared functions to tests/helpers.
Description of PR
Summary: some peer devices have no peer fanouts; we should skip the PFC test on these peers.

We have already skipped ixia in here: https://github.com/sonic-net/sonic-mgmt/blob/master/tests/conftest.py#L700

Fixes # (issue) 29428890

Signed-off-by: Austin Pham <[email protected]>
…nic-net#14602)

Description of PR
sonic-net#14104
Reverting this PR as we have fix on the platform side to take care of the CRM route resource usage related issue.

Summary:
Fixes # (issue)
The CRM route resource fix is committed on the platform side of the code; the additional delay and checks added as part of PR 14104 are no longer needed.

Approach
What is the motivation for this PR?
Revert the cisco specific change added in sonic-mgmt code.

How did you do it?
How did you verify/test it?
Verified running sonic mgmt tests/test_crm.py::test_crm_route

co-authorized by: [email protected]
…onic-net#14305)

Description of PR
Summary: Accommodating the infra change for multidut ECN and PFCWD cases
Fixes # (issue)
sonic-net#13389
sonic-net#13769

Approach
What is the motivation for this PR?
To accommodate the infra change from PR 14127

How did you do it?
Added pytest fixtures called get_snappi_ports and get_snappi_ports_for_rdma which select the ports from the information provided in MULTIDUT_PORT_INFO in variables.py

co-authorized by: [email protected]
New test coverage for the existing test gap recorded in issue#6560

Description of PR
Summary:
Fixes # (issue)
New test coverage for the existing test gap recorded in issue#6560 sonic-net#6560
Reboot orchagent, then check if lldp neighbors are in good state

Approach
What is the motivation for this PR?
Fill the test gap

How did you do it?
write new test coverage

How did you verify/test it?
run the test on vs

co-authorized by: [email protected]
What is the motivation for this PR?
Several test scripts in the tacacs folder import shared functions from other scripts. To reduce cross-module dependencies, we have relocated these shared functions to tacacs/utils.py.
…#14672)

What is the motivation for this PR?
In the test_upgrade_path.py script, several functions are imported from other directories. To reduce cross-module dependencies, we have relocated these shared functions to a common directory.
What is the motivation for this PR?
Address test gap for this change: sonic-net/sonic-buildimage#20021
Previously, dhcrelay would hit an issue where it wouldn't relay any packets if packets arrived while dhcrelay was starting up. This has been fixed on the image side by sonic-net/sonic-buildimage#20021. This PR adds a test for it.

How did you do it?
Add stress test with dhcp_relay restart:
Keep sending DHCP packets
Restart dhcp_relay
Check socket buffer
Run general dhcp relay test.

How did you verify/test it?
Run test on m0/t0/dualtor topos, all passed
What is the motivation for this PR?
Add a new case to verify the BBR initialized behavior.

How did you do it?
Add the following new case under test_bgp_bbr.py
test_bbr_status_consistent_after_reload

How did you verify/test it?
https://dev.azure.com/mssonic/internal/_build/results?buildId=619063&view=results
…hy_entity (sonic-net#14596)" (sonic-net#14659)

What is the motivation for this PR?
Revert sonic-net#14596 as it was a temporary optics change

How did you do it?
How did you verify/test it?
Validate it in internal setup, TRANSCEIVER_INFO|Ethernet302 in STATE_DB is empty at the moment.

Any platform specific information?
str3-7060x6-64pe-1

Supported testbed topology if it's a new test case?
t0-standalone-32
…t#14675)

What is the motivation for this PR?
In the test_vnet_vxlan.py script, the TestWrArp class is imported from arp/test_wr_arp.py. To minimize cross-module dependencies, we have refactored this class and moved the shared functions to a common location.
Description of PR
Summary:
Fixes sonic-net#3624
Mitigate the test gap: Test dhcp relay with source port ip in relay enabled.

Approach
What is the motivation for this PR?
Fixes sonic-net#3624
Mitigate the test gap: Test dhcp relay with source port ip in relay enabled.

How did you do it?
Enhance dhcp_relay ptf test to verify the src_ip in relay packets
Add a fixture which modifies deployment_id to 8 and enables source port ip in relay
Add a test case exactly the same as test_dhcp_relay_default but including the fixture enable_source_port_ip_in_relay.
How did you verify/test it?
Run on local dev vm,
dhcp_relay/test_dhcp_relay.py::test_interface_binding PASSED [ 12%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_default PASSED [ 25%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_with_source_port_ip_in_relay_enabled PASSED [ 37%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_after_link_flap PASSED [ 50%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_start_with_uplinks_down PASSED [ 62%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_unicast_mac PASSED [ 75%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_random_sport PASSED [ 87%]
dhcp_relay/test_dhcp_relay.py::test_dhcp_relay_counter SKIPPED (skip...) [100%]
and PR test will test it again.

co-authorized by: [email protected]
…s tests (sonic-net#14274)

move number of flex_db counters per port to FLEXDB_COUNTERS_PER_PORT

Change-Id: I0d63e2c4de5de424dd7ba993706e0569a5a9f6d4
…14679)

Clock module test cases are failing because the show clock command output format changed.
Modified the date-time pattern match as per the latest sonic show clock output.

Summary:
Show clock command output format has been changed in master and 202405 sonic.
In 202405 and master it shows as below:

root@sonic:~# show clock
Tue Sep 17 04:47:53 PM IDT 2024

but the test case is trying to match the pattern shown in earlier versions of sonic:

admin@sonic:~$ show clock
Tue 17 Sep 2024 01:56:33 PM UTC

The test case's pattern match needed to change, and that is done in this PR.
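The pattern change can be illustrated with the two formats quoted above; the exact regex used in the PR may differ, so treat these patterns as a sketch:

```python
import re

# New format ("Tue Sep 17 04:47:53 PM IDT 2024") vs. old format
# ("Tue 17 Sep 2024 01:56:33 PM UTC") of the `show clock` output.
NEW_FMT = re.compile(r"^\w{3}\s+\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s+[AP]M\s+\w+\s+\d{4}$")
OLD_FMT = re.compile(r"^\w{3}\s+\d{1,2}\s+\w{3}\s+\d{4}\s+\d{2}:\d{2}:\d{2}\s+[AP]M\s+\w+$")

def matches_show_clock(output):
    """Accept either clock format so the test works across releases."""
    return bool(NEW_FMT.match(output) or OLD_FMT.match(output))
```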
…ed (sonic-net#14573)

Description of PR
Summary:
Fixes # (issue) 29428086

This test case needs to specify --target_image_list to be able to run as described in the beginning of the test. However, we don't provide this information for our nightly running (https://github.com/sonic-net/sonic-mgmt/blob/master/tests/platform_tests/test_secure_upgrade.py#L9). It's currently failing / error out for all topologies.

Ansible will fail to run the test since src= is undefined.

Signed-off-by: Austin Pham <[email protected]>
…#14703)

What is the motivation for this PR?
Check-in sonic-net#14641 for pr test

How did you do it?
Check-in sonic-net#14641 for pr test
…_reject (sonic-net#14463)

Retry when duthost unreachable in test_stop_request_next_server_after_reject

#### Why I did it
duthost randomly unreachable in test_stop_request_next_server_after_reject

### How I did it
Retry when duthost unreachable in test_stop_request_next_server_after_reject

#### How to verify it
Pass all test case.

### Description for the changelog
Retry when duthost unreachable in test_stop_request_next_server_after_reject
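A generic version of the retry idea could be sketched as below; sonic-mgmt has its own retry helpers, so the decorator name and parameters here are illustrative assumptions:

```python
import time

# Generic retry sketch: re-run a flaky operation (e.g. a command against a
# temporarily unreachable duthost) a bounded number of times before failing.
def retry_on(exc, attempts=3, delay=0):
    def deco(fn):
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except exc:
                    if i == attempts - 1:
                        raise  # out of retries: surface the original failure
                    time.sleep(delay)
        return wrapper
    return deco
```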
…onic-net#14702)

What is the motivation for this PR?
In this PR, we add the telemetry test removed by sonic-net#14451. The issue has been fixed in sonic-net#14448.
At the same time, we remove two unnecessary scripts in PR test.
auspham and others added 25 commits October 31, 2024 18:55
Description of PR
Summary:
Fixes # (issue) 29752643

Approach
What is the motivation for this PR?
Currently log rotate for supervisor takes 1 to 2 minutes, with a maximum of 2 minutes on pc/test_lag_2.

Since log_rotate now runs as a function-scoped fixture, this adds up across all test cases. On a recent nightly run it added up to 2:03:58, slowing the run down by about 2 hours.

The reason for log rotating is documented in sonic-net#2161: to save space on 7060 devices. This change makes sure that for T2 devices we only rotate at module level instead of per function.

This will optimise the time from 2 hours to 2 minutes.

Details of the stats can be seen here for pc/test_lag_2

{
    "analyzer_logrotate_time": {
        "total": "2:03:58.135243",
        "average": "0:01:01.984460",
        "max": "0:02:00.298079",
        "min": "0:00:57.740233",
        "number of runs": 120
    },
    "analyzer_add_marker_time": {
        "total": "0:07:09.586112",
        "average": "0:00:03.579884",
        "max": "0:00:07.686170",
        "min": "0:00:02.248445",
        "number of runs": 120
    },
    "analyze_logs_time": {
        "total": "0:18:48.677592",
        "average": "0:00:11.880817",
        "max": "0:00:17.943104",
        "min": "0:00:06.689129",
        "number of runs": 95
    },
    "total_time": "2:29:56.398947",
    "longest_analyzer_logrotate_time": {
        "line": 8467,
        "time": "0:02:00.298079"
    },
    "longest_analyzer_add_marker_time": {
        "line": 10299,
        "time": "0:00:07.686170"
    },
    "longest_analyze_logs_time": {
        "line": 47906,
        "time": "0:00:17.943104"
    }
}
Breakdown of analyzer_logrotate_time in detail:

Command                                                              lc4-1     lc1-1     lc2-1     sup-1
/usr/sbin/logrotate -f /etc/logrotate.conf > /dev/null 2>&1  Start   22:08:46  22:08:46  22:08:46  22:08:47
                                                             End     22:09:02  22:09:02  22:09:07  22:10:43
sed -i 's/^#//g' /etc/cron.d/logrotate                       Start   22:09:02  22:09:02  22:09:07  22:10:43
                                                             End     22:09:03  22:09:03  22:09:07  22:10:44
systemctl start logrotate.timer                              Start   22:09:03  22:09:03  22:09:07  22:10:44
                                                             End     22:09:03  22:09:03  22:09:07  22:10:45

Everything completes around 22:10:45.

Everyone was waiting for sup-1, which goes from 22:08:44 -> 22:10:45, around 2 minutes. This is a reasonable speed.

The rest of the tasks start around 22:08:44 and finish by 22:09:03, about 19 seconds, but we have to wait for the supervisor to be done.

co-authorized by: [email protected]
* Update test_voq_ipfwd.py
Increase the ready timeout from 180 to 540

Change-Id: Ic07c4a2e2c504d78f4026af67d059a0884bbe5b7
…ic-net#14915)

Avoid subsequent script failures due to ports not being recovered

Change-Id: Id991980f4ce49c3bd6d1086f3df912575f1c3ac0
…m the host (sonic-net#13172)

Use lldp0/lldp1 instead of lldp, since container lldp was removed from the host
Various multi-asic fixes in lldp/test_lldp_syncd.py

Fix helper functions db_instance, get_lldp_entry_keys, get_lldp_entry_content, get_lldpctl_output to query both namespaces
test_lldp_entry_table_after_flap

Ignore routeCheck log errors as this test flaps ports and churns routes ignore_expected_loganalyzer_exceptions
Don't skip the test if it finds eth0 port, just continue
Add namespace to config interface commands
Increase delay from 5s to 10s as verify_lldp_entry was seeing the stale LLDP_ENTRY from before link flap and returning True
test_lldp_entry_table_after_lldp_restart

Query the per-namespace lldp docker when available
test_lldp_entry_table_after_reboot

wait until check_intf_up_ports=True when rebooting, otherwise the LLDP_ENTRY may flap when the ports come up
Description of PR
When parallel run is enabled, multiple processes may try to read/write the same cache file, so there will be a small chance that the file is being written by process 1 while process 2 is reading it, which will cause EOFError in process 2. In this case, we will retry reading the file in process 2. If we still get EOFError after some retry attempts, we will return NOTEXIST to overwrite the file.

In the meantime, we should also optimize how we initialize the DUT hosts when parallel run is enabled to reduce the chance of having such cache read issue.

Summary:
Fixes # (issue) Microsoft ADO 30031372

Approach
What is the motivation for this PR?
To prevent EOFError when reading cache file when parallel run is enabled.

How did you do it?
Add retry mechanism and optimize how DUT hosts are initialized when parallel is enabled.

How did you verify/test it?
I ran the updated code and can confirm parallel run is still working as expected.

co-authorized by: [email protected]
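A minimal sketch of the retry-then-overwrite behavior described above, assuming a pickle-based cache file; the `NOTEXIST` sentinel and function name are illustrative, not the actual sonic-mgmt code:

```python
import pickle
import time

NOTEXIST = object()  # sentinel meaning "treat the cache entry as absent" (assumed name)

# A reader that races with a concurrent writer may see a truncated pickle
# (EOFError); retry a few times, then give up and let the caller rewrite the
# file with fresh data.
def read_cache(path, attempts=3, delay=0.1):
    for i in range(attempts):
        try:
            with open(path, "rb") as f:
                return pickle.load(f)
        except (EOFError, FileNotFoundError):
            if i == attempts - 1:
                return NOTEXIST  # still unreadable: overwrite downstream
            time.sleep(delay)
```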
Description of PR
Summary:
Snappi reboot testcases don't save the config before rebooting the DUT. This causes the DUT to lose all the config, and the test fails with "ARP is not resolved" errors.

This PR addresses this issue by saving the config before any reboot. It also moves code that is reused in multiple tests to a common location.

co-authorized by: [email protected]
Description of PR
Summary:
In qos-sai-base, there is a docker0 ipv6 checking function, which fails if the DUT has no ipv6 address. The failure signature is as below:

14:50:01 __init__._fixture_generator_decorator    L0088 ERROR  | 
IndexError('list index out of range')
Traceback (most recent call last):
  File "/data/tests/common/plugins/log_section_start/__init__.py", line 84, in _fixture_generator_decorator
    res = next(it)
  File "/data/tests/qos/qos_sai_base.py", line 1824, in dut_disable_ipv6
    duthost.shell("sudo ip -6  addr show dev docker0 | grep global" + " | awk '{print $2}'")[
IndexError: list index out of range

Approach
What is the motivation for this PR?
The failure of the ipv6 disabling fixture.

How did you do it?
Check if the docker0 has ipv6 address or not.
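The guard described above can be sketched as pure parsing of the `ip -6 addr show dev docker0` output before any indexing; the helper name is an illustrative assumption:

```python
# Illustrative sketch: return the global-scope IPv6 addresses on an interface,
# so the caller can skip the disable step (instead of raising IndexError)
# when docker0 has no global IPv6 address.
def global_ipv6_addrs(ip_addr_show_output):
    addrs = []
    for line in ip_addr_show_output.splitlines():
        parts = line.split()
        # mirror `| grep global | awk '{print $2}'`, but safely
        if len(parts) >= 2 and parts[0] == "inet6" and "global" in parts:
            addrs.append(parts[1])
    return addrs
```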

How did you verify/test it?
Ran it on my TB:

----------------------------------------------------------------------------- live log sessionfinish -----------------------------------------------------------------------------
07:42:47 __init__.pytest_terminal_summary         L0067 INFO   | Can not get Allure report URL. Please check logs
============================================================================ short test summary info =============================================================================
PASSED qos/test_qos_sai.py::TestQosSai::testParameter[single_asic]
PASSED qos/test_qos_sai.py::TestQosSai::testParameter[single_dut_multi_asic]
PASSED qos/test_qos_sai.py::TestQosSai::testParameter[multi_dut_longlink_to_shortlink]
PASSED qos/test_qos_sai.py::TestQosSai::testParameter[multi_dut_shortlink_to_shortlink]
PASSED qos/test_qos_sai.py::TestQosSai::testParameter[multi_dut_shortlink_to_longlink]
=================================================================== 5 passed, 1 warning in 5246.68s (1:27:26) ====================================================================
sonic@202405-qos-sonic-mgmt-prod:/data/tests$ 

co-authorized by: [email protected]
…sonic-net#15315)

Signed-off-by: anamehra [email protected]

Description of PR
Fixes config reload -y <running_golden_config> for chassis sup by using num_asics to populate config db file list

Summary:
Fixes # (issue)

Approach
What is the motivation for this PR?
During some tests, config is restored via config reload using running_golden config files.

config reload -y expects all n+1 config files to be provided as input, but the sonic-mgmt script only includes the config files for the present asics.

The system had 10 asics but the max could be 16; the command included only 10+1 (global) config db files.

Before fix:

tc/sonic/running_golden_config0.json,/etc/sonic/running_golden_config1.json,/etc/sonic/running_golden_config4.json,/etc/sonic/running_golden_config5.json,/etc/sonic/running_golden_config8.json,/etc/sonic/running_golden_config9.json,/etc/sonic/running_golden_config10.json,/etc/sonic/running_golden_config11.json,/etc/sonic/running_golden_config12.json,/etc/sonic/running_golden_config13.json &>/dev/null _uses_shell=True warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
How did you do it?
Use num_asics for the DUT host and populate the CLI args list with running_golden_config db file path for each possible asic, present or absent.
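The fix described above amounts to generating one file path per possible ASIC; a minimal sketch (the paths mirror the logs quoted in this PR, the helper name is an assumption):

```python
# Illustrative sketch (not the actual sonic-mgmt code): build the
# `config reload -l` argument from the platform's max ASIC count so every
# possible ASIC's config file is listed, present or absent.
def golden_config_file_list(num_asics):
    files = ["/etc/sonic/running_golden_config.json"]  # global config first
    files += ["/etc/sonic/running_golden_config{}.json".format(i)
              for i in range(num_asics)]
    return ",".join(files)
```

For a 3-ASIC line card this yields the 3+1 file list seen in the LC log below.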

How did you verify/test it?
Ran the sonic-mgmt test suites

After fix:

Config reload on RP with max 16 asics, 10 present:

2024 Oct 30 20:45:56.831628 sfd-t2-sup INFO python[3758349]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=config reload -y -f -l /etc/sonic/running_golden_config.json,/etc/sonic/running_golden_config0.json,/etc/sonic/running_golden_config1.json,/etc/sonic/running_golden_config2.json,/etc/sonic/running_golden_config3.json,/etc/sonic/running_golden_config4.json,/etc/sonic/running_golden_config5.json,/etc/sonic/running_golden_config6.json,/etc/sonic/running_golden_config7.json,/etc/sonic/running_golden_config8.json,/etc/sonic/running_golden_config9.json,/etc/sonic/running_golden_config10.json,/etc/sonic/running_golden_config11.json,/etc/sonic/running_golden_config12.json,/etc/sonic/running_golden_config13.json,/etc/sonic/running_golden_config14.json,/etc/sonic/running_golden_config15.json &>/dev/null _uses_shell=True warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
2024 Oct 30 20:46:01.770294 sfd-t2-sup NOTICE CCmisApi: 'reload' executing with command: config reload -y -f -l /etc/sonic/running_golden_config.json,/etc/sonic/running_golden_config0.json,/etc/sonic/running_golden_config1.json,/etc/sonic/running_golden_config2.json,/etc/sonic/running_golden_config3.json,/etc/sonic/running_golden_config4.json,/etc/sonic/running_golden_config5.json,/etc/sonic/running_golden_config6.json,/etc/sonic/running_golden_config7.json,/etc/sonic/running_golden_config8.json,/etc/sonic/running_golden_config9.json,/etc/sonic/running_golden_config10.json,/etc/sonic/running_golden_config11.json,/etc/sonic/running_golden_config12.json,/etc/sonic/running_golden_config13.json,/etc/sonic/running_golden_config14.json,/etc/sonic/running_golden_config15.json
Config reload on LC with 3 asics:

2024 Oct 30 20:45:53.282867 sfd-t2-lc0 INFO python[86920]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=config reload -y -f -l /etc/sonic/running_golden_config.json,/etc/sonic/running_golden_config0.json,/etc/sonic/running_golden_config1.json,/etc/sonic/running_golden_config2.json &>/dev/null _uses_shell=True warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
2024 Oct 30 20:45:54.919206 sfd-t2-lc0 NOTICE CCmisApi: 'reload' executing with command: config reload -y -f -l /etc/sonic/running_golden_config.json,/etc/sonic/running_golden_config0.json,/etc/sonic/running_golden_config1.json,/etc/sonic/running_golden_config2.json
2024 Oct 30 20:45:54.919305 sfd-t2-lc0 NOTICE CCmisApi: 'reload' stopping services...
Any platform specific information?
Chassis Supervisor

Signed-off-by: anamehra [email protected]
What is the motivation for this PR?
Have experienced issues in the reboot tests and were unable to diagnose due to a lack of information in the warm-reboot sequence.

How did you do it?
Copied across the logs that were already preserved on the device across reboots.

How did you verify/test it?
Tested internally on A->B and multi-hop upgrade scenarios.
Description of PR
Summary: Fixing test gap on test_mgmt_ipv6
Fixes # (issue) 28836766

Approach
What is the motivation for this PR?
Currently test_mgmt_ipv6 is having the following skipped tests:

SKIPPED [2] ip/test_mgmt_ipv6_only.py:122: DUT has no default route, skiped
SKIPPED [1] ip/test_mgmt_ipv6_only.py:192: Skipping test as no Ethernet0 frontpanel port on supervisor

"DUT has no default route, skiped" was because the ipv6 default gateway becomes stale from not using it.

Solution: add a ping and check that the gateway is still reachable after the ping.
"Skipping test as no Ethernet0 frontpanel port on supervisor" was because the test is meant to be for supervisor:

Solution: only use the enumeration of frontend nodes. This avoids someone in the future having to investigate why this test is skipped; we should only call the fixture we want to test.

Signed-off-by: Austin Pham <[email protected]>
This T2 Chassis test plan for BGP FIB suppress pending feature is an extension of the BGP Suppress FIB Pending Test Plan added for T1 DUT at plan
…-net#15268)

Description of PR
Summary:
Add cEOS neighbor support for test_4-byte_asn_community.py. The current test case only supports SONiC neighbors; this PR will improve coverage with more neighbor types.

Future works to improve, not in this PR:

Implement SONiC-specific BGP operations in class SonicBGPRouter, and change the current command-check method to class methods for BGP configuration/verification.

Approach
What is the motivation for this PR?
Improve test coverage by adding cEOS support to the test case.

How did you do it?
Defined a base class for common BGP operations (class BGPRouter(ABC)) and implemented platform-specific BGP operations for cEOS neighbors.

How did you verify/test it?
bgp/test_4-byte_asn_community.py::test_4_byte_asn_community[vlab-01-None] PASSED [100%]

co-authorized by: [email protected]
…nic-net#14662)

What is the motivation for this PR?
It is possible for kernel neighbor information to fall out of sync with the APPL_DB neighbor table.

How did you do it?
Manually change the APPL_DB neighbor table entry, then verify that the arp_update script is able to flush the out of sync neighbor.
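The verification idea reduces to diffing the two neighbor views; a tiny illustrative helper (names assumed, not the PR's code):

```python
# Illustrative: neighbors present in one view but not the other indicate the
# kernel and APPL_DB neighbor tables are out of sync and need a flush by the
# arp_update script.
def out_of_sync_neighbors(kernel_neighbors, appl_db_neighbors):
    return set(kernel_neighbors) ^ set(appl_db_neighbors)
```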
What is the motivation for this PR?
How did you do it?
Create the BGP profile.
Create VNET routes.
Check neighbor BGP routes to verify the advertisements.
The following tests are performed for Both V4 and V6 routes.

Step | Goal | Expected results
Create a tunnel route and advertise it to all neighbors without community id | BGP | All BGP neighbors can receive the advertised BGP routes
Create a tunnel route and advertise it to all neighbors with community id | BGP | All BGP neighbors can receive the advertised BGP routes with community id
Update a tunnel route and advertise it to all neighbors with a new community id | BGP | All BGP neighbors can receive the advertised BGP routes with the new community id
Create a tunnel route and advertise it to all neighbors with a BGP profile, but create the profile later | BGP | All BGP neighbors receive the advertised BGP routes without community id first; after the profile table is created, the community id is added and all BGP neighbors receive this update and associate the community id with the route
Delete a tunnel route | BGP | All BGP neighbors can remove the previously advertised BGP routes
Create 400 tunnel routes and advertise all of them to all neighbors with community id | BGP scale | All BGP neighbors can receive the 400 advertised BGP routes with community id; record the time
Update BGP_PROFILE_TABLE with a new community id for the 400 tunnel routes and advertise all of them to all neighbors with the new community id | BGP scale | All BGP neighbors can receive the 400 advertised BGP routes with the new community id; record the time
How did you verify/test it?

Any platform specific information?
These scale tests are set to run with 400 routes. Although I have run these tests with 4k routes without any problem, that takes the test run time to around 40 minutes.

Supported testbed topology if it's a new test case?
T1 Cisco, T1 Mlnx, VS
Add a new T1 topology with 224 VMs simulating downstream T0 neighbors and 8 VMs simulating upstream T2 neighbors.

Signed-off-by: Janetxxx <[email protected]>
…13785)

What is the motivation for this PR?
These changes are used for the phoenix wing initiative, to provide
1. 5-node testbed for running existing test cases in sonic-mgmt
2. 7-node testbed for running srv6 test cases.
The difference for this testbed is the use of Cisco's ngdp as dataplane simulation. This type of vsonic allows us to simulate both control plane and data plane in a virtual testing environment.

How did you do it?
Based on Test doc sonic-net#13645

How did you verify/test it?
Both sanity test cases are running daily for phoenix wing.

Any platform specific information?
cisco-8101-p4-32x100-vs

Supported testbed topology if it's a new test case?
5-node and 7-node testbed listed in testplan sonic-net#13645.

Documentation
sonic-net#13645
…ask (sonic-net#15300)

What is the motivation for this PR?
Enhance elastictest template, use bash script instead of azcli task, improve and fix azlogin and get token when requesting APIs.

How did you do it?
Enhance elastictest template, use bash script instead of azcli task.
Improve and fix azlogin and get token when requesting APIs.
Every time we need a token to access the Elastictest API, re-run az login and az get token.
For test plan create/cancel, only call once. For test plan poll, call once first; then during polling, if the token has expired and responses are not valid, re-run az login.

How did you verify/test it?
Because the PR test downloads test_plan.py from the master branch, the change to test_plan.py will not take effect in this PR's test run. Raised a draft PR to test with the updated test_plan.py:
sonic-net#15316

Signed-off-by: Chun'ang Li <[email protected]>
If the test server is Ubuntu 22.04, SoC IP ARP flux is observed on the
DUTs for dualtor-aa testbed.

Let's enable `arp_filter` to prevent this.

Signed-off-by: Longxiang Lyu <[email protected]>
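The mitigation above is a per-interface sysctl; the tiny illustrative helper below (interface list and helper name are assumptions, not the PR's code) shows the commands that would be applied on the test server:

```python
# Illustrative: arp_filter=1 makes the kernel answer ARP only on the interface
# that actually owns the target address, preventing the SoC IP ARP flux seen
# on Ubuntu 22.04 test servers.
def arp_filter_cmds(interfaces):
    return ["sysctl -w net.ipv4.conf.{}.arp_filter=1".format(ifname)
            for ifname in interfaces]
```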
… azcli t…" (sonic-net#15339)

This reverts commit 31112c0.
Creating test plan failed. Revert this change.
@linux-foundation-easycla

linux-foundation-easycla bot commented Nov 4, 2024

CLA Signed


The committers listed above are authorized under a signed CLA.

@jerrychuWis jerrychuWis marked this pull request as draft November 5, 2024 02:18
@jerrychuWis jerrychuWis changed the base branch from master to 202405 November 5, 2024 02:22
@jerrychuWis jerrychuWis closed this Nov 5, 2024
@jerrychuWis jerrychuWis deleted the wistron_6512_32r branch November 6, 2024 08:37