Huawei HCPA-IP


Forwarding capacity

Forwarding capacity is measured against the minimum packet length that can be processed. For example, the minimum Ethernet packet is 64 bytes, plus 20 bytes of per-frame overhead (preamble and interframe gap), for a total of 84 bytes. For a full-duplex interface at 1 Gbit/s, the wire-speed forwarding capacity is 1 Gbit/s/((64+20)*8 bits) = 1.488 Mpps. For a full-duplex interface at 100 Mbit/s, it is 100 Mbit/s/((64+20)*8 bits) ≈ 0.149 Mpps.
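A quick sanity check of these figures (a minimal sketch; the 20-byte overhead is the standard 8-byte preamble plus 12-byte interframe gap):

```python
def wire_speed_mpps(rate_bps, min_frame_bytes=64, overhead_bytes=20):
    """Wire-speed forwarding capacity in Mpps for minimum-size frames."""
    bits_per_frame = (min_frame_bytes + overhead_bytes) * 8  # 84 bytes = 672 bits
    return rate_bps / bits_per_frame / 1e6

print(wire_speed_mpps(1e9))    # ~1.488 Mpps at 1 Gbit/s
print(wire_speed_mpps(100e6))  # ~0.149 Mpps at 100 Mbit/s
```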

Switching capacity

It is also called backplane bandwidth or switching bandwidth. The switching capacity is the maximum amount of data that can be processed between the interface processors of a switch and the data bus. The backplane bandwidth indicates the overall data switching capability of a switch, in Gbit/s. The higher the switching capacity of a switch, the more powerful it is at processing data, but the design is also more costly. For full-duplex switching without congestion, twice the product of the interface capacity and the number of interfaces (to count both directions) should be less than the switching capacity.

Backplane capacity = Number of SerDes links between LPUs and SFUs x Rate of each SerDes link

The backplane capacity is calculated as 7.2 Tbit/s = [2 x (9 x 4 x 16)] x 6.25 Gbit/s.


• The value 2 indicates the bidirectional (receiving and sending) capacity.
• The value 9 indicates the number of SerDes links connecting each LPU and SFU.
• The value 4 indicates the number of SFUs.
• The value 16 indicates the number of LPUs.
• The value 6.25 Gbit/s indicates the rate of each SerDes link.
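The same arithmetic as a one-line check (a minimal sketch of the formula above):

```python
# Backplane capacity = 2 (directions) x (9 SerDes links x 4 SFUs x 16 LPUs) x 6.25 Gbit/s
links = 9 * 4 * 16                     # SerDes links between LPUs and SFUs
capacity_gbps = 2 * links * 6.25       # bidirectional (receiving and sending)
print(capacity_gbps / 1000, "Tbit/s")  # 7.2 Tbit/s
```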

FCoE (Fibre Channel over Ethernet)

A Converged Network Adapter (CNA) is a single network interface card that contains both a Fibre Channel (FC) host bus adapter (HBA) and a TCP/IP Ethernet NIC.

Different VLANs are used for the data and storage traffic, which are then trunked from the CNA up to the switch.

As previously mentioned, FCoE is literally just an encapsulation of an FC frame within an Ethernet frame. There is a one-to-one relationship between FC frames and Ethernet frames; FC frames are never segmented and sent across multiple Ethernet frames.

The maximum size of an FCoE frame is 2180 bytes. To accommodate this, the network must support an MTU of up to 2.5 KB (also known as baby jumbo frames).
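An illustrative breakdown of where the 2180-byte figure comes from (a sketch only; it assumes the standard 2148-byte maximum FC frame, and the exact accounting varies with how SOF/EOF and the Ethernet FCS are counted):

```python
# One way to account for the 2180-byte maximum FCoE frame (approximate)
fc_frame_max   = 2148  # max native FC frame: 24-byte header + 2112-byte payload + CRC + framing
encap_overhead = 32    # Ethernet header + 802.1Q tag + FCoE header fields
print(fc_frame_max + encap_overhead)  # 2180 bytes -> needs a "baby jumbo" MTU
```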

A Fibre Channel over Ethernet (FCoE)-Fibre Channel (FC) gateway connects FCoE devices on an Ethernet
network to an FC switch in an FC storage area network (SAN) as shown in Figure 1. To FCoE devices such as
servers, the FCoE-FC gateway presents virtual fabric ports (VF_Ports) and appears to be an FCoE forwarder
(FCF). To the FC switch, the FCoE-FC gateway presents a proxy node port (NP_Port) and appears to be an FC
device.

The FCoE-FC gateway handles FCoE Initialization Protocol (FIP) and FCoE traffic on the interfaces connected
to FCoE devices. The gateway forwards native FC traffic on the interfaces to the FC switch. The gateway does
not provide FC services (such as fabric login server or name server). It is a proxy for an FCF, not an FCF or an
FC switch. The gateway transparently substitutes for the FC switch when communicating with FCoE devices
and transparently substitutes for FCoE devices when communicating with the FC switch.
The gateway does not use an FC domain ID, so it extends the SAN fabric while saving domain resources.
Using the gateway also means that the FC switch does not have to handle FCoE traffic (and therefore requires
no FCoE blades or ports). The gateway converges Ethernet and FC backbones to leverage existing resources.

1. Host1 sends FCoE frames destined for Host2 to the gateway.


2. The gateway de-encapsulates the FCoE frames from Host1 into native FC frames and switches them
to the FC switch.
3. The FC switch processes the native FC frames and sends them back to the gateway destined
for Host2.
4. The gateway encapsulates the FC frames in Ethernet and sends the resulting FCoE frames to Host2.
5. When Host2 replies, the FCoE reply goes to the gateway. The gateway de-encapsulates the reply and
switches it to the FC switch for processing. The FC switch then sends it back to the gateway, which
encapsulates the FC frames and sends them to Host1.

Fibre Channel SAN Part 1 – FCP and WWPN Addressing

Fibre Channel and FCoE Comparison

To understand FCoE, it is essential to understand that FCoE uses FCP, the Fibre Channel Protocol: the exact same Fibre Channel Protocol that is used with native Fibre Channel. FCoE works exactly the same way as native Fibre Channel. We still have WWPNs on the initiator and target, and we still use the FLOGI, PLOGI, and PRLI login process. The difference is that we encapsulate that same FCP traffic inside an Ethernet header so we can run it over an Ethernet network, rather than over a native Fibre Channel network.

SAN Protocol Stack Comparison

FCoE retains the reliability and performance of native Fibre Channel. Quality of Service (QoS) is used to
guarantee the required bandwidth for the storage traffic. It ensures the normal data traffic does not hog too
much bandwidth.

One of the main benefits we get from FCoE is the savings in our network infrastructure. With redundant native Fibre Channel storage and Ethernet data networks, there are 4 adapters on our hosts and 4 cables connected to 4 switches.

Native Fibre Channel


With FCoE, we run the data and storage traffic through shared switches and shared ports on our hosts.
Now we just have 2 adapters, 2 cables, and 2 switches. The required infrastructure is cut in half. We save
on the hardware costs because of this, and also require less rack space, less power, and less cooling, which
gives us more savings.

Fibre Channel over Ethernet

FCoE Networks

In FCoE, both the storage and the data traffic use the same shared physical interface on our hosts: the Converged Network Adapter (CNA). The CNA replaces the traditional Network Interface Card (NIC) in the host. (See the SAN and NAS Adapter Card Types post for a quick review.)

The storage traffic uses FCP, so it requires a WWPN. The data traffic requires a MAC address. Ethernet data traffic and FCP storage traffic work in totally different ways, so how can we support them both on the same physical interface? The answer is that we virtualize the physical interface into two virtual interfaces: a virtual NIC with a MAC address for the data traffic and a virtual HBA with a WWPN for the storage traffic. The storage and the data traffic are split into two different VLANs, a data VLAN and a storage VLAN.

In the diagram below we have a single server, Server 1. It’s got two physical interface cards, CNA1 and CNA2.
Both CNAs are split into separate virtual adapters for data and storage.

For the data traffic, we've got virtual NIC-1 on CNA1 and virtual NIC-2 on CNA2. Those virtual NICs will both have MAC addresses assigned to them. On the switches we're trunking the data VLAN down to the physical port on the CNA. We cross-connect our switches for the data VLAN traffic.

FCoE Data VLAN

We also have virtual HBAs on the CNAs: virtual HBA-1 on CNA1 and virtual HBA-2 on CNA2. We have WWPNs on our virtual HBAs. We're trunking the storage VLAN down from the switches to the virtual HBAs on the converged network adapters.

This time we do not cross-connect our switches, because we need to comply with the SAN best practice of physically separate Fabric A and Fabric B.

FCoE Storage VLAN

If we put the whole thing together, you can see that we're running both our data and our storage traffic over the same shared infrastructure. We're trunking both the data and the storage VLANs down to single physical ports on our CNAs, and then that traffic is split out into a virtual NIC for the data traffic with our MAC address on it, and a virtual HBA for the storage traffic with a WWPN on it.

FCoE Network

Another thing we need to discuss is lossless FCoE. Fibre Channel is a lossless protocol, meaning it ensures that no frames are lost in transit between the initiator and the target. It uses buffer-to-buffer credits to do this.

Ethernet is not lossless. TCP uses acknowledgements from the receiver back to the sender to check that
traffic reaches its destination. If an acknowledgement is not received then the sender will resend that packet.

FCoE uses FCP, which assumes a lossless network, so we need a way to ensure our storage packets are not lost while traversing the Ethernet network. Priority Flow Control (PFC), one of the Data Center Bridging enhancements to Ethernet, is used to ensure that lossless delivery. PFC works on a hop-by-hop rather than end-to-end basis, so it is not enough to support it only on the end hosts: each NIC and switch in the path between initiator and target must be FCoE capable. You can't use standard 10Gb Ethernet NICs and switches; you need CNAs, and the switches have to be FCoE capable.
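A toy model of that hop-by-hop behavior (a minimal sketch, not a real PFC implementation; real PFC uses per-priority PAUSE frames defined by IEEE 802.1Qbb):

```python
# Toy hop-by-hop backpressure: when a hop's buffer crosses a threshold,
# it pauses its upstream neighbor instead of dropping frames.
class Hop:
    def __init__(self, name, threshold=8):
        self.name, self.buffer, self.threshold = name, [], threshold
        self.upstream_paused = False

    def receive(self, frame):
        self.buffer.append(frame)
        if len(self.buffer) >= self.threshold:
            self.upstream_paused = True  # emit PAUSE for the storage priority
            print(f"{self.name}: buffer high, pausing upstream sender")

switch = Hop("fcoe-switch")
for i in range(9):
    if not switch.upstream_paused:  # the sender honors the pause, so no frame is dropped
        switch.receive(f"frame-{i}")
```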

Wireless

Wave 1 products have been in use in the market for about 2.5 years. Wave 2 builds upon Wave 1 with
some very significant enhancements:

• Supports speeds of up to 2.34 Gbps (up from 1.3 Gbps) in the 5 GHz band
• Supports multiuser multiple input, multiple output (MU-MIMO)
• Offers the option of using 160-MHz-wide channels for greater performance
• Offers the option of using a fourth spatial stream for greater performance
• Can run in additional 5-GHz bands around the world

Fig 1.2 - Wave 1 vs. Wave 2



[Embedded media: D-Link Smart Antenna Technology]

#1: What are the key differences between 802.11ac (Wi-Fi 5) and 802.11ax (Wi-Fi 6)?

• Uplink MIMO: 802.11ac supports multiuser MIMO, but only in downlink mode. In contrast,
802.11ax adds uplink capability, so multiple users can upload video simultaneously.
• Modulation: 802.11ax has a higher modulation scheme, moving from 256 QAM to 1024 QAM,
which translates to better throughput and 25% higher capacity with 10 bits per symbol.
• Capacity and efficiency improvements: 802.11ax uses OFDMA instead of OFDM, which allows FDD-like operation in addition to TDD, as well as resource unit allocation within a given bandwidth. Subcarrier spacing is also reduced to 78.125 kHz, 25% of the 802.11ac spacing, and the symbols are 4 times longer (see the sketch after this list). Combined, these changes make the system more efficient and able to upload or download multiple data packets simultaneously, rather than one at a time.
• Schedule-based rather than contention-based: In 802.11ax, the access point dictates when a
device will operate, thus handling clients more efficiently. Resource scheduling also significantly
reduces the power consumption during sleep time, which improves battery life for clients.
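The numerology behind the OFDMA bullet, checked numerically (a minimal sketch; 802.11ac uses 312.5 kHz subcarrier spacing with 3.2 µs symbols):

```python
ac_spacing_khz, ac_symbol_us = 312.5, 3.2  # 802.11ac OFDM numerology
ax_spacing_khz = ac_spacing_khz / 4        # 78.125 kHz in 802.11ax
ax_symbol_us = ac_symbol_us * 4            # 12.8 us: symbol time scales as 1/spacing
print(ax_spacing_khz, ax_symbol_us)        # 78.125 12.8
print(ax_spacing_khz / ac_spacing_khz)     # 0.25 -> "25% of 802.11ac spacing"
```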

#2: Wi-Fi 5 (802.11ac) promises 6.9 Gbps but public Wi-Fi doesn't attain these speeds. Will Wi-Fi 6
(802.11ax) remedy this problem?

6.9 Gbps just isn't possible in a home or public Wi-Fi network. We will never see the theoretical speeds
listed on the box of a router on the shelves at Best Buy, Walmart or other big-box stores.

The most limiting factor at home is the connection from the internet provider — the pipe coming into the
home for internet access. If a router can support 1.6 Gbps but the connection to the home is only
100 Mbps, a client will never realize that higher speed for downloads from the wide area network (WAN).

The data stream into a home and the access point will establish the initial internet bandwidth benchmark.
From there, other factors can slow the network speed:

• Distance between the client and the access point


• Interference from other clients on the same frequency
• Inherent Wi-Fi overhead for acknowledgments, transmit, and clear channel assessments

A Wi-Fi 6 access point will provide a more efficient environment, mitigating the problems of Wi-Fi
overhead — in other words, the fixed "cost" associated with the communication that isn’t a part of the
data transmission. 11ax will attack the overhead differently, scheduling when a device operates and
handling the information and clients more efficiently.

Additionally, advanced filtering techniques enable better coexistence and bandedge performance. This has
two effects:

• Allows a broader spectrum to operate at full power


• Improves the quality of service and range in the once-limited bandedge channels

Altogether, improved efficiency and filtering will help 802.11ax achieve faster speeds than 11ac.

#3: How does OFDMA create a more efficient payload delivery system?

802.11a up to 802.11ac use OFDM, or orthogonal frequency-division multiplexing, to deliver Wi-Fi data
packets. Under OFDM, a device uses a fixed 20 MHz or 40 MHz of bandwidth to deliver the packets,
regardless of whether it's transmitting video or just sending a simple text message over a Wi-Fi network.

Wi-Fi 6 (802.11ax), however, uses OFDMA, or orthogonal frequency-division multiple access, which uses resource units (RUs) to divide the bandwidth according to the needs of each client, providing multiple users the same experience at faster speeds.

A simple analogy using trucks can illustrate the difference, as shown in the image below. Each truck is hauling a payload, or user data: one surfing the web, another uploading video from a soccer game, and a third sending a text message, for instance. Under OFDM, a device has to use three trucks of the same size to send the data, regardless of how empty or full each truck is. In other words, OFDM uses the bandwidth inefficiently, leaving a lot of empty space. OFDMA, in contrast, allows a device to fill an entire truck with RUs (i.e., data), a payload delivery model that uses the bandwidth much more efficiently.

#4: Wi-Fi 6 (802.11ax) supports 1024 QAM. What are the impacts of this higher modulation scheme?

With 1024 QAM modulation, there are more bits per symbol: 10 bits per symbol versus 8 bits in 256 QAM. More bits means more data, and the payload delivery of data is more efficient, like having a bigger truck.
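The bit counts follow directly from the constellation sizes (a one-line check):

```python
from math import log2

print(log2(256), log2(1024))       # 8.0 10.0 bits per symbol
print(log2(1024) / log2(256) - 1)  # 0.25 -> the "25% higher capacity" figure
```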

At the same time, OFDMA decreases the spacing between the subcarriers, packing even more resource units into the truck, so to speak.

But as the data rate increases, error vector magnitude (EVM) on the RF front end becomes paramount. With 1024 QAM in Wi-Fi 6 (802.11ax), the constellation is so dense that the system must distinguish each point from its neighbors. It takes a very sophisticated system to decode (or demodulate) these constellation points, and it requires devices to have better EVM.

Wi-Fi 5 (802.11ac) requires -35 dB PA EVM, while Wi-Fi 6 (802.11ax) requires -47 dB PA EVM. Higher modulation schemes require better EVM so that a device can attain the higher efficiencies of the data packet.
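For intuition, those dB figures translate into error vector magnitude as a percentage of the signal (a minimal sketch using the standard EVM(%) = 100 * 10^(EVM_dB/20) conversion):

```python
def evm_percent(evm_db):
    return 100 * 10 ** (evm_db / 20)

print(round(evm_percent(-35), 2))  # ~1.78% for Wi-Fi 5
print(round(evm_percent(-47), 2))  # ~0.45% for Wi-Fi 6
```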

#5: What are the differences for Wi-Fi 6 for a handset or other client versus access points?

As already said, Wi-Fi 6 (802.11ax) pushes the EVM requirements down to -47 dB, but access points and
clients still have to meet the same spec. There’s no difference.

Power levels can be very different, though. Access points or customer premises equipment (CPE) typically operate at much higher power than a client: 24 dBm versus 14-20 dBm for a mobile handset. Ultimately, a lot more power means a lot more heat that has to be dissipated, so an access-point connectivity solution may have more stringent thermal requirements than a mobile solution.

Fit AP – The unit is controlled by either a cloud-based controller or a physical WLAN controller. All wireless settings are configured on the controller and pushed down to the various access points.

Fat AP – The unit can operate as a standalone unit without the need for a controller. All settings are configured on the unit itself.

ZigBee is a specification for a suite of high-level communication protocols. It is used to create PANs (Personal Area Networks) built from small, low-power radios. It is based on the IEEE 802.15.4 standard and is also related to several other wireless protocols. IEEE 802.15.4 specifies the physical layer and media access control for LR-WPANs (Low-Rate Wireless Personal Area Networks). RFID is a part of AIDC (Automatic Identification and Data Capture). It is a wireless system in which data is transmitted with the help of radio waves, and it is mostly used for tracking objects via RFID tags.

ZigBee vs. RFID

Unlike RFID, ZigBee requires line-of-sight. It is used for low-rate data applications that require a long
battery life. Its data transmission range is limited to 10-100 meters. ZigBee is the ideal choice for
equipment that requires low-rate wireless data transfer.

RFID, on the other hand, has a greater data transmission range than ZigBee. It is used for tracking objects on the assembly line and has been credited with revolutionizing object tracking systems.

Telemetry is a technology developed to quickly collect data from physical or virtual devices remotely. Devices use push mode to proactively send their data, such as interface traffic statistics, CPU usage, and memory usage, to the collector at a specific interval. In the conventional pull mode, devices interact with the collector in a question-and-answer fashion. Push mode enables real-time, rapid data collection.
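A toy contrast of the two modes (a minimal sketch; the device, collector, and counter names are hypothetical):

```python
import random, time

class Device:
    def read_counters(self):
        return {"if_octets": random.randint(0, 10**6), "cpu_pct": random.randint(1, 99)}

class Collector:
    def ingest(self, sample):
        print("got", sample)

def pull_once(collector, device):
    collector.ingest(device.read_counters())  # pull: collector asks, device answers

def push_loop(device, collector, interval_s=1.0, samples=3):
    for _ in range(samples):                  # push: device streams on its own timer
        collector.ingest(device.read_counters())
        time.sleep(interval_s)

push_loop(Device(), Collector())
```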

MLAG vs. Stacking: Pros and Cons

Here we take a review of some of the pros and cons of MLAG and stacking, helping you to
understand the benefits and limitations of each technology.

Pros of MLAG

1. For MLAG, traffic is more evenly distributed to each of the switches through the use of LAG hashing (see the sketch after this list), and each switch can independently forward/route traffic without passing it to a master switch.
2. MLAG can simply bundle more links into the LAGs to increase bandwidth, north-south as well as east-west.
3. MLAG offers more stability than stacking since it has dual management and control planes.
4. MLAG is more suitable for switches that are geographically separate; when stacking remotely separated switches, the likelihood of errors increases with distance.
5. MLAG allows upgrading one switch at a time without affecting service. Besides, it can expand port capacity beyond the limits of stacking: simply add another switch east or west by creating another MLAG to it.
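How LAG hashing spreads flows across member links, in miniature (a minimal sketch; real switches hash packet header fields in hardware with a deterministic function, and the 5-tuples here are made up):

```python
# Pick a LAG member link by hashing the flow 5-tuple: every packet of a flow
# takes the same link, while different flows spread across the members.
def pick_member(five_tuple, n_members):
    return hash(five_tuple) % n_members

flows = [("10.0.0.1", "10.0.1.9", 6, 49152 + i, 443) for i in range(6)]
for f in flows:
    print("src port", f[3], "-> member link", pick_member(f, 2))
```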

Cons of MLAG

1. MLAG is more complex to configure.


2. You have to configure/maintain each switch individually when using MLAG.
3. Current MLAG doesn't support spanning-tree.

Pros of Stacking

1. Stacking is much simpler and easier to configure and manage.


2. Possibly easier to add more ports by adding an additional switch to the stack.
3. Makes sense to be used at the edge of smaller sites where the control plane
services are not required for the full functioning of the network.

Cons of Stacking

1. Limited in the number of switches that can be added to the stack or bond. Not able to add more bandwidth to the stack (but you can for bonding).
2. Single control and management plane.
3. More inter-switch communication, as opposed to the ISC for MLAG.

MLAG vs. Stacking: Which to Choose?

MLAG is useful for presenting diverse physical paths to hosts, and it also allows you to do software upgrades of the core. It can be an efficient tool to eliminate Layer 2 links blocked by spanning tree. MLAG can be used at various places in the network to eliminate bottlenecks and provide resiliency: at the leaf layer it offers active/active redundant connections from the server to the switches, while at the spine layer it greatly enhances Layer 2 scalability without increasing the cost. So if you need redundant Layer 2 connections and access to large portions of bandwidth, or your application servers require multi-path fabric architectures, MLAG would be the better design.

Stacking is a great fit for limited-space deployments where flexibility trumps availability. As a pay-as-you-grow model, switch stacking is attractive for users that need flexibility in their physical network. However, the connecting distance is limited by the length of the stacking cable, often to within a wiring closet. So if you have a small site where configuration simplicity matters and bandwidth distribution to switches is less of a concern, or your switches are in close proximity to each other, stacking could be your choice.


Introduction to Segment Routing

What is it?
It is a new technology that brings benefits to IP and MPLS networks: FRR protection for any topology, simpler operation, and better scalability. For future SDN services it provides quicker interaction with the applications.

How does it work?
Like MPLS, Segment Routing is based on label switching, but with no extra protocol: just extensions to the IGP (IS-IS/OSPF). Labels are called segments, and we have the traditional Push, Swap, and Pop actions. There are two types of segments, nodal and adjacency, where a segment identifies a prefix:
• A Nodal Segment identifies the node, specifically the prefix of its loopback interface; it is globally significant, so it must be unique among nodes.
• An Adjacency Segment represents a local segment (interface) of a specific SR node; it is locally significant (it does not have to be unique among nodes).

Figure 1:
In this example (Fig. 1) all links have the same metric. IS-IS or OSPF automatically builds segments, where the nodal segment uses the shortest path to the related node and the adjacency segment is one hop through the related interface:
• Each node advertises its global label with its loopback address (an IS-IS sub-TLV or OSPF Opaque sub-TLV extension), and the other nodes install the nodal segment in the Segment Routing data plane. Here B advertises 70, and nodes S and R use 70 to reach B.
• F allocates a local segment 10000 for its link B-F, then advertises the adjacency segment in the IGP, but only B installs it in the data plane.

The operator allocates a Segment Routing block [n, m], then allocates to each node a global label (nodal segment) from the SR block. The adjacency segment is outside the SR block and is automatically allocated by each node.


With this in mind, you can clearly see that with a nodal segment attached to a prefix, you will reach it via a shortest path, which may or may not be ECMP, depending on the IGP topology.

Let’s see two examples in detail.


Example 1 (Fig. 2): Node R advertises segment 70 to all other nodes. If S wants to reach R, it uses segment 70 (source routing). The path inherits the ECMP behavior of the IGP: nodes A and C swap segment 70 with 70, and nodes B and D pop the label (PHP) and transmit the packet to the destination R.

Figure 2:

With an adjacency segment (Fig. 3) you can steer the traffic through a specific interface/segment. Say S wants to reach R (nodal segment 70) but through the B-D segment/link: we use at the source the path {71, 10000, 70}. At node A we pop the first label, 71, which corresponds to B's loopback (nodal segment); then B forces the traffic to D with the adjacency segment 10000, and we use the IGP path to reach the destination.
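The label-stack processing from Fig. 3, traced step by step (a minimal sketch; the node names and segment values follow the example above):

```python
# The active segment is the top of the stack. Nodal segments follow the IGP
# shortest path; adjacency segments force a specific link.
path = [71, 10000, 70]  # pushed at source S: to B, then link B-D, then to R

steps = [
    ("A pops 71 (PHP for B's nodal segment)", path[1:]),          # packet arrives at B
    ("B pops 10000 (adjacency), forwards on link B-D", path[2:]),
    ("D pops 70 (PHP for R's nodal segment)", []),                # delivered to R
]
for action, remaining in steps:
    print(action, "-> stack:", remaining)
```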

Figure 3:

For the MPLS data plane, the segment is 32 bits in size, with the 20 right-most bits encoded as a label. As in classical MPLS, a stack of labels represents a list of segments in SR; the active segment is the top label. The transport can be IPv4 or IPv6 and can co-exist with an LDP or RSVP control plane.

Use Cases

• FRR protection in any topology, especially where remote LFA has no PQ node. It is simple, as there is no extra computation (T-LDP): just put in the repair path a nodal segment to the P node and an adjacency segment to the Q node
• Traffic engineering: you can define per-flow CoS policies based on latency or bandwidth


• SDN: to program a network, it must be simple (no LDP or RSVP), scalable (no LDP/IGP synchronization, and the label database holds far fewer labels) and responsive (no signaling delay). For instance, an application can request from the SDN controller a circuit with specific SLAs; the controller then provides the segments to use. The controller can learn the topology with various tools, such as BGP Link State.


ACRONYMS
SCT Swift, Smart and simple Configuration Tool

CEE Converged Enhanced Ethernet is an enhanced single interconnect Ethernet technology developed to converge a variety of applications in data centers. CEE's primary focus is to consolidate the number of cables and adapters connected to servers.

CMB China Merchants Bank

GPU Graphics Processing Unit

AI Artificial Intelligence

HPC High-Performance Computing

FCT Fixed Cellular Terminal

EANTC European Advanced Networking Test Center



GPB Google Protocol Buffers, a binary encoding mode used for telemetry data

gRPC Transport protocol in the telemetry protocol stack (see the detailed gRPC entry below)

INT/IOAM In-band Network Telemetry / In-situ OAM

ACI Application Centric Infrastructure

NSX VMware's virtual networking and security software

SMB Small and Medium Business

SVF Super Virtual Fabric

ENP Ethernet Network Processor. Huawei's ENP switches are programmable and maintain high line rates. Designed for OpenFlow, ENP-based switches are integral to the future of Software-Defined Networking (SDN).

MPU Main Processing Units

CMU Centralized Monitoring Units


SFU Switching Fabric Units

CSS Cluster Switch Systems

AP Access Point

LPU Line Processing Unit

iPCA Packet Conservation Algorithm for Internet is a technology used to measure IP network performance. It marks service packets to implement network-level and device-level packet loss measurement.

eMDI enhanced Media Delivery Index is a network quality monitoring and fault demarcation solution designed for video and audio services. It can monitor specified service packets on each node of an IP network in real time and quickly demarcate the faulty network segment based on the monitoring results provided by one or more nodes. Packet Conservation Algorithm for Internet (iPCA) is another commonly used network quality monitoring technology; however, eMDI is more suitable for monitoring the quality of audio and video services, because it provides more monitoring indicators, such as jitter and out-of-order packet rate, and can be deployed on a single node.

ECA Encrypted Communications Analytics is a traffic identification and detection technology for identifying encrypted and non-encrypted traffic on the network, extracting encrypted traffic characteristics, and sending them to the Cybersecurity Intelligence System (CIS) for malicious traffic detection.

UPOE Universal Power Over Ethernet

DSVPN Dynamic Smart VPN

SPR Smart Policy Routing (monitors link quality based on service requirements and selects optimal links to forward service data)

WOC WAN Optimization Controller

EC-IoT Edge Computing IoT


AMI Advanced Metering Infrastructure

PLC Power Line Communication

ERSPAN Encapsulated Remote Switched Port Analyzer is a traffic mirroring protocol that mirrors traffic to one or more ports or virtual local area networks (VLANs). The mirrored traffic is sent to a server for monitoring.

CXP Control, cross-connect and protocol processing board

TWAMP Two-Way Active Measurement Protocol

DCI Data Center Interconnect

NP Network Processor

USG Unified Security Gateway

NIP Network Intelligent Protection

IDS Intrusion Detection System

SVN Security Virtual Network


CIS Cybersecurity Intelligence System

mGE A MultiGE interface is an Ethernet electrical interface that can work at 100 Mbit/s, 1000 Mbit/s, 2500 Mbit/s, 5000 Mbit/s, or 10000 Mbit/s.

DLP Data Loss Prevention

SLB Server Load Balancing

MU-MIMO Multi-User Multiple Input, Multiple Output

OVSDB Open vSwitch DataBase

BOD Bandwidth on Demand is a value-added service featuring dynamic bandwidth allocation.

DAA Destination Address Accounting is a rate-limiting and accounting technology based on the destination IP address.


SAE Service Accelerated Engine. The SAE board is an open service platform where the Linux, Windows Server, or VMware operating system can be installed for users to configure and deploy services. The SAE board can be used in scenarios such as enterprise application software, antivirus gateway, WAN optimization, and VPN encryption.

GBP Group Based Policy - GBP defines traffic control for members in
an EPG or in different EPGs.

EPG End Point Group - Servers are allocated to EPGs based on rules.

TCAM Ternary Content-Addressable Memory

EDCA Enhanced Distributed Channel Access is used in wireless networks supporting the IEEE 802.11e Quality of Service enhancement. It supports differentiated, distributed access to the wireless medium using eight user priority subfields mapped to four Access Categories: Voice, Video, Best Effort, and Background.

STA Station

DTLS Datagram Transport Layer Security

IDC Internet Data Center

Clos architecture The architecture uses Variable Size Cells (VSC) and has dynamic routing capability. Load balancing among multiple switch fabrics prevents the switching matrix from being blocked and easily copes with the complex and volatile traffic in data centers.


NSH Network Services Header, a new service chaining protocol, is added to network traffic as a packet header to create a dedicated service plane that is independent of the underlying transport protocol. In general, NSH describes a sequence of service nodes that a packet is routed through before reaching the destination address. The NSH carries metadata about the packet and service chain in an IP packet. The NSH protocol addresses the growing requirement to deploy service functions external to the gateway.

VOQ Virtual Output Queue. The VOQ mechanism and super-large buffers on inbound interfaces create independent VOQ queues on inbound interfaces to perform end-to-end flow control on traffic destined for different outbound interfaces. This method ensures unified service scheduling and sequenced forwarding and implements non-blocking switching. In VOQ, the physical buffer of each input port maintains a separate virtual queue for each output port. Congestion on an egress port therefore blocks only the virtual queue for that particular egress port; other packets in the same physical buffer destined for different (non-congested) output ports sit in separate virtual queues and can still be processed. In a traditional setup, a packet blocked by the congested egress port would have blocked the whole physical buffer, resulting in head-of-line blocking.
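A compact illustration of why VOQ avoids head-of-line blocking (a minimal sketch; the port names and packets are hypothetical):

```python
from collections import defaultdict, deque

# One virtual queue per (input port, output port) pair instead of one FIFO per input.
voq = defaultdict(deque)
voq[("in0", "out1")].extend(["p1", "p2"])  # out1 is congested
voq[("in0", "out2")].append("p3")          # out2 is free

congested = {"out1"}
for (inp, outp), queue in voq.items():
    if outp not in congested and queue:
        print(f"forward {queue.popleft()} from {inp} to {outp}")  # p3 still flows
```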


VIQ Virtual Input Queue. With PFC (Priority-based Flow Control), eight virtual channels can be created on an Ethernet link, with a priority specified for each virtual channel. Any of the virtual channels can be separately suspended or restarted without interrupting traffic on the other virtual channels. This method enables the network to provide zero-packet-loss service for a single virtual link, so that traffic of different types can be transmitted on the same interface.

The sending interface of Device A is divided into eight queues with different priorities, which map to the eight receiving buffers of Device B, forming eight virtual channels. These queues have data caching capabilities that vary with their buffer sizes. When the receiving buffer usage of Device B exceeds a preset threshold (for example, 1/2 or 3/4 of the queue buffer), congestion occurs on the corresponding interface of Device B. Device B then sends a backpressure signal (STOP) to the upstream device (Device A).
After receiving the backpressure signal, Device A stops sending packets in the corresponding queue and caches the data in the local interface buffer. In turn, if the buffer usage of the local interface on Device A exceeds the threshold, Device A sends a backpressure signal to its own upstream device. Backpressure signals propagate to further upstream devices in the same manner until the packet loss caused by congestion is eliminated.


CNP Congestion Notification Packet. Fast CNP is used to immediately adjust the packet sending rate of the source end. When congestion occurs on the forwarding device, the forwarding device sends a CNP to the source server on behalf of the destination server. The source server then adjusts its packet sending rate, relieving congestion of the queue buffer on the forwarding device.

ECN Explicit Congestion Notification. Fast ECN is used to shorten the congestion notification time. When a queue is congested, the ECN flag is added to outgoing packets in the queue. This eliminates the time otherwise taken to add the ECN flag to a packet entering the queue before forwarding it, which shortens the congestion notification time. The ECN threshold of lossless queues can also be adjusted dynamically, based on the incast concurrency and the proportions of elephant and mice flows, to balance the tradeoff between low latency and high throughput.

gRPC Google Remote Procedure Call. It uses HTTP/2 for transport and Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or non-blocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages. The most common usage scenarios include connecting services in microservices-style architectures and connecting mobile devices and browser clients to backend services.


DLB Dynamic Load Balancing. During data packet forwarding, this mechanism checks the usage of each member link and selects the member link with the lightest load to forward data packets.

Microsegmentation A method of creating secure zones in data centers and cloud deployments that allows companies to isolate workloads from one another and secure them individually. It is aimed at making network security more granular.


NGSF Next Generation Switch Fabric. Huawei's NGSF architecture innovatively processes service flows as follows. First, upon receiving the service flows sent by users A and B, an inbound line card slices data packets into cells, selects a switching fabric dynamically based on load, and sends the cells to the optimal switching fabric. Then, the switching fabric forwards the cells to an outbound line card, which assembles the cells back into data packets and sends them to the target end user. This innovative cell switching mechanism achieves dynamic load balancing across the switching fabrics and maximizes switching performance. In addition, multipath transmission of cells prevents the network congestion and service loss caused by overload of a single switching fabric, thereby providing better Quality of Service (QoS) assurance.


Deception Detects scanning behavior on the network and lures suspicious traffic to a decoy for in-depth interaction and detection, protecting the service network from the attacker.

OFL Offline board removal button. To remove a board, hold down the OFL button for about 6 seconds until the OFL indicator turns on, and then remove the board.

LPUI Integrated Line Processing Unit

FlexE To isolate services, Flexible Ethernet (FlexE) technology assigns bandwidth to specific interfaces. FlexE interfaces are isolated independently of each other: traffic is isolated at the physical layer, and network slicing is performed for services on the same physical network.

VSUI Integrated Versatile Service Unit

PM A PM is a power module used to supply power to the equipment.

PEM A PEM is the power entry module on the chassis, an iron frame
that provides power inputs to PMs and provides other functions
such as wave filtering. Generally, a PEM is included in the chassis
sales part.

APT Advanced Persistent Threat


LB Load Balancing

SRv6 Segment Routing over IPv6 is a protocol designed to forward data packets on a network based on source routing.

FPM IP Flow Performance Monitor is a Huawei-proprietary feature that measures the packet loss rate and delay of end-to-end service packets transmitted on an IP network. This feature is easy to deploy and provides an accurate assessment of network performance.

ND Network Discovery

SA Service Awareness is capable of identifying various common applications by analyzing passing packets and comparing the information obtained from the packets with the application signatures in the signature database. After identifying the applications, the USG9000 performs differentiated control over them based on their categories.
