Ethernet Switching Paradigms Overview
This document focuses on latency requirements in the data center. It discusses the latency characteristics of the two Ethernet switching paradigms that perform packet forwarding at Layer 2: cut-through and store-and-forward. It provides a functional discussion of the two switching methodologies as well as an overall assessment of where a switch of either type is appropriate in the data center.
This document discusses general Layer 2 packet handling architectures as they pertain to end-to-end latency requirements. It does not cover specific product capabilities, but where appropriate, Cisco® Ethernet switching platforms are mentioned as examples of solutions.
The following main points related to choosing a low-latency data center solution are addressed here:
• End-to-end application latency requirements should be the main criterion for selecting LAN switches with the appropriate latency characteristics (a simple budgeting sketch follows this list).
• In most data center and other networking environments, both cut-through and store-and-forward LAN switching technologies are suitable.
• In the few cases where true low-microsecond latency is needed, cut-through switching technologies should be considered, along with a certain class of low-latency store-and-forward switches. In this context, low, or rather ultra-low, refers to a solution with an end-to-end latency of about 10 microseconds.
• For end-to-end application latencies under 3 microseconds, InfiniBand capabilities should be examined.
• Function, performance, port density, and cost are important criteria for switch selection once true application latency requirements are understood.
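To make the budgeting exercise concrete, the following sketch sums assumed per-hop latencies and compares the total against an application budget. All component names and figures here are hypothetical examples used for illustration, not measurements of any particular product.

```python
# Illustrative end-to-end latency budget check.
# All per-component figures are assumed example values, not measurements.
budget_us = 10.0  # target end-to-end budget in microseconds

components_us = {
    "server NIC (send)": 1.0,
    "access switch": 2.5,
    "aggregation switch": 2.5,
    "server NIC (receive)": 1.0,
}

total_us = sum(components_us.values())
print(f"Estimated end-to-end latency: {total_us:.1f} us (budget {budget_us:.1f} us)")
if total_us > budget_us:
    print("Budget exceeded: consider cut-through or low-latency store-and-forward switches.")
else:
    print("Within budget: either switching paradigm is likely acceptable.")
```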
Ethernet Switching Paradigms Overview
In the 1980s, when enterprises started to experience slower performance on their networks, they procured Ethernet (transparent or learning) bridges to limit collision domains.
In the 1990s, advancements in integrated circuit technologies allowed bridge vendors to move the Layer 2 forwarding decision from Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC) processors to application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs), thereby reducing the packet-handling time within the bridge (that is, the latency) to tens of microseconds, as well as allowing the bridge to handle many more ports without a performance penalty. The term "Ethernet switch" became popular.
The earliest method of forwarding data packets at Layer 2 came to be referred to as "store-and-forward switching" to distinguish it from the cut-through method of forwarding packets, a term coined in the early 1990s.
Layer 2 Forwarding
Both store-and-forward and cut-through Layer 2 switches base their forwarding decisions on the destination MAC address of data packets. They also learn MAC addresses as they examine the source MAC (SMAC) fields of packets as stations communicate with other nodes on the network.
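This learning-and-forwarding behavior can be illustrated conceptually with the short Python sketch below. The class and method names are hypothetical, and the sketch is a simplification: real switches implement these functions in ASIC hardware and add VLANs, aging timers, and loop prevention.

```python
# Minimal sketch of Layer 2 MAC learning and destination-based forwarding.
# Real switches do this in hardware with VLANs, aging, and spanning-tree checks.
class L2Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port on which it was learned

    def receive(self, in_port, smac, dmac):
        # Learn: associate the source MAC with the ingress port.
        self.mac_table[smac] = in_port
        # Forward: known unicast goes out one port; unknown destinations flood.
        out_port = self.mac_table.get(dmac)
        if out_port is not None and out_port != in_port:
            return {out_port}
        return self.ports - {in_port}  # flood to all other ports

switch = L2Switch(ports=[1, 2, 3, 4])
print(switch.receive(1, smac="aa:aa", dmac="bb:bb"))  # unknown destination -> {2, 3, 4}
print(switch.receive(2, smac="bb:bb", dmac="aa:aa"))  # learned earlier -> {1}
```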
When a Layer 2 Ethernet switch initiates the forwarding decision, the series of steps the switch undergoes to determine whether to forward or drop a packet is what differentiates the cut-through methodology from its store-and-forward counterpart.
Whereas a store-and-forward switch makes a forwarding decision on a data packet after it has received the whole frame and checked its integrity, a cut-through switch engages in the forwarding process soon after it has examined the destination MAC (DMAC) address of an incoming frame.
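The difference in decision points can be sketched as follows. This is illustrative Python only, not a model of any specific switch pipeline; the function names are hypothetical, and the CRC comparison merely stands in for the hardware FCS verification a real switch performs.

```python
# Conceptual contrast of the two forwarding decision points (sketch only).
import io
import zlib

def lookup(dmac: bytes) -> int:
    # Hypothetical MAC table: every destination maps to port 2 in this demo.
    return 2

def store_and_forward(frame: bytes):
    """Buffer the entire frame, verify its integrity, then decide."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != fcs:
        return None                      # corrupted frames are dropped here
    return lookup(payload[0:6])          # decision only after full validation

def cut_through(frame_stream: io.BytesIO) -> int:
    """Decide as soon as the destination MAC (first bytes) is available."""
    dmac = frame_stream.read(6)
    out_port = lookup(dmac)
    # The rest of the frame is relayed as it continues to arrive, so an FCS
    # error is detected only after bits are already on the output wire.
    return out_port

payload = bytes(6) + bytes(6) + b"\x08\x00" + b"data"   # DMAC + SMAC + EtherType + data
frame = payload + zlib.crc32(payload).to_bytes(4, "little")
print(store_and_forward(frame))          # -> 2
print(cut_through(io.BytesIO(frame)))    # -> 2
```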
In theory, a cut-through switch receives and examines only the first 6 bytes of a frame, which carry the DMAC address. However, for a number of reasons that will be shown in this document, cut-through switches wait until a few more bytes of the frame have been evaluated before they decide whether to forward or drop the packet.
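The latency implication of the two decision points can be made concrete with simple serialization arithmetic. In the sketch below, the 10-Gbps line rate, 1518-byte frame, and 14-byte decision point are assumed example values chosen for illustration, not figures taken from any product.

```python
# Serialization-delay comparison; link speed and frame size are example values.
LINE_RATE_BPS = 10e9       # 10 Gigabit Ethernet
FRAME_BYTES = 1518         # maximum-size untagged Ethernet frame
HEADER_BYTES = 14          # DMAC + SMAC + EtherType, an assumed cut-through decision point

def serialization_delay_us(num_bytes: int, rate_bps: float) -> float:
    """Time to clock num_bytes in from the wire, in microseconds."""
    return num_bytes * 8 / rate_bps * 1e6

sf_wait = serialization_delay_us(FRAME_BYTES, LINE_RATE_BPS)
ct_wait = serialization_delay_us(HEADER_BYTES, LINE_RATE_BPS)
print(f"store-and-forward waits ~{sf_wait:.3f} us before it can decide")  # ~1.214 us
print(f"cut-through waits ~{ct_wait:.3f} us before it can decide")        # ~0.011 us
```

The arithmetic shows why the distinction matters mainly at maximum frame sizes and high line rates: the store-and-forward switch must absorb the full frame's serialization delay before forwarding, while the cut-through switch's wait is bounded by the few header bytes it examines.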