Conclusion
In most data center application environments, the type of Ethernet switch adopted should be based on function, performance, port density, and the true cost to install and operate the device, not just on low-latency characteristics.
The functional requirements in some application environments will dictate the need to support end-to-end latencies under 10 microseconds. For those environments, cut-through switches and a class of store-and-forward switches can complement OS and NIC tools such as RDMA and OS kernel bypass to meet low-latency application requirements.
Cut-through and store-and-forward LAN switches are suitable for most data center networking environments. In the few environments where applications truly need response times of less than 10 microseconds, low-latency Ethernet or InfiniBand switches are appropriate networking choices.
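To make the store-and-forward penalty concrete: the minimum per-hop delay of a store-and-forward switch is the time to receive the entire frame before forwarding can begin. A back-of-the-envelope sketch in Python (the frame sizes and link rates below are generic examples, not figures from this paper):

```python
def store_and_forward_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Minimum per-hop delay of a store-and-forward switch: the entire
    frame must arrive before the forwarding decision can be acted on.
    1 Gbps carries 1,000 bits per microsecond, hence the divisor."""
    return frame_bytes * 8 / (link_gbps * 1_000)

# A 1500-byte frame takes 12 us per hop at 1 Gbps but only 1.2 us at
# 10 Gbps, which is why a multi-hop path can consume a 10 us budget.
print(store_and_forward_delay_us(1500, 1))   # 12.0
print(store_and_forward_delay_us(1500, 10))  # 1.2
```

This is serialization delay alone; switching, queuing, and propagation delays add on top of it at every hop.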
For More Information:
Cisco Nexus 5000 Series Switches: http://www.cisco.com/en/US/products/ps9670/index.html
Cisco Catalyst 4900M Switch: http://www.cisco.com/en/US/products/ps9310/index.html
Cisco Catalyst 4948 Switch: http://www.cisco.com/en/US/products/ps6026/index.html
1. Unlike Layer 2 switching, Layer 3 IP forwarding modifies the contents of every packet sent, as stipulated in RFC 1812. To operate properly as an IP router, the switch has to rewrite the source and destination MAC addresses, decrement the time-to-live (TTL) field, and then recompute the IP header checksum; the Ethernet frame checksum must also be recomputed. If the router does not modify these fields, every frame it forwards will contain IP and Ethernet errors. Unless a Layer 3 cut-through implementation supports recirculating packets to perform these operations, Layer 3 switching needs to be a store-and-forward function, and recirculation removes the latency advantages of cut-through switching.
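The per-packet rewrites described above can be sketched in a few lines. This is an illustrative Python model of the TTL decrement and a full IP header checksum recomputation; a real router would also rewrite the MAC addresses and would typically use the incremental checksum update of RFC 1624, and the function names here are mine, not from any particular implementation:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of the header's 16-bit words (RFC 791/1071)."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:                    # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forward_rewrite(header: bytearray) -> bytearray:
    """TTL decrement plus checksum recompute, as RFC 1812 requires."""
    header[8] -= 1                           # TTL is byte 8 of the IPv4 header
    header[10:12] = b"\x00\x00"              # zero the checksum field first
    header[10:12] = struct.pack("!H", ipv4_checksum(bytes(header)))
    return header
```

A useful self-check: computing ipv4_checksum() over a header whose stored checksum is already correct yields 0, which is exactly how receivers validate the header.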
2. In reality, a number of store-and-forward switching implementations store the header (of some predetermined size, depending on the EtherType value in an Ethernet II frame) in one place while the body of the packet sits elsewhere in memory. But from the perspective of packet handling and making a forwarding decision, how and where portions of the packet are stored is insignificant.
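The split described in this footnote can be pictured with a toy structure: the forwarding lookup touches only the stored header copy, so where the body lives is irrelevant to the decision. This is purely a conceptual sketch; the structure and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BufferedFrame:
    header: bytes      # first few dozen bytes: enough for the lookup
    body_handle: int   # reference to the body elsewhere in packet memory

def choose_egress(frame: BufferedFrame, mac_table: dict) -> int:
    # The forwarding decision reads only the header copy: the
    # destination MAC occupies bytes 0-5 of an Ethernet frame.
    return mac_table.get(frame.header[0:6], -1)   # -1 means flood

frame = BufferedFrame(header=bytes.fromhex("feedfacecafe") + b"\x00" * 8,
                      body_handle=42)
print(choose_egress(frame, {bytes.fromhex("feedfacecafe"): 7}))  # 7
```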
3. As explained earlier in the cut-through switching section, the complexity is mainly the result of having to perform both types of Ethernet switching. Under certain conditions, cut-through switches behave like store-and-forward devices, while under other conditions they operate somewhere between the two paradigms. Further, during egress port congestion, the switch has to store the entire packet before it can be scheduled out the egress interface, so the software and hardware of cut-through switches tend to be more complex than those of store-and-forward switches.
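This case analysis amounts to a per-frame mode decision, sketched below. It is simplified; the second condition, an ingress port slower than the egress port, is a common additional trigger on real cut-through switches but is my addition, not from the text above.

```python
def switching_mode(egress_congested: bool, ingress_slower_than_egress: bool) -> str:
    """Per-frame behavior of a cut-through switch (simplified sketch)."""
    if egress_congested or ingress_slower_than_egress:
        # The whole frame must be buffered before transmission, so the
        # switch behaves like a store-and-forward device for this frame.
        return "store-and-forward"
    return "cut-through"

print(switching_mode(egress_congested=True, ingress_slower_than_egress=False))
# store-and-forward
```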
4. RDMA protocols are server OS and NIC implementations in which communications processing is offloaded to the networking hardware rather than performed in the OS kernel, freeing essentially all server processing cycles for the application instead of for communication. In addition, RDMA protocols allow an application running on one server to access memory on another server across the network with minimal communication overhead, reducing network latency to as little as 5 microseconds, as opposed to tens or hundreds of microseconds for traditional non-RDMA TCP/IP communication. Each server in an HPC environment can access the memory of other servers in the same cluster through (ideally) a low-latency switch.
5. With kernel bypass, applications can bypass the host machine's OS kernel, directly accessing hardware and dramatically reducing application context switching.