Cut-Through or Store-and-Forward: Ethernet Switching for Low-Latency

2013-7-10 08:14 | Publisher: demo | Views: 1353 | Comments: 0 | Source: CISCO

Abstract: This document focuses on latency requirements in the data center. It discusses the latency characteristics of the two Ethernet switching paradigms that perform packet forwarding at Layer 2: cut-through and store-and-forward switching.

Characteristics of Cut-Through Ethernet Switching


This section explores cut-through Ethernet switching. Because cut-through switching is not as well understood as store-and-forward switching, it is described in more detail than the store-and-forward technology.

Invalid Packets

Unlike a store-and-forward switch, a cut-through switch can flag but cannot drop an invalid packet, because it has already begun forwarding the frame before the frame check sequence (FCS) arrives. Packets with physical- or data-link-layer errors are therefore forwarded to other segments of the network. At the receiving end, the host then detects the invalid FCS and drops the packet.
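Because the FCS is the last field of an Ethernet frame, it arrives only after a cut-through switch has already begun forwarding, so the check falls to the receiving end. A minimal sketch of that check in Python, using zlib.crc32 (the CRC-32 used by Ethernet); the frame contents are hypothetical, and the little-endian FCS byte order is an assumption of the sketch rather than a statement about the wire format:

```python
import struct
import zlib

def append_fcs(header_and_payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 FCS to a frame body (little-endian in this sketch)."""
    return header_and_payload + struct.pack("<I", zlib.crc32(header_and_payload))

def fcs_is_valid(frame: bytes) -> bool:
    """Validate the trailing 4-byte FCS against a CRC-32 of the rest of the frame."""
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body) == struct.unpack("<I", fcs)[0]

# Hypothetical 64-byte minimum-size frame: DMAC, SMAC, EtherType, padded payload.
frame = append_fcs(bytes.fromhex("ffffffffffff") +   # DMAC (broadcast)
                   bytes.fromhex("001122334455") +   # SMAC
                   b"\x08\x00" +                     # EtherType: IPv4
                   bytes(46))                        # zero-padded payload

print(fcs_is_valid(frame))                                   # True
print(fcs_is_valid(frame[:14] + b"\x01" + frame[15:]))       # False: corrupted payload byte
```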

Timing of Cut-Through Forwarding

In theory, as indicated in Figure 2, a cut-through switch can make a forwarding decision as soon as it has received and looked up the destination MAC (DMAC) address of the incoming frame. The switch does not have to wait for the rest of the packet to make its forwarding decision.
However, newer cut-through switches do not necessarily take this approach. A cut-through switch may parse an incoming packet until it has collected enough information from the frame content. It can then make a more sophisticated forwarding decision, matching the richness of packet-handling features that store-and-forward switches have offered over the past 15 years.

Figure 2. Cut-Through Ethernet Switching: in theory, frames are forwarded as soon as the switch receives the DMAC address, but in reality, several more bytes arrive before forwarding commences
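To put the figure in rough numbers: the DMAC occupies the first 6 bytes of the frame, so only a handful of byte-times elapse before a lookup could begin in theory. A back-of-the-envelope sketch, plain arithmetic with no vendor specifics assumed (the frame-relative offsets ignore the preamble and start-of-frame delimiter):

```python
def byte_time_ns(link_gbps: float) -> float:
    """Time to receive one byte (8 bits) at the given line rate, in nanoseconds."""
    return 8 / link_gbps

DMAC_END = 6        # destination MAC occupies frame bytes 0-5
ETHERTYPE_END = 14  # DMAC (6) + SMAC (6) + EtherType (2)

for label, offset in [("DMAC received", DMAC_END),
                      ("DMAC + SMAC + EtherType received", ETHERTYPE_END)]:
    print(f"{label}: {offset * byte_time_ns(10):.1f} ns after the first frame byte at 10 Gbps")
```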



EtherType Field

In preparation for a forwarding decision, a cut-through switch can fetch a predetermined number of bytes based on the value in the EtherType field, regardless of the number of fields that the switch actually needs to examine. For example, upon recognizing an incoming packet as an IPv4 unicast datagram, a cut-through switch checks whether a filtering configuration is applied to the interface. If there is one, the switch waits an additional few nanoseconds to microseconds, depending on link speed, to receive the IP and transport-layer headers (20 bytes for a standard IPv4 header plus another 20 bytes for a TCP header, or 8 bytes if the transport protocol is UDP). If the interface has no ACL for traffic to be matched against, the cut-through switch may wait only for the IP header and then proceed with the forwarding process. Alternatively, in a simpler ASIC implementation, the switch always fetches the complete IPv4 and transport-layer headers, receiving a total of 54 bytes up to that point, irrespective of the configuration. The cut-through switch can then run the packet through a policy engine that checks it against ACLs and perhaps a quality-of-service (QoS) configuration.
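The scheme described above amounts to a small decision table: how many bytes the switch collects before forwarding, given the EtherType and the interface configuration. A hedged sketch of that logic follows; the function, the ACL flag, and the restriction to IPv4 with TCP or UDP are illustrative simplifications, not a description of any particular ASIC:

```python
# Standard EtherType and IP protocol numbers used below.
ETHERTYPE_IPV4 = 0x0800
IPPROTO_TCP, IPPROTO_UDP = 6, 17

ETH_HDR  = 14            # DMAC (6) + SMAC (6) + EtherType (2)
IPV4_HDR = 20            # standard IPv4 header without options
TCP_HDR, UDP_HDR = 20, 8

def bytes_needed(ethertype, acl_on_interface, l4_protocol=IPPROTO_TCP):
    """How many bytes of the frame a cut-through switch would wait for
    before making its forwarding decision, per the scheme described above."""
    if ethertype != ETHERTYPE_IPV4:
        return ETH_HDR                      # Layer 2 decision only
    if not acl_on_interface:
        return ETH_HDR + IPV4_HDR           # Layer 3 lookup, no policy check
    # ACL configured: also wait for the transport-layer header.
    l4 = TCP_HDR if l4_protocol == IPPROTO_TCP else UDP_HDR
    return ETH_HDR + IPV4_HDR + l4

print(bytes_needed(ETHERTYPE_IPV4, acl_on_interface=True))   # 54, as in the text
print(bytes_needed(ETHERTYPE_IPV4, acl_on_interface=False))  # 34
print(bytes_needed(0x0806, acl_on_interface=True))           # 14 (e.g. ARP)
```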

Wait Time

With today's MAC controllers, ASICs, and ternary content addressable memory (TCAM), a cut-through switch can quickly decide whether it needs to examine a larger portion of the packet headers. It can parse past the first 14 bytes (the DMAC, SMAC, and EtherType fields) and handle, for example, 40 additional bytes in order to perform more sophisticated functions relative to IPv4 Layer 3 and 4 headers. At 10 Gbps, it may take approximately an additional 100 nanoseconds to receive the 40 bytes of the IPv4 and transport headers. In the context of task-to-task (or process-to-process or even application-to-application) latency requirements, which span a broad range and bottom out at a demanding 10 microseconds for the vast majority of applications, that additional wait time is negligible. Parsing IP frames up through the transport-layer header also keeps the ASIC code paths simpler, at an insignificant latency penalty.
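As a sanity check on that wait time, the raw serialization time of the extra 40 bytes can be computed directly; at 10 Gbps it comes to roughly 32 ns of wire time, so the figure of about 100 nanoseconds quoted above presumably also allows for parsing and pipeline overhead inside the switch. A sketch of the arithmetic only:

```python
def serialization_ns(num_bytes, link_gbps):
    """Wire time needed to receive num_bytes at the given line rate, in nanoseconds."""
    return num_bytes * 8 / link_gbps

extra = 40   # IPv4 header (20 B) + TCP header (20 B) beyond the Ethernet header
print(f"{serialization_ns(extra, 10):.0f} ns of wire time at 10 Gbps")   # 32 ns
print(f"{serialization_ns(extra, 1):.0f} ns of wire time at 1 Gbps")     # 320 ns
```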

Advantages of Cut-Through Ethernet Switching

A primary advantage of cut-through switches is that the amount of time the switch takes to start forwarding the packet (referred to as the switch's latency) is on the order of a few microseconds only, regardless of the packet size. If an application uses 9000-byte jumbo frames, a cut-through switch will begin forwarding the frame (assuming forwarding is the appropriate decision for that frame) anywhere from a few microseconds to a few milliseconds earlier than its store-and-forward counterpart, depending on the link speed (roughly 7 microseconds earlier in the case of 10-Gbps Ethernet, the serialization time of a 9000-byte frame).
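The size independence of cut-through latency is easiest to see next to the serialization delay a store-and-forward switch must absorb before it can even begin forwarding. A quick sketch of that delay for a 9000-byte frame at a few link speeds (plain arithmetic; the link speeds are illustrative):

```python
def store_and_forward_wait_us(frame_bytes, link_gbps):
    """Time to receive an entire frame before forwarding can begin, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1000)   # Gbps * 1000 = bits per microsecond

for gbps in (1, 10, 40):
    wait = store_and_forward_wait_us(9000, gbps)
    print(f"{gbps:>2} Gbps: store-and-forward must wait {wait:.1f} us "
          f"before forwarding a 9000-byte frame")
# A cut-through switch starts forwarding after only the header bytes it needs,
# so its added latency stays at a few microseconds or less at any frame size.
```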
Furthermore, cut-through switches are more appropriate for extremely demanding high-performance computing (HPC) applications that require process-to-process latencies of 10 microseconds or less.
In some scenarios, however, cut-through switches lose their advantages.

Windowed Protocols and Increased Response Time

Even where the cut-through methodology can be used, windowed protocols such as TCP increase end-to-end response time, because their round-trip delays dwarf the lower switching delay of cut-through switching. This reduces the benefit of that lower delay and makes the end-to-end latency seen through store-and-forward switches essentially the same as that seen through cut-through switches.
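One way to see why, sketched below with hypothetical numbers (a 1 ms base round-trip time, five switch hops, 1500-byte frames at 10 Gbps): the extra per-hop store-and-forward delay is a tiny fraction of each round trip, and a window-limited transfer is essentially a sequence of round trips.

```python
# Hypothetical path: the numbers below are illustrative, not from the document.
rtt_us = 1000.0          # assumed base round-trip time of 1 ms
hops = 5                 # assumed number of switches on the path
extra_per_hop_us = 1.2   # extra store-and-forward delay for a 1500-byte frame at 10 Gbps

extra_us = 2 * hops * extra_per_hop_us        # both directions of the round trip
print(f"Store-and-forward adds {extra_us:.0f} us per round trip, "
      f"{100 * extra_us / rtt_us:.1f}% of the {rtt_us / 1000:.0f} ms RTT")
# A window-limited transfer is essentially a sequence of such round trips,
# so the same ~1% difference carries through to total response time.
```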

User Perception of Response Times with Most Applications

In most enterprise environments, including the data center, users do not notice a difference in response times whether their environment is supported with store-and-forward or cut-through switches.
For example, users requesting a file from a server (through FTP or HTTP) do not notice whether the reception of the beginning of the file is delayed by a few hundred microseconds. Furthermore, end-to-end latencies for most applications are in the tens of milliseconds. For instance, with an application latency of about 20 milliseconds, a switch latency of 20 microseconds, whether cut-through or store-and-forward, amounts to just 1/1000 of the application latency and is therefore negligible.
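The arithmetic behind that ratio, for completeness (using the same example figures):

```python
application_latency_us = 20_000   # about 20 ms, as in the example above
switch_latency_us = 20            # per-switch latency, cut-through or store-and-forward
share = switch_latency_us / application_latency_us
print(f"Switch share of total latency: {share:.1%}")   # 0.1%, i.e. 1/1000
```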


