Data Frame Forwarding and Switching Methods

As networks grew and their performance degraded, Ethernet bridges came into use to limit the size of collision domains. Advances in integrated circuits later allowed LAN switches to replace these early bridges. Modern switches moved the Layer 2 forwarding decision from software into application-specific integrated circuits (ASICs). ASICs reduce packet-handling time within the device and allow it to handle a larger number of ports without degrading performance. There are two methods of frame forwarding and switching:

  • Store-and-forward method
  • Cut-through method

Store-and-Forward Switching

The store-and-forward method makes a forwarding decision only after the complete frame has been received and checked for errors using a mathematical error-checking mechanism known as the cyclic redundancy check (CRC). If the CRC is valid, the switch looks up the destination address, which determines the outgoing interface, and the frame is then forwarded out the correct port.
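
The sequence can be illustrated with a short Python sketch. The frame layout, MAC address table, port name, and FCS byte order below are assumptions made purely for illustration; real switches implement this logic in ASICs, not software.

    import zlib

    # Hypothetical MAC address table: destination MAC -> egress port
    mac_table = {"aa:bb:cc:dd:ee:ff": "GigabitEthernet0/1"}

    def store_and_forward(chunks):
        """Collect the whole frame first, then check it, then look up the egress port."""
        frame = b"".join(chunks)                              # wait for the complete frame
        body, fcs = frame[:-4], frame[-4:]                    # trailing 4 bytes carry the FCS
        if zlib.crc32(body) != int.from_bytes(fcs, "little"): # CRC check (byte order assumed)
            return None                                       # invalid frame: drop it
        dst_mac = ":".join(f"{b:02x}" for b in body[:6])      # destination MAC = first 6 bytes
        return mac_table.get(dst_mac)                         # egress port; unknown MACs flood

    # Demo: a frame addressed to aa:bb:cc:dd:ee:ff followed by a valid FCS chunk
    body = bytes.fromhex("aabbccddeeff") + bytes(58)
    print(store_and_forward([body, zlib.crc32(body).to_bytes(4, "little")]))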

The store-and-forward method has two primary characteristics that differentiate it from cut-through:

  • Error checking
  • Automatic buffering

Error Checking

A switch using the store-and-forward technique performs an error check on every incoming frame. After receiving the entire frame on the ingress port, as shown in the figure, the switch compares the frame check sequence (FCS) value in the last field of the frame against its own FCS calculation. This FCS check helps ensure that the frame is free of physical and data-link layer errors. If the frame is error-free, the switch forwards it toward the destination; otherwise, the frame is dropped.
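
A rough sketch of the check itself, assuming the standard CRC-32 algorithm and a little-endian FCS field (an assumption made here only for illustration):

    import zlib

    def fcs_is_valid(frame: bytes) -> bool:
        """Recompute CRC-32 over everything except the trailing FCS field and compare."""
        return zlib.crc32(frame[:-4]) == int.from_bytes(frame[-4:], "little")

    # Demo: build a frame with a correct FCS, then flip one byte to corrupt it.
    data = bytes(60)                                   # placeholder header and payload
    good = data + zlib.crc32(data).to_bytes(4, "little")
    bad = b"\x01" + good[1:]
    print(fcs_is_valid(good))                          # True  -> frame is forwarded
    print(fcs_is_valid(bad))                           # False -> frame is dropped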

Figure 1: Store-and-forward switching

Automatic Buffering

Whenever there is a difference in data rates between the ingress and egress ports, the switch stores the frame in a buffer, performs the FCS check, forwards the frame to the egress port buffer, and then transmits it. For example, an incoming frame arriving on a Fast Ethernet port that must be sent out a Gigabit Ethernet interface requires the store-and-forward method. Store-and-forward switching is the primary method used on Cisco switches.
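
The effect of the rate mismatch can be pictured with a simple FIFO in front of the egress port; the 1518-byte frame size and port speeds below are just example figures:

    from collections import deque

    # Serialization time of a maximum-size 1518-byte frame at each port speed
    frame_bits = 1518 * 8
    ingress_us = frame_bits / 100e6 * 1e6      # Fast Ethernet ingress: ~121 microseconds
    egress_us = frame_bits / 1e9 * 1e6         # Gigabit Ethernet egress: ~12 microseconds
    print(round(ingress_us), round(egress_us))

    egress_buffer = deque()                    # FIFO buffer in front of the egress port
    egress_buffer.append(b"\x00" * 1518)       # whole frame is buffered after the FCS check
    frame = egress_buffer.popleft()            # drained when the egress port can transmit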

Cut-Through Switching

Cut-through switching is the other switching method. As shown in Figure 2, it starts the forwarding process as soon as the destination MAC address of the incoming frame has been read and the egress port has been determined. The advantage of this method is that it can begin switching data earlier than the store-and-forward method. The primary characteristics of cut-through switching are the following:

  • Rapid frame forwarding
  • Fragment-free switching

Rapid Frame Forwarding

A switch using the cut-through method begins forwarding immediately once it has found the destination MAC address of the frame in its MAC address table. Unlike the store-and-forward method, the switch does not need to wait for the entire frame to be received.

A cut-through switch can make this decision quickly because of its ASICs and MAC controllers. For additional filtering, the cut-through method may need to examine a larger part of the frame's headers. For example, the switch can read the destination MAC address, source MAC address, and EtherType fields, which total 14 bytes, and examine roughly 40 more bytes to carry out more advanced Layer 3 and Layer 4 functions.
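
As a rough illustration, the basic forwarding decision needs only those first 14 bytes; the MAC address table and port name below are hypothetical:

    import struct

    # Hypothetical MAC address table: destination MAC -> egress port
    mac_table = {"aa:bb:cc:dd:ee:ff": "GigabitEthernet0/2"}

    def cut_through_decision(first_bytes: bytes):
        """Pick the egress port from the first 14 bytes (destination MAC, source MAC,
        EtherType) while the rest of the frame is still arriving on the wire."""
        dst, src, ethertype = struct.unpack("!6s6sH", first_bytes[:14])
        dst_mac = ":".join(f"{b:02x}" for b in dst)
        return mac_table.get(dst_mac)          # forwarding can begin as soon as this returns

    # Demo: destination aa:bb:cc:dd:ee:ff, all-zero source, EtherType 0x0800 (IPv4)
    print(cut_through_decision(bytes.fromhex("aabbccddeeff") + bytes(6) + b"\x08\x00"))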

This method does not drop invalid frames; frames with errors are forwarded to other segments of the network. If too many invalid frames circulate in the network, bandwidth suffers.

Fragment Free Switching

Fragment-free switching is a modified form of cut-through switching in which the switch waits for the collision window (the first 64 bytes) to pass before forwarding the frame. Each frame is checked up to the data field to make sure that no collision fragment is forwarded. This provides better error checking than pure cut-through switching with practically no additional latency. The lower latency of cut-through switching makes it more suitable for high-performance computing (HPC) applications that require process-to-process latencies of 10 microseconds or less.
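
A minimal sketch of the fragment-free rule, assuming the frame arrives as a stream of chunks (the chunked interface is hypothetical):

    COLLISION_WINDOW = 64                      # minimum legal Ethernet frame size in bytes

    def fragment_free_forward(chunks):
        """Hold the frame until the 64-byte collision window has passed, then stream
        the rest straight through; anything shorter is a collision fragment (runt)."""
        buffered = bytearray()
        started = False
        for chunk in chunks:
            if started:
                yield chunk                    # already past 64 bytes: cut straight through
                continue
            buffered.extend(chunk)
            if len(buffered) >= COLLISION_WINDOW:
                started = True
                yield bytes(buffered)          # safe to start forwarding: not a runt
        # A frame that ends before reaching 64 bytes is never yielded, i.e. it is dropped.

    # A 100-byte frame arriving in 20-byte chunks starts forwarding after the 4th chunk.
    print([len(c) for c in fragment_free_forward([bytes(20)] * 5)])   # [80, 20]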