Ethernet Frame Segmentation and MTU


MTU – The Maximum Transmission Unit (MTU) is a Layer 2 (data link) characteristic: the maximum amount of information (data) that can be sent in a single frame (e.g. an Ethernet frame). For a standard Ethernet frame, the maximum packet size that can be accommodated is 1500B.

But if the packet is larger than 1500B for any reason, Layer 2 asks Layer 3 to fragment the information, since it cannot fit into one Ethernet frame. In the early days physical media technology was not as stable and reliable as it is today, so the Internet architects preferred fragmentation: on a loss, only the small lost fragment had to be re-transmitted, not the complete information. The downside is that fragmentation puts a lot of load on the Layer 3 device responsible for it.

So under what conditions does normal HTTP or other application traffic fail to get through? Where does MTU hit? Let’s check it out…

Here are the typical overheads incurred to carry Application/Presentation/Session layer information (normally termed Data):

• TCP Header = 20B
• GRE = 24B
• IPv4 Header = 20B or IPv6 Header = 40B
• MPLS Header = 4B to 16B (Including L3VPN, FRR TE, AToM Control Word)
• Ethernet Header = 14B
• VLAN/Trunk = 4B & Q-in-Q = 8B
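The overhead figures above can be collected into a small lookup table for the calculations that follow. A minimal Python sketch (the names are illustrative, not from any library):

```python
# Per-header overheads in bytes, as listed above.
HEADER_OVERHEAD = {
    "tcp": 20,
    "gre": 24,
    "ipv4": 20,
    "ipv6": 40,
    "mpls_label": 4,   # 4B per label; a stack of labels multiplies this
    "ethernet": 14,
    "vlan": 4,         # single 802.1Q tag
    "qinq": 8,         # double tag
}

def frame_size(data_bytes, headers):
    """Total frame size for a payload plus the named headers."""
    return data_bytes + sum(HEADER_OVERHEAD[h] for h in headers)
```

For example, `frame_size(1446, ["tcp", "ipv4", "ethernet"])` gives exactly 1500B, matching the calculation below.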

Here are some examples where end-to-end communication breaks for certain customers/applications, while all other services work fine.

When everything goes well,

 

Consider a network with the default configuration, i.e. MTU 1500 on most FastEthernet interfaces (nowadays GigabitEthernet interfaces on some vendors have jumbo frames enabled by default).

If a PC behind Router A wants to send data and the configured interface MTU is 1500, the maximum data coming from the A/P/S layers is calculated as follows:

Data = 1500 – 20 (TCP) – 20 (IPv4) – 14 (Ethernet) = 1446B

This 1446B is usually considered a safe payload from customer devices: it lets all the application data pass without being dropped somewhere between source and destination. So if the customer sets the MTU on its CE WAN interface, the CE router will do the fragmentation (if required) and the traffic will usually not be dropped in transit. A Service Provider can also set the DF (Don’t Fragment) bit on incoming customer traffic so that its core routers are not overloaded with the fragmentation process.
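The safe-payload arithmetic above can be checked in a couple of lines of plain Python (following the article's convention of counting the Ethernet header against the 1500B figure):

```python
MTU = 1500
TCP, IPV4, ETHERNET = 20, 20, 14   # overheads in bytes, from the list above

# Maximum application-layer data that fits in one standard frame.
safe_payload = MTU - TCP - IPV4 - ETHERNET
print(safe_payload)  # 1446
```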

But there are scenarios where even this 1446B of traffic can be dropped. Let’s discuss them:

1) The Service Provider supports an MTU of 1500B and uses a VLAN trunk on an intermediate link:

 

 

In this scenario Routers B & C are connected over an Ethernet trunk link, which adds another 4B of VLAN tag overhead. Now if the same 1446B of traffic comes in from customer Router A, it cannot pass over the B-C link. Here is the calculation:

1446 (Data) + 20 (TCP) + 20 (IPv4) + 14 (Ethernet) + 4B (VLAN TAG) = 1504B (Required MTU)

If the customer application sets the DF bit, or the SP has set it on the incoming customer traffic, then Router B will not fragment and the traffic will be dropped. To resolve this issue, the B-C link should support at least 1504B.
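Scenario 1 in code: the 4B VLAN tag pushes the required size past the 1500B the link supports, so a DF-marked packet is dropped rather than fragmented (a sketch; the values come from the calculation above):

```python
LINK_MTU = 1500
required = 1446 + 20 + 20 + 14 + 4   # data + TCP + IPv4 + Ethernet + VLAN tag

df_set = True                        # DF bit marked by customer or SP
if required > LINK_MTU and df_set:
    action = "drop"                  # cannot fragment: DF is set
elif required > LINK_MTU:
    action = "fragment"
else:
    action = "forward"
print(required, action)  # 1504 drop
```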

Let’s discuss another scenario as an example:

2) The Service Provider supports MPLS along with VLAN tagging:

 

 

In this scenario the Service Provider network B-C-D supports an MTU of 1504B. Routers C & D are connected over an Ethernet trunk link and also run MPLS, which adds 4B of VLAN tag overhead plus 4B of MPLS label overhead. Now if the same 1446B of traffic comes in from customer Router A, it can pass over the B-C link, but not over the C-D link. Here is the calculation:

1446 (Data) + 20 (TCP) + 20 (IPv4) + 4 (MPLS Label) + 14 (Ethernet) + 4B (VLAN TAG) = 1508B (Required MTU)

Similarly, if the customer application sets the DF bit, or the SP has set it, then Router C will not fragment and the traffic will be dropped. To resolve this issue, the C-D link should support at least 1508B.
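Scenario 2 in code: against the 1504B link MTU, the extra MPLS label leaves the C-D link 4B short (a sketch using the numbers from the calculation above):

```python
LINK_MTU = 1504   # what the B-C-D links support

# data + TCP + IPv4 + MPLS label + Ethernet + VLAN tag, per the calculation
required_cd = 1446 + 20 + 20 + 4 + 14 + 4

shortfall = required_cd - LINK_MTU
print(required_cd, shortfall)  # 1508 4
```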

 

 

The case is worse when the Service Provider runs MPLS Traffic Engineering and the customer traffic is carried over a VPN: this adds up to 12B of additional label overhead. If Q-in-Q is supported, add another 4B. If IPv6 is the transport protocol, the IP header overhead grows to 40B instead of the 20B IPv4 header. And if the customer uses GRE tunneling, another 24B of GRE overhead is added.
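Stacking the extra overheads from the paragraph above gives a feel for how quickly the headroom disappears. A rough tally (the exact figures vary by deployment; these are the article's numbers):

```python
base = 1446 + 20 + 14          # data + TCP + Ethernet (IP header added below)

extras = {
    "ipv6_header": 40,         # IPv6 transport, instead of the 20B IPv4 header
    "mpls_te_vpn": 12,         # up to 12B of labels (TE + VPN)
    "qinq_extra": 4,           # second 802.1Q tag on top of the first
    "gre": 24,                 # GRE tunneling overhead
}

worst_case = base + sum(extras.values())
print(worst_case)  # 1560
```

At 1560B, even links provisioned for 1508B are far short, which is why such cores are usually run with a comfortably larger MTU.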
