Oracle Linux And VM Server: Improve Performance On Network With Heavy Packet Loss
Last updated on APRIL 13, 2018
Applies to: Linux OS - Version Oracle Linux 6.9 with Unbreakable Enterprise Kernel [4.1.12] and later
Information in this document applies to any platform.
Network spans that drop packets can cause performance degradation, because the receiving end must wait for every segment of a large message to arrive, and the entire message must be resent even if only one segment is lost. Before the loss is reported to the sender, a timeout must expire, and that delay period grows longer with each successive drop. Network throughput therefore progressively degrades for as long as the drops continue.
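The growing delay described above comes from TCP's exponential retransmission backoff: each unsuccessful retry doubles the retransmission timeout (RTO), up to a cap. The sketch below illustrates this with assumed values of a 200 ms initial RTO and a 120 s ceiling (the usual Linux TCP_RTO_MIN and TCP_RTO_MAX); in a real connection the initial RTO also adapts to the measured round-trip time.

```python
# Illustrative sketch of TCP exponential retransmission backoff
# (RFC 6298 style doubling). Assumed values: 200 ms initial RTO,
# 120 s cap; real kernels derive the initial RTO from measured RTT.

def rto_schedule(retries, initial_rto=0.2, rto_max=120.0):
    """Return the timeout (seconds) used for each successive retry,
    doubling after every attempt until the cap is reached."""
    rto, schedule = initial_rto, []
    for _ in range(retries):
        schedule.append(rto)
        rto = min(rto * 2, rto_max)
    return schedule

if __name__ == "__main__":
    timeouts = rto_schedule(8)
    print("per-retry timeouts:", ["%.1fs" % t for t in timeouts])
    print("total wait across 8 retries: %.1fs" % sum(timeouts))
```

Eight consecutive losses of the same segment already accumulate roughly 51 seconds of waiting, which is why sustained packet loss degrades throughput so sharply.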
On a network with packet loss, a noticeably slow retransmission rate may be observed, because the TCP retransmission timeout increases with each retry.
That is the expected behavior under packet loss, and the real issue to investigate is why packets are being dropped in the first place.
However, it is possible to have the system perform fast retransmission, which can alleviate the issue. The goal of this KM document is to provide some recommendations on how to do that.
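As a starting point, the kernel tunables that govern TCP loss recovery can be inspected before any changes are made. The sketch below reads a few standard sysctl entries that exist on a 4.1-series kernel such as UEK4 (net.ipv4.tcp_sack, tcp_early_retrans, tcp_frto, tcp_reordering); this is an illustrative inspection script, not the document's own recommended settings, and the commentary on each tunable is general background rather than Oracle guidance.

```python
# Sketch: report the current values of TCP loss-recovery tunables
# relevant to fast retransmission on a Linux host. These sysctl names
# exist in 4.1-series kernels (e.g. UEK4); consult the vendor note for
# the actual recommended values.

from pathlib import Path

TUNABLES = {
    "net.ipv4.tcp_sack":          "selective ACKs, so only lost segments are resent",
    "net.ipv4.tcp_early_retrans": "early retransmit / tail loss probe behavior",
    "net.ipv4.tcp_frto":          "F-RTO, reduces spurious timeout retransmissions",
    "net.ipv4.tcp_reordering":    "initial reordering metric tied to the dup-ACK threshold",
}

def read_sysctl(name):
    """Return the value of a sysctl entry via /proc/sys, or None if absent."""
    path = Path("/proc/sys") / name.replace(".", "/")
    try:
        return path.read_text().strip()
    except OSError:
        return None

if __name__ == "__main__":
    for name, purpose in TUNABLES.items():
        value = read_sysctl(name)
        shown = value if value is not None else "<not present>"
        print(f"{name} = {shown}  # {purpose}")
```

Persistent changes would normally go into /etc/sysctl.conf (applied with `sysctl -p`), but which values to set, and whether to set them at all, depends on the root cause of the packet loss.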