
Network Latency - Common Causes and Best Solutions | IR

Written by IR Team | Dec 14, 2020 4:30:00 PM

Network latency (sometimes referred to as lag) is best described as the delay in the time that data takes to transfer across a network. A long delay indicates high latency, while a short delay indicates low latency.

The longer the lag, the greater the chance of inefficiency in the network, which can be detrimental, especially for real-time business operations that rely on sensor data. Businesses therefore prefer low latency and fast network communication for better productivity and more efficient business operations.

A complete guide to understanding, monitoring and fixing network latency.

Network latency sits alongside two other troublesome network complications: packet loss and jitter.

This guide will explain everything you need to know about the causes of latency, how to troubleshoot it, and ways to reduce it and improve application and network performance.


 

What is network latency?

As mentioned, the term is used to describe delays in communication over a network. It is best thought of as the amount of time taken for a packet of data to travel through multiple devices, then be received at its destination and decoded. Latency itself, however, is a measurement of time taken for data to travel, not of how much data is downloaded over time.

When latency is high, long delays create bottlenecks in communication. In the worst cases, it’s like traffic on a four-lane highway trying to merge into a single lane. High latency decreases effective communication bandwidth, and can be temporary or permanent, depending on the source of the delays.

Latency is measured as the time, in milliseconds, that it takes for data to transfer. During speed tests it is reported as a ping rate: the lower the ping rate, the better the performance. A ping rate of less than 100 ms is considered acceptable, but for optimal performance, latency in the range of 30-40 ms is desirable.

The ping command also tests the accessibility of network devices by sending Internet Control Message Protocol (ICMP) echo request packets to a target device, measuring the round-trip time and confirming that the device is reachable.
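For illustration, here is a minimal Python sketch that shells out to the system ping utility and prints the round-trip summary it produces. It assumes a Unix-like ping that accepts -c and reports an "rtt min/avg/max" line; the host name is only an example.

```python
import subprocess

def ping_host(host: str, count: int = 4) -> str:
    """Send ICMP echo requests via the system ping utility and return its output."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],   # -c works on Linux/macOS; Windows uses -n
        capture_output=True, text=True, timeout=30
    )
    return result.stdout

if __name__ == "__main__":
    output = ping_host("example.com")
    # On Unix-like systems the summary line looks roughly like:
    # rtt min/avg/max/mdev = 10.2/11.5/13.0/0.9 ms
    for line in output.splitlines():
        if "min/avg/max" in line:
            print(line)
```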


What causes network latency?

There are countless variables that can cause network lag. Here are some of the most common:

1. Distance data packets have to travel

One of the main causes of network latency is distance, or how far away the device making requests is located from the servers responding to those requests. 

For example, if a website is hosted in a data center in Trenton, New Jersey, it will respond quickly to a data packet sent from users in Farmingdale, NY (100 miles away), most likely within 10-15 milliseconds. On the other hand, users in Denver, Colorado (about 1,800 miles away) will face longer delays of up to 50 milliseconds.

Locating servers and databases geographically closer to users can cut down on the physical distance that data packets need to travel.

The total time it takes for a request to travel from the client to the server and for the response to return is referred to as Round Trip Time (RTT). While an increase of a few milliseconds might seem negligible, it compounds across the multiple round trips needed to load a page. There are other considerations that can increase latency (the short sketch after this list shows how those round trips add up):

  • The to-and-fro communication necessary for the client and server to make that connection in the first place.

  • The total size and load time of the page
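As a rough illustration of how round trips compound, the sketch below multiplies an assumed RTT by the handful of round trips a fresh HTTPS connection typically needs. The round-trip count and RTT values are assumptions for illustration, not measurements.

```python
def connection_setup_time_ms(rtt_ms: float) -> float:
    """Rough time to the first byte of a response over a fresh HTTPS connection.
    Assumes ~1 RTT for the TCP handshake, ~1 RTT for the TLS handshake and
    ~1 RTT for the HTTP request/response (illustrative, not exact)."""
    round_trips = 3
    return rtt_ms * round_trips

for rtt in (15, 50):   # the example RTTs from the paragraph above
    print(f"RTT {rtt} ms -> roughly {connection_setup_time_ms(rtt)} ms before the first byte arrives")
```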

2. Website construction

Web page construction makes a difference. Pages that carry heavy content, large images, or load content from several third-party websites may cause network congestion, as browsers need to download larger files to display them.

3. Transmission medium

The type of transmission medium can affect latency. Data packets travel across large distances in different forms, either through electrical signals over copper cabling, or light waves over fiber optic cables (generally lower latency) or a wireless network connection (generally higher latency), or even a complex web of networks with multiple mediums.

4. End-user issues

Network problems might appear to be responsible for latency, but sometimes RTT latency is the result of the end-user device lacking the memory or CPU cycles needed to respond in a reasonable time frame.

5. Physical issues

In a physical context, common causes of lag are the components that move data from one point to the next: the cabling itself, plus hardware such as routers, switches and WiFi access points. In addition, latency can be influenced by other network devices like application load balancers, security devices and firewalls.

6. Storage delays

Delays can occur when accessing a stored data packet, with intermediate devices such as switches and bridges holding packets up before forwarding them.

Latency vs bandwidth vs throughput

Latency, bandwidth and throughput all contribute to the quality of communications. Throughput and latency in particular are used to measure network performance and improve load times. To understand the relationship, imagine that data packets flow through a pipe:

Bandwidth is the width of the pipe. The narrower the pipe, the less data allowed to travel back and forth through it. The wider the communication band, the more data that can flow through it simultaneously.

Latency is how long it takes the data packets inside the pipe to travel from client to server and back. Packet latency depends on the physical distance that data must travel through cords, networks and the like to reach its destination.

Throughput is the volume of data that can be transferred over a specified time period.

Low latency with low bandwidth means that throughput will also be low. While data packets should technically be delivered without delay, low bandwidth means there can still be considerable congestion. With high bandwidth and low latency, however, throughput is greater and the connection much more efficient.
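One way to make this relationship concrete is the bandwidth-delay product, together with the classic single-stream TCP limit (throughput is bounded by window size divided by RTT). The link speed, window size and RTT in this sketch are assumed figures for illustration.

```python
def bandwidth_delay_product_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bytes that can be 'in flight' on the link at once (bandwidth x delay)."""
    return (bandwidth_mbps * 1_000_000 / 8) * (rtt_ms / 1000.0)

def max_tcp_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput: window size / round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

# An assumed 100 Mbps link with a 64 KB TCP window and 40 ms RTT
print(bandwidth_delay_product_bytes(100, 40))   # ~500,000 bytes can be in flight
print(max_tcp_throughput_mbps(64 * 1024, 40))   # ~13 Mbps: latency, not bandwidth, is the limit
```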

Other types of latency

Now that we have looked at what network latency means in general and how it affects smooth communications, the following describes three other examples of latency and their effects.

Fiber optic latency

In the case of fiber optic networks, latency refers to the time delay that affects light as it travels through the fiber optic network. Latency increases over the distance traveled, so this must also be factored in to compute the latency for any fiber optic route.

Based on the speed of light (299,792,458 meters per second), there is a latency of 3.33 microseconds for every kilometer covered in a vacuum (a microsecond is one millionth of a second). Light travels more slowly through glass, which means the latency of light traveling in a fiber optic cable is around 4.9 microseconds per kilometer.
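As a quick sketch, the propagation component of fiber latency can be estimated directly from the route length using the 4.9 microseconds-per-kilometer figure above; the 1,200 km distance here is only an example.

```python
FIBER_US_PER_KM = 4.9   # light is slower inside glass fibre than in a vacuum

def fiber_one_way_latency_ms(distance_km: float) -> float:
    """One-way latency contributed purely by propagation through fibre."""
    return distance_km * FIBER_US_PER_KM / 1000.0

# A hypothetical 1,200 km fibre route
print(f"{fiber_one_way_latency_ms(1200):.2f} ms one way")   # ~5.88 ms
```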

The quality of fiber optic cable is an important factor in reducing latency in a network.

VoIP latency

Latency in VoIP is the difference in time between when a voice packet is transmitted and the moment it reaches its destination. A latency of 20 ms is normal for VoIP calls; a latency of up to 150 ms is barely noticeable and therefore acceptable. Any higher than that, however, and quality starts to diminish. At 300 ms or higher, it becomes completely unacceptable.
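Those thresholds can be expressed as a simple classification; the cut-offs in this sketch are the figures quoted above.

```python
def voip_quality(latency_ms: float) -> str:
    """Classify VoIP latency using the thresholds described above."""
    if latency_ms <= 20:
        return "normal"
    if latency_ms <= 150:
        return "barely noticeable - acceptable"
    if latency_ms < 300:
        return "noticeable - quality starts to diminish"
    return "unacceptable"

print(voip_quality(120))   # barely noticeable - acceptable
```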

Operational latency

This refers to the time lag due to various computing operations when they run one after another in a sequence. Operational latency is calculated as the sum total of the time each individual operation takes. In parallel workflows, the slowest operation determines the operational latency time. 
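A minimal sketch of the difference, using hypothetical per-operation times: sequential operations add up, while parallel operations are bounded by the slowest one.

```python
operation_times_ms = [12, 35, 8, 20]   # hypothetical per-operation latencies

sequential_latency = sum(operation_times_ms)   # operations run one after another
parallel_latency = max(operation_times_ms)     # slowest operation dominates

print(sequential_latency)   # 75 ms
print(parallel_latency)     # 35 ms
```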

Monitoring and improving network latency

As your network infrastructure grows, and the amount of data increases, having additional connections means more points where delays and issues can happen.

Problems can multiply as more and more organizations connect to cloud servers, use more applications and expand to accommodate remote workers and extra branch offices.

Higher latency can severely threaten website performance, customer satisfaction, business deadlines, expected outcomes and, eventually, ROI.

In industries such as telerobotics and teledriving, where video-enabled remote operations are at their core, keeping latency low is critical. Every organization wants to reduce latency, and this is where comprehensive network monitoring and troubleshooting comes into its own.

Network monitoring and troubleshooting can quickly and accurately diagnose the root causes of high network latency and put solutions in place to reduce it.

How to Reduce Network Latency

One simple way to improve network latency is to check that others on your network aren’t unnecessarily using up your bandwidth, or increasing your latency with excessive downloads or streaming. Then, check application performance to determine whether applications are acting unexpectedly and potentially placing pressure on the network.

Use of a Content Delivery Network (CDN) can significantly reduce latency. A CDN places servers at internet exchange points along different network paths, where various internet providers link to each other, so content can be served from a location closer to the user. Huge technology companies, such as Google, Apple, and Microsoft, use CDNs to reduce latency in loading web page content.

Subnetting is another way to help reduce latency across your network, by grouping together endpoints that frequently communicate with each other.

Additionally, you could use traffic shaping and bandwidth allocation to improve latency for the business-critical parts of your network.

Finally, you can use a load balancer to help offload traffic to parts of the network with the capacity to handle some additional activity.

How to Troubleshoot Network Latency Issues

To check if any of the devices on your network are specifically causing issues, you can try disconnecting computers or network devices and restarting all the hardware. You’ll need to ensure that you have network monitoring deployed. 

An ethernet connection instead of WiFi can provide a more consistent internet connection and typically improves internet speed.

If you still have latency problems after checking all your local devices, it’s possible the issues are coming from the destination you’re trying to connect to.

How to Test Network Latency

Testing network latency can be done using ping or traceroute (tracert), although comprehensive network monitoring and performance management tools can measure latency more accurately.
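Where ICMP is blocked or you cannot run ping, one hedged alternative is to time a plain TCP connection from Python. This is not the same measurement as ping or traceroute, but it gives a comparable round-trip figure; the host and port below are only examples.

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Time a TCP handshake to the host as a rough latency probe."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = [tcp_connect_latency_ms("example.com") for _ in range(5)]
print(f"min {min(samples):.1f} ms, avg {sum(samples)/len(samples):.1f} ms, max {max(samples):.1f} ms")
```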

Maintaining a reliable network is an important part of a smoothly operating business.  Network issues can become worse if they’re not managed properly. 

What can improve network latency?

Network monitoring and troubleshooting tools like IR Collaborate are the best way to achieve a low latency network.

You can typically set baseline expectations for latency on your network and create alerts when it rises a certain threshold above this baseline.
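Conceptually, baselining and alerting can be as simple as comparing each new sample against a rolling average, as in the sketch below; the window size and threshold are assumed values, and in practice a monitoring platform handles this for you.

```python
from collections import deque

class LatencyBaseline:
    """Keep a rolling baseline of latency samples and flag samples that exceed it."""

    def __init__(self, window: int = 100, threshold_ms: float = 50.0):
        self.samples = deque(maxlen=window)
        self.threshold_ms = threshold_ms   # how far above baseline triggers an alert

    def add(self, latency_ms: float) -> bool:
        """Record a sample; return True if it should raise an alert."""
        baseline = sum(self.samples) / len(self.samples) if self.samples else latency_ms
        self.samples.append(latency_ms)
        return latency_ms > baseline + self.threshold_ms

monitor = LatencyBaseline()
for sample in (32, 35, 31, 120):   # hypothetical measurements in ms
    if monitor.add(sample):
        print(f"Alert: {sample} ms is well above the baseline")
```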

Network monitoring tools can also help you compare different metrics, so you can identify whether issues such as application performance problems or errors are contributing to latency.

A network mapping tool can also help you pinpoint where within the network the latency issues are occurring, which allows you to troubleshoot problems more quickly.

Specific traceroute tools monitor packets and how they move across an IP network, including how many “hops” the packet took, the roundtrip time, best time (in milliseconds), as well as the IP addresses and countries the packet traveled through.

By improving your network speed and reducing latency, your business processes will also make leaps and bounds towards efficiency and high performance.

Key takeaways

This guide has been created to define network latency and to help identify, understand and troubleshoot the most common problems related to latency in computer networks.

Network latency, jitter, and packet loss can severely impede clear communication and universally affect your user experience (UX). If you can measure and keep latency low, your user experience will improve dramatically.

How IR Collaborate can help

In a complex, multi-vendor unified communications ecosystem, we help you avoid, quickly find and resolve performance issues in real-time – across your on-premises, cloud or hybrid environments.

  • Ensure a positive end-user experience with one-click troubleshooting for all network issues affecting UC performance. Deployment and getting started is quick, generating insights within minutes of installation across multiple sites within your environment.

  • You can improve IT efficiency with the ability to operate and troubleshoot your entire multi-vendor UC environment from a single viewing point.

  • Reduce costly outages and service interruptions with automated, intelligent alerts.

Plan, deploy and migrate new technologies with confidence.

Download a PDF copy of the Optimizing your Network Guide

For further insightful information on network performance complications, download our additional guides, which fully explain latency, jitter and packet loss:

What is Network Packet Loss? A Complete Guide to Understanding, Monitoring and Fixing Network Packet Loss.

What is Network Jitter? A Complete Guide to Understanding, Monitoring and Fixing Jitter.