Latency Blog Series Part 1 – What is Latency?
Let’s take a closer look at latency, and why it can be such a nightmare
We’ve all experienced the effects of ‘lag’ before: perhaps you’ve spent ages waiting for a web page to load at work, or you’ve seethed in your seat as a streamed movie suddenly starts buffering at a critical moment in the plot. Or perhaps you’re a gamer, in which case you know all too well how badly lag can ruin the gaming experience.
But how much do you know about latency really?
Could you describe it in a sentence? Do you know what causes lag, and what’s being done behind the scenes to minimize the impact of latency?
If you don’t feel confident enough to answer, fear not; you’re in the right place!
In this blog series, we’re going to be taking a deep dive into the world of latency. In the first part of this four-part series of bite-sized blogs, we’re going to examine what latency is, and then we’re going to look into why it can be such a troublesome factor to deal with.
Following this, we’ll spend part 2 of our latency series looking through the main causes of latency, and in part 3 we’ll reveal what we do behind the scenes to prevent all of these issues from impacting the user experience and product performance. Make sure to check out those articles too!
So then… what is latency?
Latency
‘Latency’ refers to the time delay between the cause and effect of an action performed over the internet, due to the time taken for the action request to be sent from one place to another across a network.
This is normally measured as a round trip between an originating device (the client device) and a data centre and back: this journey represents the time taken to request an action (accessing an email) and for that action to be completed (the email being opened).
While these tasks are completed very quickly, they still depend on information travelling across physical space, through a maze of different networks and network infrastructure equipment, and then finally retrieving the information from a data centre… before doing it all over again on its way back to the user.
Because of this, unless the network is perfectly maintained, actions can take longer to complete. This increased delay is known as higher latency, which simply means that it takes a longer period of time to complete a requested action.
So, for all intents and purposes, ‘latency’ represents the time it takes for a user to successfully complete an intended action over a network: ‘low latency’ means that it takes a short amount of time to complete a user’s action or application’s request (which is good), and ‘high latency’ means that it takes a longer amount of time to complete an action (which is bad).
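To make the round-trip idea concrete, here is a minimal sketch (not from the original article) of how latency can be measured in code: it times how long a TCP connection takes to be established with a remote server and back. The host and port below are placeholder assumptions you would swap for your own endpoint:

```python
import socket
import time

def measure_latency_ms(host: str, port: int = 443, samples: int = 3) -> float:
    """Return the average time, in milliseconds, taken to open a TCP
    connection to (host, port) — a rough proxy for round-trip latency."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # The connection completes only after the TCP handshake has made
        # a full round trip to the server, so the elapsed time reflects latency.
        with socket.create_connection((host, port), timeout=5):
            pass  # close immediately; we only wanted the handshake timing
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# Example usage (hypothetical host):
# print(f"{measure_latency_ms('example.com'):.1f} ms")
```

Averaging over a few samples smooths out one-off network jitter; dedicated tools such as `ping` use ICMP instead, but the principle — timing a request’s journey there and back — is the same.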
But if the internet can transfer information so quickly to begin with, why is a delay of an extra second or two so significant?
Time Out Errors, Frustrated Users, and Unpopular Products – What Latency Does to Application Performance
Put simply, high latency can cripple application performance.
Over the last decade, every major industry has shifted almost entirely to a fast, digitised, application-driven world, including the games industry, broadcasting and media streaming, defence, finance, production, and naturally, the IT industry.
For this network of people and applications to work effectively, there needs to be quick, unimpaired communication that allows this ecosystem of digital applications to support modern business operations.
If there are delays in the communications between applications due to high latency, then there can be massive implications for the business.
These latency-induced errors will present themselves in many different ways, depending on the applications or products that they’re affecting.
Latency can:
- Ruin a game player’s experience, and cripple a movie streaming provider’s quality of service
- Cause large loading times that give shoppers those 2 extra seconds of frustration that makes them think twice about completing an online purchase
- Cause applications that your business relies on, such as VoIP or remote working tools, to perform so badly that they are effectively unusable
- Stop products from working in client locations, negatively impacting your client business, impeding your ability to maintain SLAs, and over time, losing you customers as a result.
Now, whilst these errors all have different effects and consequences, they all have one thing in common: they are caused by unacceptably high latency, which often stems from inefficient networking on the part of a contracted hosting provider.
However, your users won’t blame your infrastructure provider; they will blame you, and your product, and your business will be the one that suffers as a result.
Therefore, choosing your server hosting partner is a very important decision, as they will have a huge impact on how happy your users are, how effectively your products perform, and how smoothly your internal infrastructure runs. All of these factors, in turn, shape how successful your business is.
How prepared is your provider?
So when you’re choosing your hosting partner, you should look to see how they tackle the three main causes of latency, and if they’re truly prepared to support your business. In order to be able to critically evaluate potential server providers, you’ll need to know what the main causes of latency are!
To help you on your way, we’ve got everything you need to know in the next part of our blog series – The Main Causes of Latency. Read it to learn about the main causes of latency, and why they can lead to such big issues in technology!
Alternatively, feel free to jump to our final blog post of the series, How Ingenuity Cloud Services Prevent Latency Issues, to examine what we do behind the scenes to give our clients high-quality service all across the globe, and see if we could be the ideal server provider for you too!