Future-proof IT

How bandwidth obsession masks what truly matters: quality of experience

Oct 26, 2021

Our obsession with network bandwidth

Nielsen’s Law states that internet bandwidth grows by 50% per year, doubling roughly every 21 months. Casual observers may equate this with a perpetual gain in performance and speed because of the popular misconception that bandwidth is a measure of speed. Think of bandwidth as the number of lanes on the motorway, not how fast the cars (packets) move. Latency and packet loss also determine how fast data moves across a network. Any network engineer will tell you that high latency, latency variation (jitter), or high packet loss will kill the performance of almost any network-based application, especially real-time applications like Microsoft Teams or Zoom.
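
To make this concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not from the original article) using the well-known Mathis approximation for steady-state TCP throughput. The takeaway: round-trip latency and packet loss cap what a single flow can achieve, no matter how wide the pipe is.

from math import sqrt

def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate per-flow TCP throughput ceiling: (MSS / RTT) * (1.22 / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

# Example: with 100 ms RTT and 0.1% packet loss, a single TCP flow tops out
# around 4.5 Mbps, even on a 1 Gbps link.
print(f"{tcp_throughput_bps(1460, 0.100, 0.001) / 1e6:.1f} Mbps")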

Nielsen's Law of Internet Bandwidth


In the last two decades, bandwidth leaped from 9,600 bps to 1 Gbps, opening the door to an astounding array of applications. In 25 years, we could have 100 Tbps powering full VR experiences for all of our senses. By then, superior customer and end-user experience will be everything, which is why companies need to start measuring and tracking this metric today. Gartner’s Market Guide on Digital Experience Monitoring (DEM) reports that “by 2023, 60% of digital business initiatives will require I&O to report on users’ digital experience, up from 15% today.”
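
For the curious, here is a quick sanity check of the growth math in Python (illustrative numbers, not from the article): 50% annual growth doubles bandwidth roughly every 21 months and compounds to a factor of about 25,000x over 25 years, which is how today’s multi-gigabit connections could plausibly reach the terabit range.

from math import log

annual_growth = 1.5  # Nielsen's Law: +50% per year
doubling_years = log(2) / log(annual_growth)
growth_25_years = annual_growth ** 25

print(f"Doubling time: {doubling_years * 12:.0f} months")  # ~21 months
print(f"Growth over 25 years: {growth_25_years:,.0f}x")    # ~25,251x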

With bandwidth an enabler rather than an inhibitor of next-gen experiences, can prioritizing packets help improve performance?

Forget about QoS (quality of service), it’s all about QoE (quality of experience)

It was a sunny Friday morning in Copenhagen. I was gazing out the window, talking with one of my colleagues in Germany, when my manager walked over to my desk to discuss yet another escalation from management, this time about the poor user experience with Skype for Business. Users were complaining about calls dropping, video pixelation, and, in some cases, a completely unusable service due to poor voice and video quality. My first thought was that the problem must be the network. Sure enough, when I checked, many of the WAN links were running full, and I needed to understand why. I soon discovered it was software distribution traffic: it was that time of the month when Microsoft’s software updates (packages) were being distributed to all of the distribution points (servers) across our 100 office locations. Does this sound familiar?

The reason many links were saturated was that these locations were connected with expensive, low-bandwidth MPLS WAN links, and some sites had only a few Mbps of bandwidth. I wondered why we were flooding the network with updates during business hours. Couldn't we schedule this for the evening or overnight? The other annoyance was that the traffic was not time-sensitive; it didn't matter if it took an hour or two. So why were we sending all of this traffic over expensive MPLS links and degrading business voice and video traffic for users? Couldn't we just send it over high-bandwidth, low-cost internet connections?

Misunderstood QoS control mechanisms

I discussed these points with management and my operational team, explaining how QoS (Quality of Service) works and how it helps unclog congested networks. It is a common misconception that QoS can magically improve the quality of certain traffic flows and that packets with special tags somehow move faster than other packets. QoS only helps when the pipe is full; that's right, QoS kicks in only when you run out of bandwidth, at which point the router drops non-critical traffic to ensure your critical traffic flows continue. An alternative to complex QoS control mechanisms is to generously over-provision bandwidth. In hindsight, this would have been the better approach.
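
To illustrate that point, here is a small toy model in Python (my own sketch, not any vendor's implementation) of strict-priority queuing: as long as the link has spare capacity, the priority tags change nothing; only when the pipe is full does the scheduler drop low-priority traffic to keep the critical flows moving.

from collections import deque

def drain(link_capacity: int, high: deque, low: deque) -> dict:
    """Toy strict-priority scheduler for one sending interval: high-priority
    packets go first, then low-priority; whatever doesn't fit is dropped."""
    sent = {"high": 0, "low": 0, "dropped_low": 0}
    budget = link_capacity
    while high and budget:
        high.popleft()
        sent["high"] += 1
        budget -= 1
    while low and budget:
        low.popleft()
        sent["low"] += 1
        budget -= 1
    sent["dropped_low"] = len(low)
    return sent

# Uncongested link: priority tags make no difference.
print(drain(100, deque(range(10)), deque(range(20))))  # all 30 packets sent
# Saturated link: voice/video (high) still gets through; update traffic (low) is dropped.
print(drain(100, deque(range(80)), deque(range(60))))  # 80 high + 20 low sent, 40 low dropped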

We experienced capacity limitations on expensive MPLS WAN connections, but the fact is that bandwidth, especially internet bandwidth, is becoming more and more abundant, so we can essentially ensure the pipe never gets full. If only I had had a technology that could securely tunnel traffic across the internet so that, per application, I could offload the MPLS WAN and move non-critical traffic to internet lines. I suppose this is what we now call SD-WAN.

There was a common belief that voice, video, and other real-time traffic couldn't run over the internet and needed a reliable dedicated WAN with MPLS and QoS tagging. Today’s WFH era has shown us that with advanced software, stable internet infrastructure, and plenty of bandwidth, we can collaborate with our coworkers from home and it just works.  

Digital experience is king, not the network

This is why we need to work towards Quality of Experience (QoE), which is a measure of the delight or annoyance of a customer's experience with a service. Enterprises are now focusing on improving the digital experience and keeping users productive; if issues arise, they are identified and resolved quickly. A user cares about the experience they have with applications on the endpoint, not about what the network is doing. So why are we so obsessed with the network, when what we should be measuring is the user experience, which matters far more than raw network performance?

The internet is the new corporate network

As Nielsen's Law continues to drive bandwidth abundance and enterprises shift towards the internet as the new corporate network, don’t rely on the WAN and QoS to deliver a great user experience; instead, ensure users have plenty of internet bandwidth. Don’t waste time troubleshooting complex, overloaded networks and QoS rules. Embrace the internet as the connectivity fabric for a simpler, high-bandwidth user experience. But wait, what about latency? How do we solve that? Stay tuned for part 2 of this series.
