QoS: Past and Present

This post is the prelude to a more technically inclined article about Quality of Service (QoS). Before digging into the technical details, I wanted to cover a bit of history about telecommunication networks and how QoS evolved from them.

Historically, telecommunication networks were based on circuit-switched technologies. A fixed and finite amount of resources was allocated to services, most often telephony. In that case, time slots were dedicated to phone calls, which inherently provided strong built-in Quality of Service guarantees. To prevent congestion, circuit-switched networks simply rejected calls when resources became unavailable. If more calls were placed than the network supported, the excess connections were refused and callers heard a busy tone. This obviously forced users to try their calls again later, but once a call was established, they got a reliable service with the consistent quality they expected and always experienced.

Circuit-switched networks were inherently designed for loss-sensitive and delay-sensitive applications like voice and, to some extent, video, but they were clearly not appropriate for heterogeneous and structure-agnostic protocols. Note, though, that some technologies from the circuit-switched era are still in use today. A good example is Packet over SONET/SDH links, which are still widely used by service providers, even if they tend to be replaced by Ethernet. Synchronous Digital Hierarchy (SDH) was originally designed to transport a large but finite number of phone channels (DS0/E0/J0). Despite their higher cost, SONET/SDH links remain in wide use because they're already deployed, and because they provide efficient OAM capabilities.

In contrast to circuit-switched networks, packet-switched networks are designed around the concept of statistical multiplexing. Statistical multiplexing allows the bandwidth to be divided into an arbitrary number of channels or data streams. This flexibility allows the network to allocate resources as needed, independently of the nature of the data. Transmission resources are still limited, but this time they're not bound to specific channels, which allows for a more efficient use of bandwidth. Packet-switched networks also bring in the concept of buffering. When not enough resources are available, the network buffers the excess traffic. Because no buffer is infinite, when it fills up, packets are discarded (dropped) until the buffer drains and can accommodate more data. Buffering is both good and evil: it allows excess data to be stored until the medium or the network is ready to transmit more information, but at the same time it introduces delay. Delay can be disruptive to applications like voice or real-time conferencing, two technologies in widespread use today.
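To make the buffering-then-dropping behavior concrete, here is a minimal Python sketch of a finite buffer with tail drop. The class and figures are my own, purely illustrative; real routers implement far more sophisticated queuing.

```python
from collections import deque

class TailDropBuffer:
    """Illustrative finite packet buffer: stores packets until full, then drops."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # buffer full: excess packet is discarded
            return False
        self.queue.append(packet)      # buffered: will add queuing delay
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

# A burst of 5 packets arrives faster than a 3-packet buffer can drain:
buf = TailDropBuffer(capacity=3)
for pkt in range(5):
    buf.enqueue(pkt)
print(len(buf.queue), buf.dropped)     # 3 packets buffered, 2 dropped
```

The packets that fit are delayed (they wait in the queue); the ones that don't are lost outright, which is exactly the delay/loss trade-off described above.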

In today's networks, we all consume peer-to-peer voice, not to mention P2P file sharing (yeah, I'm pretty sure you do). We also exchange our moods and a lot of stories over social media, share pictures of our holidays, and so on. This drives a large amount of data, and as the network becomes busy and congested, flows compete with each other. If the network experiences congestion, and it usually does at one time or another, data is buffered (delayed) and, in the worst case, discarded (dropped). Retransmissions may follow data loss, and may unfortunately worsen the congestion overall, because they consume additional resources to carry the same information, which is a very inefficient use of bandwidth. Retransmission also means delay, and real-time applications can hardly tolerate more delay than what users are accustomed to. Have you ever had to make a phone call over a 600ms satellite link? I'm pretty sure you know what I mean.

Network convergence drives the need for Quality of Service: while traffic of different natures and behaviors competes for the same resources, real-time applications still require guaranteed resources in terms of bandwidth, delay, jitter and loss. These four parameters are crucial and must be well known and understood for a proper QoS design. They all directly affect real-time applications and how users perceive the performance and quality of the service. Peer-to-peer file sharing obviously won't be much affected by jitter, but it's one more competing application, and a pretty aggressive one.

As I mentioned previously, four parameters are crucial to a proper QoS design.

  • Bandwidth: It defines the end-to-end information capacity. Some people may argue that the best way to solve congestion is to add more bandwidth. They are right, but that in no way means QoS won't be necessary. If you consider how packet-switched networks behave and how people use computer networks today, bandwidth is definitely not an infinite resource.
  • Delay: It is the time information takes to travel between two distant communication endpoints. End-to-end delay is the sum of the propagation, queuing, transmission (or serialization) and processing delays. Delay is a very important and sometimes overlooked factor in QoS.
  • Delay Variation: Delay variation, also known as jitter, is the variation of delay over time. A very large variation in end-to-end delay can be even more disruptive than a reasonably long but fixed delay. Voice decoders are particularly sensitive to jitter, and most VoIP endpoints implement small de-jitter buffers to accommodate this variation, at the expense of a small additional buffering delay (usually <100ms).
  • Loss: Packet loss can result from faulty equipment, network reconvergence, damaged media or overflowing buffers. Loss is also very disruptive to some applications. Methods like Forward Error Correction (FEC) are commonly used to fight the effects of loss and retransmissions on lossy media like radio links.
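As a back-of-the-envelope illustration of the delay components listed above, the snippet below adds up propagation, transmission, queuing and processing delays for a hypothetical 2,000 km path over a 10 Mb/s link. All figures are assumptions chosen for the example, not measurements.

```python
# End-to-end one-way delay = propagation + transmission + queuing + processing.
# All inputs below are illustrative assumptions.

link_length_km = 2000
propagation = link_length_km / 200_000    # light in fiber: roughly 200,000 km/s

packet_bits = 1500 * 8                    # a 1500-byte packet
link_bps = 10_000_000                     # 10 Mb/s link
transmission = packet_bits / link_bps     # serialization onto the wire

queuing = 0.005                           # assumed 5 ms waiting in buffers
processing = 0.0005                       # assumed 0.5 ms of per-hop processing

one_way_delay_ms = (propagation + transmission + queuing + processing) * 1000
print(round(one_way_delay_ms, 2))         # -> 16.7 (milliseconds)
```

Note how, at these speeds, propagation dominates: no amount of extra bandwidth shrinks the 10 ms the signal spends in flight, which is one reason "just add bandwidth" doesn't make delay budgets go away.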

Most application performance issues are related to one or more of the four parameters mentioned above. Therefore, when outlining your QoS strategy, you must consider all of them. You should also keep in mind that they're usually interrelated. For example, a temporary lack of bandwidth can overflow buffers on a given link and introduce loss. Loss can trigger retransmissions, and as the congestion worsens, delay and jitter can increase dramatically.

About the author Nicolas Chabbey

Nicolas Chabbey is a Network Engineer certified with Cisco Systems and Juniper Networks. He began his career in 2003, and has designed, implemented and maintained networks for enterprises and service providers. When he is not behind a computer, he is riding his mountain bike across the Swiss Alps.


