Quality of Service (QoS) – Explained with Example

15th October 2017

Understanding QoS


Providing sufficient Quality of Service (QoS) across IP networks has become a necessary criterion for current and future enterprise IT infrastructure, especially for voice and for streaming video over the network.

Let us look at why quality of service (QoS) is vital in today's and tomorrow's networks, how it works, and what its benefits are.

A few commonly used applications running on your network are sensitive to delay. These applications usually use the UDP protocol rather than TCP.

The basic difference between TCP and UDP with regard to time sensitivity is that TCP retransmits packets that are lost in transit while UDP does not.

This means TCP is well suited to transmitting files: by retransmitting and reordering lost or malformed segments, TCP recreates the file intact on the destination PC.
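The contrast between the two transports can be sketched with Python's standard socket API. This is a minimal loopback demo (the port is chosen by the OS, and the payloads are made-up examples): UDP sends independent datagrams with no retransmission, while TCP provides an ordered, reliable byte stream.

```python
import socket

# --- UDP: fire-and-forget datagrams ---
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))          # OS picks a free port
port = udp_rx.getsockname()[1]

udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"voice-sample-1", ("127.0.0.1", port))
data, _ = udp_rx.recvfrom(1024)
print(data)  # had this datagram been lost, nothing would resend it

# --- TCP: connection-oriented, reliable stream ---
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"file-chunk-1")           # TCP retransmits until acknowledged
got = conn.recv(1024)
print(got)

for s in (udp_rx, udp_tx, srv, cli, conn):
    s.close()
```

Note that the reliability shown for TCP lives entirely in the transport layer; the application just calls `sendall` and `recv`.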

Take an IP phone call as an example: packets are transmitted as an ordered stream, and losing even a few of them makes the voice quality choppy and unintelligible.

Additionally, these applications are sensitive to what's known as jitter.

Jitter is the variation in packet delay across a streaming application's traffic.

Packet loss, delay, and jitter are normally caused by traffic exceeding what your network bandwidth can handle. If your network has bandwidth to spare, you should see no such problems, delays, or lost packets.

In large enterprise networks, however, there will be times when links become so congested that routers and switches start dropping packets because they are arriving faster than they can be processed.

As a result, your streaming applications are going to suffer. This is where QoS comes in.




How does QoS work?

Quality of Service helps manage packet loss, delay, and jitter on your network infrastructure.
Bandwidth usage grows by the day as the internet continues to expand. Since we're working with a finite amount of bandwidth, our first priority is to identify the applications that would benefit most.

As a network administrator, you need to prioritize the use of bandwidth for certain applications. Once you discover the applications that need to have priority over bandwidth on a network, the next step is to identify that traffic.

There are several ways to identify or mark traffic; two common examples are:
1. Class of Service (CoS), which marks a data stream in the Layer 2 frame header.
2. Differentiated Services Code Point (DSCP), which marks a data stream in the Layer 3 packet header.
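An application can set its own DSCP marking through the socket API. A minimal sketch, assuming a Linux-like system where the `IP_TOS` socket option is honoured: DSCP occupies the upper six bits of the IP TOS/Traffic Class byte, so EF (Expedited Forwarding, decimal 46, commonly used for voice) becomes 46 << 2 = 184.

```python
import socket

DSCP_EF = 46              # Expedited Forwarding, typical for voice
TOS_VALUE = DSCP_EF << 2  # shift DSCP into the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent on this socket now carries DSCP 46 in its IP header,
# so routers along the path can classify it into a priority queue.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos)  # 184
sock.close()
```

In practice, enterprise networks often re-mark traffic at the switch or router edge rather than trusting end-host markings.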

Different applications can be marked differently, which allows network equipment to categorize data into distinct groups.

After you have categorized data streams into groups, you use that information to create a policy that gives some data preferential, higher-priority transmission over the rest. This is called queuing.

Let us take an example:

In your network policy, if voice traffic is tagged and given access to the majority of network bandwidth on a link, the routing or switching device will move voice packets/frames to the front of the queue and transmit them immediately.
But if the policy marks voice data with a lower priority, it will wait (be queued) until there is sufficient bandwidth to transmit.
When the queue becomes too full, the lower-priority packets/frames are the first to be dropped.
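The behaviour described above can be sketched as a toy strict-priority queue with tail drop. The priorities, packet names, and queue depth here are all made up for illustration: lower number means higher priority, and when the queue overflows, the lowest-priority entry is dropped first.

```python
import heapq

MAX_DEPTH = 4
queue = []    # heap of (priority, seq, packet); lower priority number wins
dropped = []
seq = 0

def enqueue(priority, packet):
    global seq
    heapq.heappush(queue, (priority, seq, packet))
    seq += 1
    if len(queue) > MAX_DEPTH:
        victim = max(queue)           # lowest-priority, most recent packet
        queue.remove(victim)
        heapq.heapify(queue)
        dropped.append(victim[2])

for prio, pkt in [(0, "voice-1"), (2, "email-1"), (0, "voice-2"),
                  (1, "video-1"), (2, "email-2")]:
    enqueue(prio, pkt)

# The scheduler always transmits the highest-priority packet next.
sent = []
while queue:
    _, _, pkt = heapq.heappop(queue)
    sent.append(pkt)

print("dropped:", dropped)   # ['email-2']
print("sent:", sent)         # voice first, then video, then email
```

Real devices use more sophisticated schedulers (weighted fair queuing, low-latency queuing), but the principle is the same: under congestion, the policy decides who waits and who is dropped.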

Be that as it may, in today's enterprise networks QoS policy usually favours voice and video streams because they are the most common real-time traffic. In the ever-growing IoT, priority and bandwidth are likewise given to highly time-sensitive data such as temperature, humidity, and location readings.

QoS plays an increasingly important role in making sure that certain data streams are given priority over others so that the network operates efficiently.

The figure above shows the internals of a router and how packets are processed as they are transmitted on a link:

Step 1. The router, with QoS tools enabled, makes a forwarding (routing) decision for each packet.
Step 2. The queuing tool uses classification logic to determine which packets go into which output queue.
Step 3. The router holds the packets in the output queue until the outgoing interface is available to send the next packet.
Step 4. The queuing tool's scheduling logic chooses the next packet to send, effectively prioritizing one packet over another.
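The four steps above can be sketched as a tiny classify-queue-schedule pipeline. The DSCP values and queue names are illustrative assumptions, not a real router implementation:

```python
from collections import deque

VOICE_DSCP = 46  # EF marking assumed for voice traffic
queues = {"priority": deque(), "best_effort": deque()}

def classify(packet):
    # Step 2: classification logic picks an output queue from the marking
    return "priority" if packet["dscp"] == VOICE_DSCP else "best_effort"

def enqueue(packet):
    # Steps 1 and 3: after the forwarding decision, hold in an output queue
    queues[classify(packet)].append(packet)

def schedule():
    # Step 4: strict priority -- always drain the priority queue first
    for name in ("priority", "best_effort"):
        if queues[name]:
            return queues[name].popleft()
    return None

for pkt in [{"id": "web-1", "dscp": 0},
            {"id": "rtp-1", "dscp": 46},
            {"id": "web-2", "dscp": 0}]:
    enqueue(pkt)

order = []
while (p := schedule()) is not None:
    order.append(p["id"])
print(order)  # ['rtp-1', 'web-1', 'web-2']
```

Even though the voice packet arrived second, the scheduler transmits it first; the best-effort web packets keep their relative order behind it.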