Several obstacles can delay, degrade, or even break communication over a network.
As the internet grows beyond prediction, modern networks carry traffic well beyond the traditional data types, and that traffic is becoming increasingly difficult to manage.
Network communication involves email, file sharing, and web traffic, and increasingly, data networks share a common link with more sensitive forms of traffic, like voice and video.
Now…these sensitive traffic types usually require guaranteed or regulated service, because they are the most susceptible to the various obstacles of network communication: lack of bandwidth, delay, jitter, and even loss of data in transit.
It is good to know that modern networking devices, especially Cisco routers and switches, are equipped with a wide range of QoS tools (features) for managing traffic.
There are four characteristics of network traffic that QoS tools manage: bandwidth, delay, jitter, and loss.
Bandwidth refers to the speed and capacity of a link, measured in bits per second (bps); in other words, how many bits can be sent over the link in a given second.
The networking device’s (router or switch) QoS tools determine and control which packet is sent over the link at a given moment: which messages get access to the bandwidth next, and how much of that bandwidth (capacity) each type of traffic gets over time.
In a large network, a typical WAN edge router can have hundreds of packets waiting to pass through the link at any moment.
The WAN edge router link might be configured with a QoS queuing tool to reserve 50 percent of the bandwidth for very important or emergency data traffic, 10 percent for voice, and leave the rest of the bandwidth for all other types of traffic.
Now…once the configuration and settings are done, it’s down to the queuing tool to use those settings to choose which packets to send next on the link.
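To make that choice concrete, here is a minimal, hypothetical sketch of credit-based weighted scheduling in Python. It is not Cisco’s actual queuing implementation (tools like CBWFQ are far more sophisticated); the class names, packet names, and percentages are invented to mirror the 50/10/40 example above.

```python
from collections import deque

# Toy per-class queues, loosely modeled on the example above:
# 50% for critical data, 10% for voice, 40% for everything else.
# Packet names are invented placeholders.
queues = {
    "critical": deque(["crit1", "crit2", "crit3"]),
    "voice":    deque(["voip1"]),
    "other":    deque(["web1", "web2", "mail1"]),
}
# Bandwidth share per class, as a percentage of the link.
weights = {"critical": 50, "voice": 10, "other": 40}

def next_packet(credits):
    """Pick the non-empty class with the most accumulated credit."""
    ready = [name for name, q in queues.items() if q]
    if not ready:
        return None
    # Each round, every class with waiting packets earns credit
    # proportional to its configured bandwidth share.
    for name in ready:
        credits[name] += weights[name]
    winner = max(ready, key=lambda name: credits[name])
    credits[winner] -= 100        # "spend" one packet's worth of credit
    return queues[winner].popleft()

credits = {name: 0 for name in queues}
sent = []
while (pkt := next_packet(credits)) is not None:
    sent.append(pkt)
print(sent)  # critical traffic is interleaved first and most often
```

Notice that the critical class, with the biggest share, wins the link most often, while voice still gets its turn before the remaining best-effort traffic drains out; that is the essence of what a queuing tool’s settings control.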
Delay. There are two types of delay here: one-way delay and round-trip delay.
One-way delay describes the time it takes for a packet to travel from the sender to the destination host.
Round-trip delay measures the time between when the sender sends a packet and when it receives the reply back from the destination host.
That said, many different individual actions cause the delay of packets on a link, just as many factors cause delay when you are driving from point A to point B…
So, jitter describes what happens when packets sent over a particular link arrive at their destination host with varying delays.
For example, let’s say an application sends a few hundred packets to one particular host.
…for a clearer picture, let’s follow three of them: packets A, B, and C.
Packet A’s one-way delay is 200 milliseconds (200 ms, or 0.2 seconds).
Packet B’s one-way delay is 210 ms;
Packet C’s one-way delay is 225 ms, etc…
…you get the gist, right…
ok…the example above shows there is some variation in the delay: 10 ms between the first two packets, 15 ms between the next two, and so on.
That variation is called jitter. If all the packets arrived at the destination host with the same delay, there would be no jitter.
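The arithmetic from the example can be sketched in a few lines of Python. This treats jitter simply as the difference in one-way delay between consecutive packets (real protocols such as RTP use a smoothed estimator, but the idea is the same):

```python
# One-way delays (ms) for packets A, B, C from the example above.
delays_ms = [200, 210, 225]

# Jitter here: variation in delay between consecutive packets.
jitter_ms = [later - earlier
             for earlier, later in zip(delays_ms, delays_ms[1:])]
print(jitter_ms)  # [10, 15]

# If every packet arrives with the same delay, there is no jitter.
steady_ms = [200, 200, 200]
print([b - a for a, b in zip(steady_ms, steady_ms[1:])])  # [0, 0]
```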
Finally, loss refers to the number of lost messages, usually expressed as a percentage of packets sent.
Simply put, if 100 packets are sent and only 98 arrive at the destination host, that particular data flow experienced 2 percent loss.
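That percentage calculation, as a small Python sketch (the function name is just for illustration):

```python
def loss_percent(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent."""
    return (sent - received) / sent * 100

# 100 packets sent, 98 arrive: 2 percent loss.
print(loss_percent(100, 98))  # 2.0
```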
Just like delay, loss can be caused by various factors: faulty cabling, poor WAN services, and so on. But most loss happens when a networking device’s queue for a link grows so full that the device (router or switch) has nowhere to put new packets; its only remaining option is to discard them.
The good news is that there are several QoS tools for managing queues, which help control and avoid loss.