QoS: Understanding QoS Marking
The term marking refers to a type of QoS tool that classifies packets based on their header contents and then marks each message by changing some bits in specific header fields.
This means that the QoS tool changes one or more header fields, setting a value in the header.
OK…this is how QoS marking works…
Traditionally, when a host sends data through the network to another host, the host sends a data link frame that encapsulates the IP packet.
Each router that forwards the IP packet strips and discards the old data link header and adds a new one.
But once the IP header has been marked, the marking stays with the data from the first point it is marked until it reaches the destination host, because routers rewrite only the data link header, not the IP header…that's marking…
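To make the bit-level idea concrete, here is a minimal Python sketch of marking: rewriting the DSCP bits in the IP header's ToS byte (field layout per RFC 2474). The packet is reduced to a single byte value here, purely for illustration.

```python
# Minimal sketch: marking = rewriting bits in the IP header's ToS byte.
# The 6 high-order bits of the ToS byte carry the DSCP value (RFC 2474);
# the 2 low-order bits are ECN and are left untouched.

DSCP_EF = 46  # Expedited Forwarding, commonly used for voice

def mark_dscp(tos_byte: int, dscp: int) -> int:
    """Return a new ToS byte with the DSCP bits set to `dscp`,
    preserving the 2 low-order ECN bits."""
    return ((dscp & 0x3F) << 2) | (tos_byte & 0x03)

def read_dscp(tos_byte: int) -> int:
    """Extract the DSCP value from a ToS byte."""
    return tos_byte >> 2

tos = 0x00                      # unmarked packet
tos = mark_dscp(tos, DSCP_EF)   # a device at the trust boundary marks it EF
print(read_dscp(tos))           # -> 46
```

Because the mark lives in the IP header, it survives each hop's data link re-encapsulation, which is exactly why the marking travels end to end.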
Enabling QoS Tools.
QoS tools for congestion management are enabled on an interface in a given direction (ingress or egress), just like ACLs; they sit in the path that packets take when being forwarded through a network device (router or switch).
There, they are set up to process the marking, matching, or classification of fields in a message header in order to determine which packets to take certain QoS actions against (queuing, policing, and prioritization), as configured by the network engineer.
QoS Tools for Congestion Management in Action.
Understanding QoS Queuing
Queuing methods are used to provide service for higher priority traffic at the expense of the lower priority traffic, based on classification.
The following queuing methods are available:
• First-In-First-Out (FIFO)
• Priority Queuing (PQ)
• Custom Queuing (CQ)
• Weighted Fair Queuing (WFQ)
• Class-Based Weighted Fair Queuing (CBWFQ)
• Low-Latency Queuing (LLQ)
1. First in, First out (FIFO)
FIFO is an acronym for First In, First Out. This expression describes the principle of a queue, or first-come, first-served behavior: what comes in first is handled first, as shown in the figure below; what comes in next waits until the first is finished, and so on.
Simply, FIFO queuing involves storing packets when the network is congested and forwarding them in order of arrival when the network is no longer congested. FIFO is the default queuing algorithm in some instances, thus requiring no configuration, but it has several shortcomings.
Most importantly, FIFO queuing makes no decision about packet priority; the order of arrival determines bandwidth, promptness, and buffer allocation. Nor does it provide protection against ill-behaved applications (sources).
Bursty sources can cause long delays in delivering time-sensitive application traffic, and can potentially delay network control and signaling messages.
Cisco IOS software implements queuing algorithms that avoid the shortcomings of FIFO queuing.
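The FIFO behavior above can be sketched in a few lines of Python; this is a toy model of the queue discipline, not anything Cisco-specific.

```python
from collections import deque

# Minimal FIFO queue sketch: packets are forwarded strictly in order
# of arrival; there is no notion of priority.

queue = deque()

def enqueue(packet):
    """Packet arrives while the link is congested: store it."""
    queue.append(packet)

def dequeue():
    """Link frees up: forward the oldest packet first."""
    return queue.popleft()

for p in ["voice", "email", "web"]:
    enqueue(p)
print([dequeue() for _ in range(3)])  # -> ['voice', 'email', 'web']
```

Note how the voice packet gets no special treatment: if "email" had arrived first, "voice" would simply wait behind it, which is exactly the shortcoming described above.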
2. Priority Queueing (PQ)
Priority queuing ensures that important traffic gets the fastest handling at each point where it is used, as shown in the figure below.
PQ was designed to give strict priority to important traffic. It prioritizes according to packet source/destination, packet type, packet label, and so on, so as to serve the various traffic flows differently.
In PQ, each packet is placed in one of four queues—high, medium, normal, or low—based on an assigned priority.
Packets that are not classified by this priority list mechanism fall into the normal queue (see Figure above).
During transmission, the algorithm gives higher-priority queues absolute preferential treatment over low-priority queues.
Priority queuing is the basis for a class of queue scheduling algorithms that are designed to provide a relatively simple method of supporting differentiated service classes. In classic PQ, packets are first classified by the system and then placed into different priority queues.
Packets are scheduled from the head of the given queue only if all queues of higher priority are empty.
Priority queuing can flexibly prioritize according to network protocol (for example IP, IPX, or AppleTalk), incoming interface, packet size, source/destination address, and so on.
Within each of the priority queues, packets are scheduled in FIFO order.
Some of the PQ benefits are a relatively low computational load on the system and the ability to set priorities so that real-time traffic gets priority over applications that do not operate in real time. One of the biggest problems with PQ is a high volume of high-priority traffic: if the volume of higher-priority traffic becomes excessive, lower-priority traffic can be dropped as the buffer space allocated to low-priority queues starts to overflow. This can lead to complete resource starvation for lower-priority traffic.
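Strict PQ as described above can be sketched as follows; the four queues match the text, while the classification mapping (voip → high, video → medium) is just an illustrative assumption.

```python
from collections import deque

# Sketch of strict priority queuing: four queues (high, medium,
# normal, low). A packet is sent from a queue only when every
# higher-priority queue is empty. The classify() mapping is a toy
# example, not a real priority list.

QUEUES = {name: deque() for name in ("high", "medium", "normal", "low")}

def classify(packet_type: str) -> str:
    # Unclassified traffic falls into the normal queue, as in the text.
    return {"voip": "high", "video": "medium"}.get(packet_type, "normal")

def enqueue(packet_type: str, payload):
    QUEUES[classify(packet_type)].append(payload)

def dequeue():
    # Strict priority: always drain the highest non-empty queue first.
    for name in ("high", "medium", "normal", "low"):
        if QUEUES[name]:
            return QUEUES[name].popleft()
    return None

enqueue("email", "e1"); enqueue("voip", "v1"); enqueue("video", "m1")
print(dequeue(), dequeue(), dequeue())  # -> v1 m1 e1
```

If the "high" queue were constantly refilled, `dequeue()` would never reach the lower queues: that is the starvation problem in miniature.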
3. Weighted Fair Queuing (WFQ)
The WFQ algorithm provides fair sharing of output bandwidth according to assigned weights, as shown in the figure below.
The weighted fair queue is a variant of the fair queue equipped with weighted bandwidth allocation. The bandwidth allocation among packet queues takes into account not only traffic discrimination but also the weights assigned to the packet queues.
Weighted fair queuing provides two important properties that support effective QoS design: there is no bandwidth starvation of the kind usually encountered with priority queuing, and fairness of bandwidth sharing is ensured among admitted flows. Thus, in a weighted fair queue, traffic gets predictable service. WFQ divides the available bandwidth across queues of traffic based on weights: each flow is associated with an independent queue assigned a weight, so that important traffic gets higher priority than less important traffic.
WFQ supports flows with different bandwidth requirements by giving each queue
a weight that assigns it a different percentage of output port bandwidth.
WFQ also supports variable-length packets, so that flows with larger packets are not allocated more bandwidth than flows with smaller packets.
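The weighted-sharing idea can be sketched with a deficit-style scheduler. This is a simplification: real WFQ computes per-packet virtual finish times, whereas this sketch only illustrates weighted byte shares and variable-length packets.

```python
from collections import deque

# Simplified weighted scheduler in the spirit of WFQ. Each flow has a
# weight and a byte "deficit" credit; a flow may send a packet only
# when it has accumulated enough credit, so bandwidth splits roughly
# in proportion to the weights even with variable-length packets.

class Flow:
    def __init__(self, weight):
        self.weight = weight      # relative share of link bandwidth
        self.deficit = 0          # byte credit accumulated per round
        self.packets = deque()    # each packet = its length in bytes

def schedule(flows, quantum=100):
    sent = []
    while any(f.packets for f in flows):
        for f in flows:
            if not f.packets:
                continue
            f.deficit += quantum * f.weight
            while f.packets and f.packets[0] <= f.deficit:
                size = f.packets.popleft()
                f.deficit -= size
                sent.append(size)
            if not f.packets:
                f.deficit = 0     # idle flows don't hoard credit
    return sent

a, b = Flow(weight=3), Flow(weight=1)   # flow a gets ~3x flow b's share
a.packets.extend([300, 300]); b.packets.extend([100, 100])
print(schedule([a, b]))
```

With weights 3:1, flow a's 300-byte packets interleave with flow b's 100-byte packets instead of flow b being starved: each round grants credit in proportion to the weights.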
4. Class-Based Weighted Fair Queuing (CBWFQ)
Class-Based Weighted Fair Queuing is a queuing algorithm on Cisco routers used to guarantee a minimum amount of bandwidth to each class of packets, as defined by the network administrator.
With this type of scheduling tool, the scheduler guarantees a bandwidth percentage to each queue. That is, if there is congestion on the outgoing link, each queue still gets its defined percentage of the link's bandwidth, even during busy times.
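As a rough idea of what this looks like on a router, here is a hypothetical Cisco IOS configuration sketch; the class names, DSCP matches, percentages, and interface are made-up examples, not a recommendation.

```
! Hypothetical CBWFQ example: class names and percentages are
! illustrative only.
class-map match-all VIDEO
 match dscp af41
class-map match-all CRITICAL-DATA
 match dscp af21
!
policy-map WAN-EDGE
 class VIDEO
  bandwidth percent 30
 class CRITICAL-DATA
  bandwidth percent 20
 class class-default
  fair-queue
!
interface GigabitEthernet0/1
 service-policy output WAN-EDGE
```

The `bandwidth percent` commands are the minimum guarantees: during congestion each class is assured its percentage, and unused bandwidth is shared.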
Understanding the QoS Trust Boundary.
The trust boundary refers to the point or perimeter in the path of a packet flowing through the network at which the networking devices can trust the current QoS markings.
That boundary typically sits in a device under the control of the network administrator or staff as detailed in the design and policy decisions document.
For example, a typical trust boundary could be set at the first ingress switch, router, or other device connected to the network, as shown in the figure below.
The markings on the messages as sent by the PC are not trusted. However, because SW1 performs classification and marking as the packets enter the switch, the markings can be trusted from that point onward.
Understanding QoS Prioritization
Prioritization refers to the concept of giving priority to one queue over another in some way; Cisco routers and switches commonly implement it with round-robin scheduling.
With prioritization, traffic is classified into categories such as high, medium, and low. The lower the priority, the more likely the packet is to be dropped.
E-mail and Web traffic is often placed in the lowest categories. When the network gets busy, packets from the lowest categories are dropped first.
Cisco’s round-robin scheduling logic works by cycling through the queues in order, taking either one message or a number of bytes from each queue (that is, enough messages to total that number of bytes).
This is how it goes…
The scheduler takes some messages from queue 1, moves on and takes some from queue 2, then takes some from queue 3, and so on, going back to queue 1 after finishing a complete pass through the queues.
Round-robin scheduling also includes the concept of weighting, giving what is known as weighted round robin.
Basically, the scheduler takes a different number of packets (or bytes) from each queue, giving more preference to one queue over another.
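The passes described above can be sketched as follows; here the weight is a packet count per pass, a simplification of the byte-based logic in the text.

```python
from collections import deque

# Sketch of weighted round robin: the scheduler makes passes over the
# queues, taking up to `weight` packets from each before moving on.
# A higher weight means more packets per pass, i.e. more preference.

def weighted_round_robin(queues, weights):
    """queues: list of deques; weights[i]: packets taken from
    queues[i] on each pass."""
    sent = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    sent.append(q.popleft())
    return sent

q1 = deque(["a1", "a2", "a3"])
q2 = deque(["b1", "b2", "b3"])
print(weighted_round_robin([q1, q2], weights=[2, 1]))
# -> ['a1', 'a2', 'b1', 'a3', 'b2', 'b3']
```

Queue 1 gets two sends for every one of queue 2's, but queue 2 is never starved: that is the difference from strict priority queuing.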
Understanding QoS Shaping and Policing
Shaping and Policing are two related QoS tools. These tools have a more specialized use and are not found in as many locations in a typical enterprise. These tools are most often used at the WAN edge in an enterprise network design.
Both policing and shaping monitor the bit rate of the combined messages that flow through
a device. Once enabled, the policer or shaper notes each packet that passes and measures
the number of bits per second over time.
Both attempt to keep the bit rate at or below the configured speed, but by using two different actions: policers discard packets and shapers hold packets in queues to delay the packets.
Shapers and policers monitor the traffic rate (the bits/second that move through the shaper
or policer) versus a configured shaping rate or policing rate, respectively.
The basic question that both ask is listed below, with the actions based on the answers:
i. Does this next packet push the measured rate past the configured shaping rate or policing rate?
a. If no: let the packet keep moving through the normal path and do nothing extra to the packet.
b. If yes: if shaping, delay the message by queuing it; if policing, either discard the message or mark it differently.
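The contrast above can be sketched with a token bucket; this is a generic model of the mechanism, not Cisco's implementation, and timestamps are passed in explicitly to keep the example deterministic.

```python
# Token-bucket sketch contrasting a policer (drops excess) with a
# shaper (queues excess). rate is in bytes/second; packets are
# (arrival_time, size_in_bytes) pairs.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

def police(packets, bucket):
    """Policer: non-conforming packets are simply dropped."""
    return [p for t, p in packets if bucket.conforms(p, t)]

def shape(packets, bucket):
    """Shaper: non-conforming packets are queued to be sent later."""
    sent, queue = [], []
    for t, p in packets:
        (sent if bucket.conforms(p, t) else queue).append(p)
    return sent, queue

pkts = [(0.0, 500), (0.0, 500), (0.0, 500)]   # a burst of 3 packets
print(police(pkts, TokenBucket(rate=1000, burst=1000)))  # third dropped
print(shape(pkts, TokenBucket(rate=1000, burst=1000)))   # third queued
```

Both tools measure the same rate against the same bucket; the only difference is the action taken on the excess packet, exactly as the list above says.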
Let’s focus on the traffic rate versus the configured policing rate for a moment, and on the policing action of discarding messages. These concepts sit at the core of what the policing function does.
QoS traffic policing is similar to shaping, but it differs in one key way:
In policing, traffic that exceeds the configured rate is not buffered (and normally is discarded).
Cisco’s implementation of policing (committed access rate [CAR]) allows a number of actions besides discard to be performed.
However, policing normally refers to the discard of traffic above a configured rate.
Let’s use a real-world scenario :
When you buy an internet subscription from an ISP (for example, a fiber connection), you normally pay for a specific bit rate, for example 10 Mbps, 20 Mbps, 1 Gbps, and so on.
But know this: the fiber connection from the ISP is capable of sending traffic at a much higher bit rate (for example, 100 Mbps, 2 Gbps, or more…).
But because you pay only for the rate you subscribed to, the ISP will “limit” the traffic to whatever you are paying for.
The amount of bit rate that you pay for to the ISP is often called the CIR (Committed Information Rate).
Now…the ISP will limit the bit rate of your connection with policing or shaping.
The difference between the two as mentioned above is that policing will drop the exceeding traffic and shaping will buffer it.
The ISP uses policing to measure the cumulative byte rate of arriving packets against the rate you paid for; the policer then takes one of the following actions:
i. Allow the packet to pass.
ii. Drop the packet.
iii. Remark the packet with a different DSCP or IP precedence value.
When working with policing, there are three categories that we can use to see whether a packet conforms to the traffic contract or not:
Conforming: means that the packet falls within the traffic contract.
Exceeding: means that the packet is using up the excess burst capability (that’s when you’re charged extra).
Violating: means that it is totally outside the traffic contract rate.
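The three categories can be sketched with a single-rate, two-bucket policer in the spirit of RFC 2697 (srTCM); this is a simplified model, and the rate and bucket sizes are illustrative assumptions.

```python
# Sketch of a single-rate policer with a committed bucket (CBS) and an
# excess bucket (EBS), in the spirit of RFC 2697 (srTCM). A packet
# draws from the committed bucket first (conform), then the excess
# bucket (exceed); otherwise it violates.

class SingleRatePolicer:
    def __init__(self, cir, cbs, ebs):
        self.cir = cir                 # committed rate, bytes/sec
        self.tc, self.cbs = cbs, cbs   # committed bucket tokens / size
        self.te, self.ebs = ebs, ebs   # excess bucket tokens / size
        self.last = 0.0

    def classify(self, size, now):
        # Refill: new tokens go to the committed bucket first; any
        # overflow spills into the excess bucket.
        new = (now - self.last) * self.cir
        self.last = now
        spill = max(0.0, self.tc + new - self.cbs)
        self.tc = min(self.cbs, self.tc + new)
        self.te = min(self.ebs, self.te + spill)
        if size <= self.tc:
            self.tc -= size
            return "conform"
        if size <= self.te:
            self.te -= size
            return "exceed"
        return "violate"

p = SingleRatePolicer(cir=1000, cbs=1000, ebs=500)
for size in (800, 400, 400):
    print(p.classify(size, now=0.0))
# prints conform, then exceed, then violate
```

The first packet fits the committed burst, the second only fits the excess burst, and the third fits neither: the typical configuration then transmits, re-marks, and drops those packets respectively.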
Policing is often configured on interfaces at the edge of a network to limit the rate of traffic entering or leaving the network. In the most common Traffic Policing configurations, traffic that conforms is transmitted and traffic that exceeds is sent with a decreased priority or is dropped. Users can change these configuration options to suit their network needs.