QOS: Congestion Management Tools Explained

QOS: Understanding QoS MARKING

The term marking refers to a type of QoS tool that classifies packets based on their header contents, then marks the message by changing some bits in specific header fields.

This means that the QoS tool changes one or more header fields, setting a value in the header.

ok…this is how QoS marking works…

Traditionally, when a host sends data through the network to another host, it sends a data link frame that encapsulates the IP packet.
Each router that forwards the IP packet strips off and discards the old data link header and adds a new one.

But once the IP header has been marked, the marking stays with the packet from the point where it was applied until it reaches the destination host…that’s marking…
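To make the bit-level idea concrete, here is a rough Python sketch of rewriting the DiffServ byte of an IPv4 header. The field layout (6-bit DSCP in the upper bits, 2 ECN bits below) follows the DiffServ convention; the function name is just illustrative:

```python
# Sketch: how a marking tool rewrites the DiffServ/ToS byte of an IPv4 header.
# The 6-bit DSCP sits in the upper bits of the byte; the low 2 bits are ECN.

DSCP_EF = 46  # Expedited Forwarding, commonly used for voice traffic

def mark_dscp(tos_byte: int, dscp: int) -> int:
    """Return a new ToS byte with the DSCP bits replaced, ECN bits preserved."""
    ecn = tos_byte & 0b11          # keep the existing ECN bits untouched
    return (dscp << 2) | ecn

original_tos = 0b00000001          # DSCP 0 (best effort), ECN = 01
marked_tos = mark_dscp(original_tos, DSCP_EF)
print(bin(marked_tos))             # DSCP 46 now occupies the top six bits
```

Every device downstream that reads this byte sees the same marking, which is exactly why the mark travels with the packet.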

Enabling QoS Tools.

QoS tools for congestion management are enabled on an interface in a direction (inbound or outbound), just like ACLs; they sit on the path that packets take as they are forwarded through a network device (router/switch).

There, they match or classify fields in the message header in order to determine which packets to apply the configured QoS actions (queuing, policing, and prioritization) to, as defined by the network engineer.

QOS Tools for Congestion Management in Action.

Understanding QoS Queuing

Queuing methods are used to provide service for higher priority traffic at the expense of the lower priority traffic, based on classification.

Several types of queuing methods are available:
• First-In-First-Out (FIFO)
• Priority Queuing (PQ)
• Custom Queuing (CQ)
• Weighted Fair Queuing (WFQ)
• Class-Based Weighted Fair Queuing (CBWFQ)
• Low-Latency Queuing (LLQ)

1. First in, First out (FIFO)
FIFO is an acronym for First In, First Out. The expression describes the principle of a queue, or first-come-first-served behavior: what comes in first is handled first, as shown in the figure below; what comes in next waits until the first is finished, and so on.

Simply, FIFO queuing involves storing packets when the network is congested and forwarding them in order of arrival when the network is no longer congested. FIFO is the default queuing algorithm in some instances, thus requiring no configuration, but it has several shortcomings.

Most importantly, FIFO queuing makes no decision about packet priority; the order of arrival determines bandwidth, promptness, and buffer allocation. Nor does it provide protection against ill-behaved applications (sources).

Bursty sources can cause long delays in delivering time-sensitive application traffic, and potentially in delivering network control and signaling messages.
Cisco IOS software implements queuing algorithms that avoid the shortcomings of FIFO queuing.
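The FIFO behavior described above, including tail drop when the buffer fills, can be sketched in Python. This is a toy model, not any vendor's implementation; the class name and capacity are illustrative:

```python
from collections import deque

# Minimal FIFO queue sketch: packets leave strictly in arrival order,
# with no notion of priority. When the buffer is full, new arrivals
# are tail-dropped.

class FifoQueue:
    def __init__(self, capacity: int):
        self.buffer = deque()
        self.capacity = capacity

    def enqueue(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            return False               # tail drop: buffer full, packet discarded
        self.buffer.append(packet)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

q = FifoQueue(capacity=3)
for pkt in ["voice", "web", "email", "video"]:
    q.enqueue(pkt)                     # "video" is tail-dropped: queue is full
print([q.dequeue() for _ in range(3)])  # arrival order preserved
```

Notice that the voice packet gets no special treatment: arrival order is the only thing that matters, which is the core FIFO shortcoming.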

2. Priority Queueing (PQ)
Priority queuing ensures that important traffic gets the fastest handling at each point where it is used, as shown in the figure below.

PQ was designed to give strict priority to important traffic. It prioritizes according to packet source/destination, packet type, packet label, and so on, in order to serve the various traffic flows discriminately.

In PQ, each packet is placed in one of four queues—high, medium, normal, or low—based on an assigned priority.
Packets that are not classified by this priority list mechanism fall into the normal queue (see Figure above).

During transmission, the algorithm gives higher-priority queues absolute preferential treatment over low-priority queues.

Priority queuing is the basis for a class of queue scheduling algorithms that are designed to provide a relatively simple method of supporting differentiated service classes. In classic PQ, packets are first classified by the system and then placed into different priority queues.
Packets are scheduled from the head of the given queue only if all queues of higher priority are empty.

Priority queuing can flexibly prioritize according to network protocol (for example IP, IPX, or AppleTalk), incoming interface, packet size, source/destination address, and so on.

Within each of the priority queues, packets are scheduled in FIFO order.
Some of the PQ benefits are a relatively low computational load on the system and the ability to set priorities so that real-time traffic gets priority over applications that do not operate in real time.

One of the biggest problems with PQ is a high volume of high-priority traffic. If the volume of higher-priority traffic becomes excessive, lower-priority traffic can be dropped as the buffer space allocated to low-priority queues starts to overflow. This can lead to complete resource starvation for lower-priority traffic.
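The strict-priority rule above (serve a queue only when every higher-priority queue is empty) can be sketched as follows. The four queue names mirror the high/medium/normal/low scheme; everything else is illustrative:

```python
# Sketch of classic four-queue strict priority scheduling: a packet is
# sent from a queue only when every higher-priority queue is empty.

QUEUE_ORDER = ["high", "medium", "normal", "low"]

def dispatch(queues: dict) -> list:
    """Drain the queues in strict priority order, FIFO within each queue."""
    sent = []
    for name in QUEUE_ORDER:
        while queues.get(name):
            sent.append(queues[name].pop(0))
    return sent

queues = {
    "high": ["voice1", "voice2"],
    "normal": ["web1"],      # unclassified traffic defaults to "normal"
    "low": ["backup1"],
}
print(dispatch(queues))      # voice first, then web, then backup
```

The starvation problem is visible in the code: as long as the "high" list keeps refilling, the loop never reaches "low" at all.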

3. Weighted Fair Queuing (WFQ)
The WFQ algorithm provides fair sharing of output bandwidth according to assigned weights, as shown in the figure below.

The weighted fair queue is a variant of the fair queue with weighted bandwidth allocation. Bandwidth allocation among the packet queues depends not only on traffic discrimination but also on the weights assigned to the queues.

WFQ provides two important properties that support effective QoS design: no bandwidth starvation of the kind usually encountered with Priority Queuing, and fairness of bandwidth sharing among admitted flows.

Thus, in a weighted fair queue, traffic gets predictable service. Weighted Fair Queuing (WFQ) offers fair queuing that divides the available bandwidth across queues of traffic based on weights. Each flow is associated with an independent queue assigned a weight, ensuring that important traffic gets higher priority than less important traffic.

WFQ supports flows with different bandwidth requirements by giving each queue a weight that assigns it a different percentage of output port bandwidth.

WFQ also supports variable-length packets, so that flows with larger packets are not allocated more bandwidth than flows with smaller packets.
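As a rough illustration of how weights translate into bandwidth shares (the flow names and weights below are made up), each flow's share is simply its weight divided by the sum of all active weights:

```python
# Sketch: converting WFQ weights into per-flow shares of link bandwidth.

def wfq_shares(link_bps: int, weights: dict) -> dict:
    """Each flow's share = its weight / sum of all weights, times link speed."""
    total = sum(weights.values())
    return {flow: link_bps * w // total for flow, w in weights.items()}

# Illustrative flows on a 10 Mbps link, with voice weighted heaviest.
shares = wfq_shares(10_000_000, {"voice": 5, "video": 3, "data": 2})
print(shares)   # voice gets 5 Mbps, video 3 Mbps, data 2 Mbps
```

Note the fairness property: no flow's share ever drops to zero while it has a nonzero weight, unlike strict priority queuing.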

Class-Based Weighted Fair Queuing (CBWFQ)

Class-Based Weighted Fair Queuing is a queuing algorithm on Cisco routers used to guarantee a minimum amount of bandwidth to each traffic class, as defined by the network administrator.
With this scheduling tool, the scheduler guarantees a percentage of bandwidth to each queue. That is, if there is congestion on the outgoing link, each queue still gets its defined percentage of the link's bandwidth, even during busy times.

Understanding QoS Trust Tool.

The trust boundary refers to the point or perimeter in the path of a packet flowing through the network at which the networking devices can trust the current QoS markings.

That boundary typically sits in a device under the control of the network administrator or staff as detailed in the design and policy decisions document.

For example, a typical trust boundary could be set in the middle of the first ingress switch, router or other device connected to the network, as shown in Figure below.

The markings on the message as sent by the PC are not trusted. However, because SW1 performed classification and marking as the packets entered the switch, the markings can be trusted from that point on.

Understanding QoS Prioritization

Prioritization refers to the concept of giving priority to one queue over another in some way; Cisco routers and switches typically implement it with a round-robin scheduling algorithm.
With prioritization, traffic is classed into categories such as high, medium, and low. The lower the priority, the higher the likelihood that the packet is dropped.

E-mail and Web traffic are often placed in the lowest categories. When the network gets busy, packets from the lowest categories are dropped first.

Cisco’s round-robin scheduling logic inspects the queues in order, taking one message from each queue, or a number of bytes from each queue, making up enough messages to total that number of bytes.

This is how it goes…

The scheduler takes some messages from queue 1, moves on and takes some from queue 2, then takes some from queue 3, and so on, returning to queue 1 after finishing a complete pass through the queues.
Round-robin scheduling also includes the concept of weighting, also known as weighted round robin.
Basically, the scheduler takes a different number of packets (or bytes) from each queue, giving more preference to one queue over another.
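The passes described above can be sketched as a minimal weighted round robin, taking a different quantum of packets from each queue on every pass (queue contents and quanta are illustrative):

```python
# Sketch of weighted round robin: on each pass, the scheduler takes a
# per-queue quantum of packets, giving larger quanta to preferred queues.

def weighted_round_robin(queues: list, quanta: list) -> list:
    """queues: list of packet lists; quanta: packets taken per pass per queue."""
    sent = []
    while any(queues):
        for q, quantum in zip(queues, quanta):
            for _ in range(quantum):
                if q:
                    sent.append(q.pop(0))
    return sent

q1 = ["a1", "a2", "a3"]
q2 = ["b1", "b2"]
order = weighted_round_robin([q1, q2], quanta=[2, 1])
print(order)  # two from queue 1, one from queue 2, then back to queue 1
```

Unlike strict priority, queue 2 is never starved: it is simply visited with a smaller quantum on each pass.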

Cisco Round Robin

Understanding QoS Shaping and Policing

Shaping and Policing are two related QoS tools. These tools have a more specialized use and are not found in as many locations in a typical enterprise. These tools are most often used at the WAN edge in an enterprise network design.

Both policing and shaping monitor the bit rate of the combined messages that flow through
a device. Once enabled, the policer or shaper notes each packet that passes and measures
the number of bits per second over time.

Both attempt to keep the bit rate at or below the configured speed, but by using two different actions: policers discard packets and shapers hold packets in queues to delay the packets.

Shapers and policers monitor the traffic rate (the bits/second that move through the shaper
or policer) versus a configured shaping rate or policing rate, respectively.

The basic question that both ask is listed below, with the actions based on the answer:

i. Does this next packet push the measured rate past the configured shaping rate or policing rate?
If no:
a. Let the packet keep moving through the normal path and do nothing extra to the packet.
If yes:
a. If shaping, delay the message by queuing it.
b. If policing, either discard the message or mark it differently.
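One common way to answer that question is to meter traffic with a token bucket. The sketch below is a simplified model, not Cisco's implementation; the rates and bucket size are illustrative:

```python
# Sketch: a token bucket as the meter behind the yes/no question above.
# Tokens refill at the configured rate; a packet that finds enough tokens
# conforms, otherwise a policer drops it while a shaper would queue it.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8        # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def conforms(self, size_bytes: int, now: float) -> bool:
        # refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True                 # answer "no": let the packet through
        return False                    # answer "yes": policer drops, shaper queues

meter = TokenBucket(rate_bps=8000, burst_bytes=1500)   # refills 1000 bytes/s
print(meter.conforms(1500, now=0.0))   # burst allowance covers it -> True
print(meter.conforms(1500, now=0.5))   # only 500 bytes refilled -> False
```

The only difference between policer and shaper in this model is what happens on the False branch: drop/remark versus delay.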


Let’s take a sec to focus on the traffic rate versus the configured policing rate, and on the policing action of discarding messages. These concepts sit at the core of what the policing function does.

QoS traffic policing is similar to shaping, but it differs in one way:

In policing, traffic that exceeds the configured rate is not buffered (and normally is discarded).

Cisco’s implementation of policing (committed access rate [CAR]) allows a number of actions besides discard to be performed.

However, policing normally refers to the discard of traffic above a configured rate.

Let’s use a real-world scenario :

When you purchase an internet subscription from an ISP (for example a fiber connection), you normally pay for a given bit rate, for example 10 Mbps, 20 Mbps, 1 Gbps, etc.
But know this: the fiber connection from the ISP is capable of sending traffic at a much higher bit rate (for example 100 Mbps, 2 Gbps, or more…).

But because you are paying for what your money is worth, or according to your presumed usage, the ISP will “limit” the traffic to whatever you are paying for.

The bit rate that you pay the ISP for is often called the CIR (Committed Information Rate).

Now…the ISP will limit the bit rate of your connection with policing or shaping.

The difference between the two as mentioned above is that policing will drop the exceeding traffic and shaping will buffer it.

The ISP uses policing to check that the traffic matches what was paid for: it measures the cumulative byte rate of arriving packets, and the policer then takes one of the following actions:

i. Allow the packet to pass.
ii. Drop the packet.
iii. Remark the packet with a different DSCP or IP precedence value.

When working with policing, there are three categories that we can use to see whether a packet conforms to the traffic contract or not:

Conforming: means that the packet falls within the traffic contract.
Exceeding: means that the packet is using up the excess burst capability (that’s when you’re charged extra).
Violating: means that it’s totally outside the traffic contract rate.
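A simplified sketch of the three categories, loosely modeled on a single-rate three-color meter with separate committed and excess token buckets (the bucket sizes here are illustrative):

```python
# Sketch of conforming / exceeding / violating classification using two
# token buckets: one for the committed burst, one for the excess burst.

def color(packet_bytes: int, committed: list, excess: list) -> str:
    """committed/excess are single-item lists holding remaining tokens."""
    if committed[0] >= packet_bytes:
        committed[0] -= packet_bytes
        return "conforming"
    if excess[0] >= packet_bytes:
        excess[0] -= packet_bytes
        return "exceeding"          # often transmitted but re-marked
    return "violating"              # typically dropped

committed, excess = [2000], [1000]
print(color(1500, committed, excess))  # conforming
print(color(1000, committed, excess))  # exceeding (only 500 committed left)
print(color(1500, committed, excess))  # violating (excess bucket now empty)
```

A real meter also refills both buckets over time at the committed rate; that refill step is omitted here for brevity.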

 Policing is often configured on interfaces at the edge of a network to limit the rate of traffic entering or leaving the network. In the most common Traffic Policing configurations, traffic that conforms is transmitted and traffic that exceeds is sent with a decreased priority or is dropped. Users can change these configuration options to suit their network needs.

QoS: Bandwidth, Delay, Jitter, and Loss Explained

There are obstacles that can cause degraded, delayed, or even lost communication over the network.
As the internet grows beyond prediction, the traffic that modern networks must support goes beyond the traditional data types and becomes increasingly difficult to manage.

Communication over the network involves email, file sharing, or web traffic and increasingly, data networks share a common link with more sensitive forms of traffic, like voice and video.

Now…these sensitive forms of traffic mostly require guaranteed or regulated service because they are the most susceptible to the various obstacles of network communication, including lack of bandwidth, delay, jitter, and even loss of data during communication.

Good to know that modern networking devices, especially Cisco routers and switches, are equipped with a wide range of QoS tools and features for managing traffic.

There are four characteristics of network traffic:

Bandwidth refers to the speed and capacity of a link, measured in bits per second (bps): how many bits can be sent over the link each second.
The networking device's (router or switch) QoS tools determine and control which packet is sent over the link at a given point: which messages get access to the bandwidth next, and how much of that bandwidth (capacity) each type of traffic gets over time.

In a large network, a typical WAN edge router has hundreds of packets waiting to pass through the link.
The WAN edge router link might be configured with a QoS queuing tool to reserve 50 percent of the bandwidth for very important or emergency data traffic, 10 percent for voice, and leave the rest of the bandwidth for all other types of traffic.

Now…the configuration and settings are done, so it’s down to the queuing tool to use those settings to make the choice about which packets to send next on the link.

Delay. There are two types of delay here: one-way delay and round-trip delay.

One-way delay describes the time it takes for a packet to travel from the sender to the destination host.

Round-trip delay measures the time from when a packet is sent until a reply from the destination host arrives back at the sender.

In saying that, many different individual actions cause the delay of packets on a link, just as many factors cause delay when you drive from point A to point B…
Jitter, then, describes what happens when packets sent on a particular link arrive at their destination host with varying delays.

For example, let’s say an application sends a few hundred packets to one particular host.

…for more understanding, let’s use Packet A, B, C

The first packet, A, has a one-way delay of 200 milliseconds (200 ms, or .2 seconds).

The next packet, B, has a one-way delay of 210 ms;

the next packet, C, has a one-way delay of 225 ms, and so on…

…you get my gist right…

ok…the example above shows there is some variation in the delay: 10 ms between the first two packets, 15 ms between the next, and so on.

That difference is called jitter. But if the packets arrive at the destination host with the same timing, then there is no jitter.
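Using the Packet A/B/C delays from the example, jitter can be computed as the variation between successive one-way delays:

```python
# Sketch: jitter as the variation between successive one-way delays,
# using the Packet A/B/C delays from the example above (in milliseconds).

delays_ms = [200, 210, 225]

def jitter_steps(delays):
    """Absolute delay difference between each consecutive pair of packets."""
    return [abs(b - a) for a, b in zip(delays, delays[1:])]

print(jitter_steps(delays_ms))        # [10, 15]: delay varies packet to packet
print(jitter_steps([200, 200, 200]))  # [0, 0]: identical delays -> no jitter
```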

Finally, loss refers to the number of lost messages, usually as a percentage of packets sent.

Simply, if 100 packets are sent and only 98 arrive at the destination host, that particular data flow experienced 2 percent loss.

Just like delay, loss can be caused by various factors: fault(s) in cabling, poor WAN services, and so on. But most loss happens when a networking device's queues waiting to access a link get so full that the device (router/switch) has nowhere to put new packets; its only option is to discard them.

But good to know that there are several QoS tools used to manage queues and help control and avoid loss.

Quality Of Service QOS – Explained with Example

The provision of sufficient Quality of Service (QoS) across IP networks has become a necessary criterion for the enterprise IT infrastructure of the future. It has been deemed a necessity especially for voice and the streaming of video over the network.

Let us look at why quality of service (QoS) is vital in today's and tomorrow's networks, how it (QoS) works, and its benefits.

A few commonly used applications running on your network system are sensitive to delay. These applications usually utilize the UDP protocol rather than TCP.

The basic difference between the TCP and UDP protocols in relation to timing and sensitivity is that TCP will retransmit packets that are lost in transit while UDP does not.

This means that TCP should be used in the transmission of files for its great feature of retransmitting and re-ordering lost or malformed segments; TCP helps to recreate these files intact on the destination PC.

Let us take the example of an IP phone call: packets are transmitted as an ordered stream, and losing even a few packets will result in the voice quality becoming choppy and unintelligible.

Additionally, packets are sensitive to what’s known as jitter.

Jitter is the variation in delay of a streaming application.

Packet loss, delay, and jitter are normally caused by heavy traffic, using more of your network bandwidth than it can handle; but if your network has plenty of bandwidth, there shouldn't be any problems, delays, or lost packets.

In large enterprise networks, there will be times when links become so congested that routers and switches start dropping packets because traffic arrives faster than it can be processed.

As a result, your streaming applications are going to suffer. This is where QoS comes in.

How does QoS work?

Quality of Service assists in the management of packet loss, delay and jitter on your network infrastructure.
Bandwidth usage grows huge day by day as the internet continues to expand. Since we're working with a finite amount of bandwidth, our foremost priority is to identify the applications that would benefit most.

As a network administrator, you need to prioritize the use of bandwidth for certain applications. Once you discover the applications that need to have priority over bandwidth on a network, the next step is to identify that traffic.

There are several ways to identify or mark the traffic; Class of Service (CoS) and Differentiated Services Code Point (DSCP) are two examples:
• Class of Service (CoS) marks a data stream in the Layer 2 frame header.
• Differentiated Services Code Point (DSCP) marks a data stream in the Layer 3 packet header.

Various applications can be marked differently, which allows the network equipment to be able to categorize data into different groups.

After you have categorized data streams into different groups, you then use that information to create a policy that gives preferential, higher-priority transmission to some data over other data. This is called queuing.

Let us take an example:

In your network policy, if voice traffic is tagged and given access to the majority of network bandwidth on a link, the routing or switching device will move voice packets/frames to the front of the queue and transmit them immediately.
But if the policy marks voice data with a lower priority, it will wait (be queued) until there is sufficient bandwidth to transmit.
When the queue becomes too full, the lower-priority packets/frames are the first to get dropped.

But, be that as it may, in today's enterprise networks, QoS policy favors voice and video streams because they are the most commonly used!…also, in our ever-increasing IoT, much priority and bandwidth leverage are given to highly time-sensitive data such as temperature, humidity, and location awareness.

QoS plays an increasingly important role in making sure that certain data streams are given priority over others in order to operate efficiently.

The figure above shows the internals of a router, how packets are processed during transmission on a link:

Step 1. The network router, enabled with QoS tools, makes a forwarding (routing) decision on packets.
Step 2. The queuing tool uses classification logic to determine which packets go into which output queue.
Step 3. The router holds the packets in the output queue waiting for the outgoing interface to be available to send the next message.
Step 4. The queuing tool’s scheduling logic chooses the next packet, effectively prioritizing one packet over another.


Multi-Protocol Label Switching (MPLS) Explained with Examples

What is Multi-Protocol Label Switching (MPLS)?

Multiprotocol Label Switching (MPLS) is a network traffic-forwarding technique that carries data from one network device to the next using short path labels instead of long, complex lookups in a routing table.

Ok…simply said: MPLS is best summarized as a middleman protocol between Layer 2 and Layer 3 in the OSI model. Some tech pundits call it a “Layer 2.5 networking protocol”.

In the traditional OSI model, Layer 2 covers protocols like Ethernet, which can carry IP packets only over LANs or point-to-point WANs.
Layer 3 then takes care of the internet-wide addressing and routing using IP protocols…

Now…MPLS sits between these traditional layers (2 and 3), providing additional features for the transport of data across the network. It simply uses a packet-forwarding technology based on labels in order to make data-forwarding decisions.


In a traditional IP network, each router performs an IP lookup on the packet, determines a next hop based on its routing table, and forwards the packet to that next hop. Every router repeats this on the same packet, each making its own independent routing decision, until the final destination is reached.

In an MPLS enabled network, MPLS does “label switching”; which means the first router or network device does a routing lookup, but instead of finding a next-hop, it finds the final destination router.

The MPLS-configured router applies a “label” to the data; other routers use the label to route the traffic without needing to perform any additional IP lookups. At the final destination router, the label is removed and the packet is delivered via normal IP routing.

What is a label? What is the structure of the label?

A label is a short, four-byte, fixed-length, locally-significant identifier which is used in order to identify a Forwarding Equivalence Class (FEC).
The label which is put on a particular packet represents the FEC to which that packet is assigned.
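For illustration, the four bytes of an MPLS label stack entry pack a 20-bit label value, 3 traffic-class (EXP) bits, a bottom-of-stack bit, and an 8-bit TTL (this layout comes from RFC 3032; the function names are ours):

```python
# Sketch of the four-byte MPLS label stack entry layout (RFC 3032):
# | label (20 bits) | TC/EXP (3 bits) | S (1 bit) | TTL (8 bits) |

def pack_label(label: int, tc: int, s: int, ttl: int) -> int:
    """Pack the four fields into one 32-bit label stack entry."""
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def unpack_label(entry: int) -> tuple:
    """Recover (label, tc, s, ttl) from a 32-bit entry."""
    return (entry >> 12, (entry >> 9) & 0b111, (entry >> 8) & 1, entry & 0xFF)

entry = pack_label(label=1027, tc=5, s=1, ttl=64)
print(unpack_label(entry))   # (1027, 5, 1, 64)
```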

To actually make MPLS work, you need preset paths which are called label-switched paths (LSPs). An LSP is required for any MPLS forwarding to occur.
An LSP is essentially a unidirectional tunnel of MPLS information exchange among routers in an MPLS network.
An MPLS router operates on preset paths for various source-to-destination pairs.
To achieve real efficiency gains over typical IP routing, every router on the LSP must be able to switch the packet forward.

What is important here is that every router along the LSP from router 1 to router 6 must have the same view of the LSP.

MPLS Router Roles/Positions

Label switch router (LSR), or transit router:
These are the routers in an MPLS network that perform routing based only on the label, swapping labels on packets as they pass.
An LSR is normally located in the middle of an MPLS network and is responsible for switching the labels used to route packets.

When an LSR receives a packet, it examines the label in the packet header and uses it as an index into its lookup table to determine the next hop on the label-switched path (LSP) and a corresponding new label for the packet.
The old label is then removed from the header and replaced with the new label before the packet is forwarded.
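A toy sketch of that label-swap lookup (the interface names and label values are made up; label 3 is the reserved implicit-null label from RFC 3032):

```python
# Sketch of an LSR's label-swapping lookup: the incoming label indexes a
# forwarding table that yields the outgoing label and outgoing interface.

lfib = {  # (in_interface, in_label) -> (out_interface, out_label)
    ("ge0/0", 1027): ("ge0/1", 2045),
    ("ge0/0", 1028): ("ge0/2", 3),    # 3 = implicit-null: pop before last hop
}

def swap(in_if: str, packet: dict) -> dict:
    """Remove the old label, apply the new one, and pick the out interface."""
    out_if, out_label = lfib[(in_if, packet["label"])]
    return {"label": out_label, "out_if": out_if, "payload": packet["payload"]}

pkt = {"label": 1027, "payload": "ip-packet"}
print(swap("ge0/0", pkt))   # label 1027 swapped for 2045, sent out ge0/1
```

The point of the sketch: no IP routing lookup happens here, only a flat table lookup on the label, which is the efficiency MPLS is after.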

Label edge router
A label edge router (LER, also known as an edge LSR or “ingress node”) is a router that operates at the edge of an MPLS network and acts as the entry and exit point for the network. The edge router places an MPLS label on an incoming packet and sends it forward into the MPLS domain.
The same job is performed in reverse upon receiving a labeled packet destined to exit the MPLS domain: the LER removes the label and forwards the IP packet using normal IP addressing.

Provider router
In an MPLS-based virtual private network (VPN) environment, LERs that function as ingress and/or egress routers to the VPN are often called PE (Provider Edge) routers. Devices that function only as transit routers are similarly called P (Provider) routers.
The job of a P router is significantly easier than that of a PE router, so they can be less complex and may be more dependable because of this.

Label Distribution Protocol
Labels are distributed between LERs and LSRs using the Label Distribution Protocol (LDP).
LSRs in an MPLS network regularly exchange label and reachability information with each other using standardized procedures in order to build a complete picture of the network they can then use to forward packets.
CE is the “Customer Edge”, the customer device or router a PE router talks to.


Cisco Dynamic Trunking Protocol (DTP) Explained

 Understanding Dynamic Trunking Protocol (DTP)

The Dynamic Trunking Protocol (DTP) is a Cisco proprietary protocol that is automatically enabled on Catalyst 2960 and Catalyst 3560 Series switches. DTP is used to negotiate the forming of a trunk link between two Cisco devices before actually forming the trunk connection. The main benefit of DTP is that it eases the configuration of trunk links between switches.
Non-Cisco network devices and switches from other vendors do not support DTP. DTP manages trunk negotiation only if the port on the neighbor switch is configured in a trunk mode that supports DTP.

Switch trunk interfaces support different trunking modes. An interface can be set to trunking, nontrunking, or to negotiate trunking. Ethernet trunk negotiation is managed by the Dynamic Trunking Protocol (DTP), which operates on a point-to-point basis between network devices.

The default DTP configuration for Cisco Catalyst 2960 and 3560 switches is in “dynamic auto” or “dynamic desirable” mode.

To enable trunking between a Cisco switch and a non-Cisco switch or device that does not support DTP, use the switchport mode trunk and switchport nonegotiate interface configuration mode commands. This causes the interface to become a trunk without generating DTP frames.

In order to avoid misconfigurations and the wrongful forwarding of DTP frames by a non-Cisco device, turn off DTP on interfaces of a Cisco switch connected to devices that do not support DTP.

In the diagram below:

The Fa0/1 ports on Cisco Switch1 and Cisco Switch2 are set to dynamic auto and dynamic desirable respectively, so the negotiation results in the trunking state. This creates an active trunk link.

Figure: DTP with dynamic auto and dynamic desirable

Figure: DTP with dynamic desirable and dynamic desirable

How to configure dynamic desirable mode

Sw1#configure terminal
Sw1(config)#interface fa0/1
Sw1(config-if)#switchport mode dynamic desirable

How to configure dynamic auto mode

Sw1#configure terminal
Sw1(config)#interface fa0/1
Sw1(config-if)#switchport mode dynamic auto

In the diagram below, the link between switches Switch1 and Switch2 becomes a trunk because the Fa0/1 ports on Cisco Switch1 and non-Cisco Switch2 are configured to ignore all DTP advertisements and to come up in, and stay in, trunk port mode.

How to configure trunk mode

Sw1#configure terminal
Sw1(config)#interface fa0/1
Sw1(config-if)#switchport mode trunk

How to configure trunk mode with nonegotiate

Sw1#configure terminal
Sw1(config)#interface fa0/1
Sw1(config-if)#switchport mode trunk
Sw1(config-if)#switchport nonegotiate


In the diagram below: The Fa0/1 ports on Cisco Switch1 and Cisco Switch2 are set to dynamic auto, so the negotiation results in the access mode state. This creates an inactive trunk link.

DTP Negotiated Interface Modes Explained:
Switches support different trunking modes with the help of DTP:
• Switchport mode access: This puts the interface (access port) into permanent nontrunking mode and negotiates to convert the link into a nontrunk link. The interface becomes a nontrunk interface, regardless of whether the neighboring interface is a trunk interface.

• Switchport mode dynamic auto: This enables the interface to convert the link to a trunk link. The interface becomes a trunk interface if the neighboring interface is set to trunk or desirable mode. The default switchport mode for newer Cisco switches Ethernet interfaces is dynamic auto.

Note that if two Cisco switches are left to the common default setting of auto, a trunk will never form.

• Switchport mode dynamic desirable: This places the interface in an active attempt to convert the link to a trunk link. The interface becomes a trunk interface if the neighboring interface is set to trunk, desirable, or auto mode.

• Switchport mode trunk: This puts the interface into permanent trunking mode and negotiates to convert the neighboring link into a trunk link. The interface becomes a trunk interface even if the neighboring interface is not a trunk interface.

• Switchport nonegotiate: This mode prevents the interface from generating DTP frames. You can use this command only when the interface switchport mode is access or trunk. You must manually configure the neighboring interface as a trunk interface to establish a trunk link.
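The negotiated outcomes described in the bullets above can be summarized in a small sketch. The outcome table below is our own summary of the mode combinations (it ignores the trunk-versus-access misconfiguration case):

```python
# Sketch: which DTP mode combinations on two connected ports form a trunk.
# Key facts mirrored here: desirable actively negotiates, auto only responds,
# so auto + auto never forms a trunk.

def dtp_result(mode_a: str, mode_b: str) -> str:
    trunk_forms = {
        ("trunk", "trunk"),
        ("trunk", "dynamic auto"),
        ("trunk", "dynamic desirable"),
        ("dynamic desirable", "dynamic desirable"),
        ("dynamic desirable", "dynamic auto"),
    }
    pair = (mode_a, mode_b)
    if pair in trunk_forms or pair[::-1] in trunk_forms:
        return "trunk"
    return "access"   # includes the auto + auto default: no trunk ever forms

print(dtp_result("dynamic desirable", "dynamic auto"))  # trunk
print(dtp_result("dynamic auto", "dynamic auto"))       # access
```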




RIP Routing Loop Explained


Split Horizon, Route Poisoning and Holddown Explained

Split Horizon
RIP, as a distance vector protocol, is susceptible to routing loops (a network problem in which a data packet is continually routed through the same routers over and over, circling the network endlessly).

Split horizon is one of the features of distance vector routing protocols that prevents routing loops. This feature prevents a router from advertising a route back to the interface from which it was learned or received.

Figure: example network topology (routing loop and split horizon)

Using the above diagram, R2 has a route to the subnet and advertises it to router R1 using RIP.
R1 receives the update and stores the route in its routing table.
R1 knows that the routing update for that route came from R2, so it will not advertise the route back to R2. If it did, and the route to the subnet went down, R1 could receive an invalid route to the subnet from R2.

R1 would then believe that R2 has a route to reach the subnet and use it: R1 would forward packets for the subnet to R2, and R2 would send them back to R1, because each router thinks the other has a route to reach the subnet. This would go on forever; this is what is called a routing loop.
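The split-horizon rule itself is simple to sketch: when building an update for an interface, suppress any route that was learned on that same interface (the route entries below are illustrative):

```python
# Sketch of split horizon: a route learned on an interface is never
# advertised back out that same interface.

routes = [
    {"subnet": "10.1.1.0/24", "learned_on": "s0/0"},   # learned from a neighbor
    {"subnet": "10.2.2.0/24", "learned_on": None},     # directly connected
]

def build_update(out_interface: str, routes: list) -> list:
    """Return the subnets to advertise out a given interface."""
    return [r["subnet"] for r in routes if r["learned_on"] != out_interface]

print(build_update("s0/0", routes))  # 10.1.1.0/24 is suppressed on s0/0
print(build_update("s0/1", routes))  # both routes advertised on other interfaces
```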

Route Poisoning
Route poisoning is another method distance vector routing protocols use to prevent routing loops.
When a router detects that one of its directly connected routes has failed, the router sends the advertisement for that route with an infinite metric of 16 (“poisoning the route”).
Any router on the network that receives the update will realize that the route has failed and doesn’t use it anymore.

Consider the following example.

Figure: route poisoning

Note this: R1 is directly connected to the subnet.
R1 has RIP enabled, and the subnet is advertised to R2.
When R1’s Fa0/1 interface fails, a route update advertisement is sent by R1 to R2, indicating that the route has failed.
The route is labeled with a metric of 16, which is more than RIP’s maximum hop count of 15, so R2 considers the route to be unreachable.
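A minimal sketch of the metric arithmetic: the poisoned route is advertised with metric 16, one more than RIP's maximum usable hop count, so receivers treat it as unreachable:

```python
# Sketch of route poisoning: RIP treats metric 16 as "infinity".

RIP_INFINITY = 16   # one more than RIP's maximum usable hop count of 15

def poison(route: dict) -> dict:
    """Return a copy of the route advertised with an infinite metric."""
    return {**route, "metric": RIP_INFINITY}

def is_reachable(route: dict) -> bool:
    return route["metric"] < RIP_INFINITY

update = poison({"subnet": "10.1.1.0/24", "metric": 1})
print(update["metric"], is_reachable(update))  # metric 16 -> unreachable
```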

Hold-down is another loop-prevention mechanism used by distance vector routing protocols.
This feature prevents a router from learning new information about a failed route. When a router receives update information marking a route as unreachable, a hold-down timer is started immediately.
The router ignores all routing updates for that route until the timer expires (180 seconds is the default for RIP).

The only routing updates permitted during that period are updates sent from the router that initially advertised the route.
If router R1 advertises the route again, the hold-down timer is stopped and the routing information is processed.
Let’s use the following network topology below as an example:

[Figure: RIP hold-down timer topology]

R2 starts the hold-down timer after it receives a failed (unreachable) route update advertisement from R1.
During that time, updates about that route from any other router are ignored to prevent routing loops.
If interface Fa0/1 on R1 comes back up, R1 will advertise the route once more.

R2 processes that update even while the hold-down timer is still running, because the update is sent by the same router that originally advertised the route.
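You can check the RIP timers, including the hold-down timer, with the show ip protocols command. The output below is abbreviated, and the exact values depend on your configuration:

R2#show ip protocols
Routing Protocol is "rip"
  Sending updates every 30 seconds, next due in 12 seconds
  Invalid after 180 seconds, hold down 180, flushed after 240

The "hold down 180" value is the 180-second default mentioned above and can be tuned with the timers basic RIP subcommand.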


DHCP Snooping Explained


The Dynamic Host Configuration Protocol (DHCP) allocates IP addresses dynamically: it leases addresses to connected devices, and the addresses can be reused when they are no longer needed.

All connected Hosts and end devices that require IP addresses obtained through DHCP must communicate with a DHCP server across the LAN.

DHCP snooping is a security feature that acts like a firewall between trusted DHCP servers and untrusted hosts.
It enables the network device, which can be either a switch or a router, to monitor DHCP messages received from untrusted devices connected to it.

When DHCP snooping is enabled on a switched network or VLAN, it examines all DHCP messages sent from untrusted hosts associated with the network or VLAN and extracts their IP addresses and lease information.

[Figure: DHCP snooping, trusted DHCP server and untrusted hosts]

DHCP Snooping Binding Database

All extracted information will be used to build and maintain the DHCP snooping database, also known as the binding table.
Only verified hosts from this database are allowed access to the network.

The database contains an entry for each untrusted host with a leased IP address if the host is associated with a VLAN that has DHCP snooping enabled.

The database does not contain entries for hosts connected through trusted interfaces.
Each entry in the DHCP snooping binding database includes the MAC address of the host, the leased IP address, the lease time, the binding type, and the VLAN number and interface information associated with the host.
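On a Cisco Catalyst switch you can display the binding database with the show ip dhcp snooping binding command. The entry below is only an illustration of the output format; your addresses, VLANs, and interfaces will differ:

Sw1#show ip dhcp snooping binding
MacAddress          IpAddress        Lease(sec)  Type           VLAN  Interface
------------------  ---------------  ----------  -------------  ----  --------------------
00:02:B3:3F:3B:99   10.1.1.11        6943        dhcp-snooping  8     GigabitEthernet0/2
Total number of bindings: 1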

Features of DHCP snooping

•DHCP snooping validates incoming messages received from untrusted sources and filters out invalid messages.

•DHCP snooping builds, maintains, and stores information about untrusted hosts, including their IP-MAC address binding, the lease time for the IP address, the type of binding, the VLAN name, and the interface for each host.

All this information is extracted, maintained, and stored in the DHCP snooping binding database, where it is used for validation.

•DHCP snooping uses the binding database to validate subsequent requests from untrusted hosts.

Dynamic ARP inspection (DAI) and IP Source Guard also use information stored in the DHCP snooping binding database.

By default, DHCP Snooping is disabled, DHCP Snooping can be enabled on a single VLAN or a range of VLANs across the network.

DHCP Packet Validation

Switches validate DHCP packets received on the untrusted interfaces of all VLANs with DHCP snooping enabled.
The switch forwards a DHCP packet that passes validation and drops any packet that fails it.

When the DHCP snooping service detects a violation, the packet is dropped, and if the switch is configured to send logs to a syslog server, a message describing the violation is logged.

One alert you are likely to see indicates that the source MAC address of the frame and the embedded client hardware address in a DHCP request differ; this mismatch is, unfortunately, fairly common.

If you see these, consider investigating a few of them to verify that the issue is indeed a poor vendor DHCP client or IP forwarding implementation, and determine your policy going forward.

Other messages are usually serious: they can indicate that a client is being spoofed or, worse, that a rogue DHCP server is in operation.

The switch drops a DHCP packet when any of the following conditions is met:

•The switch receives a packet from a DHCP server (such as a DHCPOFFER, DHCPACK, DHCPNAK, or DHCPLEASEQUERY packet) from outside the network or firewall.

•The switch receives a packet on an untrusted interface, and the source MAC address and the DHCP client hardware address do not match. This check is performed only if the DHCP snooping MAC address verification option is turned on.

•The switch receives a DHCPRELEASE or DHCPDECLINE message from an untrusted host with an entry in the DHCP snooping binding table, and the interface information in the binding table does not match the interface on which the message was received.

•The switch receives a DHCP packet that includes a relay agent IP address that is not 0.0.0.0.


The following steps describe how the DHCP snooping binding database is built during a normal DHCP exchange:

1. The network device sends a DHCPDISCOVER packet to request an IP address.

2. The switching device forwards the packet to the DHCP server.

3. The server sends a DHCPOFFER packet to offer an address. If the DHCPOFFER packet is from a trusted interface, the switching device forwards the packet to the DHCP client.

4. The network device sends a DHCPREQUEST packet to accept the IP address.

5. The switching device adds an IP-MAC placeholder binding to the database. The entry is considered a placeholder until a DHCPACK packet is received from the server. Until then,
the IP address could still be assigned to some other host.

6. The server sends a DHCPACK packet to assign the IP address or a DHCPNAK packet to deny the address request.

7. The switching device updates the DHCP snooping database according to the type of packet received. If the switching device receives a DHCPACK packet, it updates the lease information for the IP-MAC binding in its database; if it receives a DHCPNAK packet, it deletes the placeholder.

How to Enable DHCP Snooping

This example shows how to enable DHCP snooping globally and on VLAN 8 and to configure a rate limit of 100 packets per second on a port:

Sw1(config)# ip dhcp snooping
Sw1(config)# ip dhcp snooping vlan 8
Sw1(config)# ip dhcp snooping information option
Sw1(config)# interface gigabitethernet0/1
Sw1(config-if)# ip dhcp snooping limit rate 100
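By default, all interfaces are untrusted once DHCP snooping is enabled. The port facing your legitimate DHCP server must be marked as trusted; otherwise, the server's DHCPOFFER and DHCPACK packets will be dropped. The interface name below is just an example:

Sw1(config)# interface gigabitethernet0/2
Sw1(config-if)# ip dhcp snooping trust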


What is Link Layer Discovery Protocol (LLDP)?


Understanding LLDP and LLDP-MED

The smooth operation of the various networking devices in a LAN or switched network requires that all protocols and applications are enabled and that all devices are configured correctly.
However, the larger the network gets, the more difficult it becomes for the network administrator to control, manage, and sort out configuration problems.

This is where the IEEE 802.1AB Link Layer Discovery Protocol (LLDP) steps in.

If your network runs only Cisco network devices (routers, bridges, access servers, and switches), the Cisco Discovery Protocol (CDP), which runs over layer 2 (the data link layer), can be used by network management applications to automatically discover and learn about other Cisco devices connected to the network.

The Link Layer Discovery Protocol (LLDP) is a protocol that can be used to support non-Cisco devices on your network.

LLDP is a neighbor discovery protocol that is used for network devices to advertise information about themselves to other devices on the network and learn about each other.

LLDP, like CDP, runs over the data link layer, which allows it to operate in networks that include non-Cisco devices or devices running different network layer protocols.


How does LLDP work?

LLDP-enabled network devices regularly exchange LLDP advertisements with their network neighbors and store this information in their internal database (MIB).
A network management system (NMS) can use SNMP to access this information and build an inventory of the devices connected to the network.

Features of LLDP

LLDP advertises, discovers, and learns attributes of neighbor devices. These attributes contain type, length, and value descriptions and are referred to as TLVs.

TLVs are used by LLDP to send, receive, and gather information to and from its neighbors. Details such as configuration information, device capabilities, and device identity are advertised using this protocol.

Cisco Catalyst switch supports the following basic management TLVs:

•Port description TLV

•System name TLV

•System description TLV

•System capabilities TLV

• Management address TLV

These organizationally specific LLDP TLVs are also advertised to support LLDP-MED.

•Port VLAN ID TLV (IEEE 802.1 organizationally specific TLV)

•MAC/PHY configuration/status TLV (IEEE 802.3 organizationally specific TLV)

How to configure LLDP

Disabling and Enabling LLDP on an Interface
LLDP is disabled globally by default. You must enable LLDP globally to allow a device to send and receive LLDP packets; no changes are required at the interface level.

You can configure an individual interface to selectively stop sending or receiving LLDP packets with the no lldp transmit and no lldp receive interface commands.
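For example, to stop a single interface from both sending and receiving LLDP packets (the interface name is only an example):

Switch(config)# interface GigabitEthernet1/0/2
Switch(config-if)# no lldp transmit
Switch(config-if)# no lldp receive
Switch(config-if)# end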

This example shows how to globally enable LLDP.

Switch# configure terminal
Switch(config)# lldp run
Switch(config)# end

This example shows how to globally disable LLDP.

Switch# configure terminal
Switch(config)# no lldp run
Switch(config)# end
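Once LLDP is running, you can verify the neighbors the switch has discovered with the show lldp neighbors command. The device names and ports below are illustrative only:

Switch# show lldp neighbors
Capability codes:
    (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
    (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID           Local Intf     Hold-time  Capability      Port ID
R1                  Gi1/0/1        120        R               Gi0/0

Total entries displayed: 1

For more detail about a specific neighbor, show lldp neighbors detail lists the advertised TLVs, such as the system description and management address.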

Understanding LLDP-MED

LLDP for Media (LLDP-MED) is an extension to LLDP that operates between endpoint devices such as IP phones and network devices such as switches.

LLDP-MED specifically supports voice over IP (VoIP) applications and provides additional TLVs for capabilities discovery, network policy, Power over Ethernet, inventory management, and location information. By default, all LLDP-MED TLVs are enabled.

Configuring LLDP-MED TLVs
By default, the Cisco Catalyst switch sends only LLDP packets until it receives LLDP-MED packets from the end device.

It then sends LLDP-MED packets until the remote device sends only LLDP packets again.

This example shows how to enable a TLV on an interface when it has been disabled.

Switch# configure terminal
Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# lldp med-tlv-select inventory management
Switch(config-if)# end


What is Syslog? Syslog Explained.

System Message Logging – SYSLOG

Modern network devices have advanced beyond simply transmitting messages (email, documents, multimedia, and so on); network devices like Cisco routers and switches also provide features that let network administrators read system messages about the state of the network at a particular time.

One way to do this is by using a syslog server.

Cisco network devices (Routers and Switches) use Syslog to send system messages and debug output to a local logging process inside the device.

These system messages can even be sent across the network to a syslog server or to an internal buffer so that you can view them at your convenience at a later time right through the device command line interface. Whichever way you choose is configurable.

You can use the following destinations for syslog messages:

• The logging buffer (RAM inside the router or switch)
• The console line
• The terminal lines
• A Syslog server

[Figure: syslog logging in the network]

By default, all system messages and debug output generated by the router or switch IOS go out only the console port and are also logged in buffers in RAM. To send these messages to the VTY lines (Telnet or SSH sessions), use the terminal monitor command.

Basically, by default, you will see something like this on your console line:

*Oct 21 17:33:50.565:%LINK-5-CHANGED:Interface FastEthernet0/0, changed
state to administratively down
*Oct 21 17:33:51.565:%LINEPROTO-5-UPDOWN:Line protocol on Interface
FastEthernet0/0, changed state to down

A Cisco router sends a summarized version of the message to the syslog server that looks something like this:

Seq no:timestamp: %facility-severity-MNEMONIC:description

A detailed explanation of each field:
seq no: A sequence number for the message. It is not shown by default; you must configure it.
Timestamp: The date and time of the message or event, which also must be configured.
Facility: The facility to which the message refers.
Severity: A single-digit code from 0 to 7 that indicates the severity of the message.
MNEMONIC: A text string that uniquely describes the message.
Description: A text string containing detailed information about the event being reported.

Example of a real syslog message:

*Apr 10 14:10:01.052: %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/0,
changed state to down

The timestamp: *Apr 10 14:10:01.052
The facility on the router that generated the message: %LINEPROTO
The severity level: 5
A mnemonic for the message: UPDOWN
The description of the message: Line protocol on Interface FastEthernet0/0, changed state to down

Syslog Severity levels Explained:

Emergency (severity 0)  System is unusable.
Alert (severity 1)  Immediate action is needed.
Critical (severity 2)  Critical condition.
Error (severity 3)  Error condition.
Warning (severity 4)  Warning condition.
Notification (severity 5)  Normal but significant condition.
Information (severity 6)  Normal information message.
Debugging (severity 7)  Debugging message.

How to Configure and Verify Syslog.

Cisco devices send log messages up to the severity level you configure to the console.
These messages also go to the buffer; both behaviors are on by default.
You can disable and enable these destinations with the following commands. To see all the options, use:

Router(config)#logging ?

The above command with a question mark displays all the options to choose from.

Router(config)#logging console
Router(config)#logging buffered

The configuration above enables the console and buffer to receive log messages of all severities; this is the default setting for all Cisco IOS devices.

If you want to disable the defaults, use the following commands:

Router(config)#no logging console
Router(config)#no logging buffered

A syslog server saves copies of console messages and can time-stamp them for viewing at a later time. To add a date-and-time stamp (with millisecond precision) to log messages, use the following command:

HQ(config)#service timestamps log datetime msec

To save all the console messages in one location so they can be viewed at your convenience, point the device at a syslog server with the logging host ip_address command.
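For example, assuming a syslog server at 172.16.10.1 (the address is purely illustrative):

HQ(config)#logging host 172.16.10.1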


You can set a limit to a number of messages sent to the syslog server, based on severity with the following command:

SF(config)#logging trap warnings

With the command above, you can use either the severity number or the severity level name. With logging trap warnings configured, the router sends messages for severity levels 0 through 4 (warnings) to the syslog server.
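You can verify your logging configuration with the show logging command. The output below is abbreviated, and the counters will differ on your device:

SF#show logging
Syslog logging: enabled (0 messages dropped, 3 messages rate-limited, 0 flushes, 0 overruns)
    Console logging: level debugging, 34 messages logged
    Buffer logging: level debugging, 34 messages logged
    Trap logging: level warnings, 38 message lines logged

The "Trap logging" line confirms the severity level being sent to the syslog server.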


What is Stateless DHCPv6? Explained with Examples

Stateless DHCPv6 Server and Client Auto-configuration.

During the SLAAC process, the client receives the information needed to create an IPv6 global unicast address.
It also learns the default gateway from the source IPv6 address of the RA message, which is the link-local address of the router.
A stateless DHCPv6 server can then provide information that is not included in the RA message, such as the DNS server address and the domain name.

Stateless DHCPv6 Server Configuration command terms:

The ipv6 dhcp server interface command binds the DHCPv6 pool to the interface.
The O (Other) flag needs to be changed from 0 to 1 using the interface command ipv6 nd other-config-flag; this tells clients to obtain the other configuration information from a stateless DHCPv6 server.

Stateless DHCPv6 Server Configuration

We use the topology below to configure a stateless DHCPv6 server and client.

[Figure: stateless DHCPv6 server and client topology]

R1(config)#ipv6 unicast-routing
R1(config)#ipv6 dhcp pool Stateless_DHCP
R1(config-dhcpv6)#dns-server AAAA:BBBB:CCCC:DDDD::FFFF
R1(config-dhcpv6)#domain-name orbitCO.com
R1(config-dhcpv6)#exit
R1(config)#interface s0/0/0
R1(config-if)#ipv6 address 2001:df6:adac:1::1/64
R1(config-if)#ipv6 dhcp server Stateless_DHCP
R1(config-if)#ipv6 nd other-config-flag
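You can verify the server configuration with the show ipv6 dhcp pool command; the output should look something like this (the active client count depends on your network):

R1#show ipv6 dhcp pool
DHCPv6 pool: Stateless_DHCP
  DNS server: AAAA:BBBB:CCCC:DDDD::FFFF
  Domain name: orbitCO.com
  Active clients: 0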

Stateless DHCPv6 Client Configuration

We use the same topology to configure the stateless DHCPv6 client.

Stateless DHCPv6 Client Configuration command terms.

The ipv6 enable command enables IPv6 processing on the interface and automatically creates a link-local address.
The ipv6 address autoconfig command enables automatic configuration of IPv6 addressing using SLAAC.

R2(config)#interface s0/0/0
R2(config-if)#ipv6 enable
R2(config-if)#ipv6 address autoconfig

Stateless DHCPv6 Verification

R2#show ipv6 interface s0/0/0
Serial0/0/0 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::2
No Virtual link-local address(es):
Stateless address autoconfig enabled
Global unicast address(es):
    2001:DF6:ADAC:1::2, subnet is 2001:DF6:ADAC:1::/64 [EUI/CAL/PRE]
       valid lifetime 2591259 preferred lifetime 604059
Joined group address(es):
MTU is 1500 bytes
ICMP error messages limited to one every 100 milliseconds
ICMP redirects are enabled
ICMP unreachables are sent
ND DAD is enabled, number of DAD attempts: 1
ND reachable time is 30000 milliseconds (using 30000)
ND RAs are suppressed (periodic)
Hosts use stateless autoconfig for addresses.

From the display above, the show ipv6 interface command shows that the router has "Stateless address autoconfig enabled", has an IPv6 global unicast address (2001:DF6:ADAC:1::2), and displays the subnet address (2001:DF6:ADAC:1::/64) as well.
The IPv6 global unicast address was created using SLAAC, combining the prefix learned from the RA message with an interface ID.
The interface ID was generated using EUI-64, as indicated to the right of the subnet address.
Duplicate Address Detection (DAD) verifies that no other host on the network is using the same address.