    QoS Basics Explained

    June 9, 2015

    I'm no expert on QoS, but I haven't found a concise explanation of how all of the pieces work at a basic level, so here is my attempt at one.

    There are several basic QoS concepts that apply to most systems. The first is the queue. A queue is just what it sounds like: a buffer or bucket where packets are held. QoS systems tag packets based on where they came from (or are going to), what protocol or port they use, or, in more capable systems, some other detail about the packet or the connection it belongs to.

    In most QoS systems, a packet either matches some selection rule and goes into that rule's assigned queue, or it lands in a default bucket with all the other packets that fit no particular rule. Once packets are queued, the QoS system decides which one gets to go out on the wire next.
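
    To make that concrete, here is a minimal sketch in Python of what such tagging might look like. The rules, field names, and queue names are all made up for illustration, not any real QoS API; real systems match on headers in the kernel or in dedicated hardware:

        from collections import deque

        # Illustrative classification rules: (match function, queue name).
        # Real systems match on addresses, protocols, ports, or details of
        # the connection a packet belongs to.
        RULES = [
            (lambda pkt: pkt["dst_port"] == 53, "dns"),
            (lambda pkt: pkt["proto"] == "tcp" and pkt["dst_port"] == 22, "ssh"),
        ]

        queues = {"dns": deque(), "ssh": deque(), "default": deque()}

        def enqueue(pkt):
            # First matching rule wins; everything else shares the default bucket.
            for match, name in RULES:
                if match(pkt):
                    queues[name].append(pkt)
                    return
            queues["default"].append(pkt)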

    Queueing Behaviour

    This is really two different decisions in most systems: which queue to pull a packet from, and which packet to take from that queue first. Which queue gets serviced depends on the relative "priority" of the queues, and that priority is decided in two basic ways. The simplest is strict priority, which ranks the queues by importance and takes packets from higher-priority queues until they are empty before bothering with lower-priority ones. The other basic method assigns each queue a percentage of the estimated link capacity and picks the queue to service based on the average share of bandwidth each queue is allowed to use.
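
    As a rough sketch of the two approaches (toy code, not any particular implementation; the weighted version uses random selection as a crude stand-in for real weighted schedulers):

        import random

        def dequeue_strict(queues):
            # queues is a list ordered highest priority first: drain the
            # higher priority queues completely before touching lower ones.
            for q in queues:
                if q:
                    return q.popleft()
            return None

        def dequeue_weighted(queues, weights):
            # Pick the next queue in proportion to its share of the link,
            # e.g. weights = {"voip": 0.5, "web": 0.3, "default": 0.2}.
            backlogged = [name for name in queues if queues[name]]
            if not backlogged:
                return None
            name = random.choices(backlogged,
                                  weights=[weights[n] for n in backlogged])[0]
            return queues[name].popleft()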

    Most real-life systems are a hybrid of both, allowing certain queues to take strict priority over others while also assigning minimum and maximum bandwidth percentages. You can generally tell the two approaches apart because percentage-based QoS requires a link speed estimate to work properly, whereas pure priority systems do not. A ton of variations on these methods exist: minimum bandwidth guarantees, inheritance hierarchies for link percentages, burst buckets which allow momentary bandwidth excursions, and so on. For more details, google queueing disciplines.
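
    One of those variations, the burst bucket, is easy to sketch. A token bucket earns credit at the queue's configured rate and lets traffic burst up to the bucket depth before being held back (again a toy illustration, with invented parameter names):

        import time

        class TokenBucket:
            def __init__(self, rate_bps, burst_bytes):
                self.rate = rate_bps / 8.0   # refill rate in bytes per second
                self.burst = burst_bytes     # bucket depth: the maximum burst
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, pkt_len):
                now = time.monotonic()
                # Earn credit for the time elapsed, capped at the bucket depth.
                self.tokens = min(self.burst,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= pkt_len:
                    self.tokens -= pkt_len
                    return True    # within the allowed rate or burst
                return False       # over rate: hold or drop the packet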

    Congestion Control

    The other major decision in the de-queuing step is which packet to take once the queue discipline has picked a queue to pull from. The simplest and most obvious method is FIFO, first in, first out: packets are emptied from the queue in the order they arrived. Most systems use FIFO with modifications for congestion control.
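
    In Python terms, FIFO is just a plain deque, which is what the earlier sketches used: arrivals go on the tail, departures come off the head, so packets leave in arrival order:

        from collections import deque

        q = deque()
        for pkt in ("p1", "p2", "p3"):
            q.append(pkt)            # arrivals go on the tail
        assert q.popleft() == "p1"   # departures come off the head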

    So the last major piece in modern QoS systems is congestion control, known these days as active queue management when it refers to any algorithm besides plain tail-drop. When looking up active queue management, you will see many references, like Wikipedia, mentioning buffers at each network interface. Without QoS this is true, but when using QoS, each queue has its own buffer and acts like a separate interface with respect to congestion control. This is the basic way that QoS maintains different connection quality for packets depending on the queue they belong to.

    Congestion control is usually applied per queue, when the queue gets backed up with too many packets. The simplest and most common method of dealing with queue overflow is tail-drop: when the queue is full, newly arriving packets are simply discarded. More advanced algorithms include ECN, which marks packets instead of dropping them so the receiver can tell the sender to slow down transmission, and RED, which randomly drops packets with increasing probability as the queue fills up, before it overflows completely. The newest systems usually support CoDel, a really effective solution with no knobs to tune, which I would recommend over the others for its effectiveness and simplicity.
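
    A toy version of RED's core idea, probabilistic early drop, might look like the sketch below. Real RED tracks a moving average of queue length rather than the instantaneous value, and the thresholds here are invented for illustration:

        import random

        MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1  # illustrative thresholds (packets)

        def red_should_drop(queue_len):
            # Below the min threshold never drop; above the max always drop.
            # In between, drop with probability rising linearly toward MAX_P.
            if queue_len < MIN_TH:
                return False
            if queue_len >= MAX_TH:
                return True
            p = MAX_P * (queue_len - MIN_TH) / (MAX_TH - MIN_TH)
            return random.random() < p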
