Classful Queuing Disciplines (qdiscs)
The flexibility and control of Linux traffic control can be unleashed through the agency of the classful qdiscs. Remember that the classful queuing disciplines can have filters attached to them, allowing packets to be directed to particular classes and subqueues.
There are several common terms to describe classes directly attached to the root qdisc and terminal classes. Classes attached to the root qdisc are known as root classes, and more generically inner classes. Any terminal class in a particular queuing discipline is known as a leaf class by analogy to the tree structure of the classes. Besides the use of figurative language depicting the structure as a tree, the language of family relationships is also quite common.
HTB is meant as a more understandable and intuitive replacement for the CBQ (see chapter 7.4) qdisc in Linux. Both CBQ and HTB help you to control the use of the outbound bandwidth on a given link. Both allow you to use one physical link to simulate several slower links and to send different kinds of traffic on different simulated links. In both cases, you have to specify how to divide the physical link into simulated links and how to decide which simulated link to use for a given packet to be sent.
HTB uses the concepts of tokens and buckets along with the class-based system and filters to allow for complex and granular control over traffic. With a complex borrowing model, HTB can perform a variety of sophisticated traffic control techniques. Shaping is one of the simplest immediate uses of HTB.
By understanding tokens and buckets or by grasping the function of TBF, HTB should be merely a logical step. This queuing discipline allows the user to define the characteristics of the tokens and buckets used and allows the user to nest these buckets in an arbitrary fashion. When coupled with a classifying scheme, traffic can be controlled in a very granular fashion.
Below is example output of the syntax for HTB on the command line with the tc tool. Although the syntax for tcng is a language of its own, the rules for HTB are the same.
Example 11. tc usage for HTB
Usage: ... qdisc add ... htb [default N] [r2q N]
 default  minor id of class to which unclassified packets are sent {0}
 r2q      DRR quantums are computed as rate in Bps/r2q {10}
 debug    string of 16 numbers each 0-3 {0}

... class add ... htb rate R1 burst B1 [prio P] [slot S] [pslot PS]
                      [ceil R2] [cburst B2] [mtu MTU] [quantum Q]
 rate     rate allocated to this class (class can still borrow)
 burst    max bytes burst which can be accumulated during idle period {computed}
 ceil     definite upper class rate (no borrows) {rate}
 cburst   burst but for ceil {computed}
 mtu      max packet size we create rate map for {1600}
 prio     priority of leaf; lower are served first {0}
 quantum  how much bytes to serve from leaf at once {use r2q}

TC HTB version 3.3
Unlike almost all of the other software discussed, HTB is a newer queuing discipline and your distribution may not have all of the tools and capability you need to use HTB. The kernel must support HTB; kernel version 2.4.20 and later support it in the stock distribution, although earlier kernel versions require patching. To enable userland support for HTB, see HTB for an iproute2 patch to tc.
One of the most common applications of HTB involves shaping transmitted traffic to a specific rate.
All shaping occurs in leaf classes. No shaping occurs in inner or root classes as they only exist to suggest how the borrowing model should distribute available tokens.
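As a minimal sketch of shaping in a leaf class (the interface eth0 and the 1mbit figure are assumptions for illustration), giving a leaf equal rate and ceil pins it to a fixed rate, since it can never borrow:

```shell
# Attach an HTB root qdisc; unclassified traffic falls into class 1:10.
tc qdisc add dev eth0 root handle 1: htb default 10

# A single leaf class shaped to 1mbit: rate == ceil, so no borrowing
# occurs and all traffic is held to the configured rate.
tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit burst 15k
```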
A fundamental part of the HTB qdisc is the borrowing mechanism. Child classes borrow tokens from their parents once they have exceeded rate. A child class will continue to attempt to borrow until it reaches ceil, at which point it will begin to queue packets for transmission until more tokens/ctokens are available. As there are only two primary types of classes which can be created with HTB, the following table and diagram identify the various possible states and the behaviour of the borrowing mechanisms.
Table 2. HTB class states and potential actions taken
type of class | class state | HTB internal state | action taken |
---|---|---|---|
leaf | < rate | HTB_CAN_SEND | Leaf class will dequeue queued bytes up to available tokens (no more than burst packets) |
leaf | > rate, < ceil | HTB_MAY_BORROW | Leaf class will attempt to borrow tokens/ctokens from parent class. If tokens are available, they will be lent in quantum increments and the leaf class will dequeue up to cburst bytes |
leaf | > ceil | HTB_CANT_SEND | No packets will be dequeued. This will cause packet delay and will increase latency to meet the desired rate. |
inner, root | < rate | HTB_CAN_SEND | Inner class will lend tokens to children. |
inner, root | > rate, < ceil | HTB_MAY_BORROW | Inner class will attempt to borrow tokens/ctokens from parent class, lending them to competing children in quantum increments per request. |
inner, root | > ceil | HTB_CANT_SEND | Inner class will not attempt to borrow from its parent and will not lend tokens/ctokens to children classes. |
This diagram identifies the flow of borrowed tokens and the manner in which tokens are charged to parent classes. In order for the borrowing model to work, each class must have an accurate count of the number of tokens used by itself and all of its children. For this reason, any token used in a child or leaf class is charged to each parent class until the root class is reached.
Any child class which wishes to borrow a token will request a token from its parent class, which, if it is also over its rate, will request to borrow from its parent class until either a token is located or the root class is reached. So the borrowing of tokens flows toward the leaf classes and the charging of the usage of tokens flows toward the root class.
Note in this diagram that there are several HTB root classes. Each of these root classes can simulate a virtual circuit.
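The borrowing model described above can be sketched with tc (device name and rates are illustrative): two leaf classes are each guaranteed 512kbit, and either may borrow from the parent up to the full 1mbit when its sibling is idle:

```shell
tc qdisc add dev eth0 root handle 1: htb default 20

# Inner class: never shapes, only defines the pool the leaves borrow from.
tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit

# Each leaf is guaranteed its rate; ceil lets it borrow up to the parent's rate.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 512kbit ceil 1mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 512kbit ceil 1mbit
```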
default
An optional parameter of every HTB qdisc object; the default default is 0, which causes any unclassified traffic to be dequeued at hardware speed, completely bypassing any of the classes attached to the root qdisc.
rate
Used to set the minimum desired speed to which to limit transmitted traffic. This can be considered the equivalent of a committed information rate (CIR), or the guaranteed bandwidth for a given leaf class.
ceil
Used to set the maximum desired speed to which to limit the transmitted traffic. The borrowing model should illustrate how this parameter is used. This can be considered the equivalent of “burstable bandwidth”.
burst
This is the size of the rate bucket (see Tokens and buckets). HTB will dequeue burst bytes before awaiting the arrival of more tokens.
cburst
This is the size of the ceil bucket (see Tokens and buckets). HTB will dequeue cburst bytes before awaiting the arrival of more ctokens.
quantum
This is a key parameter used by HTB to control borrowing. Normally, the correct quantum is calculated by HTB, not specified by the user. Tweaking this parameter can have tremendous effects on borrowing and shaping under contention, because it is used both to split traffic between children classes over rate (but below ceil) and to transmit packets from these same classes.
r2q
Also usually calculated for the user, r2q is a hint to HTB to help determine the optimal quantum for a particular class.
mtu
Maximum packet size for which HTB creates its internal rate maps; defaults to 1600.
prio
In the round-robin process, classes with the lowest priority field are tried for packets first. Mandatory field.
parent major:minor
Place of this class within the hierarchy. If attached directly to a qdisc and not to another class, minor can be omitted. Mandatory field.
classid major:minor
Like qdiscs, classes can be named. The major number must be equal to the major number of the qdisc to which it belongs. Optional, but needed if this class is going to have children.
The root of a HTB qdisc class tree has the following parameters:
parent major:minor | root
This mandatory parameter determines the place of the HTB instance, either at the root of an interface or within an existing class.
handle major:
Like all other qdiscs, the HTB can be assigned a handle. Should consist only of a major number, followed by a colon. Optional, but very useful if classes will be generated within this qdisc.
default minor-id
Unclassified traffic gets sent to the class with this minor-id.
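A minimal sketch of these qdisc-level parameters (interface, ports and IDs are illustrative assumptions): unclassified traffic falls into the class whose minor-id matches default, while a filter steers matching traffic elsewhere:

```shell
# Handle 1:, unclassified packets go to class 1:30.
tc qdisc add dev eth0 root handle 1: htb default 30 r2q 10

tc class add dev eth0 parent 1: classid 1:1 htb rate 2mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 2mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1mbit ceil 2mbit

# Classify ssh traffic into 1:10; everything else hits the default class 1:30.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:10
```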
Below are some general guidelines to using HTB culled from http://www.docum.org/docum.org/ and the (new) LARTC mailing list (see also the (old) LARTC mailing list archive). These rules are simply a recommendation for beginners to maximize the benefit of HTB until gaining a better understanding of the practical application of HTB.
Shaping with HTB occurs only in leaf classes. See also Section 7.1.2, “Shaping”.
Because HTB does not shape in any class except the leaf class, the sum of the rates of leaf classes should not exceed the ceil of a parent class. Ideally, the sum of the rates of the children classes would match the rate of the parent class, allowing the parent class to distribute leftover bandwidth (ceil - rate) among the children classes.
This key concept in employing HTB bears repeating. Only leaf classes actually shape packets; packets are only delayed in these leaf classes. The inner classes (all the way up to the root class) exist to define how borrowing/lending occurs (see also Section 7.1.3, “Borrowing”).
The quantum is only used when a class is over rate but below ceil.
The quantum should be set at MTU or higher. HTB will dequeue at least a single packet per service opportunity even if quantum is too small. In such a case, it will not be able to calculate accurately the real bandwidth consumed [9].
Parent classes lend tokens to children in increments of quantum, so for maximum granularity and most instantaneously evenly distributed bandwidth, quantum should be as low as possible while still no less than MTU.
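Since quantum defaults to the rate in bytes per second divided by r2q, the relationship to MTU can be checked with simple arithmetic. The sketch below uses illustrative numbers (not HTB's exact kernel code) to flag a class whose computed quantum would fall below a 1500-byte MTU:

```shell
#!/bin/sh
# quantum defaults to rate (in bytes per second) divided by r2q.
mtu=1500
r2q=10

quantum_for() {                 # $1 = rate in bits per second
    rate_bytes=$(( $1 / 8 ))    # bytes per second
    echo $(( rate_bytes / r2q ))
}

q1=$(quantum_for 2000000)       # 2mbit   -> 25000 bytes, comfortably above MTU
q2=$(quantum_for 100000)        # 100kbit -> 1250 bytes, below a 1500-byte MTU
echo "2mbit quantum: $q1, 100kbit quantum: $q2"
[ "$q2" -lt "$mtu" ] && echo "100kbit class: set quantum manually"
```

For the slow class, quantum would have to be specified explicitly (at MTU or above) to keep the bandwidth accounting accurate.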
A distinction between tokens and ctokens is only meaningful in a leaf class, because non-leaf classes only lend tokens to child classes.
HTB borrowing could more accurately be described as “using”.
As seen before, within one HTB instance many classes may exist. Each of these classes contains another qdisc, by default tc-pfifo. When enqueueing a packet, HTB starts at the root and uses various methods to determine which class should receive the data. In the absence of uncommon configuration options, the process is rather easy. At each node we look for an instruction, and then go to the class the instruction refers us to. If the class found is a barren leaf-node (without children), we enqueue the packet there. If it is not yet a leaf node, we do the whole thing over again starting from that node.
The following actions are performed, in order at each node we visit, until one sends us to another node, or terminates the process.
Consult filters attached to the class. If sent to a leaf node, we are done. Otherwise, restart.
If none of the above returned with an instruction, enqueue at this node.
This algorithm makes sure that a packet always ends up somewhere, even while you are busy building your configuration.
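The walk described above is steered by the filters consulted at each class. As an illustrative sketch (device, addresses and class IDs are assumptions), one filter at the root selects an inner class, and a second filter attached at that inner class refines the choice down to a leaf:

```shell
# At the root: packets from 192.168.1.5 go to inner class 1:1.
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip src 192.168.1.5/32 flowid 1:1

# At the inner class: of those, web traffic lands in leaf 1:10; anything
# unmatched at this level follows the qdisc's default handling.
tc filter add dev eth0 parent 1:1 protocol ip prio 1 u32 \
    match ip dport 80 0xffff flowid 1:10
```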
The HFSC classful qdisc balances delay-sensitive traffic against throughput-sensitive traffic. In a congested or backlogged state, the HFSC queuing discipline interleaves the delay-sensitive traffic when required according to service curve definitions. Read about the Linux implementation in German, HFSC Scheduling mit Linux, or read a translation into English, HFSC Scheduling with Linux. The original research article, A Hierarchical Fair Service Curve Algorithm For Link-Sharing, Real-Time and Priority Services, also remains available.
This section will be completed at a later date.
The PRIO classful qdisc works on a very simple precept. When it is ready to dequeue a packet, the first class is checked for a packet. If there's a packet, it gets dequeued. If there's no packet, then the next class is checked, until the queuing mechanism has no more classes to check. PRIO is a scheduler and never delays packets - it is a work-conserving qdisc, though the qdiscs contained in the classes may not be.
On creation with tc qdisc add, a fixed number of bands is created. Each band is a class, although it is not possible to add classes with tc class add. The number of bands to be created is fixed at the creation of the qdisc itself.
When dequeueing packets, band 0 is always checked first. If it has no packet to dequeue, then PRIO will try band 1, and so onwards. Minimum delay packets should therefore go to band 0, maximum reliability to band 1 and the rest to band 2.
As the PRIO qdisc itself will have minor number 0, band 0 is actually major:1, band 1 is major:2, etc. For major, substitute the major number assigned to the qdisc on 'tc qdisc add' with the handle parameter.
$ tc qdisc ... dev dev ( parent classid | root ) [ handle major: ] prio [ bands bands ] [ priomap band band band ... ] [ estimator interval time-constant ]
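As a minimal sketch (the interface is an assumption), this creates a PRIO qdisc with the default three bands and then attaches a separate qdisc to each band's class:

```shell
# PRIO root with the default 3 bands and default priomap.
tc qdisc add dev eth0 root handle 1: prio

# Each band is a class (1:1, 1:2, 1:3); give each its own inner qdisc.
tc qdisc add dev eth0 parent 1:1 handle 10: pfifo
tc qdisc add dev eth0 parent 1:2 handle 20: pfifo
tc qdisc add dev eth0 parent 1:3 handle 30: pfifo
```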
Three methods are available to determine the target band in which a packet will be enqueued.
From userspace, a process with sufficient privileges can encode the destination class directly with SO_PRIORITY.
Programmatically, a tc filter attached to the root qdisc can point any traffic directly to a class.
And, typically, with reference to the priomap, a packet's priority is derived from the Type of Service (ToS) assigned to the packet.
Only the priomap is specific to this qdisc.
bands
total number of distinct bands. If changed from the default of 3, priomap must be updated as well.
priomap
The priomap specifies how this qdisc determines how a packet maps to a specific band. Mapping occurs based on the value of the ToS octet of a packet.
    0     1     2     3     4     5     6     7
+-----+-----+-----+-----+-----+-----+-----+-----+
|   PRECEDENCE    |       ToS             | MBZ |   RFC 791
+-----+-----+-----+-----+-----+-----+-----+-----+

    0     1     2     3     4     5     6     7
+-----+-----+-----+-----+-----+-----+-----+-----+
|    DiffServ Code Point (DSCP)     | (unused)  |   RFC 2474
+-----+-----+-----+-----+-----+-----+-----+-----+
The four ToS bits (the 'ToS field') are defined slightly differently in RFC 791 and RFC 2474. The later RFC supersedes the definitions of the former, but not all software, systems and terminology have caught up to that change. So packet analysis programs will often still refer to Type of Service (ToS) instead of DiffServ Code Point (DSCP).
Table 3. RFC 791 interpretation of IP ToS header
Binary | Decimal | Meaning |
---|---|---|
1000 | 8 | Minimize delay (md) |
0100 | 4 | Maximize throughput (mt) |
0010 | 2 | Maximize reliability (mr) |
0001 | 1 | Minimize monetary cost (mmc) |
0000 | 0 | Normal Service |
As there is 1 bit to the right of these four bits, the actual value of the ToS field is double the value of the ToS bits. Running tcpdump -v -v shows you the value of the entire ToS field, not just the four bits. It is the value you see in the first column of this table:
Table 4. Mapping ToS value to priomap band
ToS Field | ToS Bits | Meaning | Linux Priority | Band |
---|---|---|---|---|
0x0 | 0 | Normal Service | 0 Best Effort | 1 |
0x2 | 1 | Minimize Monetary Cost (mmc) | 1 Filler | 2 |
0x4 | 2 | Maximize Reliability (mr) | 0 Best Effort | 1 |
0x6 | 3 | mmc+mr | 0 Best Effort | 1 |
0x8 | 4 | Maximize Throughput (mt) | 2 Bulk | 2 |
0xa | 5 | mmc+mt | 2 Bulk | 2 |
0xc | 6 | mr+mt | 2 Bulk | 2 |
0xe | 7 | mmc+mr+mt | 2 Bulk | 2 |
0x10 | 8 | Minimize Delay (md) | 6 Interactive | 0 |
0x12 | 9 | mmc+md | 6 Interactive | 0 |
0x14 | 10 | mr+md | 6 Interactive | 0 |
0x16 | 11 | mmc+mr+md | 6 Interactive | 0 |
0x18 | 12 | mt+md | 4 Int. Bulk | 1 |
0x1a | 13 | mmc+mt+md | 4 Int. Bulk | 1 |
0x1c | 14 | mr+mt+md | 4 Int. Bulk | 1 |
0x1e | 15 | mmc+mr+mt+md | 4 Int. Bulk | 1 |
The second column contains the value of the relevant four ToS bits, followed by their translated meaning. For example, 15 stands for a packet wanting Minimal Monetary Cost, Maximum Reliability, Maximum Throughput AND Minimum Delay.
The fourth column lists the way the Linux kernel interprets the ToS bits, by showing to which Priority they are mapped.
The last column shows the result of the default priomap. On the command line, the default priomap looks like this:
1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
This means that priority 4, for example, gets mapped to band number 1. The priomap also allows you to list higher priorities (> 7) which do not correspond to ToS mappings, but which are set by other means.
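The default priomap above can be exercised directly: index it by the Linux priority to obtain the band. A small sketch (plain shell, illustrative only):

```shell
#!/bin/sh
# Default PRIO priomap, indexed by Linux priority 0..15.
priomap="1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1"

band_for_priority() {           # $1 = Linux priority
    echo "$priomap" | awk -v p="$1" '{ print $(p + 1) }'
}

band_for_priority 4             # Int. Bulk   -> band 1
band_for_priority 6             # Interactive -> band 0
band_for_priority 2             # Bulk        -> band 2
```

The results match the last column of Table 4: interactive traffic lands in band 0, which PRIO always dequeues first.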
PRIO classes cannot be configured further - they are automatically created when the PRIO qdisc is attached. Each class however can contain yet a further qdisc.
Class Based Queuing (CBQ) is the classic implementation (also called venerable) of a traffic control system. CBQ is a classful qdisc that implements a rich link sharing hierarchy of classes. It contains shaping elements as well as prioritizing capabilities. Shaping is performed by calculating link idle time based on the timing of dequeue events and knowledge of the underlying link layer bandwidth.
Shaping is done using link idle time calculations, and actions taken if these calculations deviate from set limits.
When shaping a 10mbit/s connection to 1mbit/s, the link will be idle 90% of the time. If it isn't, it needs to be throttled so that it is idle 90% of the time.
From the kernel's perspective, this is hard to measure, so CBQ instead computes idle time from the number of microseconds that elapse between requests from the device driver for more data. Combined with the knowledge of packet sizes, this is used to approximate how full or empty the link is.
This is rather circumspect and doesn't always arrive at proper results. The physical link bandwidth may be ill defined in the case of not-quite-real network devices like PPP over Ethernet or PPTP over TCP/IP. The effective bandwidth in that case is probably determined by the efficiency of pipes to userspace - which is not defined.
During operations, the effective idle time is measured using an exponentially weighted moving average (EWMA), which weights recent packets exponentially more than their predecessors. The EWMA is an effective way to deal with the fact that at any instant a system is either active or inactive. For example, the Unix system load average is calculated in the same way.
The calculated idle time is subtracted from the EWMA measured one, the resulting number is called 'avgidle'. A perfectly loaded link has an avgidle of zero: packets arrive exactly at the calculated interval.
An overloaded link has a negative avgidle and if it gets too negative, CBQ throttles and is then 'overlimit'. Conversely, an idle link might amass a huge avgidle, which would then allow infinite bandwidths after a few hours of silence. To prevent this, avgidle is capped at maxidle.
If overlimit, in theory, the CBQ could throttle itself for exactly the amount of time that was calculated to pass between packets, and then pass one packet, and throttle again. Due to timer resolution constraints, this may not be feasible, see the minburst parameter below.
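The avgidle bookkeeping can be sketched with a generic EWMA. This is illustrative only - CBQ's kernel code uses fixed-point arithmetic driven by the ewma log parameter, not this exact formula:

```shell
#!/bin/sh
# avgidle tracks an EWMA of (measured idle - calculated idle) deltas;
# a negative value means the link is overlimit. divisor = 2^ewma_log,
# here with an illustrative ewma log of 2.
divisor=4

avgidle=0
for idle_delta in 100 -50 -50 -50; do
    # avgidle moves a fraction of the way toward each new sample
    avgidle=$(( avgidle + (idle_delta - avgidle) / divisor ))
done
echo "avgidle: $avgidle"        # negative: CBQ would throttle (overlimit)
```

A run of packets arriving faster than calculated (negative deltas) drags avgidle below zero, which is exactly the condition under which CBQ throttles.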
Under one installed CBQ qdisc many classes may exist. Each of these classes contains another qdisc, by default tc-pfifo.
When enqueueing a packet, CBQ starts at the root and uses various methods to determine which class should receive the data. If a verdict is reached, this process is repeated for the recipient class which might have further means of classifying traffic to its children, if any. CBQ has the following methods available to classify a packet to any child classes.
skb->priority class encoding. Can be set from userspace by an application with the SO_PRIORITY setsockopt. The skb->priority class encoding only applies if skb->priority holds a major:minor handle of an existing class within this qdisc.
tc filters attached to the class.
The defmap of a class, as set with the split and defmap parameters. The defmap may contain instructions for each possible Linux packet priority.
Each class also has a level. Leaf nodes, attached to the bottom of the class hierarchy, have a level of 0.
Classification is a loop, which terminates when a leaf class is found. At any point the loop may jump to the fallback algorithm. The loop consists of the following steps:
If the packet is generated locally and has a valid classid encoded within its skb->priority, choose it and terminate.
Consult the tc filters, if any, attached to this child. If these return a class which is not a leaf class, restart the loop from the class returned. If it is a leaf, choose it and terminate.
If the tc filters did not return a class, but did return a classid, try to find a class with that id within this qdisc. Check if the found class is of a lower level than the current class. If so, and the returned class is not a leaf node, restart the loop at the found class. If it is a leaf node, terminate. If we found an upward reference to a higher level, enter the fallback algorithm.
If the tc filters did not return a class, nor a valid reference to one, consider the minor number of the reference to be the priority. Retrieve a class from the defmap of this class for the priority. If this did not contain a class, consult the defmap of this class for the BEST_EFFORT class. If this is an upward reference, or no BEST_EFFORT class was defined, enter the fallback algorithm. If a valid class was found, and it is not a leaf node, restart the loop at this class. If it is a leaf, choose it and terminate. If neither the priority distilled from the classid, nor the BEST_EFFORT priority yielded a class, enter the fallback algorithm.
The fallback algorithm resides outside of the loop and is as follows.
Consult the defmap of the class at which the jump to the fallback occurred. If the defmap contains a class for the priority of the class (which is related to the ToS field), choose this class and terminate.
Consult the map for a class for the BEST_EFFORT priority. If found, choose it, and terminate.
Choose the class at which the break out to the fallback algorithm occurred. Terminate.
The packet is enqueued to the class which was chosen when either algorithm terminated. It is therefore possible for a packet to be enqueued not at a leaf node, but in the middle of the hierarchy.
When dequeuing for sending to the network device, CBQ decides which of its classes will be allowed to send. It does so with a Weighted Round Robin process in which each class with packets gets a chance to send in turn. The WRR process starts by asking the highest priority classes (lowest numerically - highest semantically) for packets, and will continue to do so until they have no more data to offer, in which case the process repeats for lower priorities.
Each class is not allowed to send at length though, they can only dequeue a configurable amount of data during each round.
If a class is about to go overlimit, and it is not bounded it will try to borrow avgidle from siblings that are not isolated. This process is repeated from the bottom upwards. If a class is unable to borrow enough avgidle to send a packet, it is throttled and not asked for a packet for enough time for the avgidle to increase above zero.
The root qdisc of a CBQ class tree has the following parameters:
parent
root
| major:minor
this mandatory parameter determines the place of the CBQ instance, either at the root of an interface or within an existing class.
handle
major:
like all other qdiscs, the CBQ can be assigned a handle. Should consist only of a major number, followed by a colon. This parameter is optional.
avpkt
bytes
for calculations, the average packet size must be known. It is silently capped at a minimum of 2/3 of the interface MTU. This parameter is mandatory.
bandwidth
rate
underlying available bandwidth; to determine the idle time, CBQ must know the bandwidth of A) your desired target bandwidth, B) the underlying physical interface or C) the parent qdisc. This is a vital parameter, more about it later. This parameter is mandatory.
cell
size
the cell size determines the granularity of packet transmission time calculations. Must be an integral power of 2, defaults to 8.
mpu
bytes
a zero sized packet may still take time to transmit. This value is the lower cap for packet transmission time calculations - packets smaller than this value are still deemed to have this size. Defaults to 0.
ewma
log
CBQ calculates idleness using an Exponentially Weighted Moving
Average (EWMA) which smooths out
measurements easily accommodating short bursts. The
log
value determines how much
smoothing occurs. Lower values imply greater sensitivity.
Must be between 0 and 31. Defaults to 5.
A CBQ qdisc does not shape out of its own accord. It only needs to know certain parameters about the underlying link. Actual shaping is done in classes.
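Putting the root parameters together, a sketch (the interface and figures are illustrative):

```shell
# CBQ root on a 100mbit link; avpkt is mandatory for the idle-time math.
tc qdisc add dev eth0 root handle 1: cbq bandwidth 100mbit avpkt 1000 cell 8
```

Note that bandwidth here describes the underlying link, not a limit; shaping happens in the classes added beneath this qdisc.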
Classes have a lot of parameters to configure their operation.
parent
major:minor
place of this class within the hierarchy. If attached directly to a qdisc and not to another class, minor can be omitted. This parameter is mandatory.
classid
major:minor
like qdiscs, classes can be named. The major number must be equal to the major number of the qdisc to which it belongs. Optional, but needed if this class is going to have children.
weight
weightvalue
when dequeuing to the lower layer, classes are tried for traffic in a round-robin fashion. Classes with a higher configured weight will generally have more traffic to offer during each round, so it makes sense to allow them to dequeue more traffic. All weights under a class are normalized, so only the ratios matter. Defaults to the configured rate, unless the priority of this class is maximal, in which case it is set to 1.
allot
bytes
allot specifies how many bytes a qdisc can dequeue during each round of the process. This parameter is weighted using the renormalized class weight described above.
priority
priovalue
in the round-robin process, classes with the lowest priority field are tried for packets first. This parameter is mandatory.
rate
bitrate
maximum aggregated rate at which this class (children inclusive) can transmit. The bitrate is specified using the tc way of specifying rates (e.g. '1544kbit'). This parameter is mandatory.
bandwidth
bitrate
this is different from the bandwidth specified when creating a parent CBQ qdisc. The CBQ class bandwidth parameter is only used to determine maxidle and offtime, which, in turn, are only calculated when specifying maxburst or minburst. Thus, this parameter is only required if specifying maxburst or minburst.
maxburst
packetcount
this number of packets is used to calculate maxidle so that when avgidle is at maxidle, this number of average packets can be burst before avgidle drops to 0. Set it higher to be more tolerant of bursts. You can't set maxidle directly, only via this parameter.
minburst
packetcount
as mentioned before, CBQ needs to throttle in case of overlimit. The ideal solution is to do so for exactly the calculated idle time, and pass 1 packet. However, Unix kernels generally have a hard time scheduling events shorter than 10ms, so it is better to throttle for a longer period, and then pass minburst packets in one go, and then sleep minburst times longer. The time to wait is called the offtime. Higher values of minburst lead to more accurate shaping in the long term, but to bigger bursts at millisecond timescales.
minidle
microseconds
if avgidle is below 0, we are overlimit and need to wait until avgidle will be big enough to send one packet. To prevent a sudden burst from shutting down the link for a prolonged period of time, avgidle is reset to minidle if it gets too low. Minidle is specified in negative microseconds, so 10 means that avgidle is capped at -10us.
bounded | borrow
identifies a borrowing policy. Either the class will try to borrow bandwidth from its siblings or it will consider itself bounded. Mutually exclusive.
isolated | sharing
identifies a sharing policy. Either the class will engage in a sharing policy toward its siblings or it will consider itself isolated. Mutually exclusive.
split major:minor and defmap bitmap[/bitmap]: if consulting filters attached to a class did not give a verdict, CBQ can also classify based on the packet's priority. There are 16 priorities available, numbered from 0 to 15. The defmap specifies which priorities this class wants to receive, specified as a bitmap. The Least Significant Bit corresponds to priority zero. The split parameter tells CBQ at which class the decision must be made, which should be a (grand)parent of the class you are adding.
As an example, 'tc class add ... classid 10:1 cbq .. split 10:0 defmap c0' configures class 10:0 to send packets with priorities 6 and 7 to 10:1.
The complementary configuration would then be: 'tc class add ... classid 10:2 cbq ... split 10:0 defmap 3f', which would send all packets with priorities 0, 1, 2, 3, 4 and 5 to 10:2.
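The defmap bitmaps can be sanity-checked with a little arithmetic - each set bit selects one priority, least significant bit first (a sketch, not tc code):

```shell
#!/bin/sh
# Which priorities does a defmap hex bitmap cover? LSB = priority 0.
priorities_for() {              # $1 = bitmap in hex, e.g. c0
    map=$(( 0x$1 ))
    out=""
    p=0
    while [ "$map" -gt 0 ]; do
        [ $(( map % 2 )) -eq 1 ] && out="$out $p"
        map=$(( map / 2 ))
        p=$(( p + 1 ))
    done
    echo "${out# }"
}

priorities_for c0               # -> 6 7  (priorities 6 and 7)
priorities_for 3f               # -> 0 1 2 3 4 5
```

This confirms the examples above: c0 covers priorities 6 and 7, while 3f covers priorities 0 through 5.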
estimator interval timeconstant: CBQ can measure how much bandwidth each class is using, which tc filters can use to classify packets with. In order to determine the bandwidth it uses a very simple estimator that measures once every interval microseconds how much traffic has passed. This again is an EWMA, for which the time constant can be specified, also in microseconds. The time constant corresponds to the sluggishness of the measurement or, conversely, to the sensitivity of the average to short bursts. Higher values mean less sensitivity.
This qdisc is not included in the standard kernels.
The WRR qdisc distributes bandwidth between its classes using the weighted round robin scheme. That is, like the CBQ qdisc it contains classes into which arbitrary qdiscs can be plugged. All classes which have sufficient demand will get bandwidth proportional to the weights associated with the classes. The weights can be set manually using the tc program. But they can also be made automatically decreasing for classes transferring much data.
The qdisc has a built-in classifier which assigns packets coming from or sent to different machines to different classes. Either the MAC or IP and either source or destination addresses can be used. The MAC address can only be used when the Linux box is acting as an ethernet bridge, however. The classes are automatically assigned to machines based on the packets seen.
The qdisc can be very useful at sites where a lot of unrelated individuals share an Internet connection. A set of scripts setting up a relevant behavior for such a site is a central part of the WRR distribution.
[9]
HTB will report bandwidth usage in this scenario incorrectly. It will calculate the bandwidth used by quantum instead of the real dequeued packet size. This can skew results quickly.