Bandwidth usage is actively controlled on PlanetLab nodes through a policy of fair share access, similarly to how CPU usage is controlled. If you are performing measurement experiments on PlanetLab, it is essential that you understand how bandwidth usage is controlled in order to qualify your results.
Bandwidth usage is actively controlled on PlanetLab nodes through a policy of fair share access, similarly to how CPU usage is controlled. In addition to shares, the bandwidth policy also supports exemptions, limits, and guarantees. The underlying mechanism used to implement this policy is the Linux Hierarchical Token Bucket (HTB) scheduler. If you are performing measurement experiments on PlanetLab, it is essential that you understand how bandwidth usage is controlled in order to qualify your results.
The available outbound bandwidth of a PlanetLab node is primarily divided amongst active slices according to the notion of fair shares. By default, each slice is granted one "share", or opportunistic fraction, of the available bandwidth; the administrative, or root, slice is granted five shares. For example, if six active slices including the root slice are competing for 10 megabits per second (Mbps) of bandwidth, the root slice will receive up to 5 Mbps, and each of the five regular slices will receive up to 1 Mbps each. A share is not a guarantee; a slice must actively compete for bandwidth in order to receive it. For instance, if the root slice stops transmitting, each regular slice will then receive up to 2 Mbps each.
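The proportional division described above can be sketched in a few lines (the function and slice names here are hypothetical, for illustration only):

```python
def fair_share(shares, capacity):
    """Divide capacity among competing slices in proportion to their shares.

    shares: dict mapping slice name -> number of shares held.
    Only slices actively competing for bandwidth should be included,
    since a share is opportunistic rather than a guarantee.
    """
    total = sum(shares.values())
    return {name: capacity * s / total for name, s in shares.items()}

# Six active slices competing for 10 Mbps: root holds 5 shares,
# each regular slice holds 1.
slices = {"root": 5, "a": 1, "b": 1, "c": 1, "d": 1, "e": 1}
print(fair_share(slices, 10.0))   # root -> 5.0 Mbps, each regular -> 1.0 Mbps

# If the root slice stops transmitting, the five regular slices
# split the capacity among themselves instead:
del slices["root"]
print(fair_share(slices, 10.0))   # each regular slice -> 2.0 Mbps
```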
Guarantees for slices are supported, but are disabled by default. Shares control access to bandwidth after guarantees are satisfied. If two slices have equal shares and are competing for 10 Mbps of bandwidth, but one slice has been guaranteed 2 Mbps, then that slice will receive up to 6 Mbps (2 Mbps guaranteed plus half of the remaining 8 Mbps, or 4 Mbps) and the other slice will receive up to 4 Mbps.
Three different types of limits, or caps, on bandwidth usage may be enforced on a node. The first limit is on the total outbound bandwidth usage of the node. If the node is on a shared infrastructure and it is desirable to limit the node's use of it, a PI may place a cap on the node through the PlanetLab web interface. As far as shares are concerned, the bandwidth available to a slice on a node capped at 10 Mbps is the same as that on an uncapped node with a 10 Mbps network adapter.
The other two types of limits are on the bandwidth usage of each slice. If two slices have equal shares and are competing for 10 Mbps of bandwidth, they will usually receive 5 Mbps each. However, if one slice has been capped at 2 Mbps, then that slice will receive up to 2 Mbps, and the other slice will receive up to 8 Mbps. By default, the cap on each slice is simply equal to the node cap, if any, allowing each slice to burst up to the maximum allowed, or possible, speed.
This kind of cap is on the instantaneous bandwidth usage of the slice. The last type of cap is on the average daily bandwidth usage of the slice. To prevent slices from using an unreasonable amount of bandwidth no matter what the node cap is, an average daily rate of (by default) 1 Mbps is softly enforced for every slice. This amounts to a daily Tx byte limit of approximately 10.8 gigabytes over the 24-hour recording period. At 80% (about 8.6 GB) of the Tx byte limit, a slice is capped at a rate equal to the byte limit minus the current byte total, divided by the time left in the recording period. This rate is recomputed every 5 minutes and is raised back to the node cap at the start of each new recording period.
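The throttling rule above can be sketched as follows. The 80% threshold and the spreading of the remaining byte budget over the rest of the period come from the text; the function name and the particular numbers in the example are hypothetical:

```python
def throttled_rate_bps(bytes_sent, byte_limit, seconds_left, node_cap_bps):
    """Return the rate cap to apply for the rest of the recording period.

    Below 80% of the daily Tx byte limit, the slice may burst up to the
    node cap. Past the threshold, the remaining byte budget is spread
    over the time left in the 24-hour recording period.
    """
    if bytes_sent < 0.8 * byte_limit:
        return node_cap_bps
    remaining_bytes = max(byte_limit - bytes_sent, 0)
    return remaining_bytes * 8 / seconds_left   # bytes -> bits

# A slice that has sent 9 GB of a 10.8 GB budget with 6 hours left
# in the period is throttled to roughly 0.67 Mbps:
rate = throttled_rate_bps(9e9, 10.8e9, 6 * 3600, 100e6)
```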
Data transmission rates are, of course, not instantaneously measured in the course of controlling bandwidth usage. The resolution of the kernel timer, currently 1 millisecond (ms) on all PlanetLab nodes, governs how finely rates can be controlled. During each timer tick, the appropriate amount of data necessary to achieve, on average, the allocated rate for each slice, is released to the hardware. If the node has been capped at a particular rate—for example, 10 Mbps—the maximum amount of data that can be continuously burst onto the wire in 1 ms is the minimum amount necessary to achieve 10 Mbps averaged over time, or about 1250 bytes.
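The per-tick arithmetic works out as follows (a hypothetical helper, shown only to make the 1250-byte figure concrete):

```python
def burst_bytes(rate_bps, tick_seconds=0.001):
    """Bytes that must be released per timer tick to sustain rate_bps
    on average, given the kernel timer resolution (1 ms by default)."""
    return rate_bps * tick_seconds / 8   # bits -> bytes

# At a 10 Mbps cap with a 1 ms timer tick:
print(burst_bytes(10e6))   # 1250.0 bytes per tick
```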
Thus, all uncapped slices are allowed to burst at up to wire rate on uncapped nodes, and up to the capped rate on capped nodes. However, on busy nodes, bandwidth usage may be CPU bound and achieving guaranteed and/or wire rates may only be possible by requesting additional CPU shares.
Known routes over Internet2 are exempt from node caps. Bandwidth over these routes is still fairly shared, however. Suppose two slices have equal shares and are competing for 10 Mbps of capped bandwidth. Further suppose that the actual maximum link speed is 20 Mbps. As usual, both will receive up to 5 Mbps of bandwidth over non-Internet2 routes. However, if one of the slices begins sending traffic over Internet2, it will receive up to 10 Mbps of bandwidth over these routes for a total of 15 Mbps. If the other slice begins sending traffic over Internet2 as well, it will split the 10 Mbps of exempt bandwidth with the other slice and both will receive a total of 10 Mbps.
A more complicated example is presented in the appendix.
The slice attributes which control bandwidth usage are listed below.
Listed below are a few example scenarios. In all cases, access to bandwidth is not CPU bound and all slices are attempting to transmit at at least their entitled rate.
Ignoring exemptions, the bandwidth available to a slice as a function of share and guarantee is its guarantee plus its proportional share of whatever capacity remains after all guarantees are satisfied: rate = guarantee + (share / total shares) * (capacity - total guarantees).
For instance, assume that there are 5 slices on a node with 10 Mbps
of available bandwidth, and that every slice except for
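The share-plus-guarantee behavior described earlier (a guaranteed slice receives its guarantee plus its proportional share of the remainder) can be sketched as follows; the function and slice names are hypothetical:

```python
def allocate(slices, capacity):
    """Allocate capacity among slices as guarantee + fair share of the rest.

    slices: dict mapping name -> (shares, guarantee). Guarantees are
    satisfied first; the remaining capacity is split in proportion to
    shares among all competing slices.
    """
    total_shares = sum(s for s, _ in slices.values())
    remainder = capacity - sum(g for _, g in slices.values())
    return {name: g + remainder * s / total_shares
            for name, (s, g) in slices.items()}

# Two equal-share slices competing for 10 Mbps, one guaranteed 2 Mbps:
print(allocate({"a": (1, 2.0), "b": (1, 0.0)}, 10.0))
# a -> 6.0 Mbps (2 guaranteed + half of the remaining 8), b -> 4.0 Mbps
```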
Even though an Internet2 route may be exempt from both node and slice caps, access to that bandwidth is still fairly shared. Because of this fact, if many slices on a node are all simultaneously transmitting, the fact that some routes are exempt from the node cap may not matter. Suppose that a node with four slices has a 10 Mbps node cap and that the actual maximum link speed is 20 Mbps. If two of the slices transmit over regular routes and the other two slices transmit over exempt routes, each will receive 5 Mbps of bandwidth. Even though the latter two slices are transmitting over exempt routes, access to the 20 Mbps of total bandwidth is fairly shared up to the maximum rate over each class of routes. If more total bandwidth were available—say, 100 Mbps instead of 20 Mbps—then the latter two slices would each receive up to 45 Mbps of bandwidth.
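The appendix example can be worked through numerically. This sketch assumes the simplified model implied above, in which slices on regular routes fairly share the node cap and slices on exempt routes fairly share the link capacity left over; the function name is hypothetical:

```python
def split(node_cap, link_speed, n_regular, n_exempt):
    """Split bandwidth between slices on capped (regular) routes and
    slices on exempt (Internet2) routes.

    Regular slices fairly share the node cap; exempt slices fairly
    share whatever link capacity remains beyond the capped traffic.
    """
    regular_each = node_cap / n_regular if n_regular else 0.0
    exempt_pool = link_speed - n_regular * regular_each
    exempt_each = exempt_pool / n_exempt if n_exempt else 0.0
    return regular_each, exempt_each

# 10 Mbps node cap, 20 Mbps link, two regular and two exempt slices:
print(split(10.0, 20.0, 2, 2))    # (5.0, 5.0): every slice gets 5 Mbps
# With a 100 Mbps link instead, the exempt slices get 45 Mbps each:
print(split(10.0, 100.0, 2, 2))   # (5.0, 45.0)
```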
 Traditionally, in the context of data transmission, prefixes are powers of 10, so 10 megabits per second (Mbps) is equivalent to 10000000 bits per second. Confusingly, the tc command used to configure the HTB scheduler on Linux interprets "bps" to mean "bytes per second" and "Mbps" to mean "megabytes per second". In this document, "bps" is an abbreviation for the more traditional "bits per second", and "Mbps" is an abbreviation for "megabits per second", or 1000000 bits per second.
 Technically, each slice is guaranteed a negligible rate of 8 bps to satisfy the requirements of the HTB borrowing model. For all intents and purposes, however, an 8 bps guarantee means no guarantee.
 500000 bits per second / 8 bits per byte * 24 hours per day * 60 minutes per hour * 60 seconds per minute = 5400000000 bytes = 5.4 gigabytes.
 The actual calculation performed by the tc command pads this value with an additional 1600 bytes for safety.