Flow 1.0.1
Flow project: Full implementation reference.
cong_ctl.hpp
1/* Flow
2 * Copyright 2023 Akamai Technologies, Inc.
3 *
4 * Licensed under the Apache License, Version 2.0 (the
5 * "License"); you may not use this file except in
6 * compliance with the License. You may obtain a copy
7 * of the License at
8 *
9 * https://www.apache.org/licenses/LICENSE-2.0
10 *
11 * Unless required by applicable law or agreed to in
12 * writing, software distributed under the License is
13 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
14 * CONDITIONS OF ANY KIND, either express or implied.
15 * See the License for the specific language governing
16 * permissions and limitations under the License. */
17
18/// @file
19#pragma once
20
24#include "flow/util/util.hpp"
25#include "flow/log/log.hpp"
26#include <map>
27#include <iostream>
28#include <boost/weak_ptr.hpp>
29
30namespace flow::net_flow
31{
32// Types.
33
34/**
35 * The abstract interface for a per-socket module that determines the socket's congestion control
36 * behavior. Each Peer_socket is to create an instance of a concrete subclass of this class,
37 * thus determining that socket's can-send policy.
38 *
39 * ### Congestion control ###
40 * This term refers to the process of deciding when to send data, assuming data are available
41 * to send. In a magic world we could send all available data immediately upon their becoming
42 * available, but in the real world doing so at a high volume or in poor networks will eventually
43 * lead to packets being dropped somewhere along the way or at the destination. To determine
44 * whether we can send data (if available), we maintain `m_snd_flying_bytes` (how much we think is
45 * currently In-flight, i.e. sent by us but not yet Acknowledged by the receiver or Dropped by the
46 * network or receiver) and CWND (how much data we think the route, or pipe, can hold In-flight
47 * without dropping too much). Basically if Peer_socket::m_snd_flying_bytes < CWND, data could be sent.
48 *
49 * How to determine CWND though? That is the main question in congestion control. In `net_flow`,
50 * the Peer_socket data member `Congestion_control_strategy sock->m_cong_ctl` provides the API that
51 * returns CWND (how it does so depends on the real type implementing that interface). In order to
52 * compute/update CWND, Congestion_control_strategy and all subclasses have `const` access to `sock`.
53 * Since it does not get its own thread, it needs to be informed of various events on `sock` (ACKs,
54 * timeouts) so that it can potentially recompute its CWND. Thus the interface consists of,
55 * basically: congestion_window_bytes() (obtain CWND for comparison to In-flight bytes in
56 * `can_send()`); and `on_...()` methods to effect change in the internally stored CWND.
57 *
58 * ### Object life cycle ###
59 * There is a strict 1-to-1 relationship between one Congestion_control_strategy
60 * instance and one Peer_socket. A Congestion_control_strategy is created shortly after Peer_socket
61 * is and is saved inside the latter. Conversely a pointer to the Peer_socket is stored inside the
62 * Congestion_control_strategy. Many congestion control algorithms need (read-only) access to the
63 * innards of a socket; for example they may frequently access Peer_socket::m_snd_smoothed_round_trip_time
64 * (SRTT) for various calculations. The containing Peer_socket must exist at all times while
65 * Congestion_control_strategy exists. Informally it's recommended that Peer_socket destructor or
66 * other method deletes its Congestion_control_strategy instance when it is no longer needed.
67 *
68 * ### Functionality provided by this class ###
69 * The main functionality is the aforementioned interface.
70 *
71 * Secondarily, this main functionality is implemented as do-nothing methods (as opposed to pure
72 * methods). Finally, the class stores a pointer to the containing Peer_socket, as a convenience
73 * for subclasses.
74 *
75 * ### General design note ###
76 * TCP RFCs like RFC 5681 tend to present a monolithic congestion
77 * control algorithm. For example, Fast Retransmit/Fast Recovery is described as one combined
78 * algorithm in RFC 5681-3.2. It doesn't describe loss detection (3 dupe-ACKs), retransmission
79 * (send first segment found to be lost), congestion window adjustment and loss recovery
80 * (CWND/SSTHRESH halving, CWND inflation/deflation) as belonging to separate modules working
81 * together but rather as one monolithic algorithm. The design we use here is an attempt to
82 * separate those things into distinct modules (classes or at least methods) that work together but
83 * are abstractly separated within reason. This should result in cleaner, clearer, more
84 * maintainable code. Namely, retransmission and loss detection are (separate) in Node, while CWND
85 * manipulations are in Congestion_control_strategy. Congestion_control_strategy's focus is, as
86 * narrowly as possible, to compute CWND based on inputs from Node. This may run counter to
87 * how a given RFC is written, since the part of "Reno" involving counting dupe-ACKs is not at all
88 * part of the Reno congestion control module (Congestion_control_classic) but directly inside Node
89 * instead.
90 *
91 * ### Assumptions about outside world ###
92 * To support an efficient and clean design, it's important to
93 * cleanly delineate the level of abstraction involved in the Congestion_control_strategy class
94 * hierarchy. How stand-alone is the hierarchy? What assumptions does it make about its underlying
95 * socket's state? The answer is as follows. The hierarchy only cares about
96 *
97 * 1. events (`on_...()` methods like on_acks()) that occur on the socket (it must know about EVERY
98 * occurrence of each event as soon as it occurs);
99 * 2. certain state of the socket (such as Peer_socket::m_snd_flying_bytes).
100 *
101 * The documentation for each `on_...()` method must therefore exactly specify what the event means.
102 * Similarly, either the doc header for the appropriate method or the class doc header must
103 * specify what, if any, Peer_socket state must be set and how. That said, even with clean
104 * documentation, the key point is that the Congestion_control_strategy hierarchy must work TOGETHER
105 * with the Node and Peer_socket; there is no way to make the programmer of a congestion control
106 * module be able to ignore the relevant details (like how ACKs are generated and
107 * handled) of Node and Peer_socket. The reverse is not true; congestion control MUST be a black
108 * box to the Node code; it can provide inputs (as events and arguments to the `on_...()` calls; and
109 * as read-only state in Peer_socket) to Congestion_control_strategy objects, but it must have no
110 * access (read or write) to the latter's internals. This philosophy is loosely followed in the
111 * Linux kernel code, though in my opinion Linux kernel code (in this case) is not quite as
112 * straightforward or clean as it could be (which I humbly tried to improve upon here). Bottom line:
113 *
114 * 1. Congestion_control_strategy and subclasses are a black box to Node/Peer_socket code (no
115 * access to internals; access only to constructors/destructor and API).
116 * 2. Congestion_control_strategy and subclasses have `const` (!) `friend` access to Peer_socket
117 * internals.
118 * 3. The programmer of any Congestion_control_strategy subclass must assume a certain event
119 * model to be followed by Node. This model is to be explicitly explained in the doc headers
120 * for the various `on_...()` methods. Node must call the `on_...()` methods as soon as it
121 * detects the appropriate events, and it should aim to detect them as soon as possible after
122 * they occur.
123 * 4. The programmer of any Congestion_control_strategy subclass may assume the existence
124 * and meaning of certain state of Peer_socket, which she can use to make internal
125 * computations. Any such state (i.e., in addition to the on-event calls, and their
126 * arguments, in (3)) must be explicitly documented in the class or method doc headers.
127 *
128 * ### Choice of congestion window units ###
129 * We choose bytes, instead of multiples of max-block-size (in
130 * TCP, this would be maximum segment size [MSS]). Either would have been feasible. TCP RFCs use
131 * bytes (even if most of the math involves incrementing/decrementing in multiples of MSS); so does
132 * at least the canonical BSD TCP implementation (Stevens/Wright, TCP/IP Illustrated Vol. 2) from
133 * the early 90s (not sure about modern version). Linux TCP uses multiples of MSS, as do many
134 * papers on alternative congestion control methods. Reasoning for our choice of bytes: First, the
135 * particular congestion control implementation can still do all internal math in multiples of
136 * max-block-size (if desired) and then just multiply by that in congestion_window_bytes() at the
137 * last moment. Second, I've found that certain algorithms, namely Appropriate Byte Counting, are
138 * difficult to perform in terms of multiples while staying true to the algorithm as written (one
139 * either has to lose some precision or maintain fractional CWND parts which cancels out some of the
140 * simplicity in using the multiples-of-MSS accounting method). Thus it seemed natural to go with
141 * the more flexible approach. The only cost of that is that Node, when using
142 * congestion_window_bytes(), must be ready for it to return a non-multiple-of-max-block-size value
143 * and act properly. This is not at all a major challenge in practice.
144 *
145 * ### Terminology ###
146 * In doc comments throughout this class hierarchy, the terms "simultaneously,"
147 * "immediately," and "as soon as" are to be interpreted as follows: Within a non-blocking amount of
148 * time. Note that that is not the same as literally "as soon as possible," because for efficiency
149 * the Node implementation may choose to perform other non-blocking actions first. For example,
150 * on_acks() is to be called "as soon as" a packet acknowledgment is received, but the Node can and
151 * should first accumulate all other acks that have already arrived, and only then call on_acks()
152 * for all of them. Thus, in practice, in the `net_flow` implementation, "immediately/as soon
153 * as/simultaneously" is the same as "within the same boost.asio handler invocation," because each
154 * handler is written to complete without blocking (sleeping).
155 *
156 * ### Thread safety ###
157 * Unless stated otherwise, a Congestion_control_strategy object is to be accessed
158 * from the containing Peer_socket's Node's thread W only.
159 *
160 * How to add a new Congestion_control_strategy subclass? First write the code to above spec (using
161 * existing strategies as a basis -- especially Congestion_control_classic, upon which most others
162 * are usually based) and put it in the self-explanatory location. Next, add it to the socket
163 * option machinery, so that it can be programmatically selected for a given Node or Peer_socket,
164 * or selected by the user via a config file or command line, if the application-layer programmer so
165 * chooses. To do so, add it to the following locations (by analogy with existing ones):
166 *
167 * - `enum Peer_socket_options::Congestion_control_strategy_choice::Congestion_control_strategy_choice.`
168 * - Congestion_control_selector::S_ID_TO_STRATEGY_MAP `static` initializer.
169 * - Congestion_control_selector::S_STRATEGY_TO_ID_MAP `static` initializer.
170 * - Factory method Congestion_control_selector::create_strategy().
171 *
172 * Voila! You can now use the new congestion control algorithm.
173 *
174 * @todo Tuck away all congestion control-related symbols into new `namespace cong_ctl`?
175 */
176class Congestion_control_strategy :
177 public util::Null_interface,
178 public log::Log_context,
179 private boost::noncopyable
180{
181public:
182 // Methods.
183
184 /**
185 * Returns the maximal number of bytes (with respect to `m_data` field of DATA packets) that this
186 * socket should allow to be In-flight at this time. Bytes, if available, can be sent if and only
187 * if this value is greater than the # of In-flight bytes at this time.
188 *
189 * This is pure. Each specific congestion control strategy must implement this.
190 *
191 * @note For definition of In-flight bytes, see Peer_socket::m_snd_flying_pkts_by_sent_when.
192 * @return See above.
193 */
194 virtual size_t congestion_window_bytes() const = 0;
195
196 /**
197 * Informs the congestion control strategy that 1 or more previously sent packets whose status was
198 * In-flight just received acknowledgments, thus changing their state from In-flight to
199 * Acknowledged. For efficiency and simplicity of behavior, on_acks() should be called as few
200 * times as possible while still satisfying the requirement in the previous sentence. That is,
201 * suppose acknowledgments for N packets were received simultaneously. Then on_acks() must be
202 * called one time, with the "packets" argument equal to N -- not, say, N times with `packets == 1`.
203 *
204 * The acknowledgments that led to this on_acks() call also result in 1 or more individual
205 * on_individual_ack() calls covering each individual packet. You MUST call
206 * on_individual_ack() and THEN call on_acks().
207 *
208 * If the acknowledgment group that led to on_acks() also exposed the loss of some packets, i.e.,
209 * if the criteria for on_loss_event() also hold, then you MUST call on_loss_event() and THEN call
210 * on_acks(). (Informal reasoning: the ACKs are exposing drop(s) that occurred in the past,
211 * chronologically before the ACKed packets arrived. Thus the events should fire in that order.)
212 *
213 * You MUST call on_acks() AFTER Peer_socket state (Peer_socket::m_snd_flying_pkts_by_sent_when et al) has been
214 * updated to reflect the acknowledgments being reported here.
215 *
216 * Assumptions about ACK sender (DATA receiver): It is assumed that:
217 *
218 * - Every DATA packet is acknowledged at most T after it was received, where T is some
219 * reasonably small constant time period (see Node::S_DELAYED_ACK_TIMER_PERIOD). (I.e., an
220 * ACK may be delayed w/r/t DATA reception but only up to a certain delay T.)
221 * - All received but not-yet-acked DATA packets are acknowledged as soon as there are at least
222 * Node::S_MAX_FULL_PACKETS_BEFORE_ACK_SEND * max-block-size bytes in the received but
223 * not-yet-acked DATA packets. (I.e., every 2nd DATA packet forces an immediate ACK to be
224 * sent.)
225 * - If, after the DATA receiver has processed all DATA packets that were received
226 * simultaneously, at least one of those DATA packets has a higher sequence number than a
227 * datum the receiver has not yet received, then all received but not-yet-acked DATA packets
228 * are acknowledged. (I.e., every out-of-order DATA packet forces an immediate ACK to be
229 * sent.)
230 *
231 * @note congestion_window_bytes() may return a higher value after this call. You should check
232 * `can_send()`.
233 * @note Acknowledgments of data that are not currently In-flight due to being Dropped (a/k/a
234 * late ACKs) or Acknowledged (i.e., duplicate ACKs) must NOT be passed to this method.
235 * @note If an acknowledgment for packet P transmission N is received, while packet P transmission
236 * M != N is the one currently In-flight (i.e., packet was retransmitted, but the earlier
237 * incarnation was late-acked), such acknowledgments must NOT be passed to this method.
238 * We may reconsider this in the future.
239 * @note on_acks() makes no assumptions about how the reported individual packet acks were
240 * packaged by the ACK sender into actual ACK packets (how many ACKs there were, etc.).
241 * It just assumes every individual acknowledgment is reported to on_acks() as soon as
242 * possible and grouped into as few on_acks() calls as possible.
243 * @note For definition of In-flight, Acknowledged, and Dropped bytes, see
244 * Peer_socket::m_snd_flying_pkts_by_sent_when and Peer_socket::m_snd_flying_pkts_by_seq_num.
245 *
246 * @param bytes
247 * The sum of the number of bytes in the user data fields of the packets that have been
248 * Acknowledged. Must not be zero.
249 * @param packets
250 * The number of packets thus Acknowledged.
251 */
252 virtual void on_acks(size_t bytes, size_t packets);
253
254 /**
255 * Informs the congestion control strategy that 1 or more previously sent packets whose status was
256 * In-flight have just been inferred to be Dropped by receiving acknowledgments of packets that
257 * were sent later than the now-Dropped packets. For efficiency and simplicity of behavior,
258 * on_loss_event() should be called as few times as possible while still satisfying the requirement in
259 * the previous sentence. That is, suppose acknowledgments for N packets were received
260 * simultaneously thus exposing M packets as dropped. Then on_loss_event() must be called one
261 * time, with the "packets" argument equal to M and not, say, M times with packets == 1.
262 *
263 * An important addendum to the above rule is as follows. You MUST NOT call on_loss_event(), if
264 * the Dropped packets which would have led to this call are part of the same loss event as those
265 * in the preceding on_loss_event() call. How is "part of the same loss event" defined? This is
266 * formally defined within the large comment header at the top of
267 * Node::handle_accumulated_acks(). The informal short version: If the new Dropped
268 * packets were sent roughly within an RTT of those in the previous on_loss_event(), then do not
269 * call on_loss_event(). The informal reasoning for this is to avoid repeatedly and sharply
270 * reducing the congestion_window_bytes() value when 2+ groups of acks arrive close to each
271 * other but really indicate just 1 loss event.
272 *
273 * on_loss_event() must be called BEFORE on_individual_ack() and on_acks() are called for the
274 * ack group that exposed the lost packets. See on_acks() and on_individual_ack().
275 *
276 * You MUST call on_loss_event() AFTER Peer_socket state (`m_snd_flying_pkts_by_sent_when` et al) has been
277 * updated to reflect the drops being reported here.
278 *
279 * @note congestion_window_bytes() WILL NOT return a higher value after this call. You need not
280 * call `can_send()`.
281 * @note For definition of In-flight, Acknowledged, and Dropped bytes, see
282 * Peer_socket::m_snd_flying_pkts_by_sent_when and Peer_socket::m_snd_flying_pkts_by_seq_num.
283 * @note This is analogous to the 3-dupe-ACKs part of the Fast Retransmit/Recovery algorithm in
284 * classic TCP congestion control (e.g., RFC 5681).
285 *
286 * @param bytes
287 * The sum of the number of bytes in the user data fields of the packets that have been
288 * Dropped. Must not be zero.
289 * @param packets
290 * The number of packets thus Dropped.
291 */
292 virtual void on_loss_event(size_t bytes, size_t packets);
293
294 /**
295 * Informs the congestion control strategy that exactly 1 previously sent packet whose status was
296 * In-flight is now known to have the given round trip time (RTT), via acknowledgment. In other
297 * words, this informs congestion control of each valid individual-packet acknowledgment of a
298 * packet that was In-flight at time of acknowledgment.
299 *
300 * The acknowledgment that led to the given individual RTT measurement also results in a
301 * consolidated on_acks() call that covers that packet and all other packets acked simultaneously;
302 * you MUST call this on_individual_ack() and THEN call on_acks(). on_individual_ack()
303 * should be called in the order of receipt of the containing ACK that led to the RTT measurement;
304 * if two RTTs are generated from one ACK, the tie should be broken in the order of appearance
305 * within the ACK.
306 *
307 * If the acknowledgment group that led to on_individual_ack() also exposed the loss of some packets,
308 * i.e., if the criteria for on_loss_event() also hold, then you MUST call on_loss_event() and
309 * THEN call on_individual_ack(). (Informal reasoning: the ACKs are exposing drop(s) that occurred in the
310 * past, chronologically before the ACKed packets arrived. Thus the events should fire in that
311 * order.)
312 *
313 * You MUST call on_individual_ack() AFTER Peer_socket state (Peer_socket::m_snd_flying_pkts_by_sent_when
314 * et al) has been updated to reflect the acknowledgments being reported here.
315 *
316 * Assumptions about RTT value: The RTT value is assumed to include only the time spent in transit
317 * from sender to receiver plus the time the ACK spent in transit from receiver to sender. Any
318 * delay (such as ACK delay) adding to the total time from sending DATA to receiving ACK is *not*
319 * to be included in the RTT. RTT is meant to measure network conditions/capacity.
320 *
321 * @note congestion_window_bytes() may return a higher value after this call, but you should wait
322 * to query it until after calling on_acks() for the entire round of acknowledgments being
323 * handled.
324 * @note Acknowledgments of data that is not currently In-flight due to being Dropped (a/k/a late
325 * ACKs) or Acknowledged (i.e., duplicate ACKs) must NOT be passed to this method.
326 * @note If an acknowledgment for packet P transmission N is received, while packet P transmission
327 * M != N is the one currently In-flight (i.e., packet was retransmitted, but the earlier
328 * incarnation was late-acked), such acknowledgments must NOT be passed to this method.
329 * We may reconsider this in the future.
330 * @note For definition of In-flight, Acknowledged, and Dropped bytes, see
331 * Peer_socket::m_snd_flying_pkts_by_sent_when and Peer_socket::m_snd_flying_pkts_by_seq_num.
332 *
333 * @param packet_rtt
334 * Round trip time of an individual packet.
335 * @param bytes
336 * The number of bytes of user data corresponding to this RTT sample (i.e., # of bytes
337 * acknowledged in the acknowledged packet).
338 * @param sent_cwnd_bytes
339 * congestion_window_bytes() when acked DATA packet was sent.
340 */
341 virtual void on_individual_ack(const Fine_duration& packet_rtt, const size_t bytes, const size_t sent_cwnd_bytes);
342
343 /**
344 * Informs the congestion control strategy that 1 or more previously sent packets whose status was
345 * In-flight have just been inferred to be Dropped because of the Drop Timer expiring. A formal
346 * description of what "Drop Timer expiring" means is too large to put here, and there are many
347 * different ways to do it. See class Drop_timer and the code that uses it. Formally, we expect
348 * that one Drop Timer is running if and only if there is at least one packet In-flight, and that
349 * that Drop Timer expiring implies the immediate conversion of at least one packet from In-flight
350 * to Dropped. We also expect that, informally, the Drop Timeout indicates serious loss events
351 * and is the 2nd and last resort in detecting loss, the main (and more likely to trigger first)
352 * one being on_loss_event().
353 *
354 * You MUST call on_drop_timeout() AFTER Peer_socket state (Peer_socket::m_snd_flying_pkts_by_sent_when
355 * et al) has been updated to reflect the drops being reported here.
356 *
357 * @note congestion_window_bytes() WILL NOT return a higher value after this call. You need not
358 * call `can_send()`.
359 * @note For definition of In-flight, Acknowledged, and Dropped bytes, see
360 * Peer_socket::m_snd_flying_pkts_by_sent_when and Peer_socket::m_snd_flying_pkts_by_seq_num.
361 * @note This is analogous to the Retransmit Timeout (RTO) algorithm in classic TCP congestion
362 * control (e.g., RFCs 5681 and 6298).
363 *
364 * @param bytes
365 * The sum of the number of bytes in the user data fields of the packets that have been
366 * Dropped with this Drop Timeout. Must not be zero.
367 * @param packets
368 * The number of packets thus Dropped.
369 */
370 virtual void on_drop_timeout(size_t bytes, size_t packets);
371
372 /**
373 * Informs the congestion control strategy that Node considers the connection to be "idle" by
374 * virtue of no desired send activity on the part of the user application for some period of time.
375 * Informally, this means "the user hasn't wanted to send anything for a while, so you may want to
376 * update your CWND calculations based on that fact, as it's likely you have obsolete information
377 * about the connection." For example, if a connection has been idle for 5 minutes, then there
378 * have been no ACKs for a while, and since ACKs typically are the tool used to gather congestion
379 * data and thus compute CWND, the internal CWND may be reset to its default initial value within
380 * on_idle_timeout().
381 *
382 * The formal definition of Idle Timeout is the one used in Node::send_worker(). Short version:
383 * an Idle Timeout occurs T after the last packet to be sent out, where T is the Drop Timeout (see
384 * on_drop_timeout()).
385 *
386 * @note congestion_window_bytes() WILL NOT return a higher value after this call. You need not
387 * call `can_send()`.
388 * @note This is analogous to the "Restarting Idle Connections" algorithm in classic TCP
389 * congestion control (RFC 5681-4.1).
390 */
391 virtual void on_idle_timeout();
392
393protected:
394
395 /**
396 * Constructs object by setting up logging and saving a pointer to the containing Peer_socket.
397 * Only a weak pointer of `sock` is stored: the `shared_ptr` itself is not saved, so the reference
398 * count of `sock` does not increase. This avoids a circular `shared_ptr` situation.
399 *
400 * @param logger_ptr
401 * The Logger implementation to use subsequently.
402 * @param sock
403 * The Peer_socket for which this module will control congestion policy.
404 */
405 Congestion_control_strategy(log::Logger* logger_ptr, Peer_socket::Const_ptr sock);
406
407 /**
408 * Utility for subclasses that returns a handle to the containing Peer_socket. If somehow the
409 * containing Peer_socket has been deleted, `assert()` trips.
410 *
411 * @return Ditto.
412 */
413 Peer_socket::Const_ptr socket() const;
414
415private:
416
417 // Data.
418
419 /**
420 * The containing socket (read-only access). Implementation may rely on various state stored
421 * inside the pointed-to Peer_socket.
422 *
423 * Why `weak_ptr`? If we stored a `shared_ptr` (Peer_socket::Const_ptr) then something would have to
424 * delete this Congestion_control_strategy object before the pointee Peer_socket's ref-count could
425 * drop to zero and it too could be deleted; this is undesirable since the guy that would want to
426 * delete this Congestion_control_strategy is Peer_socket's destructor itself (standard circular
427 * `shared_ptr` problem). If we stored a raw const `Peer_socket*` pointer instead, that would be
428 * fine. However, using a `weak_ptr` allows us to `assert()` in a civilized way if the underlying
429 * Peer_socket had been deleted (instead of crashing due to accessing deleted memory as we would
430 * with a raw pointer). This isn't really a big deal, since hopefully our code will be written
431 * properly to avoid this, but this is just a little cleaner.
432 */
433 boost::weak_ptr<Peer_socket::Const_ptr::element_type> m_sock;
434}; // class Congestion_control_strategy
435
436/**
437 * Namespace-like class that enables an `enum`-based selection of the Congestion_control_strategy
438 * interface implementation to use for a given socket (for programmatic socket options) and
439 * facilitates stream I/O of these enums (allowing parsing and outputting these socket options).
440 *
441 * ### Provided facilities ###
442 * - Create Congestion_control_strategy subclass instance based on `enum` value;
443 * - Return set of possible text IDs, each representing a distinct `enum` value;
444 * - Read such an ID from an `istream` into an `enum` value; write an ID from an `enum` value to an `ostream`.
445 */
446class Congestion_control_selector :
447 private boost::noncopyable
448{
449public:
450 // Types.
451
452 /// Short-hand for Peer_socket_options::Congestion_control_strategy_choice.
453 using Strategy_choice = Peer_socket_options::Congestion_control_strategy_choice;
454
455 // Methods.
456
457 /**
458 * Factory method that, given an `enum` identifying the desired strategy, allocates the appropriate
459 * Congestion_control_strategy subclass instance on the heap and returns a pointer to it.
460 *
461 * @param strategy_choice
462 * The type of strategy (congestion control algorithm) to use.
463 * @param logger_ptr
464 * The Logger implementation to use subsequently.
465 * @param sock
466 * The Peer_socket for which this module will control congestion policy.
467 * @return Pointer to newly allocated instance.
468 */
469 static Congestion_control_strategy* create_strategy(Strategy_choice strategy_choice,
470 log::Logger* logger_ptr, Peer_socket::Const_ptr sock);
471
472 /**
473 * Returns a list of strings, called IDs, each of which textually represents a distinct
474 * Congestion_control_strategy subclass. You can output this in socket option help text as the
475 * possible choices for congestion control strategy. They will match the possible text inputs to
476 * `operator>>(istream&, Strategy_choice&)`.
477 *
478 * @param ids
479 * The `vector` of strings to clear and fill with the above.
480 */
481 static void get_ids(std::vector<std::string>* ids);
482
483private:
484 // Friends.
485
486 // Friend of Congestion_control_selector: For access to Congestion_control_selector::S_ID_TO_STRATEGY_MAP and so on.
487 friend std::istream& operator>>(std::istream& is,
488 Congestion_control_selector::Strategy_choice& strategy_choice);
489 // Friend of Congestion_control_selector: For access to Congestion_control_selector::S_STRATEGY_TO_ID_MAP and so on.
490 friend std::ostream& operator<<(std::ostream& os,
491 const Congestion_control_selector::Strategy_choice& strategy_choice);
492
493 // Data.
494
495 /// Maps each ID to the corresponding #Strategy_choice `enum` value.
496 static const std::map<std::string, Strategy_choice> S_ID_TO_STRATEGY_MAP;
497 /// The inverse of #S_ID_TO_STRATEGY_MAP.
498 static const std::map<Strategy_choice, std::string> S_STRATEGY_TO_ID_MAP;
499
500 // Privacy stubs.
501
502 /// Forbid all instantiation.
503 Congestion_control_selector() = delete;
504}; // class Congestion_control_selector
505
506} // namespace flow::net_flow