Flow 1.0.0
Flow project: Public API.
flow::net_flow::Node Class Reference

An object of this class is a single Flow-protocol networking node, in the sense that: (1) it has a distinct IP address and UDP port; and (2) it speaks the Flow protocol over a UDP transport layer. More...

#include <node.hpp>

Inheritance diagram for flow::net_flow::Node: [figure omitted]
Collaboration diagram for flow::net_flow::Node: [figure omitted]

Public Member Functions

 Node (log::Logger *logger, const util::Udp_endpoint &low_lvl_endpoint, Net_env_simulator *net_env_sim=0, Error_code *err_code=0, const Node_options &opts=Node_options())
 Constructs Node. More...
 
 ~Node () override
 Destroys Node. More...
 
bool running () const
 Returns true if and only if the Node is operating. More...
 
const util::Udp_endpoint & local_low_lvl_endpoint () const
 Return the UDP endpoint (IP address and UDP port) which will be used for receiving incoming and sending outgoing Flow traffic in this Node. More...
 
Peer_socket::Ptr connect (const Remote_endpoint &to, Error_code *err_code=0, const Peer_socket_options *opts=0)
 Initiates an active connect to the specified remote Flow server. More...
 
Peer_socket::Ptr connect_with_metadata (const Remote_endpoint &to, const boost::asio::const_buffer &serialized_metadata, Error_code *err_code=0, const Peer_socket_options *opts=0)
 Same as connect() but sends, as part of the connection handshake, the user-supplied metadata, which the other side can access via Peer_socket::get_connect_metadata() after accepting the connection. More...
 
template<typename Rep , typename Period >
Peer_socket::Ptr sync_connect (const Remote_endpoint &to, const boost::chrono::duration< Rep, Period > &max_wait, Error_code *err_code=0, const Peer_socket_options *opts=0)
 The blocking (synchronous) version of connect(). More...
 
template<typename Rep , typename Period >
Peer_socket::Ptr sync_connect_with_metadata (const Remote_endpoint &to, const boost::chrono::duration< Rep, Period > &max_wait, const boost::asio::const_buffer &serialized_metadata, Error_code *err_code=0, const Peer_socket_options *opts=0)
 A combination of sync_connect() and connect_with_metadata() (blocking connect, with supplied metadata). More...
 
Peer_socket::Ptr sync_connect (const Remote_endpoint &to, Error_code *err_code=0, const Peer_socket_options *opts=0)
 Equivalent to sync_connect(to, duration::max(), err_code, opts); i.e., sync_connect() with no user timeout. More...
 
Peer_socket::Ptr sync_connect_with_metadata (const Remote_endpoint &to, const boost::asio::const_buffer &serialized_metadata, Error_code *err_code=0, const Peer_socket_options *opts=0)
 Equivalent to sync_connect_with_metadata(to, duration::max(), serialized_metadata, err_code, opts); i.e., sync_connect_with_metadata() with no user timeout. More...
 
Server_socket::Ptr listen (flow_port_t local_port, Error_code *err_code=0, const Peer_socket_options *child_sock_opts=0)
 Sets up a server on the given local Flow port and returns Server_socket which can be used to accept subsequent incoming connections to this server. More...
 
Event_set::Ptr event_set_create (Error_code *err_code=0)
 Creates a new Event_set in Event_set::State::S_INACTIVE state with no sockets/events stored; returns this Event_set. More...
 
void interrupt_all_waits (Error_code *err_code=0)
 Interrupts any blocking operation, a/k/a wait, and informs the invoker of that operation that the wait was interrupted. More...
 
bool set_options (const Node_options &opts, Error_code *err_code=0)
 Dynamically replaces the current options set (options()) with the given options set. More...
 
Node_options options () const
 Copies this Node's option set and returns that copy. More...
 
size_t max_block_size () const
 The maximum number of bytes of user data per received or sent block on connections generated from this Node, unless this value is overridden in the Peer_socket_options argument to listen() or connect() (or friend). More...
 
- Public Member Functions inherited from flow::util::Null_interface
virtual ~Null_interface ()=0
 Boring virtual destructor. More...
 
- Public Member Functions inherited from flow::log::Log_context
 Log_context (Logger *logger=0)
 Constructs Log_context by storing the given pointer to a Logger and a null Component. More...
 
template<typename Component_payload >
 Log_context (Logger *logger, Component_payload component_payload)
 Constructs Log_context by storing the given pointer to a Logger and a new Component storing the specified generically typed payload (an enum value). More...
 
 Log_context (const Log_context &src)
 Copy constructor that stores equal Logger* and Component values as the source. More...
 
 Log_context (Log_context &&src)
 Move constructor that makes this equal to src, while the latter becomes as-if default-constructed. More...
 
Log_context & operator= (const Log_context &src)
 Assignment operator that behaves similarly to the copy constructor. More...
 
Log_context & operator= (Log_context &&src)
 Move assignment operator that behaves similarly to the move constructor. More...
 
void swap (Log_context &other)
 Swaps Logger pointers and Component objects held by *this and other. More...
 
Logger * get_logger () const
 Returns the stored Logger pointer, particularly as many FLOW_LOG_*() macros expect. More...
 
const Component & get_log_component () const
 Returns reference to the stored Component object, particularly as many FLOW_LOG_*() macros expect. More...
 

Static Public Attributes

static const size_t & S_NUM_PORTS = Port_space::S_NUM_PORTS
 Total number of Flow ports in the port space, including S_PORT_ANY.
 
static const size_t & S_NUM_SERVICE_PORTS = Port_space::S_NUM_SERVICE_PORTS
 Total number of Flow "service" ports (ones that can be reserved by number with Node::listen()).
 
static const size_t & S_NUM_EPHEMERAL_PORTS = Port_space::S_NUM_EPHEMERAL_PORTS
 Total number of Flow "ephemeral" ports (ones reserved locally at random with Node::listen(S_PORT_ANY) or Node::connect()).
 
static const flow_port_t & S_FIRST_SERVICE_PORT = Port_space::S_FIRST_SERVICE_PORT
 The port number of the lowest service port, making the range of service ports [S_FIRST_SERVICE_PORT, S_FIRST_SERVICE_PORT + S_NUM_SERVICE_PORTS - 1].
 
static const flow_port_t & S_FIRST_EPHEMERAL_PORT = Port_space::S_FIRST_EPHEMERAL_PORT
 The port number of the lowest ephemeral Flow port, making the range of ephemeral ports [S_FIRST_EPHEMERAL_PORT, S_FIRST_EPHEMERAL_PORT + S_NUM_EPHEMERAL_PORTS - 1].
 

Protected Member Functions

template<typename Peer_socket_impl_type >
Peer_socket * sock_create_forward_plus_ctor_args (const Peer_socket_options &opts)
 Returns a raw pointer to newly created Peer_socket or sub-instance like asio::Peer_socket, depending on the template parameter. More...
 
template<typename Server_socket_impl_type >
Server_socket * serv_create_forward_plus_ctor_args (const Peer_socket_options *child_sock_opts)
 Like sock_create_forward_plus_ctor_args() but for Server_sockets. More...
 

Static Protected Attributes

static const uint8_t S_DEFAULT_CONN_METADATA = 0
 Type and value to supply as user-supplied metadata in SYN, if user chooses to use [[a]sync_]connect() instead of [[a]sync_]connect_with_metadata(). More...
 

Detailed Description

An object of this class is a single Flow-protocol networking node, in the sense that: (1) it has a distinct IP address and UDP port; and (2) it speaks the Flow protocol over a UDP transport layer.

Here we summarize class Node and its entire containing Flow module flow::net_flow.

See also flow::asio::Node, a subclass that allows for full use of our API (its superclass) and turns our sockets into boost.asio I/O objects, able to participate with ease in all boost.asio event loops. If you're already very familiar with boost::asio::ip::tcp, you can skip to the asio::Node doc header. If not, we recommend becoming comfortable with the asio-less API first, then reading the aforementioned asio::Node doc header.

The flow::asio::Node class doc header (as of this writing) includes a compact summary of all network operations supported by the entire hierarchy and hence deserves a look for your convenience.

Using flow::net_flow, starting with the present class Node

Node is an important and central class of the net_flow Flow module and thus deserves some semi-philosophical discussion, namely what makes a Node a Node – why the name? Let's delve into the 2 aforementioned properties of a Node.

A Node has a distinct IP address and UDP port: util::Udp_endpoint

A Node binds to an IP address and UDP port, both of which are given (with the usual ephemeral port and IP address<->interface(s) nomenclature) as an argument at Node::Node() construction and can never change over the lifetime of the object. The IP and port together are a util::Udp_endpoint, which is a using-alias of boost.asio's boost::asio::ip::udp::endpoint. In the same network (e.g., the Internet) no two Node objects (even in separate processes; even on different machines) may be alive (as defined by Node::running() == true) with constructor-provided util::Udp_endpoint objects R1 and R2 such that R1 == R2. In particular, if Node n1 exists, with n1.running() and n1.local_low_lvl_endpoint() == R1, and on the same machine one attempts to construct Node n2(R2), such that R1 == R2 (their IPs and ports are equal), then n2 will fail to properly construct, hence n2.running() == false will be the case, probably due to a port-already-bound OS error. (There are counter-examples with NAT'ed IP addresses and special values 0.0.0.0 and port 0, but please just ignore those and other pedantic objections and take the spirit of what I am saying.) Ultimately, the point is:

A successfully constructed (running() == true) Node occupies the same IP-and-UDP "real estate" as would a mere successfully bound UDP socket.

So all that was a long, overbearing way to emphasize that a Node binds to an IP address and UDP port, and a single such combo may have at most one Node on it (unless it has !running()). That's why it is called a Node: it's a node on the network, especially on the Internet.

A Node speaks the Flow network protocol to other, remote Nodes

If Node n1 is successfully constructed, and Node n2 is as well, the two can communicate via a new protocol implemented by this Flow module. This protocol is capable of working with stream (TCP-like) sockets implemented on top of UDP in a manner analogous to how an OS's net-stack implements TCP over IP. So one could call this Flow/UDP. One can talk Flow/UDP to another Flow/UDP endpoint (a/k/a Node) only; no compatibility with any other protocol is supported. (This isn't, for example, an improvement to one side of TCP that is still compatible with legacy TCPs on the other end; though that is a fertile area for research in its own right.) The socket can also operate in unreliable, message boundary-preserving mode, controllable via a Flow-protocol-native socket option; in which case reliability is the responsibility of the net_flow user. By default, though, it's like TCP: message bounds are not preserved; reliability is guaranteed inside the protocol. n1 and n2 can be local in the same process, or local in the same machine, or remote in the same overall network – as long as one is routable to the other, they can talk.

For practical purposes, it's important to have an idea of a single running() Node's "weight." Is it light-weight like a UDP or TCP socket? Is it heavy-weight like an Apache server instance? The answer is that it's MUCH closer to the former: it is fairly light-weight. As of this writing, internally, it stores a table of peer and server sockets (of which there could be a handful or tons, depending on the user's own API calls prior); and uses at least one dedicated worker thread (essentially not visible to the user but conceptually similar to a UDP or TCP stack user's view of the kernel: it does stuff for one in the background – for example it can wait for incoming connections, if asked). So, a Node is an intricate but fairly light-weight object that stores socket tables (proportional in size to the sockets currently required by the Node's user) and roughly a single worker thread performing low-level I/O and other minimally CPU-intensive tasks. A Node can get busy if a high-bandwidth network is sending or receiving intense traffic, as is the case for any TCP or UDP net-stack. In fact, a Node can be seen as a little Flow-protocol stack implemented on top of UDP transport. (Historical note: class Node used to be class Stack, but this implied a heavy weight and misleadingly discouraged multiple constructions in the same program; all that ultimately caused the renaming to Node.)

Essential properties of Flow network protocol (Flow ports, mux/demuxing)

A single Node supports 0 or more (an arbitrary # of) peer-to-peer connections to other Nodes. Moreover, given two Nodes n1 and n2, there can similarly be 0 or more peer-to-peer connections running between the two. In order to allow this, a port (and therefore multiplexing/demultiplexing) system is a feature of the Flow protocol. (Whether this feature is necessary or even desirable is slightly controversial and not a settled matter – a to-do on this topic can be found below.)

More specifically, think of a given Node n1 as analogous (in terms of its multiplexing capabilities) to one TCP stack running on a one-interface machine. To recap the TCP port-addressing scheme (assuming only 1 interface): The TCP stack has approximately 2^16 (~65k) ports available. One may create and "bind" a server "socket" to (more or less, for our discussion) any 1 of these ports. Let's say a server socket is bound to port P1. If a remote TCP stack successfully connects to such a server-bound port, this results in a passively-connected client "socket," which – also – is bound to P1 (bear with me as to how this is possible). Finally, the TCP stack's user may bind an actively connecting client "socket" to another port P2 (P2 =/= P1; as P1 is reserved to that server and passively connected clients from that server). Recall that we're contriving a situation where there is only one other remote stack, so suppose there is the remote, 1-interface TCP stack. Now, let's say a packet arrives along an established connection from this stack. How does our local TCP stack determine to which connection this belongs? This is "demultiplexing." If the packet contains the info "destination port: P2," then that clearly belongs to the actively-connected client we mentioned earlier... but what if it's "dest. port: P1"? This could belong to any number of connections originally passive-connected by incoming server connection requests to port P1. Answer: the packet also contains a "source TCP port" field. So the connection ID (a/k/a socket ID) consists of BOTH pieces of data: (1) destination (local) port; (2) source (remote) port. (Recall that, symmetrically, the remote TCP stack had to have a client bind to some port, and that means no more stuff can bind to that port; so it is unique and can't clash with anything else – inside that remote stack.) So this tuple uniquely identifies the connection in this scenario of a single-interface local TCP that can have both active client sockets and passive-client-socket-spawning server sockets; and talk to other stacks like it. Of course, there can be more than one remote TCP stack. So the 2-tuple (pair) becomes a 3-tuple (triplet) in our slightly simplified version of reality: (1) destination (local) TCP port; (2) source (remote) IP address; and (3) source (remote) TCP port. (In reality, the local TCP stack can bind to different interfaces, so it becomes a 4-tuple by adding in destination (local) IP address... but that's TCP and is of no interest to our analogy to Flow protocol.)

What about Flow protocol? GIVEN n1 and n2, it works just the same. We have a special, TCP-like, Flow port space WITHIN n1 and similarly within n2. So if only n1 and n2 are involved, an n1 Server_socket (class) object can listen() (<– actual method) on a net_flow::flow_port_t (<– alias to 2-byte unsigned as of this writing) port P1; Server_socket::accept() (another method) incoming connections, each still bound to port P1; and n1 can also actively connect() (another method) to n2 at some port over there. Then an incoming UDP packet's intended established connection is demuxed to by a 2-tuple: (1) destination (local) flow_port_t; (2) source (remote) flow_port_t.

In reality, other remote Nodes can of course be involved: n3, n4, whatever. As we've established, each Node lives at a UDP endpoint: util::Udp_endpoint (again, IP address + UDP port). Therefore, from the stand-point of a given local Node n1, each established peer-to-peer connection is identified fully by the 4-tuple (marked here with roman numerals):

  1. Local flow_port_t within n1's port-space (not dissimilar to TCP's port space in size and behavior). (I)
  2. Remote endpoint identifying the remote Node: Remote_endpoint.
    1. util::Udp_endpoint.
      1. IP address. (II)
      2. UDP port. (III)
    2. Remote net_flow::flow_port_t. (IV)

So, that is how it works. Of course, if this complexity is not really necessary for some application, then only really (II) and (III) are truly necessary. (I) and (IV) can just be chosen to be some agreed-upon constant port number. Only one connection can ever exist in this situation, and one would need to create more Nodes on one side or the other or both to achieve more connections between the same pair of IP addresses, but that's totally reasonable: it's no different from simply binding to more UDP ports. My point here is that the Flow-protocol-invented construct of "Flow ports" (given as flow_port_t values) can be used to conserve UDP ports; but they can also be NOT used, and one can just use more UDP ports, as a "regular" UDP-using pair of applications would, if more than one flow of information is necessary between those two apps. It is up to you. (Again, some arguments can be made for getting rid of (I) and (IV), after all. This possibility is discussed in a to-do below.)

(Do note that, while we've emulated TCP's port scheme, there is no equivalent of IP's "interfaces." Each Node just has a bunch of ports; there is no port table belonging to each of N interfaces or any such thing.)

flow::net_flow API overview

This is a summary (and some of this is very briefly mentioned above); all the classes and APIs are much more deeply documented in their own right. Also, see above pointer to asio::Node whose doc header may be immediately helpful to experienced users. Meanwhile, to summarize:

The Node hands out sockets as Peer_socket objects; it acts as a factory for them (directly) via its connect() and (indirectly) Server_socket::accept() families of methods. It is not possible to construct a Peer_socket independently of a Node, due to tight coordination between the Node and each Peer_socket. Moreover each Peer_socket is handed out via boost::shared_ptr smart pointer. While not strictly necessary, this is a situation where both the user and a central registry (Node) can own the Peer_socket at a given time, which is an ideal application for shared_ptr<> that can greatly simplify questions of object ownership and providing auto-delete to boot.

Thus: Node::listen(flow_port_t P) yields a Server_socket::Ptr, which will listen for incoming connections on P. Server_socket::accept() (and similar) yields a Peer_socket::Ptr, one side of a peer-to-peer connection. On the other side, one calls Node::connect(Remote_endpoint R) (where R contains Udp_endpoint U, a value equal to which had earlier been passed to the constructor of the listen()ing Node; and R also contains flow_port_t P, the value passed to Node::listen()). connect(), too, yields a Peer_socket::Ptr. And thus, if all went well, each side now has a Peer_socket::Ptr S1 and S2, which – while originating quite differently – are now completely equal in capabilities: they are indeed peer sockets. They have methods like Peer_socket::send() and Peer_socket::receive().
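To make the above concrete, here is a minimal sketch of both sides in one program. It assumes two already-constructed, running() Nodes (serv_node and cli_node) and glosses over error handling; the aggregate construction of Remote_endpoint is an assumption (see the Remote_endpoint doc header for its actual layout), and the Peer_socket I/O calls appear only as comments.

    using namespace flow::net_flow;

    // --- Server side (serv_node is a constructed, running() Node) ---
    const flow_port_t service_port = Node::S_FIRST_SERVICE_PORT; // any service-range port would do
    Server_socket::Ptr serv = serv_node.listen(service_port);
    // ... later, claim an established connection (see Server_socket doc header for accept()/sync_accept()):
    Peer_socket::Ptr sock_srv = serv->accept();

    // --- Client side (cli_node is another constructed, running() Node) ---
    // Remote_endpoint bundles the server Node's util::Udp_endpoint and its Flow port (see the tuple
    // discussion above); the brace-initialization below is an assumption, not a documented constructor.
    const Remote_endpoint server_endpoint{ serv_node.local_low_lvl_endpoint(), service_port };
    Peer_socket::Ptr sock_cli = cli_node.connect(server_endpoint);

    // sock_srv and sock_cli are now equal-capability peer sockets; use Peer_socket::send()/receive()
    // (or their sync_*() variants) as documented in the Peer_socket doc header.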

Further nuances can be explored in the various other doc headers, but I'll mention that both non-blocking behavior (meaning the call always returns immediately, even if unable to immediately perform the desired task such as accept a connection or receive 1 or more bytes) and blocking behavior are supported, as in (for example) a BSD sockets API. However, there is no "blocking" or "non-blocking" mode as in BSD or WinSock (personally I, Yuri, see it as an annoying anachronism). Instead you simply call a method named according to whether it will never block or (possibly – if appropriate) block. The nomenclature convention is as follows: if the action is X (e.g., X is receive or accept), then ->X() is the non-blocking version; and ->sync_X() is the blocking one. A non-blocking version always exists for any possible action; and a blocking version exists if it makes sense for it to exist. (Exception: Event_set::async_wait() explicitly includes the async_ prefix contrary to this convention. Partially it's because just calling it wait() – convention or not – makes it sound like it's going to block, whereas it emphatically will never do so. ALSO it's because it's a "special" method with unique properties including letting the user execute their own code in a Node's internal worker thread. So rules go out the window a little bit for that method; hence the slight naming exception.)
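For example, with the connect action (a sketch; node is assumed to be a running() Node and server_endpoint a populated Remote_endpoint):

    // X(): never blocks; the connection completes asynchronously (see connect() below).
    Peer_socket::Ptr s1 = node.connect(server_endpoint);

    // sync_X(): blocks, here with an optional 10-second user timeout (see sync_connect() below).
    Peer_socket::Ptr s2 = node.sync_connect(server_endpoint, boost::chrono::seconds(10));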

Nomenclature: "low-level" instead of "UDP"

Side note: You will sometimes see the phrase low_lvl in various identifiers among net_flow APIs. low_lvl (low-level) really means "UDP" – but theoretically some other packet-based transport could be used instead in the future; or it could even be an option to choose between possible underlying protocols. For example, if net_flow moved to kernel-space, the transport could become IP, as it is for TCP. So this nomenclature is a hedge; and also it arguably is nicer/more generic: the fact it's UDP is immaterial; that it's the low-level (from our perspective) protocol is the salient fact. However, util::Udp_endpoint is thus named because it is very specifically a gosh-darned UDP port (plus IP address), so hiding from that by naming it Low_Lvl_endpoint (or something) seemed absurd.

Event, readability, writability, etc.

Any experienced user of BSD sockets, WinSock, or similar is probably wondering by now, "That sounds reasonable, but how does the API allow me to wait until I can connect/accept/read/write, letting me do other stuff in the meantime?" Again, one can use a blocking version of basically every operation; but then the wait for readability/writability/etc. may block the thread. One can work around this by creating multiple threads, but multi-threaded coding introduces various difficulties. So, the experienced socketeer will want to use non-blocking operations + an event loop + something that allows one to wait for various states (again, readability, writability, etc.) with various modes of operation (blocking, asynchronous, with or without a timeout, etc.). The most advanced and best way to get these capabilities is to use boost.asio integration (see asio::Node). As explained elsewhere (see Event_set doc header) this is sometimes not usable in practice. In that case: These capabilities are supplied in the class Event_set. See that class's doc header for further information. Event_set is the select() of this socket API. However it is significantly more convenient AND indeed supports a model that will allow one to use Flow-protocol sockets in a select()- or equivalent-based event loop, making the net_flow module usable in a true server, such as a web server. That is, you don't just have to write a separate Flow event loop operating independently of your other sockets, file handles, etc. This is an important property in practice. (Again: Ideally you wouldn't need Event_set for this; asio::Node/etc. might be better to use.)
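As a minimal sketch, here is how one obtains an Event_set from a running() Node; adding events and actually waiting are done through the Event_set API itself, so those steps appear only as comments (see the Event_set doc header for the real calls):

    Error_code err_code;
    Event_set::Ptr events = node.event_set_create(&err_code);
    if (!events)
    {
      // err_code explains the failure (e.g., error::Code::S_NODE_NOT_RUNNING).
      return;
    }
    // Next (per the Event_set doc header): add the desired sockets/events, then wait for them,
    // either synchronously (Event_set::sync_wait()) or asynchronously (Event_set::async_wait()).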

Error reporting

Like all Flow modules, net_flow uses error reporting conventions/semantics introduced in namespace flow doc header Error Reporting section.

In particular, this module does add its own error code set. See namespace net_flow::error doc header which should point you to error::Code enum. All error-emitting net_flow APIs emit Error_codes assigned from error::Code enum values.
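For example (a sketch assuming the standard Flow convention referenced above: passing a non-null Error_code* captures the code, while passing null, the default, causes a failure to surface as an exception carrying the same code):

    // Style 1: capture the error code explicitly.
    Error_code err_code;
    Peer_socket::Ptr sock = node.connect(server_endpoint, &err_code);
    if (!sock)
    {
      // err_code holds an error::Code value, e.g. error::Code::S_OUT_OF_PORTS.
    }

    // Style 2: omit the Error_code; per the convention above, a failure is then reported by exception.
    Peer_socket::Ptr sock2 = node.connect(server_endpoint);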

Configurability, statistics, logging

Great care is taken to provide visibility into the "black box" that is Flow-protocol. That is, while the API follows good practices wherein implementation is shielded away from the user, at the same time the human user has powerful tools to both examine the insides of the library/protocol's performance AND to tweak the parameters of its behavior. Picture a modern automobile: while you're driving at least, it's not going to let you look at or mess with its engine or transmission – nor do you need to understand how they work; BUT, the onboard monitor will feature screens that tell you about its fuel economy performance, the engine's inner workings, and perhaps a few knobs to control the transmission's performance (for example). Same principles are followed here.

More specifically: the Node_options and Peer_socket_options sets (see options(), set_options(), and the listen()/connect() arguments) provide the knobs; detailed logging through the Logger supplied at construction, plus per-socket statistics, provide the visibility.

Multiple Node objects

As mentioned already many times, multiple Node objects can exist and function simultaneously (as long as they are not bound to the same conceptual util::Udp_endpoint, or to the same UDP port of at least one IP interface). However, it is worth emphasizing that – practically speaking – class Node is implemented in such a way as to make a given Node 100% independent of any other Node in the same process. They don't share working thread(s), data (except static data, probably just constants), any namespaces, port spaces, address spaces, anything. Each Node is independent both API-wise and in terms of internal implementation.

Thread safety

All operations are safe for simultaneous execution on 2+ separate Node objects, or on the same Node, or on any objects (e.g., Peer_socket) returned by Node. (Please note: that includes operations on the same Node.) "Operations" are any Node or Node-returned-object method calls after construction and before destruction of the Node. (In particular, for example, one thread can listen() while another connect()s.) The same guarantee may or may not apply to other classes; see their documentation headers for thread safety information.

Thread safety of destructor

Generally it is not safe to destruct a Node (i.e., let Node::~Node() get called) while a Node operation is in progress on that Node (obviously, in another thread). There is one exception to this: if a blocking operation (any operation with name starting with sync_) has entered its blocking (sleep) phase, it is safe to delete the underlying Node. In practice this means simply that, while you need not lock a given Node with an external mutex while calling its various methods from different threads (if you really must use multiple threads this way), you should take care to join the various threads before letting a Node go away.

Historical note re. FastTCP, Google BBR

Historical note re. FastTCP

One notable change in this net_flow vs. the original libgiga is this one lacks the FastTCP congestion control strategy. I omit the historical reasons for this for now (see to-do regarding re-introducing licensing/history/location/author info, in common.hpp).

Addendum to the topic of congestion control: I am not that interested in FastTCP, as I don't see it as cutting-edge any longer. I am interested in Google BBR. It is a goal to implement Google BBR in net_flow, as that congestion control algorithm is seen by many as simply the best one available; a bold conclusion given how much research and give-and-take and pros-and-cons discussions have transpired ever since the original Reno TCP became ubiquitous. Google BBR is (modulo whatever proprietary improvements Google chooses to implement in their closed-source software) publicly documented in research paper(s) and, I believe, available as Google open source.

Todo:
flow::net_flow should use flow::cfg for its socket-options mechanism. It is well suited for that purpose, and it uses some ideas from those socket-options in the first place but is generic and much more advanced. Currently net_flow socket-options are custom-coded from long before flow::cfg existed.
Todo:
ostream output operators for Node and asio::Node should exist. Also scour through all types; possibly some others could use the same. (I have been pretty good at already implementing these as-needed for logging; but I may have "missed a spot.")
Todo:
Some of the ostream<< operators we have take X* instead of const X&; this should be changed to the latter for various minor reasons and for consistency.
Todo:
Actively support IPv6 and IPv4, particularly in dual-stack mode (wherein net_flow::Server_socket would bind to an IPv6 endpoint but accept incoming V4 and V6 connections alike). It already supports it nominally, in that one can indeed listen on either type of address and connect to either as well, but how well it works is untested, and from some outside experience it might involve some subtle provisions internally.
Todo:
Based on some outside experience, there may be problems – particularly considering the to-do regarding dual-stack IPv6/v4 support – in servers listening in multiple-IP situations; make sure we support these seamlessly. For example consider a machine with a WAN IP address and a LAN (10.x.x.x) IP address (and possibly IPv6 versions of each also) that (as is typical) binds on all of them at ANY:P (where P is some port; and ANY is the IPv6 version, with dual-stack mode ensuring V4 datagrams are also received). If a client connects to a LAN IP, while in our return datagrams we set the source IP to the default, does it work? Outside experience shows it might not, depending, plus even if in our protocol it does, it might be defeated by some firewall... the point is it requires investigation (e.g., mimic TCP itself; or look into what IETF or Google QUIC does; and so on).

Constructor & Destructor Documentation

◆ Node()

flow::net_flow::Node::Node ( log::Logger * logger,
const util::Udp_endpoint & low_lvl_endpoint,
Net_env_simulator * net_env_sim = 0,
Error_code * err_code = 0,
const Node_options & opts = Node_options() 
)
explicit

Constructs Node.

Post-condition: Node ready for arbitrary use. (Internally this includes asynchronously waiting for any incoming UDP packets on the given endpoint.)

Does not block. After exiting this constructor, running() can be used to determine whether Node initialized or failed to do so; or one can get this from *err_code.

Potential shared use of Logger *logger

All logging, both in this thread (from which the constructor executes) and any potential internally spawned threads, by this Node and all objects created through it (directly or otherwise) will be through this Logger. *logger may have been used or not used for any purpose whatsoever prior to this constructor call. However, from now on, Node will assume that *logger will be in exclusive use by this Node and no other code until destruction. It is strongly recommended that all code refrains from further use of *logger until the destructor ~Node() exits. Otherwise, quality of this Node's logging (until destruction) may be lowered in undefined fashion except for the following formal guarantees: the output will not be corrupted from unsafe concurrent logging; and the current thread's nickname (for logging purposes only) will not be changed at any point. Less formally, interleaved or concurrent use of the same Logger might result in such things as formatters from Node log calls affecting output of your log calls or vice versa. Just don't, and it'll look good.

Parameters
low_lvl_endpoint: The UDP endpoint (IP address and UDP port) which will be used for receiving incoming and sending outgoing Flow traffic in this Node. E.g.: Udp_endpoint(Ip_address_v4::any(), 1234) // UDP port 1234 on all IPv4 interfaces.
logger: The Logger implementation through which all logging from this Node will run. See notes on logger ownership above.
net_env_sim: Network environment simulator to use to simulate (fake) external network conditions inside the code, e.g., for testing. If 0, no such simulation will occur. Otherwise the code will add conditions such as loss and latency (in addition to any present naturally) and will take ownership of the passed-in pointer (meaning, we will delete it as we see fit; and you must never do so from now on).
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_NODE_NOT_RUNNING (Node failed to initialize), error::Code::S_OPTION_CHECK_FAILED.
opts: The low-level per-Node options to use. The default uses reasonable values that normally need not be changed. No reference to opts is saved; it is only copied. See also Node::set_options(), Node::options(), Node::listen(), Node::connect(), Peer_socket::set_options(), Peer_socket::options().
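A minimal construction sketch (assuming logger points to some concrete flow::log::Logger implementation; the endpoint expression is the one from the low_lvl_endpoint description above, with the namespace qualification assumed):

    using namespace flow;
    using namespace flow::util;
    using namespace flow::net_flow;

    log::Logger* const logger = /* any concrete flow::log::Logger implementation */ nullptr;

    Error_code err_code;
    // Bind to UDP port 1234 on all IPv4 interfaces; no network-condition simulator; default options.
    Node node(logger, Udp_endpoint(Ip_address_v4::any(), 1234), 0, &err_code);

    if (!node.running())
    {
      // Construction failed; err_code holds the reason (e.g., the UDP port was already bound).
      return;
    }
    // node.local_low_lvl_endpoint() now reports the actually-bound IP address and port.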

◆ ~Node()

flow::net_flow::Node::~Node ( )
override

Destroys Node.

Closes all Peer_socket objects as if by sock->close_abruptly(). Then closes all Server_socket objects. Then closes all Event_set objects as if by event_set->close().

Todo:
Server_socket objects closed as if by what?

Frees all resources except the objects still shared by shared_ptr<>s returned to the Node user. All shared_ptr<> instances inside Node sharing the latter objects are, however, eliminated. Therefore any such object will be deleted the moment the user also eliminates all her shared_ptr<> instances sharing that same object; any object for which that is already the case is deleted immediately.

Does not block.

Note: as a corollary of the fact this acts as if {Peer|Server_}socket::close_abruptly() and Event_set::close(), in that order, were called, all event waits on the closed sockets (sync_send(), sync_receive(), sync_accept(), Event_set::sync_wait(), Event_set::async_wait()) will execute their on-event behavior (sync_send() return, sync_receive() return, sync_accept() return, sync_wait() return and invoke handler, respectively). Since Event_set::close() is executed soon after the sockets close, those Event_set objects are cleared. Therefore, the user on-event behavior handling may find that, despite a given event firing, the containing Event_set is empty; or they may win the race and see an Event_set with a bunch of S_CLOSED sockets. Either way, no work is possible on these sockets.

Rationale for previous paragraph: We want to wake up any threads or event loops waiting on these sockets, so they don't sit around while the underlying Node is long since destroyed. On the other hand, we want to free any resources we own (including socket handles inside Event_set). This solution satisfies both desires. It does add a bit of non-determinism (easily handled by the user: any socket in the Event_set, even if user wins the race, will be S_CLOSED anyway). However it introduces no actual thread safety problems (corruption, etc.).

Todo:
Provide another method to shut down everything gracefully?

Member Function Documentation

◆ connect()

Peer_socket::Ptr flow::net_flow::Node::connect ( const Remote_endpoint & to,
Error_code * err_code = 0,
const Peer_socket_options * opts = 0 
)

Initiates an active connect to the specified remote Flow server.

Returns a safe pointer to a new Peer_socket. The socket's state will be some substate of S_OPEN at least initially. The connection operation, involving network traffic, will be performed asynchronously.

One can treat the resulting socket as already connected; its Writable and Readable status can be determined; once Readable or Writable one can receive or send, respectively.

Port selection: An available local Flow port will be chosen and will be available for information purposes via sock->local_port(), where sock is the returned socket. The port will be in the range [Node::S_FIRST_EPHEMERAL_PORT, Node::S_FIRST_EPHEMERAL_PORT + Node::S_NUM_EPHEMERAL_PORTS - 1]. Note that there is no overlap between that range and the range [Node::S_FIRST_SERVICE_PORT, Node::S_FIRST_SERVICE_PORT + Node::S_NUM_SERVICE_PORTS - 1].

Parameters
to: The remote Flow port to which to connect.
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_OUT_OF_PORTS, error::Code::S_INTERNAL_ERROR_PORT_COLLISION, error::Code::S_OPTION_CHECK_FAILED.
opts: The low-level per-Peer_socket options to use in the new socket. If null (typical), the per-socket options template in Node::options() is used. If not null, the given per-socket options are first validated and, if valid, used. If invalid, it is an error. See also Peer_socket::set_options(), Peer_socket::options().
Returns
Shared pointer to Peer_socket, which is in the S_OPEN main state; or null pointer, indicating an error.

◆ connect_with_metadata()

Peer_socket::Ptr flow::net_flow::Node::connect_with_metadata ( const Remote_endpoint & to,
const boost::asio::const_buffer &  serialized_metadata,
Error_code * err_code = 0,
const Peer_socket_options * opts = 0 
)

Same as connect() but sends, as part of the connection handshake, the user-supplied metadata, which the other side can access via Peer_socket::get_connect_metadata() after accepting the connection.

Note
It is up to the user to serialize the metadata portably. One recommended convention is to use boost::endian::native_to_little() (and similar) before connecting; and on the other side use the reverse (boost::endian::little_to_native()) before using the value. Packet dumps will show a flipped (little-endian) representation, while on most platforms the conversion will be a no-op at compile time. Alternatively use native_to_big() and vice-versa.
Why provide this metadata facility? After all, they could just send the data upon connection via send()/receive()/etc. Answers: Firstly, this is guaranteed to be delivered (assuming successful connection), even if reliability (such as via retransmission) is disabled in socket options (opts argument). For example, if a reliability mechanism (such as FEC) is built on top of the Flow layer, parameters having to do with configuring that reliability mechanism can be bootstrapped reliably using this mechanism. Secondly, it can be quite convenient (albeit not irreplaceably so) for connection-authenticating techniques like security tokens known by both sides.
Parameters
to: See connect().
serialized_metadata: Data copied and sent to the other side during the connection establishment. The other side can get equal data using Peer_socket::get_connect_metadata(). The returned socket sock also stores it; it's similarly accessible via sock->get_connect_metadata() on this side. The metadata must fit into a single low-level packet; otherwise error::Code::S_CONN_METADATA_TOO_LARGE error is returned.
err_code: See connect(). Added error: error::Code::S_CONN_METADATA_TOO_LARGE.
opts: See connect().
Returns
See connect().
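A sketch following the endianness recommendation in the note above (the 32-bit token and its meaning are invented purely for illustration):

    #include <boost/asio/buffer.hpp>
    #include <boost/endian/conversion.hpp>

    // Hypothetical application-level value to deliver with the connection handshake.
    const uint32_t token = 0x12345678;
    const uint32_t wire_token = boost::endian::native_to_little(token);

    Peer_socket::Ptr sock
      = node.connect_with_metadata(server_endpoint,
                                   boost::asio::buffer(&wire_token, sizeof wire_token));

    // The accepting side reads the same bytes back via Peer_socket::get_connect_metadata()
    // and applies boost::endian::little_to_native() to recover the original value.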

◆ event_set_create()

Event_set::Ptr flow::net_flow::Node::event_set_create ( Error_code * err_code = 0)

Creates a new Event_set in Event_set::State::S_INACTIVE state with no sockets/events stored; returns this Event_set.

Parameters
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_NODE_NOT_RUNNING.
Returns
Shared pointer to Event_set; or null pointer, indicating an error.

◆ interrupt_all_waits()

void flow::net_flow::Node::interrupt_all_waits ( Error_code * err_code = 0)

Interrupts any blocking operation, a/k/a wait, and informs the invoker of that operation that the wait was interrupted.

Conceptually, this causes a similar fate as a POSIX blocking function exiting with -1/EINTR, for all such functions currently executing. This may be called from any thread whatsoever and, particularly, from signal handlers as well.

Before deciding to call this explicitly from signal handler(s), consider using the simpler Node_options::m_st_capture_interrupt_signals_internally instead.

The above is vague about how an interrupted "wait" exhibits itself. More specifically, then: Any operation with name sync_...() will return with an error, that error being Error_code error::Code::S_WAIT_INTERRUPTED. An Event_set::async_wait()-initiated wait will end, with the handler function being called, passing the Boolean value true to that function. true indicates the wait was interrupted rather than successfully finishing with 1 or more active events (false would've indicated the latter, more typical situation).

Note that various classes have sync_...() operations, including Node (Node::sync_connect()), Server_socket (Server_socket::sync_accept()), and Peer_socket (Peer_socket::sync_receive()).
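For instance, a signal handler (or any other thread) can unblock a thread stuck in a sync_*() call; a sketch (the global-pointer plumbing is application-specific, not part of this API):

    #include <csignal>

    static flow::net_flow::Node* g_node = 0; // set once the Node has been constructed

    extern "C" void on_sigint(int)
    {
      if (g_node)
      {
        g_node->interrupt_all_waits(); // in-progress sync_*() calls and Event_set waits now report interruption
      }
    }

    // Elsewhere: std::signal(SIGINT, &on_sigint);
    // A thread blocked in, e.g., node.sync_connect(...) then returns null with
    // *err_code == error::Code::S_WAIT_INTERRUPTED.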

Parameters
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_NODE_NOT_RUNNING.

◆ listen()

Server_socket::Ptr flow::net_flow::Node::listen ( flow_port_t  local_port,
Error_code * err_code = 0,
const Peer_socket_options * child_sock_opts = 0 
)

Sets up a server on the given local Flow port and returns Server_socket which can be used to accept subsequent incoming connections to this server.

Any subsequent incoming connections will be established asynchronously and, once established, can be claimed (as Peer_socket objects) via Server_socket::accept() and friends.

Port specification: You must select a port in the range [Node::S_FIRST_SERVICE_PORT, Node::S_FIRST_SERVICE_PORT + Node::S_NUM_SERVICE_PORTS - 1] or the special value S_PORT_ANY. In the latter case an available port in the range [Node::S_FIRST_EPHEMERAL_PORT, Node::S_FIRST_EPHEMERAL_PORT + Node::S_NUM_EPHEMERAL_PORTS - 1] will be chosen for you. Otherwise we will use the port you explicitly specified.

Note that using S_PORT_ANY in this context typically makes sense only if you somehow communicate serv->local_port() (where serv is the returned socket) to the other side through some other means (for example if both client and server are running in the same program, you could just pass it via variable or function call). Note that there is no overlap between the two aforementioned port ranges.

Parameters
local_port: The local Flow port to which to bind.
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_NODE_NOT_RUNNING, error::Code::S_PORT_TAKEN, error::Code::S_OUT_OF_PORTS, error::Code::S_INVALID_SERVICE_PORT_NUMBER, error::Code::S_INTERNAL_ERROR_PORT_COLLISION.
child_sock_opts: If null, any Peer_sockets that serv->accept() may return (where serv is the returned Server_socket) will be initialized with the options set equal to options().m_dyn_sock_opts. If not null, they will be initialized with a copy of *child_sock_opts. No reference to *child_sock_opts is saved.
Returns
Shared pointer to Server_socket, which is in the Server_socket::State::S_LISTENING state at least initially; or null pointer, indicating an error.
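A sketch of the two port-selection styles described above:

    // Style 1: an explicit port from the service range.
    Server_socket::Ptr serv1 = node.listen(Node::S_FIRST_SERVICE_PORT);

    // Style 2: let the Node pick an ephemeral port; the other side must learn it out-of-band.
    Error_code err_code;
    Server_socket::Ptr serv2 = node.listen(S_PORT_ANY, &err_code);
    if (serv2)
    {
      const flow_port_t chosen_port = serv2->local_port(); // communicate this value to the would-be client
    }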

◆ local_low_lvl_endpoint()

const util::Udp_endpoint & flow::net_flow::Node::local_low_lvl_endpoint ( ) const

Return the UDP endpoint (IP address and UDP port) which will be used for receiving incoming and sending outgoing Flow traffic in this Node.

This is similar to the value passed to the Node constructor, except that it represents the actual bound address and port (e.g., if you chose 0 as the port, the value returned here will contain the actual ephemeral port randomly chosen by the OS).

If !running(), this equals Udp_endpoint(). The logical value of the returned util::Udp_endpoint never changes over the lifetime of the Node.

Returns
See above. Note that it is a reference.

◆ max_block_size()

size_t flow::net_flow::Node::max_block_size ( ) const

The maximum number of bytes of user data per received or sent block on connections generated from this Node, unless this value is overridden in the Peer_socket_options argument to listen() or connect() (or friend).

See Peer_socket_options::m_st_max_block_size.

Returns
Ditto.

◆ options()

Node_options flow::net_flow::Node::options ( ) const

Copies this Node's option set and returns that copy.

If you intend to use set_options() to modify a Node's options, we recommend you make the modifications on the copy returned by options().

Returns
Ditto.

◆ running()

bool flow::net_flow::Node::running ( ) const

Returns true if and only if the Node is operating.

If not, all attempts to use this object or any objects generated by this object (Peer_socket::Ptr, etc.) will result in error.

Returns
Ditto.

◆ serv_create_forward_plus_ctor_args()

template<typename Server_socket_impl_type >
Server_socket * flow::net_flow::Node::serv_create_forward_plus_ctor_args ( const Peer_socket_options * child_sock_opts)
protected

Like sock_create_forward_plus_ctor_args() but for Server_sockets.

Template Parameters
Server_socket_impl_type: Either net_flow::Server_socket or net_flow::asio::Server_socket, as of this writing.
Parameters
child_sock_opts: See, for example, Peer_socket::accept(..., const Peer_socket_options* child_sock_opts)
Returns
Pointer to new object of type Server_socket or of a subclass.

◆ set_options()

bool flow::net_flow::Node::set_options ( const Node_options & opts,
Error_code * err_code = 0 
)

Dynamically replaces the current options set (options()) with the given options set.

Only those members of opts designated as dynamic (as opposed to static) may be different between options() and opts. If this is violated, it is an error, and no options are changed.

Typically one would acquire a copy of the existing options set via options(), modify the desired dynamic data members of that copy, and then apply that copy back by calling set_options(). Warning: this technique is only safe if other (user) threads do not call set_options() simultaneously. There is a to-do to provide a thread-safe maneuver for when this is a problem (see class Node doc header).
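The recommended round trip, as a sketch (which member(s) you may change depends on which are documented as dynamic; the member name in the comment is a placeholder, not a real option):

    Node_options opts = node.options();          // copy the current option set
    // opts.m_dyn_<some_dynamic_option> = ...;   // modify only members documented as dynamic
    Error_code err_code;
    if (!node.set_options(opts, &err_code))
    {
      // e.g. error::Code::S_OPTION_CHECK_FAILED: a static member differed, or a value failed validation.
    }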

Parameters
opts: The new options to apply to this socket. It is copied; no reference is saved.
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_OPTION_CHECK_FAILED, error::Code::S_NODE_NOT_RUNNING.
Returns
true on success, false on error.

◆ sock_create_forward_plus_ctor_args()

template<typename Peer_socket_impl_type >
Peer_socket * flow::net_flow::Node::sock_create_forward_plus_ctor_args ( const Peer_socket_options & opts)
protected

Returns a raw pointer to newly created Peer_socket or sub-instance like asio::Peer_socket, depending on the template parameter.

Template Parameters
Peer_socket_impl_type: Either net_flow::Peer_socket or net_flow::asio::Peer_socket, as of this writing.
Parameters
opts: See, for example, Peer_socket::connect(..., const Peer_socket_options&).
Returns
Pointer to new object of type Peer_socket or of a subclass.

◆ sync_connect() [1/2]

template<typename Rep , typename Period >
Peer_socket::Ptr flow::net_flow::Node::sync_connect ( const Remote_endpoint & to,
const boost::chrono::duration< Rep, Period > &  max_wait,
Error_code * err_code = 0,
const Peer_socket_options * opts = 0 
)

The blocking (synchronous) version of connect().

Acts just like connect() but instead of returning a connecting socket immediately, waits until the initial handshake either succeeds or fails, and then returns the socket or null, respectively. Additionally, you can specify a timeout; if the connection is not successful by this time, the connection attempt is aborted and null is returned.

Note that there is always a built-in Flow protocol connect timeout that is mandatory and will report an error if it expires; but it may be too long for your purposes, so you can specify your own that may expire before it. The two timeouts should be thought of as fundamentally independent (built-in one is in the lower level of Flow protocol; the one you provide is at the application layer), so don't make assumptions about Flow's behavior and set a timeout if you know you need one – even if in practice it is longer than the Flow one (which as of this writing can be controlled via socket option).

The following are the possible outcomes:

  1. Connection succeeds before the given timeout expires (or succeeds, if no timeout given). Socket is at least Writable at time of return. The new socket is returned, no error is returned via *err_code.
  2. Connection fails before the given timeout expires (or fails, if no timeout given). null is returned, *err_code is set to reason for connection failure. (Note that a built-in handshake timeout – NOT the given user timeout, if any – falls under this category.) *err_code == error::Code::S_WAIT_INTERRUPTED means the wait was interrupted (similarly to POSIX's EINTR).
  3. A user timeout is given, and the connection does not succeed before it expires. null is returned, and *err_code is set to error::Code::S_WAIT_USER_TIMEOUT. (Rationale: consistent with Server_socket::sync_accept(), Peer_socket::sync_receive(), Peer_socket::sync_send() behavior.)

Tip: Typical types you might use for max_wait: boost::chrono::milliseconds, boost::chrono::seconds, boost::chrono::high_resolution_clock::duration.
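Putting the outcomes above together, a sketch with a 5-second user timeout (node and server_endpoint as assumed in the earlier sketches):

    Error_code err_code;
    Peer_socket::Ptr sock = node.sync_connect(server_endpoint, boost::chrono::seconds(5), &err_code);
    if (!sock)
    {
      if (err_code == error::Code::S_WAIT_USER_TIMEOUT)
      {
        // Outcome 3: our 5-second application-level timeout expired first.
      }
      else if (err_code == error::Code::S_WAIT_INTERRUPTED)
      {
        // The wait was interrupted (see interrupt_all_waits()).
      }
      else
      {
        // Outcome 2: the connection attempt itself failed (e.g., error::Code::S_CONN_TIMEOUT, S_CONN_REFUSED).
      }
    }
    // Outcome 1: sock is non-null and at least Writable.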

Template Parameters
Rep: See boost::chrono::duration documentation (and see above tip).
Period: See boost::chrono::duration documentation (and see above tip).
Parameters
to: See connect().
max_wait: The maximum amount of time from now to wait before giving up on the wait and returning. "duration<Rep, Period>::max()" will eliminate the time limit and cause indefinite wait – however, not really, as there is a built-in connection timeout that will expire.
err_code: See flow::Error_code docs for error reporting semantics. error::Code generated: error::Code::S_WAIT_INTERRUPTED, error::Code::S_WAIT_USER_TIMEOUT, error::Code::S_NODE_NOT_RUNNING, error::Code::S_CANNOT_CONNECT_TO_IP_ANY, error::Code::S_OUT_OF_PORTS, error::Code::S_INTERNAL_ERROR_PORT_COLLISION, error::Code::S_CONN_TIMEOUT, error::Code::S_CONN_REFUSED, error::Code::S_CONN_RESET_BY_OTHER_SIDE, error::Code::S_NODE_SHUTTING_DOWN, error::Code::S_OPTION_CHECK_FAILED.
opts: See connect().
Returns
See connect().

◆ sync_connect() [2/2]

Peer_socket::Ptr flow::net_flow::Node::sync_connect ( const Remote_endpoint & to,
Error_code * err_code = 0,
const Peer_socket_options * opts = 0 
)

Equivalent to sync_connect(to, duration::max(), err_code, opts); i.e., sync_connect() with no user timeout.

Parameters
to: See other sync_connect().
err_code: See other sync_connect().
opts: See sync_connect().
Returns
See other sync_connect().

◆ sync_connect_with_metadata() [1/2]

Peer_socket::Ptr flow::net_flow::Node::sync_connect_with_metadata ( const Remote_endpoint & to,
const boost::asio::const_buffer &  serialized_metadata,
Error_code * err_code = 0,
const Peer_socket_options * opts = 0 
)

Equivalent to sync_connect_with_metadata(to, duration::max(), serialized_metadata, err_code, opts); i.e., sync_connect_with_metadata() with no user timeout.

Parameters
to: See sync_connect().
serialized_metadata: See connect_with_metadata().
err_code: See sync_connect(). Added error: error::Code::S_CONN_METADATA_TOO_LARGE.
opts: See sync_connect().
Returns
See sync_connect().

◆ sync_connect_with_metadata() [2/2]

template<typename Rep , typename Period >
Peer_socket::Ptr flow::net_flow::Node::sync_connect_with_metadata ( const Remote_endpoint & to,
const boost::chrono::duration< Rep, Period > &  max_wait,
const boost::asio::const_buffer &  serialized_metadata,
Error_code * err_code = 0,
const Peer_socket_options * opts = 0 
)

A combination of sync_connect() and connect_with_metadata() (blocking connect, with supplied metadata).

Parameters
to: See sync_connect().
max_wait: See sync_connect().
serialized_metadata: See connect_with_metadata().
err_code: See sync_connect(). Added error: error::Code::S_CONN_METADATA_TOO_LARGE.
opts: See sync_connect().
Returns
See sync_connect().

Member Data Documentation

◆ S_DEFAULT_CONN_METADATA

const uint8_t flow::net_flow::Node::S_DEFAULT_CONN_METADATA = 0
static protected

Type and value to supply as user-supplied metadata in SYN, if user chooses to use [[a]sync_]connect() instead of [[a]sync_]connect_with_metadata().

If you change this value, please update Peer_socket::get_connect_metadata() doc header.

