Flow-IPC 1.0.2
Flow-IPC project: Public API.
Let's discuss opening channels from within an IPC context (Session). (Or go back to the preceding page: Sessions: Setting Up an IPC Context. These two pages are interconnected, as in this context, no pun intended, one uses a Session as a source of channel(s).)
A channel is a bundling of the peer resources required for, essentially, a bidirectional pipe capable of transmitting binary messages (blobs) and, optionally, native handles. A particular ipc::transport::Channel (as specified at compile-time via its template parameters) may, for example, consist of an outgoing POSIX MQ handle and a similar incoming-MQ handle; or conceptually similar SHM-backed MQs (boost.ipc MQs). A peer Channel can be upgraded to an ipc::transport::struc::Channel in order to represent a structured, arbitrarily-sized datum per message as opposed to a mere (limited-size) blob.
A Channel can be assembled manually from its constituent parts via the ipc::transport API. However, this is somewhat painful due to (1) the boilerplate required and, more importantly, (2) the naming-coordination (and additional cleanup) considerations. It is much easier to do so from within a Session, which elides essentially all such details. (That said, regardless of how one came up with it, any open Channel has the same capabilities as any other – certain details omitted. The ipc::transport Manual pages get into all that. In particular, though, a Channel can always be upgraded into a struc::Channel.)
As of this writing a given Session, on both ends, is capable of opening a single concrete ipc::transport::Channel type. That type is specified at compile time via template arguments to (on the session-server side) Session_server and (on the client side) Client_session. In Sessions: Setting Up an IPC Context we specified a particular reasonable type, but now let's discuss which capabilities are actually available. Note that once you've made this decision, by specifying those particular template arguments on each side, from that point on all required concrete types will be available as aliases, or aliases within that aliased class, and so on. For example:
using Session_server = Session_server_t<...the_knobs...>;
using Session = Session_server::Server_session_obj;
using Channel = Session::Channel_obj;

gets us the concrete Channel type.

Session's channel type

So let's discuss these ...the_knobs.... They are the first two template args in:
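As a rough sketch (illustrative only: the exact parameter list of Session_server_t is not shown here; the MqType enum values and S_... knob names are the ones discussed below, but the parameter positions and any further parameters are assumptions):

```cpp
// Illustrative sketch – consult the Session_server_t reference for the real parameter list.
using Session_server
  = Session_server_t<ipc::session::schema::MqType::POSIX,  // Knob 1: S_MQ_TYPE_OR_NONE.
                     false                                 // Knob 2: S_TRANSMIT_NATIVE_HANDLES.
                     /* , ...further knobs, e.g., the metadata capnp schema... */>;
using Session = Session_server::Server_session_obj;
using Channel = Session::Channel_obj;
```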
The first question to answer is: Do your channels need to be capable of transmitting native handles (a/k/a FDs in the POSIX world)? This is a fairly specialized need, so in many cases the answer is no. If the answer is yes, then, underneath it all, each channel will at least use a bidirectional Unix-domain-socket stream connection, with epoll_*() as the main signaling mechanism. Local stream sockets are the only relevant transport that can transmit native handles.
The second question to answer is: Do you want to, underneath it all, use message queues (MQs), and if so then what kind? Regardless of the type, enabling MQs in a channel means Flow-IPC shall establish 2 MQs; 1 will be used (exclusively) for A->B traffic, the other for B->A. (Each MQ could support an arbitrary number of reader and writer entities, including separate processes, but in this use case each MQ is strictly limited to one reader and one writer, typically separated by a process boundary.) The choices:

- POSIX MQs (see man mq_overview): a kernel-persistent resource (it lives until unlink()ed; it does not survive reboot). Specify S_MQ_TYPE_OR_NONE = ipc::session::schema::MqType::POSIX.
- bipc MQs (boost.interprocess message_queue): conceptually similar, but implemented within a kernel-persistent (again: until unlink()ed; does not survive reboot) shared memory area. Specify S_MQ_TYPE_OR_NONE = ipc::session::schema::MqType::BIPC.
- (Otherwise specify S_MQ_TYPE_OR_NONE = ipc::session::schema::MqType::NONE.)
An ipc::session-generated Channel contains either one or two bidirectional pipes; if 2, then they can be used independently of each other, one capable of transmitting blobs, the other of blobs and blob/native-handle pairs. If used directly, sans upgrade to structured messaging, you can use the pipe(s) as desired. Alternatively, if a 2-pipe Channel is upgraded to a struc::Channel in particular, then it will handle the pipe choice internally on your behalf. For performance:

- struc::Channel shall use the blobs-only pipe for any message that does not contain (as specified by the user) a native handle.
- struc::Channel shall use the blobs-and-handles pipe for any message that does contain a native handle.
- struc::Channel shall never reorder messages (maintaining, internally, a little reassembly queue in the rare case of a race between a handle-bearing and non-handle-bearing message).

All of that said, the bottom line is:
- NONE + S_TRANSMIT_NATIVE_HANDLES=true => Single pipe: a Unix domain socket stream.
- NONE + S_TRANSMIT_NATIVE_HANDLES=false => Single pipe: a Unix domain socket stream.
- POSIX/BIPC + S_TRANSMIT_NATIVE_HANDLES=false => Single pipe: 2 MQs of the specified type, facing opposite directions.
- POSIX/BIPC + S_TRANSMIT_NATIVE_HANDLES=true => Two pipes: an MQ-based pipe (which struc::Channel uses for non-handle-bearing messages); and a Unix-domain-socket-stream pipe (which struc::Channel uses for handle-bearing messages).

This provides a healthy number of choices. However, it's best not to overthink it. If native-handle transport is required, then you must specify that flag as true. Beyond that it's entirely a question of performance. If, as we recommend, you subsequently use zero-copy (SHM-backed) structured messaging, then we suspect the impact of which knob values you specify is small; the payloads internally transmitted will always be quite small, and the signaling mechanisms internally used have similarly small latency profiles. That said, it may have some impact. If you intend to use non-zero-copy structured messaging, or unstructured messaging without your own zero-copy mechanics on top, then the impact may be more significant.
A performance analysis at this low level is beyond our scope here (though this may change). We provide now some results from our own tests which should be taken as near-hearsay – but better than nothing: Using ~2015 server hardware (X4), in Linux, with 256-byte messages (note: larger by ~1 order of magnitude than what zero-copy structured messages use), each of the 3 mechanisms has a raw RTT of ~10 microseconds. Of the three, in decreasing order of perf:
Arguably stream sockets are both versatile (handling handles if needed) and fast, while POSIX MQs are fast if transmitting FDs is not a factor. Combining both might be the best of both worlds but could add a bit of latency due to the more complex logic involved. This level of perf analysis can only be settled via your own benchmarks. In the meantime, all it takes to change from one transport to another is, at worst, a recompile – though, granted, of both applications, which is more complex once they're deployed in the field and require a coordinated upgrade.
Both ways optionally make use of a special technique called channel-open metadata. We will discuss it in more detail below as needed. Essentially it is a small piece of structured data transmitted over the session API around session-open time and/or around subsequent channel-open time. It is a way to communicate information from process A to process B, or vice versa, intended to help negotiate channel opening, identify channels, and/or convey any other basic information, despite no actual channel yet being available to transmit it. It's a bootstrap mechanism for more complex IPC setups.
That said, please remember the following limitation: channel-open metadata must fit within a limited serialization space. Do not attempt to store a datum (e.g., a Text, Data, or List) that exceeds this serialized size at runtime; if you do, ipc::session will emit ipc::transport::struc::error::Code::S_INTERNAL_ERROR_SERIALIZE_LEAF_TOO_BIG. In practice this should be more than sufficient for all but the wildest scenarios (in fact exactly 1 such segment should be plenty), as we'll informally show below.
Simply: you can optionally have N channels available as soon as the session becomes available in opened state (i.e., upon Client_session::sync_connect() sync success on the client side; and upon Session_server::async_accept() success + Session_server::init_handlers() sync return on the server side). Then you can immediately begin to use them on either side!
The question is, of course: how many is N, and subsequently which channel is to be used for what purpose? Naturally the two sides must agree on both aspects, else chaos ensues. The answer can vary widely from application to application. This mechanism allows for a large amount of flexibility despite involving zero asynchronicity. What it does not support is back-and-forth negotiation. (If that is truly required, you'll need to consider opening on-demand channels.)
So here's how it works in the client->server direction: the desired init-channel count and any metadata are specified via arguments to the sync_connect() call itself, locally. The server->client direction is similar but reversed; the data transmitted server->client is specified via the 2nd Session_server::async_accept() overload.
To demonstrate it in code we have to invent a scenario. We choose a moderately complex one, involving the client->server direction only, but with both a non-zero number of init-channels and some metadata. Let's say that the client application determines the number of channels C at runtime from the first main() command-line argument. Further, let's say the 2nd command-line argument specifies how many of those C channels shall be structured. (To be clear, the latter part is contrived by the Manual author; it can be anything, arbitrarily complex – as long as it's representable via a capnp schema agreed upon at compile time by both sides.)

The metadata requires a capnp input file, whose resulting capnp-generated .c++ file shall produce an object file compiled into both applications A (server) and B (client). In our case its contents could be:
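A plausible sketch of such a schema, for the scenario above (the struct/field names and the file ID are our own placeholders, not taken from Flow-IPC):

```capnp
# session_mdt.capnp – hypothetical; run `capnp id` to generate your own file ID.
@0xd1f8e4a6b39c5b81;

using Cxx = import "/capnp/c++.capnp";
$Cxx.namespace("my_project::schema");  # Placeholder namespace.

struct SessionOpenMdt
{
  # Of the C init-channels requested (count passed separately via the
  # sync_connect() args), how many shall both sides treat as structured?
  numStructuredChannels @0 :UInt32;
}
```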
We are now ready to make the more-complex sync_connect() call. This is a different overload than the one used in the simpler example in Sessions: Setting Up an IPC Context.
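Such a call might be sketched as follows (the argument shapes and accessor names here are illustrative assumptions, not the literal API; consult the Client_session::sync_connect() reference for the exact overload):

```cpp
// Illustrative sketch only.
// C (channel count) and n_structured came from argv[1] and argv[2], per our scenario.
auto mdt = session.mdt_builder();            // Blank metadata structure.
mdt->setNumStructuredChannels(n_structured); // capnp-generated accessor – name is hypothetical.
Session::Channels init_channels(C);          // Out-arg: will hold the C open channels on success.
Error_code err_code;
session.sync_connect(mdt, &init_channels, &err_code); // Overload shape is approximate.
if (!err_code)
{
  // Session is open; init_channels[0..C-1] are immediately usable on both sides.
}
```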
On the server end, accordingly:
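Again as a sketch (the real 2nd async_accept() overload has more parameters; those shown, and their order, are assumptions):

```cpp
// Illustrative sketch only.
Session session;
Session::Channels init_channels;  // Out-arg: the channels the client requested.
session_srv.async_accept(&session, &init_channels,
                         /* ...metadata out-arg; server->client channel args: dummies/no-ops here... */
                         [&](const Error_code& err_code)
{
  if (!err_code)
  {
    session.init_handlers(/* on-session-error handler */);  // Promotes session to "opened" state.
    // Now read the client's metadata (how many of init_channels are structured) and proceed.
  }
});
```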
We leave server->client channel-count and metadata transmission as an exercise for the reader. It is a bit more complex, because those decisions have to be made during the session-open procedure – with the opposing client's identity (ipc::session::Client_app) a possible item of interest, and only available post-async_accept()-call – and hence they are done as part of one or both of the function args to async_accept() (which in our example were dummies/no-ops). We omit it for space.

That said, in many cases the real situation will be simpler even than the above. Informally one can think of it as a function call of sorts: in this case the arguments are the channels, and the types are the conceptual identity/purpose of each one. Often, in a function call, the number and types of args are known by the caller and the callee at compile time. If not, then more complex measures must be taken.
The alternative to the preceding technique is an asymmetrical opening of a channel. It is asymmetrical in that, for a given channel, one side must be designated the active-opener, the other the passive-opener. (That said, the two Sessions are equal in their capabilities, as far as Flow-IPC is concerned: for a given channel, either side can be chosen as the active-opener, making the other side the passive-opener as a consequence. It is not relevant which side was chosen as the session-server versus session-client: either can, at any time, choose to active-open – as long as the other side accepts passive-opens at all.)
This works as follows:

- The passive-opener side must have registered a passive-open handler of the form F(Channel&& new_channel).
- The active-opener side calls Session::open_channel(); on success, each side ends up with its own peer Channel.
- (The passive-opener can simply destroy its new Channel right away as a way of rejecting it; the opposing side will be informed via the Channel – not via the Session.)

While either side can accept passive-opens, it will only do so if indeed a passive-open handler has been set up on that side. If not, and the opposing side does attempt an active-open (Session::open_channel()), that active-open will quickly fail with a non-session-hosing error, namely ipc::session::error::Code::S_SESSION_OPEN_CHANNEL_REMOTE_PEER_REJECTED_PASSIVE_OPEN. Setting up a passive-open is done as follows:
Namely: pass in a passive-open handler when creating/opening the Session. In our earlier examples we did not do so; hence any opposing open_channel()s would non-fatally-to-session fail. By adding an extra arg in that ctor call, we would have indicated otherwise. This may seem asymmetrical, contrary to the spirit of the allegedly-side-agnostic Session concept, but that is not so:
- The Client_session ctor is invoked before the session is "opened," as sync_connect() has not been called, let alone succeeded.
- Session_server::init_handlers() is invoked just before the session is opened, as init_handlers() is the step that promotes the session to "opened" state.

Because a new Channel is so linked with the source Session – in this context – it may be natural to assume that they remain somehow linked after that. This is not the case. A Channel, once opened, is independent of any Session. Your application may desire to handle them as a single unit, which we discuss in Organizing IPC-Related Objects In Your App, but to Flow-IPC they are independent. After all, opening a Channel can be done without any ipc::session involvement; it's "just" easier to use ipc::session. Quick snippets to demonstrate follow. Here we chose the session-client as the passive-open side, just for fun. Client:
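A sketch of that client (ctor argument list abbreviated; the 2-argument handler shape, including its metadata arg, is discussed further in the following section):

```cpp
// Illustrative sketch only.
Session session(/* ...logger, Client_app, Server_app, on-session-error handler... */,
                [&](Session::Channel_obj&& new_channel, auto&& /* channel-open metadata; ignored */)
{
  // Passive-open: the server active-opened a channel; it is now ours to use.
  on_new_channel(std::move(new_channel));  // Hypothetical application function.
});
// Then sync_connect(), etc., as in the earlier examples.
```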
And server:
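And a sketch of that server, active-opening on demand (overload shape approximate):

```cpp
// Illustrative sketch only.
Session::Channel_obj channel;  // Target object for the new channel.
Error_code err_code;
session.open_channel(&channel, &err_code);
if (!err_code)
{
  // channel is open; the client's passive-open handler fires with its own peer Channel.
}
```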
We've covered pretty much every approach for opening channels via ipc::session. Only one topic remains: channel-open metadata may optionally be used when opening a channel on-demand (via Session::open_channel()). Notice the ignored auto&& 2nd arg to the passive-open handler in the preceding section? Instead of ignoring it, you can read information (optionally prepared by the active-opener) that might specify how to handle that particular channel. For example, suppose we had defined, in capnp struct ChannelOpenMdt, the member doUpgradeToStructuredChannel :Bool, which could instruct the passive-opener as to whether both sides should operate on the channel as a structured channel as opposed to unstructured-binary style. This could look like, on the passive-open side:
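For instance (a sketch; the reader-side accessor path is an assumption based on our hypothetical ChannelOpenMdt schema):

```cpp
// Illustrative sketch only: the passive-open handler now uses its 2nd arg.
[&](Session::Channel_obj&& new_channel, auto&& mdt_reader)
{
  if (mdt_reader->getDoUpgradeToStructuredChannel())  // capnp-generated accessor – hypothetical.
  {
    make_structured_channel(std::move(new_channel));  // Hypothetical: wrap into struc::Channel.
  }
  else
  {
    use_raw_channel(std::move(new_channel));          // Hypothetical: use as-is, unstructured.
  }
}
```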
On the active-open side it is accomplished as follows: a different overload of open_channel() also takes a metadata structure, a blank one first being obtained via Session::mdt_builder() and then typically modified with some capnp-generated accessors, no differently from the pre-session-open metadata example above.
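Sketched (overload and accessor shapes are illustrative assumptions):

```cpp
// Illustrative sketch only.
auto mdt = session.mdt_builder();            // Blank channel-open metadata.
mdt->setDoUpgradeToStructuredChannel(true);  // capnp-generated accessor – hypothetical.
Session::Channel_obj channel;
Error_code err_code;
session.open_channel(&channel, mdt, &err_code);  // Metadata-bearing overload.
```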
In the somewhat advanced situation where one wants to use both methods of channel-opening (init-channels and on-demand), or some other complex scenario, recall that this is capnp: you can use its unions (anonymous or otherwise) and so on. There's no performance penalty; both applications just need to agree on the protocol at compile time. (If you're comfortable with Cap'n Proto, this isn't news to you.) For example:
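A combined schema might look like this sketch (all names and the file ID are our own placeholders; the anonymous-union technique is the point):

```capnp
# Hypothetical combined metadata schema.
@0xc4e2a9d07f6b3d95;  # Placeholder ID; generate your own with `capnp id`.

struct Mdt
{
  union
  {
    sessionOpen @0 :SessionOpenMdt;  # Used around session-open / init-channels.
    channelOpen @1 :ChannelOpenMdt;  # Used around on-demand channel-open.
  }
}

struct SessionOpenMdt
{
  numStructuredChannels @0 :UInt32;
}

struct ChannelOpenMdt
{
  doUpgradeToStructuredChannel @0 :Bool;
}
```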
Opening channels is the main point of sessions, so we are close to done as far as mainstream ipc::session topics go. One basic thing we did skip over is error handling, which is related to the topic of how best to organize the IPC-relevant parts of an application. Let's discuss that next.