Flow-IPC 1.0.0
Sessions: Setting Up an IPC Context
Let's learn how to set up your first process-to-process session, from which it is easy to create communication channels. (Or go back to preceding page: Asynchronicity and Integrating with Your Event Loop.)
The structure of this page and the next few pages of the Manual is a top-down exploration of what is involved in using Flow-IPC in a typical way. It is, in a way, a tutorial. It will skip topics in aid of exposition and make certain explicit assumptions for the same reason. (We will cover the skipped topics and challenge those assumptions later in the Manual. The Reference of any given API, especially classes and class templates, is also comprehensive.)
This concept is simple enough once the session is open. In that state, the capabilities of each side of the session (i.e., the Session object in each of the two mutually opposing processes) are generally identical. At that stage the fun begins, and that's where we want to get in this tutorial. However, to get there, things are by necessity somewhat asymmetrical. Let's now discuss the procedures involved in starting a session, so that in the next Manual page we can get to the fun part of ever-so-symmetrically using the now-open session.
For purposes of this tutorial let's deal with the simplest possible situation: application Ap and application Bp want to engage in IPC; this is called an IPC split, in that it can be thought of as a meta-application ApBp being "split" into two parts. A given application Xp is either not running at all (inactive), or it is running as 1 process (a/k/a instance) X, or as more than 1 process/instance simultaneously (X1, X2, ...). It is quite common for some applications to only have 1 active instance at a time; but it's not true of all applications. It is of course perfectly common for instances to run serially: X1 starts, then exits; then X2 starts, then exits; and of course there can be combinations.
It is possible (and supported by Flow-IPC) that an application is part of more than 1 split (e.g., Ap-Bp-Cp involves 2 splits). For this tutorial we will not pursue this possibility (we will cover that topic separately later: Multi-split Universes).
In a given Ap-Bp split, in the ipc::session paradigm, you must make one major decision: assign one application the role of session-server and the other the role of session-client. (These roles have absolutely no bearing on which – if any – is a server or client in any other sense once a session is open. The assignment only matters, to the vast majority of Flow-IPC, during the session-opening procedure.) By convention we will assign Ap to be server, Bp client in this tutorial. How should you make this decision, though?
Answer: In a given split, the only limitation is: at a given time, there can be at most one active instance of a session-server, whereas (at your option) there can be any number of active instances of a session-client. If in your meta-application there is at most one active instance of either application, then the choice does not matter; just pick one. That said, you might consider which one (in the given split) is likelier to require multiple concurrent instances in the future: that one should be the session-client.
For our purposes let's assume we've chosen application Ap (server) and Bp (client).
Conceptually, Ap and Bp are both applications, so each has certain basic properties (such as executable name). In addition, Ap takes the role of a server in our split, while Bp takes the role of a client; and in that role Ap has certain additional server-only properties (such as a list of client applications that may establish sessions with it – in our case just Bp). In order for the ipc::session::Session hierarchy to work properly when establishing a session, you must provide all of that information to it. This is done very straightforwardly by filling out simple structs of types ipc::session::App, Client_app, and Server_app. Together these are the IPC universe description.
To avoid confusion and errors down the line it is important to keep to a simple rule (which is nevertheless tempting to break): The IPC universe description (the Apps/Client_apps/Server_apps combined) may be a simple and small thing, but there is conceptually only one of it. So when various little parts of that universe are passed-in to (most notably) Session_server and Client_session constructors, their values must never conflict; they must be equal on each side. So if application Ap lives at "/bin/a.exec" and has UID 322, then both application Ap and application Bp must know those property values for application Ap – as must application Cp, if it's also a part of the universe. We'll show this more practically below, but just remember this rule.
There are various trivial ways of accomplishing this, but the bottom line is: for code reuse and avoiding mistakes, there should be only one piece of code creating and filling out these structs and containers thereof, with the same values; each of Ap and Bp shall invoke that code; then on each side they'll be used in various capacities, passed-in to ipc::session::Session hierarchy objects, so that sessions might open properly. E.g., this struct-building code can be in an exported function in a static library shared by applications Ap and Bp; or it can be inlined in a header #included by both; or some other such technique. It may be tempting to skip this nicety, but don't; you don't want to maintain 2+ sets of copy/pasted code that must be kept synced.
Back to our simple Ap-Bp IPC-universe. Step one is to define the applications involved. That's simple for us, as there are only two. Regardless of how many there are, there is no implicit need to collect them in any container – one must merely define them individually.
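For instance, the definitions might look as follows. (This is a sketch: the names, paths, and UID/GID values are hypothetical placeholders, and we assume App's aggregate members – name, executable path, user ID, group ID – per this Flow-IPC version. In real code this would live in the single shared location discussed above.)

  #include <sys/types.h>
  #include <ipc/session/app.hpp>

  // Hypothetical values; in reality these come from your build/deploy configuration.
  const uid_t USER_ID = 322;  // Effective user shared by Ap and Bp (see note below).
  const gid_t GROUP_ID = 322; // Effective group, ditto.

  // The basic properties of each application in our 2-app universe.
  const ipc::session::App APP_A{ "a", "/bin/app_a.exec", USER_ID, GROUP_ID };
  const ipc::session::App APP_B{ "b", "/bin/app_b.exec", USER_ID, GROUP_ID };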
In this example the two apps share a user (user so-and-so, group so-and-so) for simplicity and therefore specify the same values there. This is by no means a requirement. (See Safety and Permissions for other, potentially more safety-robust production approaches.)
Step two is to define all the applications that take on (at least) a session-client role; each needs a Client_app object. In our case that is just Bp. As a session-client, Bp is App Bp; so it shall have a copy thereof inside it; in fact Client_app sub-classes App. As of this writing it lacks any additional data as a client; so it's just:
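In sketch form, then (recall Client_app adds no members to App as of this writing):

  // Bp in its session-client role: merely its App description, wrapped.
  const ipc::session::Client_app APP_B_AS_CLI{ APP_B };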
All client-apps must also be collected in a container for a certain purpose that we'll soon reveal. We have just the one, so:
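Assuming – as in this Flow-IPC version, to our understanding – that Client_app::Master_set is a name-to-Client_app map, that could look like:

  // *All* client apps in the IPC universe, keyed by name; the session-server
  // shall need this to identify whoever connects to it.
  const ipc::session::Client_app::Master_set MASTER_APPS_AS_CLI
    { { APP_B_AS_CLI.m_name, APP_B_AS_CLI } };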
Step three is to define all the applications that take on (at least) a session-server role; each needs a Server_app object. In our case that is just Ap. As a session-server, Ap is App Ap; so it shall have a copy thereof inside it; Server_app sub-classes App. It also has a couple of other attributes which must be set, notably those Client_apps that are allowed to open sessions with it; in our case that's just Bp. There is no need to collect Server_apps in a container. Thus:
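A sketch follows; the member list (allowed-client names, an optional override of the runtime-files directory, a permissions level) is per our reading of this Flow-IPC version, so treat it as illustrative rather than definitive.

  // Ap in its session-server role.
  const ipc::session::Server_app APP_A_AS_SRV
    { APP_A,                   // The basic App description (a copy of the above).
      { APP_B_AS_CLI.m_name }, // Client_apps allowed to open sessions: just Bp.
      "",                      // Keep the default directory for runtime files.
      ipc::util::Permissions_level::S_USER_ACCESS }; // See note just below.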
We use USER_ACCESS permissions-level, because all our apps share an effective user. (See Safety and Permissions for more information and more safety-robust setups.)
That's it. That's our IPC universe description: the Apps, the Client_apps (collected in a container), and the Server_apps. It may be tempting, since our situation is so simple, to skip some things – e.g., we could skip separately defining the Apps and copying them into Client_app and Server_app, and rather just define the latter 2 items – but the performance/RAM impact of doing the whole thing is negligible, while in the long run keeping it clean this way will only help when/if your IPC universe becomes more complex.
You should run all 3 steps in each of Ap and Bp, thus ending up with the same data structures. You are now ready to start some sessions; or at least a session.
The IPC-related lifecycle of any session-client process is particularly simple:

1. Start (nothing IPC-related yet).
2. Build the IPC universe description (as covered above).
3. Construct the Client_session C.
4. C.sync_connect() to the server.∗∗
5. Engage in IPC via Session C until (gracefully) told to exit (such as via SIGINT/SIGTERM)∗; or:
6. Be informed via the Session C on-error handler that the session was closed by the opposing side; therefore:
7. Destroy C; return to step 3.

∗ - In this case destroy C and exit process.
∗∗ - See Client_session::sync_connect() Reference doc note regarding dealing with an inactive opposing server.
Thus a typical session-client is, as far as IPC is concerned, always either trying to open a session or engaging in IPC via exactly one open session; and it only stops doing the latter if it itself exits entirely, or the session is closed by the opposing side. There is, in production, no known good reason to end a session otherwise, nor to create 2+ simultaneous Client_sessions in one process.
The IPC-related lifecycle of a session-server can be more complex. In the simplest case, wherein you've decided the session-server only needs to speak to one opposing process at a time (for this split), it is very much like the session-client:
1. Start (nothing IPC-related yet).
2. Build the IPC universe description (as covered above).
3. Construct the Session_server P.
4. P.async_accept(), which on success yields Server_session S. (This may take arbitrarily long, namely until an opposing Client_session attempts to connect.)
5. Engage in IPC via Session S until (gracefully) told to exit (such as via SIGINT/SIGTERM); or:
6. Be informed via the Session S on-error handler that the session was closed by the opposing side; therefore:
7. Destroy S; return to step 4.

However, if your application desires to indeed support multiple concurrent sessions, then it shall act more like an actual server, as opposed to just a glorified client. Namely, the real difference is that upon P.async_accept() success, conceptually it should – as opposed to merely engaging in IPC off the new session – kick off two algorithms simultaneously:

1. Engage in IPC via the newly-opened session S, as in the one-session-at-a-time case.
2. Again P.async_accept(), so that further concurrent sessions can be opened (see the sketch just below).
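In code, that typically means the accept-completion handler both hands the new session off for servicing and immediately issues the next accept. A sketch (it uses the Session_server/Session aliases defined in the snippets further below; handle_new_session() is a hypothetical function containing the per-session logic):

  void accept_next(Session_server* srv)
  {
    // Keep the accept-target session alive until the async op completes.
    auto session = std::make_shared<Session>();
    srv->async_accept(session.get(), [srv, session](const ipc::Error_code& err)
    {
      if (!err)
      {
        handle_new_session(session); // Algorithm 1: service the new session.
      }
      accept_next(srv); // Algorithm 2: immediately await the next session.
    });
  }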
Client_session setup

Let's now follow the above outline for a session-client's operation, the client being application Bp. We shall skip to step 3, as we've already covered step 2 (the IPC universe description).
Client_session is a class template, not a class; moreover, extensions like shm::classic::Client_session exist. The choice of actual concrete type is quite important. First of all, decide whether you desire to work with SHM-backed channels and/or SHM arenas; and if so, what kind. Simply put, our recommendation: go with SHM-backed sessions – e.g., the ipc::session::shm::classic variants – unless you have a specific reason not to; a vanilla Client_session is giving away performance for no good reason.

Next let's decide on the values for Client_session's 3 template parameters. The first two determine the concrete type of ipc::transport::Channel that shall be opened in this session. The last template parameter is used for certain advanced purposes around channel-opening. We'll talk about all of those topics in Sessions: Opening Channels. For now we'll forego the advanced-metadata stuff (leave that arg at default), and we'll choose a sufficiently powerful and performant setting for the former 2 params. Thus, again, we strongly recommend defining the concrete type in an alias, so that the details need not be repeated subsequently.
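For example (the specific parameter values here are an assumption for illustration: SHM-classic backing, socket-stream-only channels that can transmit native handles, default metadata):

  // The concrete session type, chosen once and reused everywhere on this side.
  using Session = ipc::session::shm::classic::Client_session
                    <ipc::session::schema::MqType::NONE, // No MQ pipe in channels.
                     true>;                              // Native-handle transmission enabled.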
So now we can construct our Client_session (which we named just Session since, once it's open, its capabilities are ~identical to the opposing process's counterpart). We can use one of 2 constructors: one that allows subsequent channel passive-opens; the other which does not. Let's use the latter, as it's slightly simpler. (We'll get into that topic in Sessions: Opening Channels. For now don't worry.)
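A sketch of that constructor call (argument list per our understanding of this overload; the null Logger pointer simply disables logging for brevity):

  // Construct the not-yet-connected client session.  The error handler fires
  // only if/after the session opens and subsequently goes down.
  Session session{ nullptr,      // flow::log::Logger*; null => no logging (brevity).
                   APP_B_AS_CLI, // Us, the session-client.
                   APP_A_AS_SRV, // Them, the session-server.
                   [](const ipc::Error_code& err)
                   {
                     // The open session was closed (e.g., opposing process exited):
                     // typically post onto your event loop; destroy `session`; retry.
                   } };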
Now we can attempt connection via sync_connect(). There are 2 forms, the simple and the advanced. Ironically, due to its ability to pre-open channels, the advanced form is in most cases the easier one to use all-in-all – it makes it unnecessary to do annoying possibly-async channel-opening later – but we aren't talking about channel opening yet, so that part is not relevant; hence for now we'll go with the simpler API. (Sessions: Opening Channels gets into all that.)
Using it is simple:
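A sketch, assuming the Error_code-out-arg overload (which reports failure – e.g., no active server listening – via that arg instead of throwing):

  ipc::Error_code err;
  session.sync_connect(&err);
  if (err)
  {
    // Most likely no server instance is accepting right now.  Per the Reference
    // note mentioned above: wait and retry, or give up and exit.
  }
  else
  {
    // Session is open!  Its capabilities now mirror the server side's.
    go_do_ipc_yay(&session); // Our example code's IPC logic, as referenced below.
  }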
Session_server and Server_session setup

On the server end, accordingly, we accept the connection. The pattern may be familiar to users of boost.asio: there's a server (acceptor) object which, on success, modifies an un-connected peer object. We will use the simpler form of ipc::session::Session_server::async_accept() and discuss the other one in (you guessed it) Sessions: Opening Channels.
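Sketched out below; the types and argument lists are per our understanding of this Flow-IPC version, and the template-parameter choices must mirror the client's exactly.

  // Server-side concrete types; template args must match the client's alias.
  using Session_server = ipc::session::shm::classic::Session_server
                           <ipc::session::schema::MqType::NONE, true>;
  using Session = Session_server::Server_session_obj;

  Session_server srv{ nullptr,              // flow::log::Logger*; null for brevity.
                      APP_A_AS_SRV,         // Us, the session-server.
                      MASTER_APPS_AS_CLI }; // Everyone who might connect to us.

  Session session; // Blank target object, filled in upon accept success.
  srv.async_accept(&session, [&](const ipc::Error_code& err)
  {
    if (!err)
    {
      // `session` is connected but not open quite yet: see init_handlers() below.
    }
  });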
One step remains once async_accept() has succeeded: one must call ipc::session::Server_session::init_handlers(), whose overloads take certain handler function(s) as arg(s). This is in fact symmetrical to what the Client_session user had to do at construction time, as shown above. Server_session does not provide the same in its constructor, because what handlers one may want to install for a given server-session depends on, among other things, which Client_app is in fact connecting to it; it can be any one from MASTER_APPS_AS_CLI, and one cannot time-travel to the future to predict this before async_accept() is called and succeeds. We discuss the Client_session ctor + Server_session::init_handlers() handler args in Sessions: Opening Channels (the optional passive-channel-open handler) and Sessions: Teardown; Organizing Your Code (the mandatory error handler). In summary then:
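Continuing the server-side sketch from the accept handler above, with the simplest init_handlers() overload (the one taking only the mandatory error handler):

  // Inside (or just after) the successful async_accept() completion handler:
  session.init_handlers([](const ipc::Error_code& err)
  {
    // Same semantics as the error handler the client passed to its constructor:
    // the now-open session has gone down.
  });
  // `session` is now open; from this point both sides are symmetric.
  go_do_ipc_yay(&session);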
In our example code, on each side, go_do_ipc_yay() now has an equally-capable Session object which represents a shared IPC context through which to, broadly, open channels and (optionally) directly access SHM arena(s). Via the open channels, in turn, one can send messages (covered later in the Manual when we discuss ipc::transport). Via the SHM arena(s) one can allocate objects (which can be shared however one wants, but most easily via ipc::transport channels as well).
Next let's discuss Sessions: Opening Channels.