Flow-IPC 1.0.1
Flow-IPC project: Full implementation reference.
Sessions: Teardown; Organizing Your Code
MANUAL NAVIGATION: Preceding Page - Next Page - Table of Contents - Reference

Here we discuss the important task of handling session-ending events (errors); and we recommend an approach to structuring your IPC-relevant objects within your application code. (Or go back to the preceding page: Sessions: Opening Channels. The page before that one may be even more directly relevant, however.)

Page summary

First, a reminder: A session is a conversation between process A, of application Ap, and process B of Bp. Once open, the two processes are equal in their capabilities (in the context of IPC).

Thus a session's lifetime is (omitting non-IPC-related setup/teardown for simplicity of discussion) the maximum time period when both the given processes A and B are up. Therefore the session is closed at the earliest time such that one of the processes A or B exits gracefully (exit()), exits ungracefully (abort()), or enters an unhealthy/stuck state of some kind (zombification).

Therefore, a session's closure in a given process P has two basic trigger types:

  • Locally triggered: P itself wants to exit – gracefully, via exit(), for example/typically due to receiving SIGTERM or equivalent. (We assume a process never "wants" to exit ungracefully or zombify; and even if it has retained some measure of control in that event, it cannot be expected to worry about gracefully destroying Session objects.)
    • In this case, by definition, it shall close the local Session (etc.; details below) and then exit. I.e., there's no need to start another session.
  • Partner-triggered: P detects that the opposing process is exit()ing, abort()ing, or has become zombified/unhealthy.
    • In this case, assuming (as we do) the process wants to continue being useful – i.e., engage in IPC – it may start more sessions, depending on its identity (session-client versus session-server). We've touched on this already in Sessions: Setting Up an IPC Context example code. To summarize:
      • P is B (session-client): It should create a new Session and attempt to connect to the session-server. A session-client engages in at most one session (IPC conversation, IPC context) at a time.
      • P is A (session-server): In the simplest scenario, where the server is coded to only talk to one partner process at a time, its task is the same as for B, a client process. In the more complex scenario, it already has a loop in which it accepts any incoming session-opens on-demand, even while other session(s) is/are operating. Therefore in that case it need not do anything further.

That said:

Topic 1: Session teardown

In Sessions: Setting Up an IPC Context, we talked plenty about how to open a session, but we almost completely skipped over closing it. In particular, when it came time to supply a session error handler (in the Client_session constructor and Server_session::init_handlers()), our example code said ... and punted to the present Manual page. Explaining this, in and of itself, is straightforward enough; and we will do so below.

Topic 2: Organizing application code around sessions

However conceptually that topic touches on another, much less clear-cut, discussion: How to organize one's application code (on each side) in such a way as to avoid making the program an unmaintainable mess, particularly around session deinitialization time. We did, briefly, mention that after the session is closed, on the client one would attempt to open another, identically; and on the server potentially one would do so immediately after opening a session, so that multiple sessions at a time could be ongoing. How to make all this happen in code? How to make the session-client and session-server code as mutually symmetrical as possible? How to keep one session's IPC resources (channels, SHM arenas, SHM-allocated objects) segregated in code from another session's? That is the topic here. Please understand that this is informal advice, which you're free to ignore or modify for yourself. In many cases tutorial-style documentation of a library like this would omit such advice; but we felt it important to help you save pain and time. As long as you understand the reasoning here, we feel this page's mission is accomplished.

The mechanics of session teardown

So let's get into the relatively easy topic: the API for closing an open session.

If the closing trigger is local, then one simply destroys the Session object (its destructor getting called) – or, on the server side, potentially 1+ Session object(s). That's it. Then the process can exit.

Note
When locally-triggered (the process is exiting gracefully), a session-server should also destroy the Session_server object (its destructor getting called). The order does not matter; for example the Session_server can be destroyed first.

If the closing trigger is the partner, then one... also simply destroys the one relevant Session object. Then one either attempts to start another session (if a session-client, or a server acting equivalently to a client), or does nothing (if acting as a server that can accept an arbitrary number of concurrent sessions anyway).

Any Session or Session_server destructor has essentially synchronous effects. There is no lagging activity of which one needs to be particularly aware.
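To make the locally-triggered mechanics concrete, here is a minimal sketch (member names hypothetical; the case study later on this page uses a similar layout), assuming the Session and, on the server side, the Session_server live in std::optional<> data members:

// Minimal sketch of locally-triggered teardown.  Both destructors act synchronously.
void shut_down_ipc() // Hypothetical helper, invoked once the process has decided to exit gracefully.
{
  m_session.reset();     // Destroys the open Session (if any).  On a session-client, that's all there is to it.
  m_session_srv.reset(); // Session-server only: destroys the Session_server.  (Order could be reversed.)
  // Now it is safe to simply exit the process.
}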


Kernel-persistent resource cleanup
"Kernel-persistent resources" are areas in RAM that are not necessarily given back to the OS for general use when a process accessing them dies; instead they're only guaranteed to be made available on next boot. (As of this writing Flow-IPC potentially acquires the following types of these resources: SHM pools (a/k/a segments); POSIX MQs.) Naturally it is important that these be returned to the OS at the proper time; otherwise it's no less than a RAM leak that could persist beyond any process that was using the RAM. How is this topic (which we call cleanup) handled in Flow-IPC's context? Answer:
Short version: It's handled. You need not worry about it. (That is one of the features of the library that's not so easy to guarantee manually and in fact much harder than it would seem, until one has to actually design/code.)
Longer version, for your general curiosity only (as these are internal impl items): You can observe the various cleanup steps in INFO-level log messages. These show the following:
In case all processes involved shut down gracefully (usually via exit()), resources are freed as soon as no sessions needing them are up (nor, for certain cross-session resources, can start in the future) but no earlier. In particular the major points where this is achieved are: Session destructor (any per-session resources for that session); Session_server destructor (any cross-session resources). In most cases both resource acquisition and resource cleanup are performed (internally) in session-server code as opposed to session-client code. (As of this writing the exception to this is SHM-jemalloc which does things in a more equitable/symmetrical fashion, since internally each side creates/controls its own SHM arena, a/k/a SHM pool collection, from which the other side "borrows" individual allocated objects.)
In case of ungraceful shutdown, usually via abort(): The RAM may leak temporarily; but it will be cleaned up zealously once a process of the same applications Ap/Bp next starts. In most cases left-behind-due-to-abort items are cleaned once the Session_server in application Ap (the session-server-assigned App) is created, synchronously in its constructor. This is safe, since only one session-server for given app Ap is to be active at a time. (As of this writing, again, SHM-jemalloc is the exception to this. In its case any process of application Ap or Bp shall regularly clean up any SHM arenas/pools created by those processes of its own application Ap or Bp, respectively, that are reported by the OS to be no longer running. The mechanism for checking this, in Linux, is to "send" fake signal 0 to the given process-ID and observe the result of the syscall.)

Okay, but – in the case of partner-triggered session closure – how do we detect that the partner has indeed closed it, either gracefully or ungracefully, or has become unhealthy/zombified? This is more complex than the destruction of the Session (plus possibly other Session(s) and/or the Session_server) that follows it, but it's also done via a well-defined API. To wit: this is the error handler which we've omitted in (Sessions: Setting Up an IPC Context) examples so far. As shown in that Manual page, the error handler is supplied to the Client_session constructor or Server_session::init_handlers(). Recall these both must be called strictly before the session is officially considered opened.

Session keeps track of partner-triggered session closure, which we term the session becoming hosed (hence session-hosing conditions). Yes, really. Once such a condition is detected, the Session immediately triggers the error handler you supplied. For your peace of mind, at least the following (internal) conditions will lead to session-hosing:

  • An internally maintained (for various needs but most importantly channel-opening negotiations) session master channel uses a local (Unix-domain) stream socket connection; and that socket connection has become gracefully, or ungracefully, closed by the opposing side. A graceful closure would (internally) involve a TCP-FIN-like end-of-"file" condition being received from the opposing side; usually indicating a clean exit(). An ungraceful closure would (internally) involve a TCP-RST-like error condition (ECONNRESET, EPIPE) being received from the opposing side; usually indicating an abort() or similar.
    • This (internally) occurs, at the latest, as part of the OS tearing down the opposing process and thus is generally rather responsive. There is still time for chaos in the ungraceful-shutdown scenario, but generally it's pretty quick, once things go down.
  • Some other error occurred while operating along that same (internally maintained) socket connection, either when sending or receiving internal data. The aforementioned end-of-"file," connection-reset, broken-pipe are the conditions we typically see in practice.
  • That same (internally maintained) connection remains open without system errors popping up, but no (internally tracked) keep-alive messages have been received for some time (low seconds). Probably the opposing process has become zombified or otherwise unhealthy.
    • This is, by design, a heuristic; the time (seconds) without any received traffic (user-triggered or internal pings) is non-trivial. The opposing process may already have been unresponsive for a significant portion of that time, so there's a higher chance of entropy in this case. Hopefully it does not come to that, and the harder-edged socket-error path described above is the one detected the vast majority of the time.

In any case an Error_code passed to your error handler will indicate the triggering condition, while logs (at least WARNING message(s) by Flow-IPC) will contain further details. The Error_code itself is for your logging/reporting/alerting purposes only; it is not advised to make algorithmic decisions based on it. If the error handler was invoked, the session is hosed, period.

We defer an example of registering an error handler until the following section, as it's very much relevant to:

Organizing application code around sessions

The Manual author(s) must stress that much of the following does not represent hard rules or even (necessarily) verbatim recommendations, in particular text from this point on. The point is to encourage certain basic principles which we feel shall make your life, and that of maintainers of your code, easier than otherwise. Undoubtedly you will and should modify the below recommendations and especially code snippets to match your specific situation. That said – it would be unwise to ignore it entirely.

Basic concept: data scope

The basic problem we are solving here is this: You have your algorithms and data structures, in your meta-application Ap-Bp, and in service of these it is required that you perform IPC. To perform IPC, in the ipc::session paradigm, Flow-IPC requires that you use its abstractions, most notably those of Session_server, Server_session (once open, just Session), and Client_session (ditto). How to organize your code given your existing algorithms/structures and the aforementioned Flow-IPC abstractions?

The key point is that each of your own (IPC-relevant) data structures (and probably related algorithms) should be very cleanly and explicitly classified to have a particular scope, a/k/a lifetime. Generally, the scope of each (IPC-relevant) datum is one of the following:

  • [Per-]session scope: The datum begins life no earlier than the creation of the (opened!) Session (A-B process-to-process conversation) and no later than the (opened) Session's destruction. If the session ends (as earlier noted, when the process exits – or, far more interestingly, when the opposing process exits or dies or gets zombified), then this datum is no longer relevant by definition: The process responsible for ~half of the algorithm that uses the datum is simply out of commission permanently, at a minimum.
  • Cross-session scope a/k/a app scope: The datum may be accessed by possibly multiple sessions concurrently; and possibly by sessions that do not yet exist. For example, an in-memory cache of web objects might be relevant to any processes that might connect later to make use of (and add to) the cache, not to mention any processes currently connected into the IPC-engaged system.
    • Consider a given Session_server. App-scope data, if they even exist in your meta-application at all, do not simply apply to all sessions that may start via this Session_server. Instead each cross-session datum must pertain to a particular partner (client) application (not process – that'd be per-session). For example, if your server Ap supports (as listed in ipc::session::Server_app::m_allowed_client_apps) two possible partner applications Bp and Cp, then a given datum must be classified (by you) to be either per-app-B or per-app-C.
    • Such a datum begins life no earlier than the first session pertaining to the particular Client_app becoming open (which can occur only after Session_server construction). Its life ends no later than the Session_server's destruction.
    • Note well: Flow-IPC does NOT allow for IPC-relevant, ipc::session-managed data to persist beyond the lifetime of a session-server process. (Certainly such data can be maintained in the meta-application – but not as part of ipc::session-managed data.)

About what kinds of data are we even speaking? Answer:

  • Channels: A given open ipc::transport::Channel (and thus struc::Channel) is by definition session-scope.
    • It describes a bidirectional pipe between two processes, so if one of them goes away, the channel is no longer operational and cannot be reused.
  • Structured-channel messages: A given ipc::transport::struc::Msg_in or struc::Msg_out is often, but not necessarily, session-scope.
    • A struc::Msg_out (a/k/a ipc::transport::struc::Channel::Msg_out) is a container of sorts, living usually in SHM. If you chose to put it into session-scope SHM arena, it is session-scope. If you chose (an) app-scope SHM arena, it is app-scope. The details of this are out of our scope here (it's an ipc::transport thing), but rest assured specifying which one you want is not difficult and involves no thinking about SHM, specifically.
  • C++ data structures: A given direct-SHM-stored C++ data structure – referred to via shared_ptr<T> a/k/a Arena::Handle<T> – is often, but not necessarily, session-scope.
    • Much like with struc::Msg_out, but more explicitly specified on your part, this (potentially container-involving) data structure will live in a SHM arena of your choice. Spoiler alert (even though this is an ipc::transport topic): Session::session_shm()->construct<T>() = session-scope; Session::app_shm()->construct<T>() (or equivalently Session_server::app_shm(app)->construct<T>(), where app is a Client_app) = app-scope. (A minimal sketch follows just after the Note below.)
Note
- Recall that we, generally, consider the latter bullet point (direct-stored-in-SHM data structures) an advanced use case. It has full support, and in fact is used internally a-la-eat-own-dog-food in our own structured-message implementation. That said, one could argue that if one can represent IPC-shared data as Msg_*s exclusively, it is a good idea to do so. (There are, certainly, reasons to go beyond it.)
- As for Msg_* (the middle bullet point), it is often entirely possible – and reasonable, as it tends to simplify algorithms – to only store them on-the-go, on the stack, letting them go out of scope after (as sender) creating/building or (as receiver) receiving/reading them. Such messages should always be session-scope but, from your point of view, don't really apply to this discussion at all. E.g., suppose you receive a message with the latest traffic stats for your HTTP server; synchronously report these to some central server (or whatever); and that's it. In that case your handler would receive an ipc::transport::struc::Channel::Msg_in_ptr (a shared_ptr<Msg_in>), read from it via capnp-generated accessors, and then let the Msg_in_ptr lapse right there.
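Returning to the scope choice itself: the following is a minimal sketch (the struct and variable names are hypothetical, and the details of SHM-backed construction are an ipc::transport/SHM topic outside this page) showing which accessor – as cited in the bullet above – yields which scope.

// Hypothetical SHM-storable type.  (A real one would use SHM-aware allocators for any internal containers.)
struct Widget { int m_x; };

// Session-scope: goes away, at the latest, when this session's Session is destroyed.
auto session_scope_widget = session.session_shm()->construct<Widget>(); // An Arena::Handle<Widget>, i.e. shared_ptr<Widget>.

// App-scope: survives this particular session; usable by future sessions with the same Client_app.
auto app_scope_widget = session.app_shm()->construct<Widget>();
// (On the session-server side, equivalently: session_srv.app_shm(client_app)->construct<Widget>().)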

Organizing your objects: Up to sessions

So, bottom line, you've got Session (possibly multiple), possibly a Session_server, Channels and/or struc::Channels, and then IPC-shared data (structured messages and/or C++ data structures). How to organize them in your code on each side? And, in a strongly related topic, what's the best way to hook up the Session error handler? Our recommendations follow. Again: it's the communicated principles that matter; the exact code snippets are an informal guide. Or, perhaps, it can be looked at as a case study (of a reasonably complex situation) which you can morph for your own needs.

Let's have a (singleton, really) class Process {} just to bracket things nicely. (It's certainly optional but helps to communicate subsequent topics.) It applies to both the session-server app A and session-client app B. To do any ipc::session-based IPC, you'll definitely need to set up your Apps, Client_apps, and Server_apps – IPC universe description, exactly as described in that link. As it says, run the same code at the start of each Process lifecycle (perhaps in its constructor). Store them in the Process as data members as needed, namely:

  • Session-client Process:
    • The local Client_app is needed each time a new Session is created so as to connect to the server; thus at least at startup, and then whenever a session ends and a new Session is therefore needed.
    • The opposing Server_app is needed at the exact same time also.
  • Session-server Process:
    • The local Server_app is needed when constructing the Session_server. This is normally done only once, and it is reasonable to do so in Process constructor; but if that action is delayed beyond that point, then you'll need the Server_app available and may wish to keep it as a data member in Process.
    • Same for the master Client_app list, which we in the past called MASTER_APPS_AS_CLI.
    • Lastly, keeping each individual Client_app that is listed in ipc::session::Server_app::m_allowed_client_apps as its own data member (like Client_app m_cli_app_b and Client_app m_cli_app_c) may be helpful for multi-client-application setups (i.e., if we are Ap, and Bp and Cp may establish sessions with us). For example, when deciding which part of your session-server application shall deal with a newly-opened Session, Session::client_app() identifies the actual connecting Client_app (what we've elsewhere called connecting_cli). So you could do something like: if (session.client_app()->m_name == m_cli_app_b.m_name) { ...start the app Bp session... } else ...etc....

Whatever you do store should be listed first in the Process { private: } section, as subsequent data members will need them.
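For instance (member names hypothetical, except where they reappear in the snippets below), the client-side Process might store:

// In session-client app Bp.
ipc::session::Client_app m_cli_app; // Describes us (Bp); passed to each new Session.
ipc::session::Server_app m_srv_app; // Describes the session-server application Ap we connect to.

// A session-server Process would similarly keep its Server_app, the master Client_app list
// (MASTER_APPS_AS_CLI), and -- if serving several client applications -- each individual Client_app
// (e.g., m_cli_app_b, m_cli_app_c).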

Now it's time to set up some session(s). The more difficult task is on the server side; let us assume you do need to support multiple sessions concurrently. In this discussion we will assume your application's main loop is single-threaded. Your process will need a main-loop thread, and we will continue to assume the same proactor-pattern-with-Flow-IPC-internally-starting-threads-as-needed pattern as we have been (see Asynchronicity and Integrating with Your Event Loop for discussion including other possibilities). In this example we will use the flow::async API (as stated in the afore-linked Manual page, direct boost.asio use is similar, just with more boilerplate). So, something like the following would work. Note we use the techniques explained in Sessions: Setting Up an IPC Context (and, to a lesser extent, Sessions: Opening Channels), but now in the context of our recommended organization of the program.

class Process : // In session-server app A.
  public flow::log::Log_context
{
private:
  // ...Your IPC universe description members go here....

  // Alias your session-related types here.
  template<ipc::session::schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES,
           typename Mdt_payload = ::capnp::Void>
  using Session_server = Session_server_t<...knobs...>;
  using Session = Session_server::Server_session_obj;

  // Your main loop (a/k/a thread U).  All your code will be post()ed here.
  // Wrapping it in std::optional<> or unique_ptr<> may be helpful, depending on what you need to do first.
  flow::async::Single_thread_task_loop m_worker;

  // The big Kahuna.  m_session_srv->async_accept() will repeatedly give you a `Session`, at which point you will
  // in thread U kick off its further handling.
  std::optional<Session_server> m_session_srv;

  // Accessed from thread U only, this is the target of the current m_session_srv->async_accept(), of which there
  // is (in our example; and it is generally reasonable) one outstanding at a time.
  Session m_next_session;

public:
  Process(...) :
    flow::log::Log_context(...),
    m_worker(get_logger(), "srv_a_main")
  {
    // ...Load up IPC universe description around here....

    m_worker.start();
    m_worker.post([this]()
    {
      // We are in thread U.  From now on we do work here.

      Error_code err_code;
      m_session_srv.emplace(get_logger(), ...server_app_describing_us..., ...master_apps_as_cli...,
                            &err_code); // Can also let it throw (pass in null here).
      if (err_code)
      {
        m_session_srv.reset(); // It did not initialize, so it is useless.
        // Problems setting up the server are pretty common when first trying this: permissions, etc.
        // In any case we can't really continue at all.  See Session_server doc headers for details.
        // ...Get out, in whatever style makes sense for you....  Possibly just log and exit().
        return;
      }
      // else: Ready to accept sessions.  Thus:
      m_session_srv->async_accept(&m_next_session,
                                  ..., // Possibly more args, depending on the complexity of your setup.
                                  [this](const Error_code& err_code)
      {
        m_worker.post([this, err_code]() { on_new_session(err_code); });
      });
    }); // m_worker.post()
  } // Process()

private:
  void on_new_session(const Error_code& err_code)
  {
    if (err_code)
    {
      // async_accept() failed.  This is fairly unusual and worth noting/logging/alerting, as it indicates
      // some kind of setup or environmental or transient problem worth looking into.  For more detail about
      // how to best deal with it, see the Session_server::async_accept() doc header.  It is not fatal though.
      // ...
    }
    else
    {
      // Success!
      process_session(); // We'll explain what to do inside here, next.
      // m_next_session is empty again.
    }
    // Either way -- whether we just kicked off doing work on a new session, or something bad happened instead --
    // we will async-accept the next session whenever that may be ready to go.  Just like the initial one in ctor:
    m_session_srv->async_accept(&m_next_session,
                                ...,
                                [this](const Error_code& err_code)
    {
      m_worker.post([this, err_code]() { on_new_session(err_code); });
    });
  } // void on_new_session()
}; // class Process

To summarize: There are concurrent algorithms in play here, executing in interleaved async fashion via thread U (m_worker):

  • First, initialize IPC universe description; and start Session_server m_session_srv. Then begin loop:
  • Algorithm 1 (one running throughout): Ask m_session_srv to accept the next session in the background. Once ready, give it to algorithm 2; and repeat.
    • Kicked off in ctor.
    • Resumed in each on_new_session() invocation.
  • Algorithm 2 (1+ parallel ones, each started in successful on_new_session()): Do the actual work of communicating with the opposing process, long-term, until one of us exits/aborts/becomes unhealthy.
    • Kicked off in each successful on_new_session() invocation.
    • The meat of it is in process_session() which we will describe next.

What about the client side? It is simpler, as there are no concurrent sessions; at most just one at a given time; plus every connect attempt is a straightforward non-blocking call that either immediately succeeds or immediately fails (the latter usually meaning no server is active/listening).

class Process : // In session-client app B.
  public flow::log::Log_context
{
private:
  // ...Your IPC universe description members go here... namely at least:
  // - m_cli_app (the local Client_app); and
  // - m_srv_app (the opposing Server_app).

  // Alias your session-related types here.
  template<ipc::session::schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES,
           typename Mdt_payload = ::capnp::Void>
  using Session = Client_session_t<...knobs...>;

  // Your main loop (a/k/a thread U).  All your code will be post()ed here.
  // Wrapping it in std::optional<> or unique_ptr<> may be helpful, depending on what you need to do first.
  flow::async::Single_thread_task_loop m_worker;

  // Accessed from thread U only, this is -- at a given point in time, not counting non-blocking synchronous
  // calls into `Client_session::sync_connect()` -- sitting there empty (as-if default-constructed), either
  // (1) because *m_active_session is currently open (we moved the Session into it); or (2) because the last
  // `.sync_connect()` failed (server inactive/not listening), so we're waiting a bit to check again.
  Session m_next_session;

  // You'll see this in the next section.  This shall encapsulate the active Session and your other
  // IPC-involved algorithms and data.
  std::optional<App_session> m_active_session;

public:
  Process(...) :
    flow::log::Log_context(...),
    m_worker(get_logger(), "cli_b_main")
  {
    // ...Load up IPC universe description around here... namely at least:
    // - m_srv_app
    // - m_cli_app

    m_worker.start();
    m_worker.post([this]()
    {
      // We are in thread U.  From now on we do work here.
      open_next_session();
    });
  }

private:
  void open_next_session()
  {
    // When not connecting, m_next_session is empty.  We are about to connect, so let's get it into shape.
    // Note Session is (efficiently) move()able.
    m_next_session
      = Session(get_logger(), m_cli_app, m_srv_app,
                ..., // Possible channel-passive-open handler (discussed elsewhere).
                [this](const Error_code& err_code)
    {
      m_worker.post([this, err_code]()
      {
        // We'll get into this later; but this is relevant only once m_next_session.sync_connect() succeeds.
        // Spoiler alert: Because the handler must be supplied before sync_connect(), we pre-hook this up;
        // but the real handling of the error in an open session will be in App_session m_active_session.
        // We just forward it; otherwise it's not in the purview of class Process:
        m_active_session->on_session_hosed(err_code);
      });
    });

    // It's not doing anything; now we attempt connect.  Don't worry; that forwarding error handler does not
    // matter yet -- only once the sync_connect() succeeds.
    Error_code err_code;
    m_next_session.sync_connect(&err_code); // This is a synchronous, non-blocking call!
    if (err_code)
    {
      // sync_connect() failed.  Assuming everything is configured okay, this would only happen
      // if the opposing server is currently inactive.  Therefore it's not a great idea to immediately
      // sync_connect() again.  A reasonable plan is to schedule another attempt in 250-5000ms.
      // If the app need not do anything else in the meantime, it could just this_thread::sleep_*()
      // between attempts in a loop; otherwise use async scheduling, such as via flow::async's
      // m_worker.schedule_from_now() or boost.asio's timers.
      // Details left as an exercise to the reader.
      // ...;
      return;
    }
    // else: Success!  m_next_session is ready to go.
    process_session(); // We'll explain what to do inside here, next.
    // m_next_session is empty (as-if default-cted) again.
    // It'll stay that way, until the stuff kicked-off in process_session() detects the death of the
    // Session we just opened.  Then, we'll call open_next_session() again.
  } // open_next_session()
}; // class Process

Certainly both snippets leave open questions, and on both sides they concern process_session(). So let's get into that.

Organizing your objects: Sessions and lower

It's tempting to type out each Process::process_session(), but it's a better idea to first talk about the mysterious App_session seen in the last snippet. The organization of and around App_session is possibly the most valuable principle to communicate, as getting it right is arguably the least obvious challenge and the one that most eases your life later.

The proposed class App_session is your application's encapsulation of everything to do with a given open session. Clearly this will include the ipc::session Session itself, but also all your per-session data and algorithms. Its lifetime shall equal that of the Session remaining open. Some key points:

  • Just as – once open – ipc::session::Server_session and ipc::session::Client_session have identical public capabilities/APIs, so should the two mutually-facing App_sessions. (Of course, internally, they will have typically different/complementary duties, depending on what your design is.) In particular:
    • Each should take-over (via move-semantics) an already-open Session and manage it (e.g., opening channels; using SHM), until the session is hosed (next bullet point).
    • Each should be ready for that Session to report being hosed and then report that to the "parent" Process; which should at that point destroy the App_session and everything in it (including the taken-over Session) immediately.

Because App_session on each side is, conceptually, the same (as a black-box), ideally they should have the same APIs. This isn't any kind of hard requirement, as these are separate applications, and no polymorphism of any kind will actually be used; but it is a good rule of thumb and helps ensure a clean design.

Therefore we will begin by describing App_session. So what about process_session()? Answer: It's just a bit of logical glue between App_session and the mutually-different patterns in a server Process versus client Process. Thus we'll finish this part of the discussion by describing process_session(). On to App_session:

// Similar, or identical, API (and the listed logic/data) regardless of whether this is application Ap (session-server)
// or Bp (session-client).
class Process::App_session
{
private:
  // In our example we share a single worker thread with the parent Process.
  flow::async::Single_thread_task_loop& m_worker;

  // The open Session we take-over from Process.
  Session m_session;

  // How we inform Process that they should delete us.
  ipc::util::Task m_on_session_closed_func;

  // ...ATTENTION!  IPC-relevant (per-session) data/handles shall be listed here -- *after* the above,
  // especially after m_session.  Thus, they will be destroyed (when `*this` is destroyed) in the opposite order.
  // `Session` should always be destroyed after per-session items.  This is, at a minimum, the clean thing to do;
  // and in some cases -- including, likely, shared_ptr<> handles into SHM -- it will avoid memory corruptions and
  // crashes....
  //
  // Corollary: If you, for whatever reason, wrap m_session in a unique_ptr or std::optional<> or the like, and
  // decide to `.reset()` that thing manually (perhaps in an explicit ~App_session()), then make sure you
  // do so only *after* these data members are first destroyed/nullified.

public:
  App_session(..., flow::async::Single_thread_task_loop* worker) :
    m_worker(*worker)
  {
    // Do not start work yet.  They must call go().
  }

  template<typename On_session_closed_func>
  void go(Session&& session, On_session_closed_func&& on_session_closed_func)
  {
    m_session = std::move(session); // The Session inside Process has been emptied now.
    m_on_session_closed_func = std::move(on_session_closed_func);

    // ...Ready to work!  From this point on, do what needs to be done: open channels via m_session.open_channel(),
    // use already-opened init-channels, use m_session.session_shm() and m_session.app_shm(), etc.
    // Obviously the specifics will differ depending on whether this is application Ap or Bp... after all, they
    // are different applications!
  }

  // `Process` promises to call this (which would occur at most once), if m_session reports an error.
  // We'd do it ourselves, but it's done differently depending on client-versus-server, and we try to leave such
  // differing details outside App_session.
  void on_session_hosed(const Error_code& err_code)
  {
    // ...Do stuff, possibly nothing, knowing that m_session is unusable, and we can no longer open channels or
    // use per-session SHM.  Technically, `Channel`s (and thus `struc::Channel`s) operate independently of
    // the source Session -- if any -- but a Session being hosed indicates, at best, ill health of the opposing
    // process; therefore it is best to not touch them from this point on....

    // Then, inform Process via the callback it provided.  This should cause it to destroy *this.
    // (There are certainly other ways of organizing this, but this way works as well as any.)
    m_worker.post([this]() { m_on_session_closed_func(); });
  }
};

Simple, really. So what's the big deal? Answer: Mainly this: the per-session objects (where it says ATTENTION!) are encapsulated cleanly at the same level as the Session; they are thus destroyed/nullified at the same time (in fact, the per-session items just before the Session itself).
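For instance (member types and names hypothetical), the ATTENTION! area might end up looking like this, with every per-session item declared after m_session so that it is destroyed before it:

  // Inside App_session (hypothetical per-session members).
  Session m_session;
  // -- per-session items strictly below this line --
  std::optional<ipc::transport::struc::Channel<...>> m_chan; // E.g., from m_session.open_channel() or an init-channel.
  std::shared_ptr<Widget> m_widget_in_shm;                   // E.g., constructed in m_session.session_shm().
  // On `*this` destruction these run bottom-up: m_widget_in_shm, then m_chan, then m_session.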

So now all that's left is to hook this up to the rest of Process, in each application: process_session(). That's where things differ somewhat between the two sides – since at that point the session is almost-but-not-quite open, and the API differs somewhat between them. Client first:

// In session-client app Bp.
void Process::process_session()
{
  // App_session m_active_session is null; now we have the Session m_next_session it should take-over, so
  // we can construct the App_session.
  m_active_session.emplace(..., &m_worker);
  m_active_session->go(std::move(m_next_session), // Let it eat it, making it empty (as-if default-cted) again.
                       [this]()
  {
    // As promised -- destroy the App_session, together with all its stuff.
    m_active_session.reset();
    // ...and start trying to open the next one.
    open_next_session();
  });

  // That's it!  We've already set up m_active_session->on_session_hosed() back in open_next_session().
  // We provided the on-session-closed logic just above.  And in the client-side API the Session is open upon
  // successful sync_connect().  The rest is up to App_session in the meantime.
}

The server is somewhat more complex, as usual.

// In session-server app Ap.
class Process
{
private:
  using App_session_ptr = shared_ptr<App_session>;

  // This may or may not be necessary.  It is possible to just capture it in a lambda and not save it in
  // `*this` directly.  Still it might be nice; for instance m_app_sessions.size() can be logged/reported
  // periodically.
  unordered_set<App_session_ptr> m_app_sessions;

  void process_session()
  {
    // Similarly to the client, App_session requires:
    //   -# an open Session to take-over;
    //   -# that its ->on_session_hosed() is invoked at the proper time (when Session reports it);
    //   -# that it knows how to tell us it's ready to be destroyed (and that we do the right thing at that point).
    //
    // The first one just requires that we call m_next_session.init_handlers().
    // (The second one is accomplished via that call.)
    // The third one, same as on the client side, is an explicit function argument to App_session::go().

    // Create the App_session; note the ctor does not require `session` (go() does).
    auto app_session = make_shared<App_session>(..., &m_worker);
    m_app_sessions.insert(app_session); // Save it, since we track them in a set.  (May not be necessary.)

    // Last step required for Session becoming open, as go() needs.
    m_next_session.init_handlers(..., // Possible channel-passive-open handler (discussed elsewhere).
                                 [this, app_session]
                                   (const Error_code& err_code) mutable
    {
      m_worker.post([this, app_session = std::move(app_session), err_code]()
      {
        app_session->on_session_hosed(err_code); // Just forward it; same as in client.
      });
    });

    // And supply the on-app-session-closed logic.
    app_session->go(std::move(m_next_session), // Let it eat it, making it empty (as-if default-cted) again.
                    [this, app_session]() mutable
    {
      // It's going away.
      m_app_sessions.erase(app_session);
      // As promised -- destroy the App_session, together with all its stuff.
      app_session.reset();
      // That's it.  Process main loop already expects more (possibly concurrent) `Session`s arriving whenever.
    });
  } // process_session()
};

The separation of App_session ctor and go() ensures that the differently-ordered APIs around session opening on the server side versus client side are both supported. It is, for example, entirely possible to swap the bodies of App_session::go() (and all that they invoke, privately) between the two, thus swapping their duties – once the session is open. As we've emphasized before: The capabilities of a Session – once open – are identical, and the session-server versus session-client identity no longer matters to Flow-IPC. We've decided on a reasonably simple API for the user to use, so that App_session acts the same way.

What did we miss?

Answer: not much. What remains – assuming one grokked all of the above – can probably be left as an exercise for the reader. Nevertheless:

We've covered partner-triggered session closing, which is the hard part; what about locally-triggered? One way to approach it might be a SIGTERM/SIGINT handler outside Process that destroys said Process. On either side the App_session (or App_sessions) will be destroyed, and hence their stored open Sessions will be too. (The way we wrote the server-side code, app_session is captured in a callback lambda saved by m_worker, but m_worker too will be destroyed with all its stored closures.) On the server side, the Session_server will also be destroyed as part of this.
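For example, here is a minimal sketch of such a wrapper (the overall structure is hypothetical; a flow::async or boost.asio signal-wait would work equally well):

#include <signal.h>

int main(int argc, char const* argv[])
{
  // Block SIGTERM/SIGINT in the main thread before any other threads start, so that they can be awaited
  // synchronously below.  (Threads spawned later inherit this mask.)
  sigset_t sigs;
  sigemptyset(&sigs);
  sigaddset(&sigs, SIGTERM);
  sigaddset(&sigs, SIGINT);
  pthread_sigmask(SIG_BLOCK, &sigs, nullptr);

  {
    Process process(/* ...logger, etc.... */);
    int sig;
    sigwait(&sigs, &sig); // Sleep until SIGTERM or SIGINT arrives.
  } // Leaving this scope destroys `process`: its App_session(s), hence their Session(s);
    // on the server side also the Session_server; and m_worker with any closures it stored.

  return 0;
}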

We ignored the case of a session-server that speaks with 2+ different client applications (Bp, Cp, ...). A good way to handle that would be to have multiple App_session classes, one for each partner application. Process::process_session() can then query m_next_session.client_app() and create/save/activate whichever one is appropriate based on that.
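A hypothetical sketch of that dispatch (the App_session_b/App_session_c class names are ours, for illustration only):

// Inside server-side Process::process_session(), instead of a single App_session type:
const auto& cli_app = *m_next_session.client_app();
if (cli_app.m_name == m_cli_app_b.m_name)
{
  // ...create an App_session_b; hook up init_handlers()/go() exactly as in the case study....
}
else if (cli_app.m_name == m_cli_app_c.m_name)
{
  // ...same, but an App_session_c....
}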

We skipped over channel-opening considerations.

  • Init-channel object(s) or list(s) thereof could be passed-through into App_session in its constructor on each side.
  • We omitted the passive-open handler, on each side, in our case study. If it is necessary to use these, one would probably add App_session::on_passive_channel_open(Channel&&, Mdt_reader_ptr) and have the Session ctor (client-side) or Session::init_handlers() (server-side) forward to that (a hypothetical sketch follows just below). (We reiterate that, generally, init-channels are easier to set up, as they involve less asynchronicity on at least one side.)
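If you do go the passive-open route, a hypothetical sketch of that App_session addition might look as follows (parameter types as named in the bullet above; the Process-side forwarding would mirror the session-hosed forwarding in the case study):

// Inside App_session (either side); hypothetical.
void on_passive_channel_open(Channel&& new_channel, Mdt_reader_ptr mdt_reader)
{
  m_worker.post([this, new_channel = std::move(new_channel), mdt_reader = std::move(mdt_reader)]() mutable
  {
    // ...e.g., upgrade `new_channel` into a struc::Channel, consult the metadata in `mdt_reader`,
    // and store the result in a per-session data member (declared after m_session!)....
  });
}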

Lastly we omitted mention of app-scope (as opposed to session-scope) data. In short, assuming you understand scopes in general, the above case-study code would be expanded as follows:

  • Client side: Not much is different. All logic and data are still in App_session. Certainly the logic therein must be aware that some of the data members in App_session may refer to app-scope data – data that remain in SHM past the life of *this App_session – but the data members themselves would still live in App_session. In a session-client there is no other place for them, by their nature.
  • Server side: The session-server, most likely, would store app-scope (cross-session) data members outside App_session; perhaps in Process, or in some individual-Client_app-specific (possibly nested in Process) class. For example, if there is a memory cache object, it could be a data member in Process. In the opposing (client-side) App_session there might be a mirror data member.

Unlike per-session algorithms and data, wherein the server and client are equal from Flow-IPC's point of view, by definition of app-scope there is an asymmetry here between the two sides. Generally one tries to avoid this, but that's the nature of the beast in this case.
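For instance (names hypothetical; the app_shm(app) accessor is the one cited earlier on this page), a server-side Process serving client application Bp might hold:

// In server-side Process, outside any App_session: app-scope data pertaining to client application Bp.
// It survives individual sessions (though not the Session_server itself).
std::shared_ptr<Cache> m_cache_for_app_b; // E.g., = m_session_srv->app_shm(m_cli_app_b)->construct<Cache>();
// Each Bp-facing App_session might hold a mirror member referring to the same SHM-stored object.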


And that's that. If you want to use ipc::session – and unless you enjoy avoidable pain, you probably should – then the preceding pages provide more than enough information to proceed. What's left now are some odds and ends that can be quite important in production but are arguably not going to be a mainstream worry during the bulk of development and design. The next couple (or so) pages deal with such ipc::session topics.

The next page is: Safety and Permissions.


MANUAL NAVIGATION: Preceding Page - Next Page - Table of Contents - Reference