Flow-IPC 1.0.2
Flow-IPC project: Full implementation reference.
Here we discuss the important task of handling session-ending events (errors); and recommend an approach to structuring your subject-to-IPC objects within your application code. (Or go back to the preceding page: Sessions: Opening Channels. The penultimate-to-us page may be even more directly relevant, however.)
First, a reminder: A session is a conversation between process A, of application Ap, and process B of Bp. Once open, the two processes are equal in their capabilities (in the context of IPC).
Thus a session's lifetime is (omitting non-IPC-related setup/teardown for simplicity of discussion) the maximum time period when both the given processes A and B are up. Therefore the session is closed at the earliest time such that one of the processes A or B exits gracefully (exit()), exits ungracefully (abort()), or enters an unhealthy/stuck state of some kind (zombification).
Therefore, a session's closure in a given process P has two basic trigger types:
- Locally-triggered: P itself decides to exit(), for example/typically due to receiving SIGTERM or equivalent. (We assume a process never "wants" to exit ungracefully or zombify, and even if it's retained some measure of control in that event, it cannot be worried about gracefully destroying Session objects on purpose.) In this case P will want to close the relevant Session (etc.; details below) and then exit. I.e., there's no need to start another session.
- Partner-triggered: The opposing process is detected to be exit()ing, abort()ing, or has become zombified/unhealthy. In this case P should destroy the now-useless Session and (if a session-client) create a new Session and attempt to connect to the session-server. A session-client engages in at most one session (IPC conversation, IPC context) at a time.

That said:
In Sessions: Setting Up an IPC Context, we talked plenty about how to open a session, but we almost completely skipped over closing it. In particular, when it came time to supply a session error handler (in the Client_session constructor and Server_session::init_handlers()), our example code said ... and punted to the present Manual page. Explaining this, in and of itself, is straightforward enough; and we will do so below.
However conceptually that topic touches on another, much less clear-cut, discussion: How to organize one's application code (on each side) in such a way as to avoid making the program an unmaintainable mess, particularly around session deinitialization time. We did, briefly, mention that after the session is closed, on the client one would attempt to open another, identically; and on the server potentially one would do so immediately after opening a session, so that multiple sessions at a time could be ongoing. How to make all this happen in code? How to make the session-client and session-server code as mutually symmetrical as possible? How to keep one session's IPC resources (channels, SHM arenas, SHM-allocated objects) segregated in code from another session's? That is the topic here. Please understand that this is informal advice, which you're free to ignore or modify for yourself. In many cases tutorial-style documentation of a library like this would omit such advice; but we felt it important to help you save pain and time. As long as you understand the reasoning here, we feel this page's mission is accomplished.
So let's get into the relatively easy topic: the API for closing an open session.
- If the closing trigger is local, then one simply destroys the Session object (its destructor getting called), or in the case of the server side potentially 1+ Session object(s). That's it. Then it can exit the process. On the server side one also destroys the Session_server object (its destructor getting called). The order does not matter; for example the Session_server can be destroyed first.
- If the closing trigger is the peer partner, then one... also simply destroys the one, relevant Session object. Then one either attempts to start another session (if a session-client, or a server acting equally to a client), or does nothing (if acting as a server that can accept an arbitrary number of concurrent sessions anyway).
Any Session or Session_server destructor has essentially synchronous effects. There is no lagging activity of which one needs to be particularly aware.
As for the freeing of the underlying resources (most notably SHM-stored data):

- In the case of graceful closure (exit()), resources are freed as soon as no sessions needing them are up (nor, for certain cross-session resources, can start in the future) but no earlier. In particular the major points where this is achieved are: the Session destructor (any per-session resources for that session); the Session_server destructor (any cross-session resources). In most cases both resource acquisition and resource cleanup are performed (internally) in session-server code as opposed to session-client. (As of this writing the exception to this is SHM-jemalloc, which does things in a more equitable/symmetrical fashion, since internally each side creates/controls its own SHM arena, a/k/a SHM pool collection, from which the other side "borrows" individual allocated objects.)
- In the case of ungraceful closure (abort()): The RAM may leak temporarily; but it will be cleaned up zealously once a process of the same application Ap/Bp next starts. In most cases left-behind-due-to-abort items are cleaned once the Session_server in application Ap (the session-server-assigned App) is created, synchronously in its constructor. This is safe, since only one session-server for given app Ap is to be active at a time. (As of this writing, again, SHM-jemalloc is the exception to this. In its case any process of application Ap or Bp shall regularly clean up any SHM arenas/pools created by those processes of its own application Ap or Bp, respectively, that are reported by the OS to be no longer running. The mechanism for checking this, in Linux, is to "send" fake signal 0 to the given process-ID and observe the result of the syscall.)

Okay, but – in the case of partner-triggered session closure – how do we detect that the partner has indeed closed it, either gracefully or ungracefully, or has become unhealthy/zombified? This is more complex than destroying the Session (plus possibly other Session(s) and/or Session_server) that follows it, but it's also done via a well-defined API. To wit: this is the error handler which we've omitted in (Sessions: Setting Up an IPC Context) examples so far. As shown in that Manual page, the error handler is supplied to the Client_session constructor or Server_session::init_handlers(). Recall these both must be called strictly before the session is officially considered opened.
A Session keeps track of partner-triggered session closure, which we term the session becoming hosed (a/k/a session-hosing conditions). Yes, really. Once detected, it immediately triggers the error handler you supplied. Since it is potentially important for your peace of mind, at least the following (internal) conditions will lead to session-hosing:
- A graceful closure is (internally) detected when the opposing side destroys its Session object, typically on the way to a graceful exit().
- An ungraceful closure would (internally) involve a TCP-RST-like error condition (ECONNRESET, EPIPE) being received from the opposing side; usually indicating an abort() or similar.

In any case an Error_code passed to your error handler will indicate the triggering condition, while logs (at least WARNING message(s) by Flow-IPC) will contain further details. The Error_code itself is for your logging/reporting/alerting purposes only; it is not advised to make algorithmic decisions based on it. If the error handler was invoked, the session is hosed, period.
We defer an example of registering an error handler until the following section, as it's very much relevant to the discussion there.
The Manual author(s) must stress that much of the following does not represent hard rules or even (necessarily) verbatim recommendations, in particular text from this point on. The point is to encourage certain basic principles which we feel shall make your life, and that of maintainers of your code, easier than otherwise. Undoubtedly you will and should modify the below recommendations and especially code snippets to match your specific situation. That said – it would be unwise to ignore it entirely.
The basic problem we are solving here is this: You have your algorithms and data structures, in your meta-application Ap-Bp, and in service of these it is required that you perform IPC. To perform IPC, in the ipc::session paradigm, Flow-IPC requires that you use its abstractions, most notably those of Session_server, Server_session (on open, just Session), and Client_session (ditto). How to organize your code given your existing algorithms/structures and the aforementioned Flow-IPC abstractions?
The key point is that each of your own (IPC-relevant) data structures (and probably related algorithms) should be very cleanly and explicitly classified to have a particular scope, a/k/a lifetime. Generally, the scope of each (IPC-relevant) datum is one of the following:
- Session-scope: Such a datum's life begins no earlier than the opening of a given Session (A-B process-to-process conversation) and ends no later than the (opened) Session's destruction. If the session ends (as earlier noted, when the process exits – or, far more interestingly, when the opposing process exits or dies or gets zombified), then this datum is no longer relevant by definition: The process responsible for ~half of the algorithm that uses the datum is simply out of commission permanently, at a minimum.
- App-scope: Such a datum may outlive any one Session but not the Session_server. App-scope data, if they even exist in your meta-application at all, do not simply apply to all sessions to start via this Session_server. Instead each cross-session datum must pertain to a particular partner (client) application (not process – that'd be per-session). For example, if your server Ap supports (as listed in ipc::session::Server_app::m_allowed_client_apps) two possible partner applications Bp and Cp, then a given datum must be classified (by you) to be either per-app-B or per-app-C. Its life begins no earlier than the first session with the given Client_app becoming open (which can occur only after Session_server construction). Its life ends no later than the Session_server's destruction.

About what kinds of data are we even speaking? Answer:
- A structured out-message: struc::Msg_out (a/k/a ipc::transport::struc::Channel::Msg_out) is a container of sorts, living usually in SHM. If you chose to put it into a session-scope SHM arena, it is session-scope. If you chose (an) app-scope SHM arena, it is app-scope. The details of this are out of our scope here (it's an ipc::transport thing), but rest assured specifying which one you want is not difficult and involves no thinking about SHM, specifically.
- A handle to a SHM-stored object – shared_ptr<T> a/k/a Arena::Handle<T> – is often, but not necessarily, session-scope.
- A C++ data structure allocated directly in SHM: Similarly to struc::Msg_out, but more explicitly specified on your part, this (potentially container-involving) data structure will live in a SHM arena of your choice. Spoiler alert (even though this is an ipc::transport topic): Session::session_shm()->construct<T>() = session-scope; Session::app_shm()->construct<T>() (or equivalently Session_server::app_shm(app)->construct<T>(), where app is a Client_app) = app-scope. If you can get away with using Msg_*s exclusively, it is a good idea to do so. (There are, certainly, reasons to go beyond it.)

Regarding the Msg_*s mentioned above, it is often entirely possible – and reasonable, as it tends to simplify algorithms – to only store them on-the-go, on the stack, letting them go out of scope after (as sender) creating/building or (as receiver) receiving/reading them. Such messages should always be session-scope but, from your point of view, don't really apply to this discussion at all. E.g., suppose you receive a message with the latest traffic stats for your HTTP server; synchronously report these to some central server (or whatever); and that's it. In that case your handler would receive an ipc::transport::struc::Channel::Msg_in_ptr (a shared_ptr<Msg_in>), read from it via capnp-generated accessors, and then let the Msg_in_ptr lapse right there.
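To make the two scope choices concrete, here is a minimal sketch of the two construct<T>() flavors mentioned above; it assumes a SHM-classic-style Session/Session_server setup as in the earlier Manual pages, and Widget is a hypothetical payload type.

```cpp
// `session` is an open Session; on the server side `session_srv` is the Session_server.
struct Widget { int m_x = 0; }; // Hypothetical; a real type would use SHM-aware allocators for any containers.

// Session-scope: gone no later than this particular Session's destruction.
auto per_session_widget = session.session_shm()->construct<Widget>();
per_session_widget->m_x = 42;

// App-scope: may outlive this particular session; gone no later than Session_server destruction.
auto per_app_widget = session.app_shm()->construct<Widget>();
// Or, equivalently on the server side, for a given Client_app `app`:
// auto per_app_widget_2 = session_srv.app_shm(app)->construct<Widget>();
```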
So, bottom line, you've got Session (possibly multiple), possibly a Session_server, Channels and/or struc::Channels, and then IPC-shared data (structured messages and/or C++ data structures). How to organize them in your code on each side? And, in a strongly related topic, what's the best way to hook up the Session error handler? Our recommendations follow. Again: it's the communicated principles that matter; the exact code snippets are an informal guide. Or, perhaps, it can be looked at as a case study (of a reasonably complex situation) which you can morph for your own needs.
Let's have a (singleton, really) class Process {} just to bracket things nicely. (It's certainly optional but helps to communicate subsequent topics.) It applies to both the session-server app A and session-client app B. To do any ipc::session-based IPC, you'll definitely need to set up your Apps, Client_apps, and Server_apps – the IPC universe description, exactly as described in that link. As it says, run the same code at the start of each Process lifecycle (perhaps in its constructor). Store them in the Process as data members as needed, namely:
- In a session-client's Process:
  - The Client_app (describing us) is needed each time a new Session is created so as to connect to the server; thus at least at startup and then whenever a session ends, and thus we need a new Session.
  - The Server_app (describing the opposing server) is needed at the exact same time also.
- In a session-server's Process:
  - The Server_app is needed when constructing the Session_server. This is normally done only once, and it is reasonable to do so in the Process constructor; but if that action is delayed beyond that point, then you'll need the Server_app available and may wish to keep it as a data member in Process.
  - So is the Client_app list, which we in the past called MASTER_APPS_AS_CLI.
  - Storing each Client_app that is listed in ipc::session::Server_app::m_allowed_client_apps – as individual data members (like Client_app m_cli_app_b and Client_app m_cli_app_c) – may be helpful for multi-client-application setups (i.e., if we are Ap, and Bp and Cp may establish sessions with us). For example, when deciding which part of your session-server application shall deal with a newly-opened Session, Session::client_app() identifies the actual Client_app (you may also have received it, e.g. as connecting_cli, as a handler argument). So then you could do, like, if (session.client_app()->m_name == m_cli_app_b.m_name) { ...start the app Bp session... } else ...etc....

Whatever you do store should be listed in the Process { private: } section first, as subsequent data items will need them (sketched just below).
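In code, that section might begin as follows. This is only a sketch: the member names mirror the discussion above (MASTER_APPS_AS_CLI, m_cli_app_b, etc.) and are otherwise illustrative; the master-set type is the one accepted by the Session_server constructor.

```cpp
class Process
{
public:
  // ... ctor runs the IPC-universe-description setup (see Sessions: Setting Up an IPC Context) ...
private:
  // IPC universe description; listed first, as subsequent members refer to these.
  // A session-client typically needs just its own Client_app plus the Server_app; a session-server
  // needs the Server_app plus the master Client_app set (and, optionally, each Client_app individually).
  const ipc::session::Client_app m_cli_app_b;                       // Application Bp.
  const ipc::session::Client_app m_cli_app_c;                       // Application Cp (multi-client-app setups).
  const ipc::session::Client_app::Master_set m_master_apps_as_cli;  // The MASTER_APPS_AS_CLI from earlier pages.
  const ipc::session::Server_app m_srv_app;                         // Application Ap.

  // ... Session_server / Session / App_session members follow (see the next snippets) ...
};
```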
Now it's time to set up some session(s). The more difficult task is on the server side; let us assume you do need to support multiple sessions concurrently. In this discussion we will assume your application's main loop is single-threaded. Your process will need a main-loop thread, and we will continue to assume the same proactor pattern – with Flow-IPC internally starting threads as needed – as we have been (see Asynchronicity and Integrating with Your Event Loop for discussion including other possibilities). In this example we will use the flow::async API (as stated in the afore-linked Manual page, direct boost.asio use is similar, just with more boilerplate). So, something like the following would work. Note we use the techniques explained in Sessions: Setting Up an IPC Context (and, to a lesser extent, Sessions: Opening Channels), but now in the context of our recommended organization of the program.
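The following is a sketch of that server-side skeleton, offered as an illustration rather than a definitive implementation. The Session_server and Session aliases are those set up in Sessions: Setting Up an IPC Context; m_worker, m_session_srv, on_new_session(), and process_session() are our own names, referenced in the summary below; logging, error handling, and the IPC-universe setup (passed in here for brevity) are trimmed.

```cpp
class Process // Server side.
{
public:
  Process(flow::log::Logger* logger,
          const ipc::session::Server_app& srv_app,
          const ipc::session::Client_app::Master_set& master_apps_as_cli) :
    m_logger(logger),
    m_worker(m_logger, "worker"), // Thread U: our single main-loop thread.
    m_srv_app(srv_app), m_master_apps_as_cli(master_apps_as_cli),
    m_session_srv(m_logger, m_srv_app, m_master_apps_as_cli)
  {
    m_worker.start();
    m_worker.post([this]() { accept_next_session(); }); // Algorithm 1 (the accept loop) begins.
  }

private:
  void accept_next_session()
  {
    // Ask m_session_srv to accept the next session in the background.
    m_session_srv.async_accept(&m_next_session, [this](const ipc::Error_code& err_code)
    {
      // We are in some unspecified Flow-IPC thread: hop back to thread U.
      m_worker.post([this, err_code]() { on_new_session(err_code); });
    });
  }

  void on_new_session(const ipc::Error_code& err_code)
  {
    if (!err_code)
    {
      process_session(std::move(m_next_session)); // Algorithm 2: this one session's long-term work.
    } // else { ...log; possibly back off... }
    accept_next_session(); // Either way: back to accepting the next session.
  }

  void process_session(Session&& session); // Shown later on this page.

  flow::log::Logger* const m_logger;
  flow::async::Single_thread_task_loop m_worker;

  const ipc::session::Server_app m_srv_app;
  const ipc::session::Client_app::Master_set m_master_apps_as_cli;

  Session_server m_session_srv;
  Session m_next_session; // Target of the currently-pending async_accept().
};
```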
To summarize: There are concurrent algorithms in play here, executing in interleaved async fashion via thread U (m_worker):

- Algorithm 1: Construct Session_server m_session_srv. Then begin loop: Ask m_session_srv to accept the next session in the background. Once ready, give it to algorithm 2; and repeat. In the code above, that hand-off is the on_new_session() invocation.
- Algorithm 2 (a new instance starting with each on_new_session()): Do the actual work of communicating with the opposing process, long-term, until one of us exits/aborts/becomes unhealthy. Each on_new_session() invocation kicks this off via process_session(), which we will describe next.

What about the client side? It is simpler, as there are no concurrent sessions; at most just one at a given time; plus every connect attempt is a straightforward non-blocking call that either immediately succeeds or immediately fails (the latter usually meaning no server is active/listening).
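Here is a sketch of the client-side counterpart. Again: Session is the Client_session alias from the setup pages; App_session is our own per-session class, introduced below; everything else is illustrative.

```cpp
class Process // Client side.
{
public:
  Process(flow::log::Logger* logger,
          const ipc::session::Client_app& cli_app, const ipc::session::Server_app& srv_app) :
    m_logger(logger),
    m_worker(m_logger, "worker"), // Thread U.
    m_cli_app(cli_app), m_srv_app(srv_app)
  {
    m_worker.start();
    m_worker.post([this]() { process_session(); }); // At most one session at a time: just start the first one.
  }

private:
  // Create a Session, try to connect it, and -- on success -- hand it to a new App_session.
  // Invoked at startup and then again each time the current session ends (or a connect attempt fails).
  void process_session(); // Shown later on this page.

  flow::log::Logger* const m_logger;
  flow::async::Single_thread_task_loop m_worker;

  const ipc::session::Client_app m_cli_app; // Us.
  const ipc::session::Server_app m_srv_app; // The server we connect to.

  std::optional<App_session> m_app_session; // Everything to do with the current session, if any.  (App_session: below.)
};
```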
Certainly both snippets leave open questions, and on both sides they concern process_session(). So let's get into that.
It's tempting to type out each Process::process_session(), but it's a better idea to talk about the mysterious App_session seen in the last snippet. The organization of and around App_session is possibly the most valuable principle to communicate, as doing the right stuff in this context is arguably the least obvious and most life-easing challenge.
The proposed class App_session is your application's encapsulation of everything to do with a given open session. Clearly this will include the ipc::session Session itself, but also all your per-session data and algorithms. Its lifetime shall equal that of the Session remaining open. Some key points:
- Both the session-server side and the session-client side should have their own, conceptually equal, App_sessions. (Of course, internally, they will have typically different/complementary duties, depending on what your design is.) In particular:
  - Each should take over an opened Session and manage it (e.g., opening channels; using SHM), until the session is hosed (next bullet point).
  - Each should watch for its Session to report being hosed and then report that to the "parent" Process; which should at that point destroy the App_session and everything in it (including the taken-over Session) immediately.
- Because App_session on each side is, conceptually, the same (as a black-box), ideally they should have the same APIs. This isn't any kind of hard requirement, as these are separate applications, and no polymorphism of any kind will actually be used; but it is a good rule of thumb and helps ensure a clean design.
Therefore we will begin by describing App_session. So what about process_session()? Answer: It's just a bit of logical glue between App_session and the mutually-different patterns in a server Process versus client Process. Thus we'll finish this part of the discussion by describing process_session(). On to App_session:
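Here's a sketch. It assumes the process_session() glue (shown further below) hands the App_session an already-open Session, so both sides can use literally the same class; the member contents are illustrative placeholders.

```cpp
// App_session: everything to do with one open session, on either side.
class App_session
{
public:
  // Adopt an open Session (client: after successful sync_connect(); server: after init_handlers()).
  App_session(flow::log::Logger* logger, Session&& session) :
    m_logger(logger), m_session(std::move(session))
  {}

  // Begin the long-term IPC work: open channels, build SHM-backed structures, exchange messages....
  // Runs until the session is reported hosed, at which point the parent Process destroys *this.
  void go()
  {
    // ... per-session algorithms: see Sessions: Opening Channels and the ipc::transport pages ...
  }

private:
  flow::log::Logger* const m_logger;

  // ATTENTION!  All per-session objects live here, at the same level as m_session:
  // Channels / struc::Channels opened in this session, SHM handles (shared_ptr<T> a/k/a
  // Arena::Handle<T>), long-lived Msg_out / Msg_in_ptr objects, and so on.
  // std::vector<My_structured_channel> m_channels;
  // ...

  Session m_session; // Declared last, so it is destroyed first -- just before the per-session objects above.
};
```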
Simple, really. So what's the big deal? Answer: Mainly the big deal is: the per-session objects (where it says ATTENTION!) are encapsulated cleanly at the same level as the Session: they are thus destroyed/nullified at the same time (in fact, the latter before the former).
So now all that's left is to hook this up to the rest of Process, in each application: process_session(). That's where things are somewhat different – since the session is open almost-but-not-quite, and the API somewhat differs between the two. Client first:
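A sketch follows; this is also where the session error handler, promised earlier, finally appears. (Here the Process registers the error handler and reacts directly; routing it through an App_session method would work equally well.) Logging and a reconnect back-off timer are omitted for brevity.

```cpp
void Process::process_session() // Client side.
{
  // Construct the Client_session; its constructor is where the error handler is supplied.
  Session session(m_logger, m_cli_app, m_srv_app,
                  [this](const ipc::Error_code& err_code)
  {
    // Error handler: the session is hosed.  We are in some unspecified Flow-IPC thread; hop to thread U.
    m_worker.post([this, err_code]()
    {
      // ... log WARNING including err_code ...
      m_app_session.reset();  // Destroy the App_session => its Session => all per-session objects.
      process_session();      // And simply try to start the next session.
    });
  });

  // Connect: a non-blocking, synchronous call -- it succeeds or fails immediately.
  ipc::Error_code err_code;
  session.sync_connect(&err_code);
  if (err_code)
  {
    // Typically: no server is listening.  Schedule a retry of process_session() (omitted).
    return;
  }

  // The session is open.  Hand it -- and all per-session responsibility -- to a fresh App_session.
  m_app_session.emplace(m_logger, std::move(session));
  m_app_session->go();
}
```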
The server side is somewhat more complex, as usual.
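A sketch, again illustrative: here the error handler goes to init_handlers(); each App_session is kept in a map m_app_sessions keyed by a simple counter m_last_session_id – both hypothetical members of the server-side Process.

```cpp
void Process::process_session(Session&& session) // Server side.
{
  // Assumed Process members (server side):
  //   std::unordered_map<unsigned int, App_session> m_app_sessions;
  //   unsigned int m_last_session_id = 0;
  const auto session_id = ++m_last_session_id;

  // Supply the error handler; once init_handlers() returns, the session counts as officially open.
  session.init_handlers([this, session_id](const ipc::Error_code& err_code)
  {
    // Error handler: *this* session is hosed.  Hop to thread U and destroy everything to do with it;
    // other sessions -- and the accept loop -- carry on unaffected.
    m_worker.post([this, session_id, err_code]()
    {
      // ... log WARNING including err_code ...
      m_app_sessions.erase(session_id); // Destroys that App_session => its Session => per-session objects.
    });
  });

  // Hand the now-open session to its own App_session and kick it off.
  const auto result = m_app_sessions.try_emplace(session_id, m_logger, std::move(session));
  result.first->second.go();
}
```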
The separation of the App_session ctor and go() ensures that the differently-ordered APIs around session opening on the server side versus client side are both supported. It is, for example, entirely possible to swap the bodies of App_session::go() (and all that they invoke, privately) between the two, thus swapping their duties – once the session is open. As we've emphasized before: The capabilities of a Session – once open – are identical, and the session-server versus session-client identity no longer matters to Flow-IPC. We've decided on a reasonably simple API for the user to use, so that App_session acts the same way.
What is left to discuss? Answer: not much. What remains – assuming one grokked all of the above – can probably be left as an exercise for the reader. Nevertheless:
We've covered partner-triggered session closing, which is the hard part; what about locally-triggered? One way to approach it might be a SIGTERM/SIGINT handler outside Process that destroys said Process. On either side the App_session (or App_sessions) will be destroyed, hence their stored open Sessions will too. (The way we wrote the code on the server side, each App_session is stored directly in a Process data member; and m_worker too will be destroyed with all its stored closures.) On the server side, the Session_server will also be destroyed as part of the process.
We ignored the case of a session-server that speaks with 2+ different client applications (Bp, Cp, ...). A good way to handle that would be to have multiple App_session classes, one for each partner application. Process::process_session() can then query m_next_session.client_app() and create/save/activate whichever one is appropriate based on that.
We skipped over channel-opening considerations.

- Init-channels (see Sessions: Opening Channels) would most naturally be handed to, and stored by, App_session in its constructor on each side.
- For channels opened on-demand, one could add a method like App_session::on_passive_channel_open(Channel&&, Mdt_reader_ptr) and have the Session ctor (client-side) or Session::init_handlers() (server-side) forward to that, as in the sketch just below. (We reiterate that, generally, init-channels are easier to set up, as it involves less asynchronicity on at least one side.)
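For instance, on the client side the forwarding lambda might look like this sketch. The exact constructor / init_handlers() overloads that accept a passive-channel-open handler, and the handler's parameter types, are in the reference documentation; on_passive_channel_open() is our own hypothetical App_session method.

```cpp
// Passed as the passive-channel-open handler (to the Client_session ctor overload, client-side,
// or to init_handlers(), server-side); it merely hops to thread U and forwards to App_session.
auto on_passive_channel_open_func = [this](auto&& new_channel, auto&& new_channel_mdt)
{
  m_worker.post([this,
                 new_channel = std::move(new_channel),
                 new_channel_mdt = std::move(new_channel_mdt)]() mutable
  {
    if (m_app_session) // Client side: the one current session.  (Server side: look up the right one.)
    {
      m_app_session->on_passive_channel_open(std::move(new_channel), std::move(new_channel_mdt));
    }
  });
};
```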
Lastly we omitted mention of app-scope (as opposed to session-scope) data. In short, assuming you understand scopes in general, the above case-study code would be expanded as follows:

- On the session-client side, such data would still be stored in App_session. Certainly the logic therein must be aware that some of the data members in App_session would potentially refer to app-scope data, meaning data that would remain in SHM past the life of *this App_session, but the data members would still live in App_session. In a session-client there is no other place for them by their nature.
- On the session-server side, such data would be stored outside any one App_session; perhaps in Process, or in some individual-Client_app-specific (possibly nested in Process) class. For example, if there is a memory cache object, it could be a data member in Process. In the opposing (client-side) App_session there might be a mirror data member.
- Unlike per-session algorithms and data, wherein the server and client are equal from Flow-IPC's point of view, by definition of app-scope there is an asymmetry here between the two sides. Generally one tries to avoid this, but that's the nature of the beast in this case.
And that's that. If you want to use ipc::session – and unless you enjoy avoidable pain, you probably should – then the preceding pages provide more than enough information to proceed. What's left now are some odds and ends that can be quite important in production but are arguably not going to be a mainstream worry during the bulk of development and design. The next couple (or so) pages deal with such ipc::session topics.
The next page is: Safety and Permissions.