Flow-IPC 1.0.0
Flow-IPC project: Public API.
ipc::session Namespace Reference

Flow-IPC module providing the broad lifecycle and shared-resource organization – via the session concept – in such a way as to make it possible for a given pair of processes A and B to set up ipc::transport structured- or unstructured-message channels for general IPC, as well as to share data in SHared Memory (SHM). More...

Namespaces

namespace  error
 Namespace containing the ipc::session module's extension of boost.system error conventions, so that this API can return codes/messages from within its own set of error codes/messages.
 
namespace  shm
 ipc::session sub-namespace that groups together facilities for SHM-backed sessions, particularly augmenting Client_session, Server_session, and Session_server classes by providing SHM-backed zero-copy functionality.
 
namespace  sync_io
 sync_io-pattern counterparts to async-I/O-pattern object types in parent namespace ipc::session.
 

Classes

struct  App
 A description of an application in this ipc::session inter-process communication universe. More...
 
struct  Client_app
 An App that is used as a client in at least one client-server IPC split. More...
 
class  Client_session_mv
 Implements Session concept on the Client_app end: a Session_mv that first achieves PEER state by connecting to an opposing Session_server_mv via Client_session_mv::sync_connect(). More...
 
struct  Server_app
 An App that is used as a server in at least one client-server IPC split. More...
 
class  Server_session_mv
 Implements Session concept on the Server_app end: a Session that is emitted in almost-PEER state by local Session_server accepting a connection by an opposing Client_session_mv::sync_connect(). More...
 
class  Session
 A documentation-only concept defining the local side of an IPC conversation (session) with another entity (typically a separate process), also represented by a Session-implementing object, through which one can easily open IPC channels (ipc::transport::Channel), among other IPC features. More...
 
class  Session_mv
 Implements the Session concept when it is in PEER state. More...
 
class  Session_server
 To be instantiated typically once in a given process, an object of this type asynchronously listens for Client_app processes, each of which wishes to establish a session with this server process; it emits the resulting Server_session objects locally. More...
 

Typedefs

using Shared_name = util::Shared_name
 Convenience alias for the commonly used type util::Shared_name.
 
using Session_token = transport::struc::Session_token
 Convenience alias for the commonly used type transport::struc::Session_token.
 
template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload = ::capnp::Void>
using Server_session = Server_session_mv< Server_session_impl< S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload > >
 A vanilla Server_session with no optional capabilities. More...
 
template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload = ::capnp::Void>
using Client_session = Client_session_mv< Client_session_impl< S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload > >
 A vanilla Client_session with no optional capabilities. More...
 

Functions

void ensure_resource_owner_is_app (flow::log::Logger *logger_ptr, const fs::path &path, const App &app, Error_code *err_code=0)
 Utility, used internally but exposed in public API in case it is of general use, that checks that the owner of the given resource (at the supplied file system path) is as specified in the given App (App::m_user_id et al). More...
 
void ensure_resource_owner_is_app (flow::log::Logger *logger_ptr, util::Native_handle handle, const App &app, Error_code *err_code=0)
 Identical to the other ensure_resource_owner_is_app() overload but operates on a pre-opened Native_handle (a/k/a handle, socket, file descriptor) to the resource in question. More...
 
std::ostream & operator<< (std::ostream &os, const App &val)
 Prints string representation of the given App to the given ostream. More...
 
std::ostream & operator<< (std::ostream &os, const Client_app &val)
 Prints string representation of the given Client_app to the given ostream. More...
 
std::ostream & operator<< (std::ostream &os, const Server_app &val)
 Prints string representation of the given Server_app to the given ostream. More...
 
template<typename Client_session_impl_t >
std::ostream & operator<< (std::ostream &os, const Client_session_mv< Client_session_impl_t > &val)
 Prints string representation of the given Client_session_mv to the given ostream. More...
 
template<typename Server_session_impl_t >
std::ostream & operator<< (std::ostream &os, const Server_session_mv< Server_session_impl_t > &val)
 Prints string representation of the given Server_session_mv to the given ostream. More...
 
template<typename Session_impl_t >
std::ostream & operator<< (std::ostream &os, const Session_mv< Session_impl_t > &val)
 Prints string representation of the given Session_mv to the given ostream. More...
 
template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload >
std::ostream & operator<< (std::ostream &os, const Session_server< S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload > &val)
 Prints string representation of the given Session_server to the given ostream. More...
 

Detailed Description

Flow-IPC module providing the broad lifecycle and shared-resource organization – via the session concept – in such a way as to make it possible for a given pair of processes A and B to set up ipc::transport structured- or unstructured-message channels for general IPC, as well as to share data in SHared Memory (SHM).

See namespace ipc doc header for an overview of Flow-IPC modules including how ipc::session relates to the others. Then return here. A synopsis follows:

It is possible to use the structured layer of ipc::transport, namely the big daddy transport::struc::Channel, without any help from ipc::session. (It's also possible to establish unstructured transport::Channel and the various lower-level IPC pipes it might comprise.) And, indeed, once a given struc::Channel (or Channel) is "up," one can and should simply use it to send/receive stuff. The problem that ipc::session solves is in establishing the infrastructure that makes it simple to (1) open new struc::Channels or Channels or SHM areas; and (2) properly deal with process lifecycle events such as the starting and ending (gracefully or otherwise) of the local and partner process.

Regarding (1), in particular (just to give a taste of what one means): while ipc::transport lets one do whatever one wants, with great flexibility, ipc::session establishes conventions for all these things – typically hiding/encapsulating them away.

Regarding (2) (again, just giving a taste): ipc::session establishes conventions for these lifecycle matters and provides key utilities such as kernel-persistent resource cleanup.

Basic concepts

An application is, at least roughly speaking, 1-1 with a distinct executable presumably interested in communicating with another such executable. A process, a/k/a instance, is an application that has begun execution at some point. In the IPC universe that uses ipc, on a given machine, there is some number of distinct applications. If two processes A and B want to engage in IPC, then their apps A and B comprise a meta-application split, in that (in a sense) they are one meta-application that has been split into two.

A process that is actively operating, at least in the IPC sense, is called an active process. A zombie process in particular is not active, nor is one that has not yet initialized IPC or has shut it down, typically during graceful (or otherwise) termination.

In the ipc::session model, any two processes A-B must establish a session before engaging in IPC. This session comprises all shared resources pertaining to those two processes engaging in IPC. (Spoiler alert: at a high level these resources comprise, at least, channels of communication – see transport::Channel – between them; and may also optionally comprise SHared Memory (SHM).) The session ends when either A terminates or B terminates (gracefully or otherwise), and no earlier. The session begins at roughly the earliest time when both A and B are active simultaneously. (Either process may well run before IPC and after IPC, but for purposes of our discussion we'll ignore such phases as irrelevant, for simplicity of exposition.) An important set of shared resources is therefore per-session shared resources. (However, as we're about to explain, it is not the only type.)

In the ipc::session model, in a given A-B split, one must be chosen as the client, the other as the server; let's by convention here assume they're always listed in server-client order. The decision as to which is which must be made at compile time, and it is an important one when using ipc::session and IPC in that model. While a complete discussion of making that decision is outside the scope here, the main practical point – detailed below – is that in the A-B split (A the server, B the client) there may be only one active instance of A at a time, while there may be any number of concurrently active instances of B.

Brief illustration

Suppose A is a memory cache, while B is a cache reader of objects. A starts; B1 and B2 start; so sessions A-B1 and A-B2 now exist. Now each of B1, B2 might establish channel(s) and then use it/them to issue messages and receive messages back. Now A<->B1 channel(s) and A<->B2 channel(s) are established. No B1<->B2 channel(s) can be established. (Note: transport::Channel and other ipc::transport IPC techniques are – in and of themselves – session-unaware; so B1 could speak to B2. They just cannot use ipc::session to do so: ipc::session facilitates establishing channels and SHM in an organized fashion, and that organization currently does not support channels between instances of the same client application. ipc::session is, however, not the only option for establishing such channels.)

Suppose, in this illustration, that A is responsible both for acquiring network objects (on behalf of B*) and memory-caching them. Then B1 might say, via an A-B1 channel, "give me object X." A does not have it in cache yet, so it loads it from the network, saves it in the per-app-B SHM area (shared by all instances of app B, as opposed to per-session), and replies (via the same A-B1 channel) "here is object X: use this SHM handle X'." B1 then accesses X' directly in SHM. Say B2 also says "give me object X." It is cached, so A sends it X' again. Now, say the A-B1 session ends, because B1 stops. All A-B1 channels go away – they are per-session resources – and any per-session-A-B1 SHM area (which we haven't mentioned in this illustration) also goes away similarly.

A-B2 continues. B2 could request X again, and it would work. Say B3 now starts and requests X. Again, the per-app-B SHM area persists; A would send X' to B3 as well.

Now suppose A stops, due to a restart. This ends the A-B2 and A-B3 sessions; and the per-app-B SHM area goes away too. Thus the server process's lifetime encompasses all shared resources in the A-B split. Processes B2 and B3 continue, but they must wait for a new active A process (instance) to appear, so that they can establish new A-B2 and A-B3 sessions. In the meantime, it is IPC no-man's-land: there are no active per-app-B shared resources (SHM in our illustration) and certainly no per-session-A-B* shared resources either. Once the (new) A-B2 and A-B3 sessions are established, shared resources can begin to be acquired again.

This illustrates that per-session shared resources live exactly as long as their session, per-app shared resources live as long as the server process, and the server process's lifetime therefore bounds all shared resources in the split.

2+ splits co-existing

The above discussion concerned a particular A-B split, in which by definition A is the server (only 1 active instance at a time), while B is the client (0+ active instances at a time).

Take A, in split A-B. Suppose there is app C also. An A-C split is also possible (as is A-D, A-E, etc.). Everything written so far applies in natural fashion. Do note, however, that the per-app scope applies to a given (client) application, not all client applications. So the per-app-B SHM area is independent of the per-app-C SHM area (both maintained by A). Once the 1st instance of B establishes a session, that SHM area is lazily created. Once the 1st instance of C does the same, that separate SHM area is lazily created. So B1, B2, ... access one per-app SHM area, while C1, C2, ... access another.

Now consider app K, in split K-A. Here K is the server, A is the client. This, also, is allowed, despite the fact that A is the server in splits A-B and A-C. However, there is a natural constraint on A: there can only be one active process of it at a time, because it is the server in at least 1 split. It can be a client to server K; but there would only ever be 1 instance of it establishing sessions against K.

Informally: this is not necessarily frowned upon. After all, the ability to have multiple concurrently active processes of a given app is somewhat exotic in the first place. That said, very vaguely speaking, it is more flexible to design a given app to only be a server in every split in which it participates.

Overview of relevant APIs

With the above explained, here is how the ipc::session API operates over these various ideas.

The simplest items are App (description of an application, which might be a client or a server or both), Client_app (description of an application in its role as a client), and Server_app (you get the idea). See their doc headers, of course, but basically these are very simple data stores describing the basic structure of the IPC universe. Each application generally shall load the same values into its master lists of these structures. References to immutable Client_app and Server_app objects are passed into the actual APIs having to do with the Session concept, to establish who can establish sessions with whom and how. Basically they describe the allowed splits.
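
For concreteness, the IPC universe from the illustration above might be described as follows. This is a minimal sketch: the App members shown (m_name, m_exec_path, m_user_id, m_group_id – per the App doc header) are real, but the names, paths, and IDs are hypothetical, and Server_app has further members (e.g., which Client_apps may connect) for which see its doc header.

  // Both applications should load identical values into these structures.
  ipc::session::Client_app cache_cli;           // App B, in its client role.
  cache_cli.m_name = "cache_cli";               // Unique name in the IPC universe.
  cache_cli.m_exec_path = "/usr/bin/cache_cli";
  cache_cli.m_user_id = 1000;                   // Expected owner (App::m_user_id et al).
  cache_cli.m_group_id = 1000;

  ipc::session::Server_app cache_srv;           // App A, in its server role.
  cache_srv.m_name = "cache_srv";
  cache_srv.m_exec_path = "/usr/bin/cache_srv";
  cache_srv.m_user_id = 1000;
  cache_srv.m_group_id = 1000;
  // ...plus Server_app-specific members (allowed Client_apps, etc.); see its doc header.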

As noted at the very top, the real goal is to open channels between processes and possibly to share data in SHM between them and others. To get there for a given process pair A-B, one must establish session A-B, as described above. Having chosen which one is A (server) and which is B (client), and loaded Client_app and Server_app structures to that effect, it is time to manage the sessions.

An established session is said to be a session in PEER state and is represented on each side by an object that implements the Session concept (see its important doc header). On the client side, this impl is the class template Client_session. On the server side, this impl is the class template Server_session. Once each respective side has its PEER-state Session impl object, opening channels and operating SHM areas on a per-session basis is well documented in the Session (concept) doc header and is fully symmetrical in terms of the API.

Client_session does not begin in PEER state. One constructs it in NULL state, then invokes Client_session::sync_connect() to connect to the server process, if one is active; on success the Client_session is a Session in PEER state. If Client_session, per Session concept requirements, indicates the session has finished (due to the other side ending the session), one must create a new Client_session and start over (w/r/t IPC and relevant shared resources).
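
In code, the client side might look roughly like so. (A hedged sketch: the constructor and open_channel() arguments are abbreviated from the real signatures – see Client_session_mv and the Session concept doc headers; cache_cli/cache_srv are the hypothetical App structures from above, and logger is one's flow::log::Logger.)

  // Session type knobs (see Typedefs below): no MQ pipe; no native-handle transmission.
  using Session = ipc::session::Client_session<ipc::session::schema::MqType::NONE, false>;

  Session session(&logger,
                  cache_cli, cache_srv,            // Who we are; whom to connect to.
                  [](const ipc::Error_code& err)   // On-error: session has finished.
                    { /* Must create a new Client_session and start over. */ });

  ipc::Error_code err;
  session.sync_connect(&err);    // NULL state -> PEER state on success.
  if (!err)
  {
    // `session` now implements the Session concept in PEER state.
    Session::Channel_obj channel;
    session.open_channel(&channel, &err);
  }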

Asymmetrically, on the server side, one must be ready to open potentially multiple Server_sessions. First, create a Session_server (see its doc header). To open 1 Server_session, default-construct one (hence in NULL state), then pass it as the target to Session_server::async_accept(). Once it fires your handler, you have an (almost-)PEER-state Server_session; moreover you may call Server_session::client_app() to determine the identity of the connecting client app (via Client_app), which should (in multi-split situations) help decide what to further do with this Server_session (also, as on the opposing side, now a PEER-state Session concept impl). (If A participates in splits A-B and A-C, then the types of IPC work it might do with a B1 or B2 are likely to be quite different from the same with a C1 or C2 or C3. If only split A-B is relevant to A, then that is not a concern.)
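
The server side, correspondingly (again a sketch: in particular, how the master list of Client_apps is passed to the Session_server constructor, and the exact steps that take the emitted Server_session from almost-PEER to PEER state, are per the respective doc headers):

  using Session_server = ipc::session::Session_server<ipc::session::schema::MqType::NONE, false>;
  using Session = ipc::session::Server_session<ipc::session::schema::MqType::NONE, false>;

  Session_server session_srv(&logger, cache_srv,
                             master_cli_apps);   // Master list of Client_apps (see doc header).

  Session session;   // NULL state.
  session_srv.async_accept(&session, [&](const ipc::Error_code& err)
  {
    if (!err)
    {
      // `session` is in almost-PEER state. In a multi-split server, branch on
      // which client application connected:
      const ipc::session::Client_app* cli_app = session.client_app();
      // ...complete setup per Server_session_mv doc header; then use `session`
      // as a PEER-state Session, symmetrically to the client side.
    }
  });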

As noted, in terms of per-session shared resources, most notably channel communication, a Server_session and Client_session have identical APIs with identical capabilities, each implementing the PEER-state Session concept to the letter.

SHM

Extremely important functionality – for performance at least – is provided in the sub-namespace ipc::session::shm. Please read its doc header before using the types directly within ipc::session; that will allow you to make an informed choice.
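
For example – illustrative only, as the actual class templates and their knobs are documented in ipc::session::shm – upgrading to SHM-backed sessions is intended to be largely a matter of swapping the alias one uses:

  // Vanilla session type (no SHM):
  using Session = ipc::session::Client_session<ipc::session::schema::MqType::NONE, false>;

  // Hypothetical SHM-backed counterpart (consult ipc::session::shm for the real
  // template names and parameters before relying on this):
  // using Session = ipc::session::shm::classic::Client_session<
  //   ipc::session::schema::MqType::NONE, false>;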

sync_io: integrate with poll()-ish event loops

ipc::session APIs feature a small set of asynchronous (blocking, background, not-non-blocking, long-running) operations – notably Session_server::async_accept() and the error/passive-channel-open events a PEER-state Session can report.

All APIs mentioned so far operate broadly in the async-I/O pattern: each event in question is reported from some unspecified background thread, and it is up to the user to "post" the true handling onto their worker thread(s) as desired.

If one desires to be informed, instead, in a fashion friendly to old-school reactor loops – centered on poll(), epoll_wait(), or similar – a sync_io alternate API is available. It is no faster, but it may be more convenient for some applications.

To use the alternate sync_io interface, look into session::sync_io and its 3 adapter templates (one each adapting the client-session, server-session, and session-server types), as sketched below.

Exactly the aforementioned async APIs are affected. All other APIs are available without change.
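
In type terms the wrapping looks roughly like the following sketch (the adapter name and its construction/integration details are assumptions here; see the session::sync_io doc headers for the actual API):

  // Wrap an async-I/O Session_server in its sync_io adapter; one then services
  // the adapter's event-wait requests from one's own poll()/epoll_wait() loop
  // instead of receiving handler invocations from background threads.
  using Session_server_sio = ipc::session::sync_io::Session_server_adapter<
    ipc::session::Session_server<ipc::session::schema::MqType::NONE, false>>;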

Typedef Documentation

◆ Client_session

template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload = ::capnp::Void>
using ipc::session::Client_session = Client_session_mv<Client_session_impl<S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload> >

A vanilla Client_session with no optional capabilities.

See Client_session_mv for the full API, and its doc header for possible alternatives that add optional capabilities (such as, at least, SHM setup/access). Opposing peer object: Server_session.

The following important template parameters are knobs that control the properties of the session; the opposing Server_session must use identical settings.

Template Parameters
S_MQ_TYPE_OR_NONE – Session::Channel_obj (channel openable via open_channel() on this or other side) type config: enumeration constant that specifies which type of MQ to use (corresponding to all available transport::Persistent_mq_handle concept impls) or to not use one (NONE). Note: this enum type is capnp-generated; see common.capnp for values and brief docs.
S_TRANSMIT_NATIVE_HANDLES – Session::Channel_obj (channel openable via open_channel() on this or other side) type config: whether it shall be possible to transmit a native handle via the channel.
Mdt_payload – See Session concept. In addition the same type may be used for mdt or mdt_from_srv_or_null in *_connect(). (If it is used for open_channel() and/or passive-open and/or *connect() mdt and/or mdt_from_srv_or_null, recall that you can use a capnp-union internally for various purposes.)
See also
Client_session_mv for full API and its documentation.
Session: implemented concept.
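
For instance, to illustrate the knobs (a type-level sketch; My_mdt_payload stands for a hypothetical capnp-generated metadata schema):

  // Client side:
  using My_client_session = ipc::session::Client_session<
    ipc::session::schema::MqType::NONE,   // No MQ pipe in opened channels.
    true,                                 // Native-handle transmission enabled.
    My_mdt_payload>;                      // Channel-open metadata type (hypothetical).

  // The opposing server side must mirror these knobs exactly:
  using My_server_session = ipc::session::Server_session<
    ipc::session::schema::MqType::NONE, true, My_mdt_payload>;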

◆ Server_session

template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload = ::capnp::Void>
using ipc::session::Server_session = Server_session_mv<Server_session_impl<S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload> >

A vanilla Server_session with no optional capabilities.

See Server_session_mv for the full API, and its doc header for possible alternatives that add optional capabilities (such as, at least, SHM setup/access). Opposing peer object: Client_session. Emitted by: Session_server.

The following important template parameters are knobs that control the properties of the session; the opposing Client_session must use identical settings.

Template Parameters
S_MQ_TYPE_OR_NONE – Identical to Client_session.
S_TRANSMIT_NATIVE_HANDLES – Identical to Client_session.
Mdt_payload – See Session concept. In addition the same type may be used for mdt_from_cli_or_null (and srv->cli counterpart) in Session_server::async_accept(). (Recall that you can use a capnp-union internally for various purposes.)
See also
Server_session_mv for full API and its documentation.
Session: implemented concept.

Function Documentation

◆ ensure_resource_owner_is_app() [1/2]

void ipc::session::ensure_resource_owner_is_app (flow::log::Logger *logger_ptr, const fs::path &path, const App &app, Error_code *err_code = 0)

Utility, used internally but exposed in public API in case it is of general use, that checks that the owner of the given resource (at the supplied file system path) is as specified in the given App (App::m_user_id et al).

If the resource cannot be accessed (not found, permissions...) that system Error_code shall be emitted. If it can, but the owner does not match, error::Code::S_RESOURCE_OWNER_UNEXPECTED shall be emitted.

Parameters
logger_ptr – Logger to use for logging (WARNING, on error only, including S_RESOURCE_OWNER_UNEXPECTED).
path – Path to resource. Symlinks are followed, and the target is the resource in question (not the symlink).
app – Describes the app, with the expected owner info prescribed therein.
err_code – See flow::Error_code docs for error reporting semantics. Error_code generated: error::Code::S_RESOURCE_OWNER_UNEXPECTED (check did not pass); system error codes if ownership cannot be checked.
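
A typical call might look like this (a sketch: the path is hypothetical; cache_cli is an App from one's master lists; and – per flow::Error_code conventions – passing a null err_code would instead report failure via exception):

  ipc::Error_code err;
  ipc::session::ensure_resource_owner_is_app(&logger,
                                             "/dev/shm/some_resource",  // Hypothetical path.
                                             cache_cli,                 // Expected owner info.
                                             &err);
  if (err == ipc::session::error::Code::S_RESOURCE_OWNER_UNEXPECTED)
  {
    // Resource is accessible, but its owner is not the one prescribed by the App.
  }
  else if (err)
  {
    // Resource could not be checked at all (not found, permissions, ...).
  }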

◆ ensure_resource_owner_is_app() [2/2]

void ipc::session::ensure_resource_owner_is_app (flow::log::Logger *logger_ptr, util::Native_handle handle, const App &app, Error_code *err_code = 0)

Identical to the other ensure_resource_owner_is_app() overload but operates on a pre-opened Native_handle (a/k/a handle, socket, file descriptor) to the resource in question.

Parameters
logger_ptr – See other overload.
handle – See above. handle.null() == true causes undefined behavior (assertion may trip). A closed/invalid/etc. handle will yield civilized Error_code emission.
app – See other overload.
err_code – See flow::Error_code docs for error reporting semantics. Error_code generated: error::Code::S_RESOURCE_OWNER_UNEXPECTED (check did not pass); system error codes if ownership cannot be checked (invalid descriptor, un-opened descriptor, etc.).

◆ operator<<() [1/7]

std::ostream & operator<< (std::ostream &os, const App &val)

Prints string representation of the given App to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.

◆ operator<<() [2/7]

std::ostream & operator<< (std::ostream &os, const Client_app &val)

Prints string representation of the given Client_app to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.

◆ operator<<() [3/7]

template<typename Client_session_impl_t >
std::ostream & operator<< (std::ostream &os, const Client_session_mv< Client_session_impl_t > &val)

Prints string representation of the given Client_session_mv to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.

◆ operator<<() [4/7]

std::ostream & operator<< (std::ostream &os, const Server_app &val)

Prints string representation of the given Server_app to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.

◆ operator<<() [5/7]

template<typename Server_session_impl_t >
std::ostream & operator<< (std::ostream &os, const Server_session_mv< Server_session_impl_t > &val)

Prints string representation of the given Server_session_mv to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.

◆ operator<<() [6/7]

template<typename Session_impl_t >
std::ostream & operator<< (std::ostream &os, const Session_mv< Session_impl_t > &val)

Prints string representation of the given Session_mv to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.

◆ operator<<() [7/7]

template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload >
std::ostream & ipc::session::operator<< (std::ostream &os, const Session_server< S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload > &val)

Prints string representation of the given Session_server to the given ostream.

Parameters
os – Stream to which to write.
val – Object to serialize.
Returns
os.