Flow-IPC 1.0.1
Flow-IPC project: Full implementation reference.
session_fwd.hpp
/* Flow-IPC: Sessions
 * Copyright 2023 Akamai Technologies, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in
 * compliance with the License. You may obtain a copy
 * of the License at
 *
 *   https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in
 * writing, software distributed under the License is
 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing
 * permissions and limitations under the License. */

/// @file
#pragma once

#include "ipc/util/util_fwd.hpp"

/**
 * Flow-IPC module providing the broad lifecycle and shared-resource organization -- via the *session* concept --
 * in such a way as to make it possible for a given pair of processes A and B to set up ipc::transport
 * structured- or unstructured-message channels for general IPC, as well as to share data in
 * SHared Memory (SHM). See namespace ::ipc doc header for an overview of Flow-IPC modules
 * including how ipc::session relates to the others. Then return here. A synopsis follows:
 *
 * It is possible to use the structured layer of ipc::transport, namely the big daddy transport::struc::Channel,
 * without any help from ipc::session. (It's also possible to establish unstructured transport::Channel and
 * the various lower-level IPC pipes it might comprise.) And, indeed,
 * once a given `struc::Channel` (or `Channel`) is "up," one can and *should*
 * simply use it to send/receive stuff. The problem that ipc::session solves is in establishing
 * the infrastructure that makes it simple to (1) open new `struc::Channel`s or `Channel`s or SHM areas;
 * and (2) properly deal with process lifecycle events such as the starting and ending (gracefully or otherwise) of
 * the local and partner process.
 *
 * Regarding (1), in particular (just to give a taste of what one means):
 * - What underlying low-level transports will we even be using? MQs? Local (Unix domain) stream sockets?
 *   Both?
 * - For each of those, we must connect or otherwise establish each one. In the case of MQs, say, there has
 *   to be an agreed-upon util::Shared_name for the MQ in each direction... so what is that name?
 *   How to prevent collisions in this name?
 *
 * While ipc::transport lets one do whatever one wants, with great flexibility, ipc::session establishes conventions
 * for all these things -- typically hiding/encapsulating them away.
 *
 * Regarding (2) (again, just giving a taste):
 * - To what process are we even talking? What if we want to talk to 2, or 3, processes? What if we want to talk
 *   to 1+ processes of application X and 1+ processes of application Y? How might we segregate the data
 *   between these?
 * - What happens if 1 of 5 instances of application X, to whom we're speaking, goes down? How to ensure cleanup
 *   of any kernel-persistent resources, such as the potentially used POSIX MQs, or even of SHM (ipc::shm)?
 *
 * Again -- ipc::session establishes conventions for these lifecycle matters and provides key utilities such as
 * kernel-persistent resource cleanup.
 *
 * ### Basic concepts ###
 * An *application* is, at least roughly speaking, 1-1 with a distinct executable presumably interested in
 * communicating with another such executable. A *process* is a/k/a an *instance* of an application that has begun
 * execution at some point. In the IPC *universe* that uses ::ipc, on a given machine, there is some number
 * of distinct applications. If two processes A and B want to engage in IPC, then their apps A and B comprise
 * a meta-application *split*, in that (in a sense) they are one meta-application that has been split into two.
 *
 * A process that is actively operating, at least in the IPC sense, is called an *active* process. A zombie
 * process in particular is not active, nor is one that has not yet initialized IPC or has shut it down, typically
 * during graceful (or otherwise) termination.
 *
 * In the ipc::session model, any two processes A-B must establish a *session* before engaging in IPC. This
 * *session* comprises all *shared resources* pertaining to those two *processes* engaging in IPC. (Spoiler alert:
 * at a high level these resources comprise, at least, *channels* of communication -- see transport::Channel --
 * between them; and may also optionally comprise SHared Memory (SHM).) The session ends when either A terminates
 * or B terminates (gracefully or otherwise), and no earlier. The session begins at roughly the earliest time
 * when *both* A and B are active simultaneously. (It is possible either process may run before IPC and after IPC,
 * but for purposes of our discussion we'll ignore such phases as irrelevant, for simplicity of exposition.)
 * An important set of shared resources is therefore *per-session shared resources*. (However, as we're about to
 * explain, it is not the only type.)
 *
 * In the ipc::session model, in a given A-B split, one must be chosen as the *client*, the other as the *server*;
 * let's by convention here assume they're always listed in server-client order. The decision must be made which
 * one is which, at compile time. The decision as to which is which is an important one, when using ipc::session
 * and IPC in that model. While a complete discussion of making that decision is outside the scope here, the main
 * points are that in the A-B split (A the server, B the client):
 * - For each *process* pair A-B, there are per-session shared resources, as described above, and as pertains
 *   to these per-session resources A is no different from B. Once either terminates, these resources
 *   are not usable and shall be cleaned up as soon as possible to free RAM/etc.
 * - There may be multiple (active) instances of B at a given time but only one (active) instance of A.
 * - Accordingly, A is special in that it may maintain some *per-app shared resources* (cf. per-session ones)
 *   which persist even when a given B *process* terminates (thus that A-B session ends) and are available
 *   for access both by
 *   - other concurrently active instances of B; and
 *   - any future active instances of B.
 *
 * More specifically/practically:
 * - *Channels* are between the 2 processes A-B in session A-B. If B1 and B2 are instances of (client) app Bp,
 *   a channel cannot be established between B1 and B2; only A-B1 and A-B2. So channels are always *per-session*.
 * - A *SHM area* may exist in the *per-session* scope. Hence A-B1 have a per-session SHM area; and A-B2 have
 *   a per-session SHM area. When the session ends, that SHM area goes away.
 * - A *SHM area* may exist in the *per-app* scope. Hence A-B* have a per-app-B SHM area, which persists
 *   throughout the lifetime of (server) process A and remains available as B1, B2, ... start and stop.
 *   It disappears once A stops. It appears when the first instance of B establishes a session with A
 *   (i.e., lazily).
 *
 * ### Brief illustration ###
 * Suppose A is a memory cache, while B is a cache reader of objects. A starts;
 * B1 and B2 start; so sessions A-B1 and A-B2 now exist. Now each of B1, B2 might establish channel(s) and then
 * use it/them to issue messages and receive messages back. Now A<->B1 channel(s) and A<->B2 channel(s) are
 * established. No B1<->B2 channel(s) can be established. (Note: transport::Channel and other
 * ipc::transport IPC techniques are -- in and of themselves -- session-unaware. So B1 could speak to B2.
 * They just cannot use ipc::session to do so. ipc::session facilitates establishing channels and SHM
 * in an organized fashion, and that organization currently does not support channels between instances of
 * the same client application. ipc::session is not the only option for doing so.)
 *
 * Suppose, in this illustration, that A is responsible both for acquiring network objects (on behalf of B*)
 * and memory-caching them. Then B1 might say, via an A-B1 channel, "give me object X." A does not have it
 * in cache yet, so it loads it from network, and saves it in the *per-app-B* SHM area and replies (via
 * same A-B1 channel) "here is object X: use this SHM handle X'." B1 then accesses X' directly in SHM.
 * Say B2 also says "give me object X." It is cached, so A sends it X' again. Now, say A-B1 session ends,
 * because B1 stops. All A-B1 channels go away -- they are per-session resources -- and any per-session-A-B1 SHM
 * area (which we haven't mentioned in this illustration) also goes away similarly.
 *
 * A-B2 continues. B2 could request X again, and it would work. Say B3 now starts and requests X. Again,
 * the per-app-B SHM area persists; A would send X' to B3 as well.
 *
 * Now suppose A stops, due to a restart. This ends A-B2 and A-B3 sessions; and the per-app-B SHM area goes
 * away too. Thus the server process's lifetime encompasses all shared resources in the A-B split.
 * Processes B2 and B3 continue, but they must wait for a new active A process (instance) to appear, so that
 * they can establish new A-B2 and A-B3 sessions. In the meantime, it is IPC no-man's-land: there are
 * no active per-app-B shared resources (SHM in our illustration) and certainly no per-session-A-B* shared
 * resources either. Once A-B2 and A-B3 (new) sessions are established, shared resources can begin to be
 * acquired again.
 *
 * This illustrates that:
 * - A is the cache server and expected to service multiple cache clients Bs. This is asymmetric.
 * - A can maintain shared resources, especially SHM, that outlive any individual B process. This is asymmetric.
 * - A-B channels are relevant only to a given A-B* session. (Such per-session SHM areas can also exist.)
 *   This is symmetric between A and a given B instance.
 * - No shared resources survive beyond a given cache server A process. A given client B* must be prepared
 *   to establish a new session with A when another active instance of A appears, before it can access IPC
 *   channels or SHM. In that sense a given *session* is symmetric: it begins when both A and a B* are active,
 *   and it ends when either of the 2 stops.
 *
 * ### 2+ splits co-existing ###
 * The above discussion concerned a particular A-B split, in which by definition A is the server (only 1 active
 * instance at a time), while B is the client (0+ active instances at a time).
 *
 * Take A, in split A-B. Suppose there is app Cp also. An A-C split is also possible (as is A-D, A-E, etc.).
 * Everything written so far applies in natural fashion. Do note, however, that the per-app scope applies to
 * a given (client) application, not all client applications. So the per-app-B SHM area is independent of the
 * per-app-C SHM area (both maintained by A). Once the 1st instance of B establishes a session, that SHM area
 * is lazily created. Once the 1st instance of C does the same, that separate SHM area is lazily created.
 * So B1, B2, ... access one per-app SHM area, while C1, C2, ... access another.
 *
 * Now consider app K, in split K-A. Here K is the server, A is the client. This, also, is allowed, despite
 * the fact A is server in splits A-B, A-C. However, there is a natural constraint on A: there can only be
 * one active process of it at a time, because it is the server in at least 1 split. It can be a client to
 * server K; but there would only ever be 1 instance of it establishing sessions against K.
 *
 * Informally: This is not necessarily frowned upon. After all, the ability to have multiple concurrently active
 * processes of a given app is somewhat exotic in the first place. That said, very vaguely speaking, it is more
 * flexible to design a given app to only be a server in every split in which it participates.
 *
 * ### Overview of relevant APIs ###
 * With the above explained, here is how the ipc::session API operates over these various ideas.
 *
 * The simplest items are App (description of an application, which might be a client or a server or both),
 * Client_app (description of an application in its role as a client), and Server_app (you get the idea).
 * See their doc headers, of course, but basically these are very simple data stores describing the basic
 * structure of the IPC universe. Each application generally shall load the same values into its master
 * lists of these structures. References to immutable Client_app and Server_app objects are passed into
 * actual APIs having to do with the Session concept, to establish who can establish sessions with whom and
 * how. Basically they describe the allowed splits.
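 *
 * For instance, the setup might look like the following sketch. (Illustrative only: the app names are made up,
 * and all but one field is omitted; see the App, Client_app, and Server_app doc headers for the actual members,
 * such as App::m_user_id.)
 * ```
 * // Describe the IPC universe; both applications should load identical values.
 * ipc::session::Client_app app_b; // App B in its role as a client.
 * app_b.m_name = "b_cli";         // (Executable path, UID, GID, etc., omitted in this sketch.)
 * ipc::session::Server_app app_a; // App A in its role as a server.
 * app_a.m_name = "a_srv";         // (Likewise abridged; see Server_app doc header for the full contents.)
 * ```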
 *
 * As noted at the very top, the real goal is to open channels between processes and possibly to share data in
 * SHM between them and others. To get there for a given process pair A-B, one must establish session A-B, as
 * described above. Having chosen which one is A (server) and which is B (client), and loaded Client_app
 * and Server_app structures to that effect, it is time to manage the sessions.
 *
 * An established session is said to be a session in PEER state and is represented on each side by an object
 * that implements the Session concept (see its important doc header). On the client side, this impl
 * is the class template #Client_session. On the server side, this impl is the class template #Server_session.
 * Once each respective side has its PEER-state Session impl object, opening channels and operating SHM areas
 * on a per-session basis is well documented in the Session (concept) doc header and is fully symmetrical
 * in terms of the API.
 *
 * #Client_session does not begin in PEER state. One constructs it in NULL state, then invokes
 * `Client_session::sync_connect()` to connect to the server process if any exists; once that succeeds,
 * the #Client_session is a Session in PEER state. If #Client_session, per Session concept requirements,
 * indicates the session has finished (due to the other side ending session), one must create a new #Client_session
 * and start over (w/r/t IPC and relevant shared resources).
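 *
 * In rough sketch form (logger setup, template knobs, and exact signatures abridged/illustrative -- consult
 * the #Client_session documentation for the real API):
 * ```
 * // Assume: using Session = ipc::session::Client_session<...>; -- with knobs matching the server side.
 * Session session(&logger, app_b, app_a, // This side's Client_app; the opposing Server_app.
 *                 [](const Error_code& err)
 *                 {
 *                   // Session ended/hosed; clean up, then start over with a new Session.
 *                 });
 * Error_code err;
 * session.sync_connect(&err); // NULL state -> PEER state on success.
 * if (!err)
 * {
 *   // `session` now implements the Session concept: open channels, per-session SHM (if enabled), etc.
 * }
 * ```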
 *
 * Asymmetrically, on the server side, one must be ready to open potentially multiple `Server_session`s.
 * Firstly create a Session_server (see its doc header). To open 1 #Server_session, default-construct one
 * (hence in NULL state), then pass it as the target to Session_server::async_accept(). Once it fires
 * your handler, you have an (almost-)PEER state Server_session, and moreover you may call
 * `Server_session::client_app()` to determine the identity of the connecting client app
 * (via Client_app) which should (in multi-split situations) help decide what to further do with this
 * Server_session (also, as on the opposing side, now a PEER-state Session concept impl).
 * (If A participates in splits A-B and A-C, then the types of IPC work it might do
 * with a B1 or B2 is likely to be quite different from same with a C1 or C2 or C3. If only split A-B is relevant
 * to A, then that is not a concern.)
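 *
 * In rough sketch form (constructor arguments and exact signatures abridged/illustrative -- consult the
 * Session_server documentation for the real API):
 * ```
 * // Assume: using Server_session = ...; using Session_server = ...; -- knobs matching the client side.
 * Session_server server(...);     // Constructed from the Server_app + allowed Client_apps (see its docs).
 * Server_session session;         // NULL state.
 * server.async_accept(&session, [&](const Error_code& err)
 * {
 *   if (!err)
 *   {
 *     // `session` is in almost-PEER state; session.client_app() identifies the connecting Client_app.
 *     // Typically: finish setup, then use it as a Session; and async_accept() the next one.
 *   }
 * });
 * ```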
 *
 * As noted, in terms of per-session shared resources, most notably channel communication, a Server_session and
 * Client_session have identical APIs with identical capabilities, each implementing the PEER-state Session concept
 * to the letter.
 *
 * ### SHM ###
 * Extremely important (for performance at least) functionality is provided in the sub-namespace of ipc::session:
 * ipc::session::shm. Please read its doc header now before using the types directly within ipc::session.
 * That will allow you to make an informed choice.
 *
 * ### `sync_io`: integrate with `poll()`-ish event loops ###
 * ipc::session APIs feature exactly the following asynchronous (blocking, background, not-non-blocking, long...)
 * operations:
 * - ipc::session::Session on-error and (optionally) on-passive-channel-open handlers.
 * - `ipc::session::Client_session::async_connect()` (same for other, including SHM-aware, variants).
 * - ipc::session::Session_server::async_accept() (same for SHM-aware variants).
 *
 * All APIs mentioned so far operate broadly in the async-I/O pattern: Each event in question is reported from
 * some unspecified background thread; it is up to the user to "post" the true handling onto their worker thread(s)
 * as desired.
 *
 * If one desires to be informed, instead, in a fashion friendly to old-school reactor loops -- centered on
 * `poll()`, `epoll_wait()`, or similar -- a `sync_io` alternate API is available. It is no faster, but it may
 * be more convenient for some applications.
 *
 * To use the alternate `sync_io` interface: Look into session::sync_io and its 3 adapter templates:
 * - ipc::session::sync_io::Server_session_adapter;
 * - ipc::session::sync_io::Client_session_adapter;
 * - ipc::session::sync_io::Session_server_adapter.
 *
 * Exactly the aforementioned async APIs are affected. All other APIs are available without change.
 */
namespace ipc::session
{

// Types.

/// Convenience alias for the commonly used type util::Shared_name.
using Shared_name = util::Shared_name;

/// Convenience alias for the commonly used type transport::struc::Session_token.
using Session_token = transport::struc::Session_token;

// Find doc headers near the bodies of these compound types.

struct App;
struct Server_app;
struct Client_app;

template<typename Session_impl_t>
class Session_mv;

template<typename Server_session_impl_t>
class Server_session_mv;

template<typename Client_session_impl_t>
class Client_session_mv;

template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload = ::capnp::Void>
class Session_server;

/**
 * A vanilla `Server_session` with no optional capabilities. See Server_session_mv for the full API, as
 * well as its doc header for possible alternatives that add optional capabilities (such as, at least, SHM
 * setup/access). Opposing peer object: #Client_session. Emitted by: Session_server.
 *
 * The following important template parameters are *knobs* that control the properties of the session;
 * the opposing #Client_session must use identical settings.
 *
 * @tparam S_MQ_TYPE_OR_NONE
 *         Identical to #Client_session.
 * @tparam S_TRANSMIT_NATIVE_HANDLES
 *         Identical to #Client_session.
 * @tparam Mdt_payload
 *         See Session concept. In addition the same type may be used for `mdt_from_cli_or_null` (and srv->cli
 *         counterpart) in Session_server::async_accept(). (Recall that you can use a capnp-`union` internally
 *         for various purposes.)
 *
 * @see Server_session_mv for full API and its documentation.
 * @see Session: implemented concept.
 */
template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload = ::capnp::Void>
class Server_session;

/**
 * A vanilla `Client_session` with no optional capabilities. See Client_session_mv for the full API, as
 * well as its doc header for possible alternatives that add optional capabilities (such as, at least, SHM
 * setup/access). Opposing peer object: #Server_session.
 *
 * The following important template parameters are *knobs* that control the properties of the session;
 * the opposing #Server_session must use identical settings.
 *
 * @tparam S_MQ_TYPE_OR_NONE
 *         Session::Channel_obj (channel openable via `open_channel()` on this or other side) type config:
 *         Enumeration constant that specifies which type of MQ to use (corresponding to all available
 *         transport::Persistent_mq_handle concept impls) or to not use one (`NONE`). Note: This `enum` type is
 *         capnp-generated; see common.capnp for values and brief docs.
 * @tparam S_TRANSMIT_NATIVE_HANDLES
 *         Session::Channel_obj (channel openable via `open_channel()` on this or other side) type config:
 *         Whether it shall be possible to transmit a native handle via the channel.
 * @tparam Mdt_payload
 *         See Session concept. In addition the same type may be used for `mdt` or `mdt_from_srv_or_null`
 *         in `*_connect()`. (If it is used for `open_channel()` and/or passive-open and/or `*connect()`
 *         `mdt` and/or `mdt_from_srv_or_null`, recall that you can use a capnp-`union` internally for various
 *         purposes.)
 *
 * @see Client_session_mv for full API and its documentation.
 * @see Session: implemented concept.
 */
template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES,
         typename Mdt_payload = ::capnp::Void>
class Client_session;

// Free functions.

/**
 * Utility, used internally but exposed in the public API in case it is of general use, that checks that the
 * owner of the given resource (at the supplied file system path) is
 * as specified in the given App (App::m_user_id et al). If the resource cannot be accessed (not found,
 * permissions...), that system Error_code shall be emitted. If it can, but the owner does not match,
 * error::Code::S_RESOURCE_OWNER_UNEXPECTED shall be emitted.
 *
 * @param logger_ptr
 *        Logger to use for logging (WARNING, on error only, including `S_RESOURCE_OWNER_UNEXPECTED`).
 * @param path
 *        Path to resource. Symlinks are followed, and the target is the resource in question (not the symlink).
 * @param app
 *        Describes the app with the expected owner info prescribed therein.
 * @param err_code
 *        See `flow::Error_code` docs for error reporting semantics. #Error_code generated:
 *        error::Code::S_RESOURCE_OWNER_UNEXPECTED (check did not pass),
 *        system error codes if ownership cannot be checked.
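 *
 * Example (hypothetical path; `app` is an App describing the expected owner):
 * ```
 * Error_code err;
 * ensure_resource_owner_is_app(&logger, "/dev/shm/some_resource", app, &err);
 * if (err == error::Code::S_RESOURCE_OWNER_UNEXPECTED)
 * {
 *   // Resource is accessible but owned by someone other than `app` prescribes.
 * }
 * else if (err)
 * {
 *   // Resource could not be accessed/checked at all.
 * }
 * ```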
 */
void ensure_resource_owner_is_app(flow::log::Logger* logger_ptr, const fs::path& path, const App& app,
                                  Error_code* err_code = 0);

/**
 * Identical to the other ensure_resource_owner_is_app() overload but operates on a pre-opened `Native_handle`
 * (a/k/a handle, socket, file descriptor) to the resource in question.
 *
 * @param logger_ptr
 *        See other overload.
 * @param handle
 *        See above. `handle.null() == true` causes undefined behavior (assertion may trip).
 *        A closed/invalid/etc. handle will yield civilized Error_code emission.
 * @param app
 *        See other overload.
 * @param err_code
 *        See `flow::Error_code` docs for error reporting semantics. #Error_code generated:
 *        error::Code::S_RESOURCE_OWNER_UNEXPECTED (check did not pass),
 *        system error codes if ownership cannot be checked (invalid descriptor, un-opened descriptor, etc.).
 */
void ensure_resource_owner_is_app(flow::log::Logger* logger_ptr, util::Native_handle handle, const App& app,
                                  Error_code* err_code = 0);

/**
 * Prints string representation of the given `App` to the given `ostream`.
 *
 * @relatesalso App
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
std::ostream& operator<<(std::ostream& os, const App& val);

/**
 * Prints string representation of the given `Client_app` to the given `ostream`.
 *
 * @relatesalso Client_app
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
std::ostream& operator<<(std::ostream& os, const Client_app& val);

/**
 * Prints string representation of the given `Server_app` to the given `ostream`.
 *
 * @relatesalso Server_app
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
std::ostream& operator<<(std::ostream& os, const Server_app& val);

/**
 * Prints string representation of the given `Session_mv` to the given `ostream`.
 *
 * @relatesalso Session_mv
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<typename Session_impl_t>
std::ostream& operator<<(std::ostream& os,
                         const Session_mv<Session_impl_t>& val);

/**
 * Prints string representation of the given `Server_session_mv` to the given `ostream`.
 *
 * @relatesalso Server_session_mv
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<typename Server_session_impl_t>
std::ostream& operator<<(std::ostream& os,
                         const Server_session_mv<Server_session_impl_t>& val);

/**
 * Prints string representation of the given `Client_session_mv` to the given `ostream`.
 *
 * @relatesalso Client_session_mv
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<typename Client_session_impl_t>
std::ostream& operator<<(std::ostream& os, const Client_session_mv<Client_session_impl_t>& val);

/**
 * Prints string representation of the given `Session_server` to the given `ostream`.
 *
 * @relatesalso Session_server
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<schema::MqType S_MQ_TYPE_OR_NONE, bool S_TRANSMIT_NATIVE_HANDLES, typename Mdt_payload>
std::ostream& operator<<(std::ostream& os,
                         const Session_server
                           <S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload>& val);

} // namespace ipc::session

/**
 * `sync_io`-pattern counterparts to async-I/O-pattern object types in parent namespace ipc::session.
 *
 * In this case they are given as only a small handful of adapter templates. Just check out their doc headers.
 */
namespace ipc::session::sync_io
{

// Types.

template<typename Session>
class Server_session_adapter;

template<typename Session>
class Client_session_adapter;

template<typename Session_server>
class Session_server_adapter;


// Free functions.
/**
 * Prints string representation of the given `Server_session_adapter` to the given `ostream`.
 *
 * @relatesalso Server_session_adapter
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<typename Session>
std::ostream& operator<<(std::ostream& os,
                         const Server_session_adapter<Session>& val);

/**
 * Prints string representation of the given `Client_session_adapter` to the given `ostream`.
 *
 * @relatesalso Client_session_adapter
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<typename Session>
std::ostream& operator<<(std::ostream& os,
                         const Client_session_adapter<Session>& val);

/**
 * Prints string representation of the given `Session_server_adapter` to the given `ostream`.
 *
 * @relatesalso Session_server_adapter
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
template<typename Session_server>
std::ostream& operator<<(std::ostream& os,
                         const Session_server_adapter<Session_server>& val);

} // namespace ipc::session::sync_io