Flow-IPC 1.0.1
Flow-IPC project: Public API.
Catch-all namespace for the Flow-IPC project: A library/API in modern C++17 providing high-performance communication between processes.
Namespaces

namespace bipc
    Short-hand for boost.interprocess namespace.

namespace fs
    Short-hand for filesystem namespace.

namespace session
    Flow-IPC module providing the broad lifecycle and shared-resource organization – via the session concept – in such a way as to make it possible for a given pair of processes A and B to set up ipc::transport structured- or unstructured-message channels for general IPC, as well as to share data in SHared Memory (SHM).

namespace shm
    Modules for SHared Memory (SHM) support.

namespace transport
    Flow-IPC module providing transmission of structured messages and/or low-level blobs (and more) between pairs of processes.

namespace util
    Flow-IPC module containing miscellaneous general-use facilities that are ubiquitously used by ~all Flow-IPC modules and/or do not fit into any other Flow-IPC module.
Enumerations

enum class Log_component { S_END_SENTINEL }
    The flow::log::Component payload enumeration containing various log components used by Flow-IPC internal logging.

Variables

const boost::unordered_multimap< Log_component, std::string > S_IPC_LOG_COMPONENT_NAME_MAP
    The map generated by flow::log macro magic that maps each enumerated value in ipc::Log_component to its string representation as used in log output and verbosity config.
Catch-all namespace for the Flow-IPC project: A library/API in modern C++17 providing high-performance communication between processes.
It includes schema-based structured message definition and zero-copy transport between processes. It also includes a SHared Memory (SHM) module for direct allocation and related needs; and, in particular, support for passing references to such bulk objects through the aforementioned messaging transport system.
From the user's perspective, one should view this namespace as the "root," meaning it consists of two parts:

  - Sub-namespaces – notably ipc::util, ipc::transport, ipc::session, and ipc::shm – each of which is a Flow-IPC module providing a grouped area of functionality (see the namespace list above).
  - Symbols directly in ipc: a small number of essential items, notably enum class ipc::Log_component, which defines the set of possible flow::log::Component values logged from within all modules of Flow-IPC. See end of common.hpp.

Unlike with, say, Boost or Flow, the user of Flow-IPC should be familiar with the totality of its modules. They are interrelated and meant to be used together in a typical application. Contrast with how one might use flow::log but not flow::async, or boost.asio but not boost.random. A summary of the modules follows:
  - ipc::util: miscellaneous general-use facilities used by essentially all of the other modules.
  - ipc::transport: transmission of structured messages and/or low-level blobs (and more) between pairs of processes.
  - ipc::session: the broad lifecycle and shared-resource organization – via the session concept – through which a pair of processes sets up channels and, optionally, shares data in SHM.
  - ipc::shm: SHared Memory (SHM) support. Unless you want to directly store C++ objects (structs and STL containers) into SHM – which is an advanced but sometimes desirable capability – direct use of ipc::shm is not necessary.

The above text views Flow-IPC somewhat as a monolithic whole. Indeed the present documentation generally treats the entirety of Flow-IPC as available and usable, even though there are various sub-namespaces, as shown, that break the monolith into cooperating modules. When it comes to practical needs, this view is sufficient. Really, most users will (1) start a session (using ipc::session::shm for max performance), (2) use the session to create 1+ ipc::transport::Channel, (3) typically upgrade each to an ipc::transport::struc::Channel immediately (a one-liner), (4) use the struc::Channel API to send/receive messages with automatic end-to-end zero-copy performance. (5) Optionally one can also access a SHM arena for direct C++ object placement and access in SHM; the SHM arenas are available from the session object.
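To make that mainstream workflow concrete, below is a minimal, non-authoritative sketch. The specific type names, aliases, and member functions shown (open_channel(), Structured_channel, create_msg(), body_root(), send(), session_shm()) are illustrative assumptions drawn from the description above, not verified signatures; consult the ipc::session and ipc::transport::struc reference pages for the real API.

  // Hedged sketch of the (1)-(5) workflow above; names and signatures are assumptions.
  // Assume `session` is an already-established ipc::session session object
  // (e.g., one of the ipc::session::shm variants for max performance).

  // (2) Create an unstructured channel via the session (hypothetical call form).
  auto channel = session.open_channel(/* ... */);

  // (3) Upgrade it to a structured channel in one line; `Structured_channel` is a
  //     hypothetical alias for ipc::transport::struc::Channel over a capnp schema.
  Session::Structured_channel<my_schema::MyMessage>
    struc_channel{/* ..., */ std::move(channel) /* , ... */};

  // (4) Send a structured message; end-to-end zero-copy when SHM-backed.
  auto msg = struc_channel.create_msg();       // out-message (hypothetical factory name)
  msg.body_root()->setSomeField(42);           // capnp-generated mutator (schema-dependent)
  struc_channel.send(msg);

  // (5) Optionally: direct C++ object placement in SHM via a session-provided arena.
  auto* shm_arena = session.session_shm();     // hypothetical accessor name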
That said, read on if you want to maintain Flow-IPC or otherwise understand it more deeply. There is a deeper organization to this monolith, in which the whole is built up out of smaller parts, generally avoiding circular dependencies (A needs B, and B needs A). Let's briefly name the most basic parts, then show what depends on them, and so on, until everything is listed in bottom-up order. To wit:
  - The unstructured core of ipc::transport: the Blob_{send|receiv}er concepts (+ Native_handle_{send|receiv}er); and their implementations over the specific low-level transports (Unix domain sockets, MQs as of this writing).
  - transport::Channel, an unstructured channel bundling such low-level transports; for example a Channel might contain a blobs pipe and a native handles pipe.
  - transport::struc::Channel, the absolute most important object (possibly in all of Flow-IPC), adapts transport::Channel – including, for example, leveraging the fact just mentioned, that an unstructured Channel might contain a blobs pipe and a native handles pipe. (Typically one upgrades a transport::Channel to a transport::struc::Channel: a structured channel.) struc::Channel adapts an unstructured Channel, allowing efficient transmission of structured messages filled out according to user-provided capnp-generated schema(s). At its simplest, without the (spoiler alert) shm sub-namespace, an out-message is backed by the regular heap (new, etc.); and a received in-message is a copy, also backed by the regular heap of the recipient process. shm (see below) will add SHM capabilities.
  - ipc::shm: direct storage in SHM of C++ structs and... etc. You, the user, will only directly use this if you need such functionality. If you only need to send structured messages with max perf (which internally is achieved using SHM), then you need not directly mention this.
  - STL support for SHM-stored data: containers such as list<> and flow::util::Blob (see ipc::shm::stl).
  - transport::struc::shm: shm::Builder and shm::Reader, which are impls of the ipc::transport::struc::Struct_builder and Struct_reader concepts that enable end-to-end zero-copy transmission of any capnp-schema-based message – as long as one has certain key SHM-enabling objects, most notably a Shm_arena. In addition, each SHM provider's sub-namespace combines shm::Builder and shm::Reader in a key convenience alias.
  - The SHM providers themselves: shm::classic, whose arena type is classic::Pool_arena – a "classic" single-segment (pool) SHM arena with a simple arena-allocation algorithm – and shm::arena_lend::jemalloc, whose arena type is jemalloc::Ipc_arena.
  - ipc::session: sessions between pairs of processes, whose purpose is, one: to painlessly open channels – the transport::Channel and struc::Channel objects opened via these sessions. And, optionally, two: to have direct access to SHM-arena(s) in which to place and access C++ objects.

Again – there's no need to understand all this, if you're just using Flow-IPC in the expected mainstream ways. Nevertheless it could be a useful, if wordy, map to the building blocks of Flow-IPC and how they interact.
The above describes Flow-IPC as a whole. Generally we recommend a distribution of Flow-IPC which includes all the pieces, to be used at will. That said, for reasons outside our scope here, this project is actually distributed in a few parts, each of which is a library with a set of header files. (The generated documentation is created from all of them together, and therefore various comments aren't particularly shy about referring to items across the boundaries between those parts.) These parts (libraries) are:
  - ipc_core: Contains: ipc::util, ipc::transport (excluding ipc::transport::struc).
  - ipc_transport_structured: Contains: ipc::transport::struc (excluding ipc::transport::struc::shm).
  - ipc_session: Contains: ipc::session (excluding ipc::session::shm).
  - ipc_shm: Contains: ipc::shm::stl, ipc::transport::struc::shm (including ipc::transport::struc::shm::classic but excluding all other such sub-namespaces), ipc::shm::classic + ipc::session::shm::classic.
  - ipc_shm_arena_lend: Contains: ipc::transport::struc::shm::arena_lend, ipc::shm::arena_lend + ipc::session::shm::arena_lend.

The dependencies between these are as follows:
  - ipc_core <- each of the others;
  - ipc_transport_structured <- ipc_session <- ipc_shm <- ipc_shm_arena_lend.

(Additionally, ipc_shm_arena_lend depends on ipc_transport_structured in certain key internal impl details. We are not pointing out every direct dependency here, leaving it out as long as it is implied by another indirect dependency – such as ipc_shm_arena_lend indirectly depending on ipc_transport_structured via several others.)

Each one, in the source code, is in a separate top-level directory; and generates a separate static library. However, their directory structures – and accordingly the namespace trees – overlap in naming, which manifests itself when 2 or more of the sub-components are installed together. For example ipc_session places ipc/session into the #include tree; and ipc_shm places ipc/session/shm/classic within that.
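For instance, the overlapping include tree might look as follows; the header file names in this snippet are hypothetical placeholders, chosen only to illustrate that ipc_session provides the ipc/session/... subtree while ipc_shm adds headers under ipc/session/shm/classic/... within it.

  // Hypothetical header names, purely to illustrate the overlapping #include tree.
  #include <ipc/session/client_session.hpp>              // installed by ipc_session (name is an assumption)
  #include <ipc/session/shm/classic/client_session.hpp>  // installed by ipc_shm (name is an assumption)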
Flow-IPC requires Flow and Boost, not only for internal implementation purposes but also in some of its APIs. For example, flow::log is the assumed logging system, and flow::Error_code and related conventions are used for error reporting; and boost::interprocess and boost::thread APIs may be exposed at times.
Moreover, Flow-IPC shares Flow "DNA" in terms of coding style, error, logging, documentation, etc., conventions. Flow-IPC and Flow itself are also more loosely inspired by Boost "DNA." (For example: snake_case for identifier naming is inherited from Flow, which inherits it more loosely from Boost; the error-reporting API convention is taken from Flow, which uses a modified version of the boost.asio convention.)
All code in the project proper follows a high standard of documentation, almost solely via comments therein (plus a guided Manual in manual/....dox.txt files, also as Doxygen-read comments). The standards and mechanics w/r/t documentation are entirely inherited from Flow. Therefore, see the namespace flow doc header's "Documentation / Doxygen" section. It applies verbatim here (within reason). (Spoiler alert: Doc header comments on most entities (classes, functions, ...) are friendly to doc web page generation by Doxygen. Doxygen is a tool similar to Javadoc.)
The only exception is the addition of the aforementioned guided Manual, which Flow lacks as of this writing (for the time being).
This section discusses usability topics that apply to all Flow-IPC modules – including, hopefully, any future ones, but definitely all existing ones as of this writing.
The standards and mechanics w/r/t error reporting are entirely inherited from Flow. Therefore, see the namespace flow doc header's "Error reporting" section. It applies verbatim (within reason) here.
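To illustrate the inherited convention: Flow-style APIs typically take an optional flow::Error_code out-argument; passing a non-null pointer reports errors by setting it, while passing null (or omitting it) reports errors by throwing an exception carrying the same code. The function some_ipc_operation() below is hypothetical, shown only to demonstrate the calling pattern; the header path is an assumption as well.

  #include <flow/common.hpp>   // for flow::Error_code (header path is an assumption)

  void demo(Some_flow_ipc_api& api)  // Some_flow_ipc_api and some_ipc_operation() are hypothetical
  {
    // Error-code style: no exception; on failure err_code becomes truthy.
    flow::Error_code err_code;
    api.some_ipc_operation(&err_code);
    if (err_code)
    {
      // Handle it; err_code.message() yields a human-readable description.
    }

    // Throwing style: pass no Error_code*; on failure an exception carrying the code is thrown.
    api.some_ipc_operation();
  }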
We use the Flow log module, in the flow::log namespace, for logging. We are just a consumer, but this does mean the Flow-IPC user must supply a flow::log::Logger into various APIs in order to enable logging. (Worst-case, passing Logger == null will make it log nowhere.) See the flow::log docs. Spoiler alert: You can hook it up to whatever logging output/other logging API you desire, or it can log for you in certain common ways including console and rotated files.
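For example, a console logger from Flow can be created and handed to Flow-IPC objects roughly as follows; the exact constructor arguments of flow::log::Config and flow::log::Simple_ostream_logger shown here are assumptions, and some_flow_ipc_api() is a hypothetical call site.

  #include <flow/log/simple_ostream_logger.hpp>  // header path is an assumption
  #include <flow/log/config.hpp>                 // header path is an assumption
  #include <iostream>

  int main()
  {
    // A simple console Logger from Flow (constructor arguments may differ in reality).
    flow::log::Config log_config{flow::log::Sev::S_INFO};
    flow::log::Simple_ostream_logger logger{&log_config, std::cout, std::cerr};

    // Pass &logger (a flow::log::Logger*) into Flow-IPC APIs that accept one.
    // Passing a null Logger* instead simply disables logging for that object.
    // some_flow_ipc_api(&logger, /* ... */);   // hypothetical
    return 0;
  }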
enum class ipc::Log_component [strong]
The flow::log::Component payload enumeration containing various log components used by Flow-IPC internal logging.

Internal Flow-IPC code specifies members thereof when indicating the log component for each particular piece of logging code. The Flow-IPC user specifies it, albeit very rarely, when configuring their program's logging, such as via flow::log::Config::init_component_to_union_idx_mapping() and flow::log::Config::init_component_names().
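A hedged sketch of that configuration step follows. Only the two method names and the ipc::S_IPC_LOG_COMPONENT_NAME_MAP variable come from this documentation; the exact argument lists (the numeric index offset, the boolean, the "ipc-" prefix) are illustrative assumptions.

  // Registering Flow-IPC's log components with a flow::log::Config
  // (sketch only; argument lists are assumptions).
  flow::log::Config log_config;

  // Reserve a numeric index range for ipc::Log_component values (offset value illustrative).
  log_config.init_component_to_union_idx_mapping<ipc::Log_component>(1000 /* , ... */);

  // Register the auto-generated names, optionally prepending a prefix (here "ipc-").
  log_config.init_component_names(ipc::S_IPC_LOG_COMPONENT_NAME_MAP, false, "ipc-");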
If you are reading this in Doxygen-generated output (likely a web page), be aware that the individual enum values are not documented right here, because flow::log auto-generates those via certain macro magic, and Doxygen cannot understand what is happening. However, you will find the same information directly in the source file log_component_enum_declare.macros.hpp (if the latter is clickable, click to see the source).

See comment in similar place in flow/common.hpp.
Enumerator

S_END_SENTINEL
    CAUTION – see the ipc::Log_component doc header for directions to find the actual members of this enum. This entry is a placeholder for Doxygen purposes only, because of the macro magic involved in generating the actual enum.
const boost::unordered_multimap< Log_component, std::string > ipc::S_IPC_LOG_COMPONENT_NAME_MAP [extern]
The map generated by flow::log macro magic that maps each enumerated value in ipc::Log_component to its string representation as used in log output and verbosity config.

The Flow-IPC user specifies it, albeit very rarely, when configuring their program's logging via flow::log::Config::init_component_names().
As a Flow-IPC user, you can informally assume that if the component enum member is called S_SOME_NAME, then its string counterpart in this map will be auto-computed to be "SOME_NAME" (optionally prepended with a prefix as supplied to flow::log::Config::init_component_names()). This is achieved via flow::log macro magic.