Flow-IPC 1.0.2
Flow-IPC project: Public API.
Namespaces | Typedefs | Enumerations | Variables
ipc Namespace Reference

Catch-all namespace for the Flow-IPC project: A library/API in modern C++17 providing high-performance communication between processes. More...

Namespaces

namespace  bipc
 Short-hand for boost.interprocess namespace.
 
namespace  fs
 Short-hand for filesystem namespace.
 
namespace  session
 Flow-IPC module providing the broad lifecycle and shared-resource organization – via the session concept – in such a way as to make it possible for a given pair of processes A and B to set up ipc::transport structured- or unstructured-message channels for general IPC, as well as to share data in SHared Memory (SHM).
 
namespace  shm
 Modules for SHared Memory (SHM) support.
 
namespace  transport
 Flow-IPC module providing transmission of structured messages and/or low-level blobs (and more) between pairs of processes.
 
namespace  util
 Flow-IPC module containing miscellaneous general-use facilities that are ubiquitously used by virtually all Flow-IPC modules and/or do not fit into any other Flow-IPC module.
 

Typedefs

using Error_code = flow::Error_code
 Short-hand for flow::Error_code which is very common.
 
template<typename Signature >
using Function = flow::Function< Signature >
 Short-hand for polymorphic functor holder which is very common. This is essentially std::function.
 

Enumerations

enum class  Log_component { S_END_SENTINEL }
 The flow::log::Component payload enumeration containing various log components used by Flow-IPC internal logging. More...
 

Variables

const boost::unordered_multimap< Log_component, std::string > S_IPC_LOG_COMPONENT_NAME_MAP
 The map generated by flow::log macro magic that maps each enumerated value in ipc::Log_component to its string representation as used in log output and verbosity config. More...
 

Detailed Description

Catch-all namespace for the Flow-IPC project: A library/API in modern C++17 providing high-performance communication between processes.

It includes schema-based structured message definition and zero-copy transport between processes. It also includes a SHared Memory (SHM) module for direct allocation and related needs; and particular support for passing references to such bulk objects through the aforementioned messaging transport system.

Note
Nomenclature: The project is called Flow-IPC.

From the user's perspective, one should view this namespace as the "root," meaning it consists of two parts:

Flow-IPC modules overview

Unlike with, say, Boost or Flow, the user of Flow-IPC should be familiar with the totality of its modules. They're interrelated and to be used together in a typical application. Contrast with how one might use flow::log but not flow::async, or boost.asio but not boost.random. Summary of modules follows:

Note
Nomenclature: We refer to Cap'n Proto as capnp, lower-case, no backticks. Keep to this consistent convention in comments.

The above text views Flow-IPC somewhat as a monolithic whole. Indeed the present documentation generally treats the entirety of Flow-IPC as available and usable, even though there are various sub-namespaces as shown that break the monolith into cooperating modules. When it comes to practical needs, this view is sufficient. Really, most users will (1) start a session (using ipc::session::shm for max performance), (2) use the session to create 1+ ipc::transport::Channel, (3) typically upgrade each to ipc::transport::struc::Channel immediately (a one-liner), (4) use the struc::Channel API to send/receive messages with automatic end-to-end zero-copy performance. (5) Optionally one can also access a SHM-arena for direct C++ object placement and access in SHM; the SHM arenas are available from the session object.
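The mainstream sequence above can be sketched in pseudocode. All names and signatures below (the session type, sync_connect(), open_channel(), the struc::Channel construction, the arena accessor) are simplified assumptions for illustration only; consult the ipc::session and ipc::transport::struc documentation for the real APIs.

```cpp
// PSEUDOCODE sketch of the mainstream flow; names/signatures are simplified
// assumptions, not the real Flow-IPC API.

// (1) Start a session (SHM-backed variant for max performance).
Session session = /* an ipc::session::shm client/server session */;
session.sync_connect();

// (2) Use the session to open a channel to the opposing process.
auto channel = session.open_channel();

// (3) Upgrade it to a structured channel (a one-liner); the schema is
//     capnp-generated from a user-provided .capnp file.
struc::Channel<MySchema> struct_channel{std::move(channel), /* ... */};

// (4) Send/receive structured messages: end-to-end zero-copy automatically.
auto msg = struct_channel.create_msg();
msg.body_root()->setSomething(42);  // hypothetical capnp-generated setter
struct_channel.send(msg);

// (5) Optionally place C++ objects directly in SHM via the session's arena.
auto* arena = session.session_shm();  // assumed accessor name
```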

That said, read on if you want to maintain or otherwise understand Flow-IPC more deeply. There's a deeper organization of this monolith, in which one builds up the whole out of smaller parts, where we generally avoid circular dependencies (A needs B, and B needs A). Let's briefly go through the process of naming the most basic parts and then showing what depends on them, and so on, until everything is listed in bottom-up order. To wit:

  1. ipc::util: This contains basic, simple building blocks. ipc::util::Shared_name is used to name various shared resources throughout Flow-IPC. ipc::util::Native_handle is a trivial wrapper around a native handle (FD in POSIX/Linux/Unix parlance). There are various other items which you'll note when they're mentioned.
    • Dependents: Essentially all other code routinely depends on ipc::util.
  2. ipc::transport, excluding ipc::transport::struc: This is the transport core layer. Think of this as wrappers around legacy IPC transport APIs with which you may already be familiar: e.g., Unix domain sockets. There are key concepts, including ipc::transport::Blob_sender and Blob_receiver (+ Native_handle_{send|receiv}er); and their implementations over the aforementioned specific low-level transports (Unix domain sockets, MQs as of this writing).
  3. ipc::transport::struc: The transport structured layer builds on top of (1) the core layer and (2) capnp. struc::Channel adapts an unstructured Channel, allowing efficient transmission of structured messages filled out according to user-provided capnp-generated schema(s). At its simplest, without the (spoiler alert) shm sub-namespace, an out-message is backed by the regular heap (new, etc.); and a received in-message is a copy also backed by the regular heap of the recipient process.
  4. ipc::session, excluding ipc::session::shm: This is the core support for sessions, which are how one painlessly begins a conversation between your process and the opposing process. Without it you'll need to worry about low-level resource naming and cleanup; with it, it's taken care of – just open channels and use them. Spoiler alert: the sub-namespace shm (see below) will add SHM capabilities.
  5. ipc::shm::stl: A couple of key facilities here enable storage of STL-compliant C++ data structures directly in SHM; e.g., a map from strings to vectors of strings and structs and... etc. You, the user, will only directly use this, if you need such functionality. If you only need to send structured messages with max perf (which internally is achieved using SHM), then you need not directly mention this.
    • Dependents:
      • ipc::transport::struc::shm "eats our own dog food" by internally representing certain data structures using STL-compliant APIs, including list<> and flow::util::Blob.
  6. ipc::transport::struc::shm: This essentially just adds shm::Builder and shm::Reader which are impls of ipc::transport::struc::Struct_builder and Struct_reader concepts that enable end-to-end zero-copy transmission of any capnp-schema-based message – as long as one has certain key SHM-enabling objects, most notably a Shm_arena.
    • Dependents:
      • ipc::session::shm mentions shm::Builder and shm::Reader in a key convenience alias.
      • The bottom line is, if you use SHM-enabled sessions – which is at least easily the most convenient way to obtain end-to-end zero-copy perf when transmitting structured messages along channels – then ipc::transport::struc::shm shall be used, most likely without your needing to mention it or worry about it.
  7. ipc::shm::classic: This is a SHM-provider (of SHM-arenas); namely the SHM-classic provider. The core item is ipc::shm::classic::Pool_arena, a "classic" single-segment (pool) SHM arena with a simple arena-allocation algorithm.
  8. ipc::shm::arena_lend (more specifically ipc::shm::arena_lend::jemalloc): This is the other SHM-provider (of SHM-arenas); namely the SHM-jemalloc provider. The core item is jemalloc::Ipc_arena, an arena whose allocation algorithm is supplied by the jemalloc memory allocator.
    • Dependents: ipc::session::shm::arena_lend::jemalloc provides SHM-enabled sessions with this SHM-provider as the required SHM engine. Naturally in so doing it depends on ipc::shm::arena_lend::jemalloc, especially jemalloc::Ipc_arena.
  9. ipc::session::shm: This namespace adds SHM-enabled sessions. Namely, it adds two capabilities: one, to easily get end-to-end zero-copy performance along struc::Channel objects opened via these sessions; and, optionally, two, to have direct access to SHM-arena(s) in which to place and access C++ objects.

Again – there's no need to understand all this, if you're just using Flow-IPC in the expected mainstream ways. Nevertheless it could be a useful, if wordy, map to the building blocks of Flow-IPC and how they interact.

Distributed sub-components (libraries)

The above describes Flow-IPC as a whole. Generally we recommend a distribution of Flow-IPC which includes all the pieces, to be used at will. That said, for reasons outside our scope here, this project is actually distributed in a few parts, each of which is a library with a set of header files. (The generated documentation is created from all of them together, and therefore various comments aren't particularly shy about referring to items across the boundaries between those parts.) These parts (libraries) are:

The dependencies between these are as follows:

Each one, in the source code, is in a separate top-level directory; and generates a separate static library. However, their directory structures – and accordingly the namespace trees – overlap in naming, which manifests itself when 2 or more of the sub-components are installed together. For example ipc_session places ipc/session into the #include tree; and ipc_shm places ipc/session/shm/classic within that.

Relationship with Flow and Boost

Flow-IPC requires Flow and Boost, not only for internal implementation purposes but also in some of its APIs. For example, flow::log is the assumed logging system, and flow::Error_code and related conventions are used for error reporting; and boost::interprocess and boost::thread APIs may be exposed at times.

Moreover, Flow-IPC shares Flow "DNA" in terms of coding style, error, logging, documentation, etc., conventions. Flow-IPC and Flow itself are also more loosely inspired by Boost "DNA." (For example: snake_case for identifier naming is inherited from Flow, which inherits it more loosely from Boost; the error reporting API convention is taken from Flow which uses a modified version of the boost.asio convention.)

Documentation / Doxygen

All code in the project proper follows a high standard of documentation, almost solely via comments therein (plus a guided Manual in manual/....dox.txt files, also as Doxygen-read comments). The standards and mechanics w/r/t documentation are entirely inherited from Flow. Therefore, see the namespace flow doc header's "Documentation / Doxygen" section. It applies verbatim here (within reason). (Spoiler alert: Doc header comments on most entities (classes, functions, ...) are friendly to doc web page generation by Doxygen. Doxygen is a tool similar to Javadoc.)

The only exception is the addition of the aforementioned guided Manual, which Flow lacks as of this writing (for the time being).

Using Flow-IPC modules

This section discusses usability topics that apply to all Flow-IPC modules – definitely all existing ones as of this writing and, hopefully, any future ones.

Error reporting

The standards and mechanics w/r/t error reporting are entirely inherited from Flow. Therefore, see the namespace flow doc header's "Error reporting" section. It applies verbatim (within reason) here.

Logging

We use the Flow log module, in flow::log namespace, for logging. We are just a consumer, but this does mean the Flow-IPC user must supply a flow::log::Logger into various APIs in order to enable logging. (Worst-case, passing Logger == null will make it log nowhere.) See flow::log docs. Spoiler alert: You can hook it up to whatever logging output/other logging API you desire, or it can log for you in certain common ways including console and rotated files.

Enumeration Type Documentation

◆ Log_component

enum class ipc::Log_component
strong

The flow::log::Component payload enumeration containing various log components used by Flow-IPC internal logging.

Internal Flow-IPC code specifies members thereof when indicating the log component for each particular piece of logging code. The Flow-IPC user specifies it, albeit very rarely, when configuring their program's logging, such as via flow::log::Config::init_component_to_union_idx_mapping() and flow::log::Config::init_component_names().

If you are reading this in Doxygen-generated output (likely a web page), be aware that the individual enum values are not documented right here, because flow::log auto-generates those via certain macro magic, and Doxygen cannot understand what is happening. However, you will find the same information directly in the source file log_component_enum_declare.macros.hpp (if the latter is clickable, click to see the source).

Details regarding overall log system init in user program

See comment in similar place in flow/common.hpp.

Enumerator
S_END_SENTINEL 

CAUTION – see ipc::Log_component doc header for directions to find actual members of this enum class.

This entry is a placeholder for Doxygen purposes only, because of the macro magic involved in generating the actual enum class.

Variable Documentation

◆ S_IPC_LOG_COMPONENT_NAME_MAP

const boost::unordered_multimap<Log_component, std::string> ipc::S_IPC_LOG_COMPONENT_NAME_MAP
extern

The map generated by flow::log macro magic that maps each enumerated value in ipc::Log_component to its string representation as used in log output and verbosity config.

The Flow-IPC user specifies it, albeit very rarely, when configuring their program's logging via flow::log::Config::init_component_names().

As a Flow-IPC user, you can informally assume that if the component enum member is called S_SOME_NAME, then its string counterpart in this map will be auto-computed to be "SOME_NAME" (optionally prepended with a prefix as supplied to flow::log::Config::init_component_names()). This is achieved via flow::log macro magic.

See also
ipc::Log_component first.