Flow-IPC 1.0.0
Flow-IPC project: Full implementation reference.
Todo List
Page Asynchronicity and Integrating with Your Event Loop
We may supply an alternative API wherein Flow-IPC objects can be boost.asio I/O objects themselves, similarly to boost::asio::ip::tcp::socket.
Member ipc::session::Client_session_impl< S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload, S_SHM_TYPE_OR_NONE, S_GRACEFUL_FINISH_REQUIRED_V >::create_channel_obj (const Shared_name &mq_name_c2s_or_none, const Shared_name &mq_name_s2c_or_none, util::Native_handle &&local_hndl_or_null, Channel_obj *opened_channel_ptr, bool active_else_passive, Error_code *err_code_ptr)
As of this writing, the eventuality wherein Client_session_impl::create_channel_obj() yields an error is treated as assertion-trip-worthy by its caller; hence consider simply tripping the assertion inside the function and removing the error out-arg. For now it is left this way, in case we'd want the (internal) caller to do something more graceful in the future; in the meantime it's a decently reusable chunk of code to have for that alleged contingency.
Member ipc::session::Client_session_mv< Client_session_impl_t >::sync_connect (Error_code *err_code=0)
Consider adding an optional mode/feature that would wait through the condition wherein the CNS (PID) file does not exist, or it does exist but the initial session-open connect op would be refused; instead of failing, it would detect these relatively common conditions (server not yet up and/or restarting and/or operationally suspended for now, etc.) as normal and wait until the condition clears. Without this mode, a typical user would probably do something like: oh, sync_connect() failed; let's sleep 1 sec and try again (rinse/repeat). That's not awful, but we might as well make it easier and more responsive out of the box (optionally). Upon resolving this to-do, please update the Manual section Sessions: Setting Up an IPC Context accordingly.
Class ipc::session::Session_base< S_MQ_TYPE_OR_NONE, S_TRANSMIT_NATIVE_HANDLES, Mdt_payload >::Graceful_finisher
Consider how to avoid having the SHM-jemalloc ipc::session mechanism require that one side's Session dtor await (and possibly block on) the start of the opposing side's Session dtor before it can proceed. The reason it does this today is explained above this to-do in the code. The blocking is not too bad – if the user's applications are properly written, it will not occur, and the existing logging should make clear why it happens, if it does – but it's still not ideal. There are a couple of known mitigating approaches, and at least one ticket covering them is likely filed. (Either way the Graceful_finisher mechanism could then probably be removed.) To summarize them: (1) There could be a mechanism deep inside the SHM-jemalloc code that gives an (arbitrarily long, configurable) grace period to known borrowed objects whose session channels have closed before those objects were returned; this was echan's idea on this topic. Once the grace period is reached, it would act as if they got returned at that point (so in the above scenario the arena could get jemalloc-deinitialized and all). If the program needs to exit, it could either block until the end of the grace period or... who knows? (2) Something similar could be done about the SHM-internal-use channel: do not close it but add it to some list of such channels; they would be finally closed upon detection of the other side's Session dtor being reached, in some background thread somewhere. (On second thought this sounds a lot like Graceful_finisher – but for this particular internal-use channel only. The code here would only get more complex, though maybe not too much more. However, it would resolve the observed Session-dtor-blocking user-visible issue, at least until the delayed-channel-death process had to exit entirely.) Beyond these basic ideas (which could even perhaps be combined) this requires some thinking; it is an interesting problem. In the meantime the existing dtor-cross-process-barrier-of-sorts solution is pretty decent.
Class ipc::session::Session_server_impl< Session_server_t, Server_session_t >
Session_server, probably in ctor or similar, should – for safety – enforce the accuracy of Server_app attributes including App::m_exec_path, App::m_user_id, App::m_group_id. As of this writing it enforces these things about each opposing Client_app and process – so for sanity it can/should do so about itself, before the sessions can begin.
Member ipc::session::Session_server_impl< Session_server_t, Server_session_t >::this_session_srv ()
Reconsider the details of how classes in the non-virtual hierarchies Session_server, Server_session, Session_server_impl, Server_session_impl cooperate internally, as there is some funky stuff going on, particularly Session_server_impl::this_session_srv().
Member ipc::session::shm::arena_lend::jemalloc::error::S_SHM_ARENA_CREATION_FAILED
If ipc::shm::arena_lend arena-creation API(s) are modified to output an Error_code instead of reporting just success-versus-failure, then ipc::session::error::Code::S_SHM_ARENA_CREATION_FAILED should go away.
Member ipc::session::shm::arena_lend::jemalloc::error::S_SHM_ARENA_LEND_FAILED
If ipc::shm::arena_lend arena-lending API(s) are modified to output an Error_code, instead of reporting just success-versus-failure and reporting the problem through an async callback, then ipc::session::error::Code::S_SHM_ARENA_LEND_FAILED should go away. See the shm::arena_lend::jemalloc::init_shm() body for more discussion.
Class ipc::session::sync_io::Client_session_adapter< Session >
Make all of Server_session_adapter, Client_session_adapter move-ctible/assignable like their adapted counterparts. It is not of utmost importance practically, unlike for the adapted guys, but at least for consistency it would be good; and of course it never hurts usability even if not critical. (Internally: This is not difficult to implement; the async-I/O guys being movable was really the hard part.)
Class ipc::shm::stl::Stateless_allocator< T, Arena >
Currently Arena::Pointer shall be a fancy-pointer, but we could support raw pointers also. Suppose Arena_obj is set up in such a way as to map all processes' locally-dereferenceable pointers to the same SHM location to the same numeric value (by specifying each pool's start as some predetermined numerical value in the huge 64-bit vaddr space, in all processes sharing that SHM pool). Then no address translation is needed, and Arena::Pointer could simply be T*. As of this writing some inner impl details assume it is a fancy-pointer, and that is the only active need we have; but that could be tweaked with a bit of effort.
Namespace ipc::transport::asio_local_stream_socket

asio_local_stream_socket additional feature: APIs that can read and write native sockets together with accompanying binary blobs can be extended to handle an arbitrary number of native handles (per call) as opposed to only 0 or 1. The main difficulty here is designing a convenient and stylish, yet performant, API.

At least asio_local_stream_socket::async_read_with_target_func() can be extended to other stream sockets (TCP, etc.). In that case it should be moved to a different namespace however (perhaps named asio_stream_socket; could then move the existing asio_local_stream_socket inside that one and rename it local).

asio_local_stream_socket additional feature: APIs that can read and write native handles together with accompanying binary blobs can be extended to handle scatter/gather semantics for the aforementioned blobs, matching standard boost.asio API functionality.

asio_local_stream_socket additional feature: Non-blocking APIs like nb_read_some_with_native_handle() and nb_write_some_with_native_handle() can gain blocking counterparts, matching standard boost.asio API functionality.

asio_local_stream_socket additional feature: async_read_some_with_native_handle() – async version of existing nb_read_some_with_native_handle(). Or another way to put it is, equivalent of boost.asio Peer_socket::async_read_some() but able to read native handle(s) with the blob. Note: This API would potentially be usable inside the impl of existing APIs (code reuse).

asio_local_stream_socket additional feature: async_read_with_native_handle() – async version of existing nb_read_some_with_native_handle(), plus the "stubborn" behavior of built-in async_read() free function. Or another way to put it is, equivalent of boost.asio async_read<Peer_socket>() but able to read native handle(s) with the blob. Note: This API would potentially be usable inside the impl of existing APIs (code reuse).

Namespace ipc::transport::error
This file and error.cpp are perfectly reasonable, worth-it, and standard boiler-plate, and they are not that lengthy really; but it'd be nice to stop having to copy/paste all the stuff beyond just the error codes and messages. We should add a feature to Flow that reduces the boiler-plate to just that; maybe some kind of base class or something. boost.system already makes it pretty easy, but even that can probably be "factored out" into Flow. Update: I (ygoldfel) took a look at it for another project, and so far no obvious de-boiler-plate ideas come to mind. Ideally the inputs are: (1) an enum with the codes, like error::Code; (2) an int-to-string message table function, for log messages; and (3) a brief const char* identifying the code set, for log messages. The rest is the boiler-plate, but all of it seems either to be already acceptably brief, or, where something isn't quite so, I can't think of an obvious way to factor it out. Of course a macro-based "meta-language" is always a possibility, as we did in flow::log, but in this case it doesn't seem worth it at all.
Class ipc::transport::Native_handle_sender

In C++20, if/when we upgrade to that, Native_handle_sender (and other such doc-only classes) can become an actual concept formally implemented by class(es) that, today, implement it via the "honor system." Currently it is a class #ifdef-ed out from actual compilation but still participating in doc generation. Note that Doxygen (current version as of this writing: 1.9.3) claims to support doc generation from formal C++20 concepts.

Comparing Blob_sender::send_blob_max_size() (and similar checks in that family of concepts) to test whether the object is in PEER state is easy enough, but perhaps we can have a utility that would more expressively describe this check: in_peer_state() free function or something? It can still use the same technique internally.

Class ipc::transport::Native_socket_stream_acceptor
At the moment, if one decides to use a Native_socket_stream_acceptor directly – not really necessary given ipc::session Channel-opening capabilities – then the user must come up with their own naming scheme that avoids name clashes; we could supply an ipc::session-facilitated system for providing this service instead. I.e., ipc::session could expose a facility for generating the Shared_name absolute_name arg to the Native_socket_stream_acceptor ctor (and the opposing Native_socket_stream::sync_connect() call); alternatively it could provide some kind of Native_socket_stream_acceptor factory and a corresponding opposing facility. Long story short, within the ipc::session way of life literally only one acceptor exists, and it is set up (and named) internally to ipc::session. We could provide a way to facilitate the creation of more acceptors, if desired, by helping to choose their Shared_names. (An original "paper" design did specify a naming scheme for this.)
Member ipc::transport::Native_socket_stream_acceptor::m_next_peer_socket
Perform a rigorous analysis of the perf and style trade-offs between move-construction-based patterns versus shared_ptr-based ones, possibly focusing on boost.asio socket objects in particular.
Class ipc::transport::struc::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >

Consider adding the optional expectation of a particular Msg_which_in when registering expected responses in struc::Channel::async_request().

struc::Channel should probably be made move-constructible and move-assignable. No concept requires this, unlike with many other classes and class templates in ipc, so it is less essential; but both for consistency and usability it would be good. It would also make some APIs possible that currently would require the user to explicitly wrap this class in a unique_ptr. For example, imagine a Session::Structured_channel Session::structured_channel_upgrade(Channel_obj&& channel, ...) that constructs a suitably-typed struc::Channel, subsuming the raw channel just opened in that session::Session, and returns that guy. Currently it would need to return unique_ptr<Session::Structured_channel> or something.

Member ipc::transport::struc::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >::async_end_sending (Task_err &&on_done_func)
Consider adding blocking struc::Channel::sync_end_sending(), possibly with timeout, since async_end_sending() to completion is recommended (but not required) to execute at EOL of any struc::Channel. The "blocking" part is good because it's convenient to not have to worry about handling completion with async semantics boiler-plate. The "timeout" part is good because it's a best-effort operation, when in a bad/slow environment, and blocking – even during deinit – would be important to avoid. For example, the Session impls' dtors perform this op; we don't want those to block for any significant time if at all avoidable. The reason it's a mere to-do currently is that a bug-free opposing struc::Channel should not let would-block occur for any real length of time; so blocking is presumably unlikely. Nevertheless.
Member ipc::transport::struc::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >::owned_channel_mutable ()
Consider adding struc::Channel::auto_ping() and struc::Channel::idle_timer_run() and, for safety, removing struc::Channel::owned_channel_mutable().
Class ipc::transport::struc::Msg_in_impl< Message_body, Struct_reader_config >
Msg_in_impl is pretty wordy; maybe friend would have been stylistically acceptable after all? It's so much briefer, and we could simply resolve to only access the protected APIs and not private stuff....
Member ipc::transport::struc::Session_token
Look into whether something smaller than RFC 4122 UUIDs can and should be used for Session_token. This would be for perf but may well be unnecessary. See discussion near this to-do.
Member ipc::transport::struc::shm::Capnp_message_builder< Shm_arena >::lend (schema::detail::ShmTopSerialization::Builder *capnp_root, session::shm::Arena_to_shm_session_t< Arena > *shm_session)
Would be nice to provide a more-general counterpart to existing Capnp_message_builder::lend() (in addition to that one which outputs into a capnp structure), such as one that outputs a mere Blob. The existing one is suitable for the main use-case which is internally by shm::Builder; but Capnp_message_builder is also usable as a capnp::MessageBuilder directly. If a user were to indeed leverage it in that latter capacity, they may want to transmit/store the SHM-handle some other way. Note that as of this writing the direct-use-by-general-user-as-MessageBuilder use-case is supported "just because" it can be; nothing in particular needed it.
Class ipc::transport::struc::sync_io::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >::Msg_in_pipe
Look into the algorithm documented in Channel::Msg_in_pipe wherein (with 2 pipes in the channel) some low-level messages associated with handle-bearing user structured messages are sent over the (presumably somewhat slower) handles pipe despite, themselves, not containing a native handle being transmitted. See text just above this to-do in the code.
Member ipc::transport::sync_io::Blob_stream_mq_receiver_impl< Persistent_mq_handle >::m_nb_task_engine
Consider using specialization instead of, or in addition to, if constexpr() w/r/t Mq::S_HAS_NATIVE_HANDLE-based compile-time branching: it could save some RAM by eliminating optionals such as the one near this to-do, though code would likely become wordier.
Member ipc::transport::sync_io::Blob_stream_mq_receiver_impl< Persistent_mq_handle >::m_target_control_blob
Maybe we should indeed use m_user_request->m_target_blob (instead of locally stored m_target_control_blob) to save RAM/a few cycles? Technically at least user should not care if some garbage is temporarily placed there (after PING a real message should arrive and replace it; or else on error or graceful-close who cares?).
Member ipc::transport::sync_io::Blob_stream_mq_sender_impl< Persistent_mq_handle >::m_nb_task_engine
Consider using specialization instead of, or in addition to, if constexpr() w/r/t Mq::S_HAS_NATIVE_HANDLE-based compile-time branching: it could save some RAM by eliminating optionals such as the one near this to-do, though code would likely become wordier.
Class ipc::transport::sync_io::Native_socket_stream::Impl
Internal Native_socket_stream and Native_socket_stream_acceptor queue algorithms and data structures should be checked for RAM use; perhaps something should be periodically shrunk if applicable. Look for vectors, deques (including inside queues).
Member ipc::transport::sync_io::Native_socket_stream::Impl::m_peer_socket
Peer_socket m_peer_socket synchronous-read ops (read_some()) are actually documented in boost::asio to be thread-safe against concurrently invoked synchronous-write ops (write_some()), as are OS calls "::recvmsg()", "::sendmsg()"; therefore for possible perf bump consider never nullifying Impl::m_peer_socket; eliminating Impl::m_peer_socket_mutex; and letting each direction's logic discover any socket-error independently. (But, at the moment, m_peer_socket_mutex also covers mirror-guy m_ev_wait_hndl_peer_socket, so that must be worked out.)
Class ipc::util::Default_init_allocator< T, Allocator >
ipc::util::Default_init_allocator should be moved into Flow's flow::util.
Member ipc::util::operator<< (std::ostream &os, const Shared_name &val)
Does Shared_name operator>> and operator<< being asymmetrical get one into trouble when using Shared_name with boost.program_options (or flow::cfg which is built on top of it)? Look into it. It may be necessary to make operator<< equal to that of ostream << string after all; though the added niceties of the current << semantics may still at least be available via some explicit accessor.
Member ipc::util::Shared_name::m_raw_name
Consider providing a ref-to-mutable accessor to Shared_name::m_raw_name (or just make public). There are pros and cons; the basic pro being that Shared_name is meant to be a very thin wrapper around std::string, so it might make sense to allow for in-place modification without supplying some kind of reduced subset of string API. Suggest doing this to-do if a practical desire comes about.
Member ipc::util::Shared_name::S_MAX_LENGTH

Research real limits on Shared_name::S_MAX_LENGTH for different real resource types; choose something for MAX_LENGTH that leaves enough slack to avoid running into trouble when making actual sys calls; as discussed in the at-internal section of Shared_name doc header about this topic. Explain here how we get to the limit ultimately chosen. The limit as of this writing is 64, but real research is needed.

Shared_name::S_MAX_LENGTH currently applies to all shared resource types, but it'd be a useful feature to have different limits depending on OS/whatever limitations for particular resources types such as SHM object names versus queue names versus whatever.

Namespace ipc::util::sync_io
Write an example of sync_io-pattern use with an old-school reactor-pattern event loop, using poll() and/or epoll_*().
Member TEMPLATE_STRUCTURED_CHANNEL
Look over the rest of the code base – possibly in Flow too – and apply the define TEMPLATE_ + define CLASS_ technique where it'd be subjectively beneficial.