Flow-IPC 1.0.2
Flow-IPC project: Public API.
Todo List
Page Asynchronicity and Integrating with Your Event Loop
We may supply an alternative API wherein Flow-IPC objects can themselves be boost.asio I/O objects, similar to boost::asio::ip::tcp::socket.
Member ipc::session::Client_session_mv< Client_session_impl_t >::sync_connect (Error_code *err_code=0)
Consider adding an optional mode/feature that waits through the condition wherein the CNS (PID) file does not exist, or it exists but the initial session-open connect op would be refused; i.e., treat these relatively common conditions (server not yet up, restarting, operationally suspended for now, etc.) as normal, and wait until the condition clears. Without this mode, a typical user would probably do something like: oh, sync_connect() failed; let's sleep 1 second and try again (rinse/repeat), as sketched below. That is not awful, but we might as well make it easier and more responsive out of the box (optionally). Upon resolving this to-do, please update the Manual section Sessions: Setting Up an IPC Context accordingly.
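A minimal sketch of that workaround loop, assuming a Client_session-type object named session (name illustrative) and the non-throwing Error_code* overload; requires <chrono> and <thread>:

  ipc::Error_code err;
  do
  {
    session.sync_connect(&err); // absent/refusing server surfaces as truthy err
    if (err)
    {
      std::this_thread::sleep_for(std::chrono::seconds(1)); // rinse/repeat
    }
  }
  while (err);
  // Session is now open (PEER state).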
Class ipc::session::sync_io::Client_session_adapter< Session >
Make all of Server_session_adapter, Client_session_adapter move-constructible/assignable like their adapted counterparts. Practically it is not of utmost importance, unlike for the adapted guys, but at least for consistency it would be good; and of course it never hurts usability, even if it is not critical. (Internally: this is not difficult to implement; the async-I/O guys being movable was really the hard part.)
Class ipc::shm::stl::Stateless_allocator< T, Arena >
Currently Arena::Pointer shall be a fancy pointer, but we could support raw pointers also. Suppose Arena_obj is set up in such a way as to give the locally-dereferenceable pointer to any given SHM location the same numeric value in every process (by specifying each pool's start as some predetermined numeric value in the huge 64-bit vaddr space, in all processes sharing that SHM pool). Then no address translation is needed, and Arena::Pointer could simply be T*. As of this writing some inner impl details assume it is a fancy pointer, and that is the only active need we have; but this could be tweaked with a bit of effort.
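For illustration, the contrast might look as follows; boost::interprocess::offset_ptr stands in here for whatever fancy-pointer type a given Arena actually supplies:

  // Today: a fancy pointer, translated relative to the local pool mapping.
  template<typename T>
  using Fancy_pointer = boost::interprocess::offset_ptr<T>; // illustrative stand-in

  // Proposed option: every process maps each pool at the same predetermined
  // vaddr, so a raw pointer's numeric value is dereferenceable in all of them.
  template<typename T>
  using Raw_pointer = T*; // no translation needed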
Namespace ipc::transport::asio_local_stream_socket

asio_local_stream_socket additional feature: async_read_with_native_handle() – async version of the existing nb_read_some_with_native_handle(), plus the "stubborn" behavior of the built-in async_read() free function. Put another way: the equivalent of boost.asio's async_read<Peer_socket>(), but able to read native handle(s) along with the blob. Note: this API would potentially be usable inside the impl of existing APIs (code reuse). (A signature sketch follows the last item in this group.)

At least asio_local_stream_socket::async_read_with_target_func() can be extended to other stream sockets (TCP, etc.). In that case, however, it should be moved to a different namespace (perhaps named asio_stream_socket; the existing asio_local_stream_socket could then move inside that one and be renamed local).

asio_local_stream_socket additional feature: APIs that can read and write native handles together with accompanying binary blobs can be extended to handle an arbitrary number of native handles (per call), as opposed to only 0 or 1. The main difficulty here is designing a convenient and stylish, yet performant, API.

asio_local_stream_socket additional feature: APIs that can read and write native handles together with accompanying binary blobs can be extended to handle scatter/gather semantics for the aforementioned blobs, matching standard boost.asio API functionality.

asio_local_stream_socket additional feature: Non-blocking APIs like nb_read_some_with_native_handle() and nb_write_some_with_native_handle() can gain blocking counterparts, matching standard boost.asio API functionality.

asio_local_stream_socket additional feature: async_read_some_with_native_handle() – async version of the existing nb_read_some_with_native_handle(). Put another way: the equivalent of boost.asio's Peer_socket::async_read_some(), but able to read native handle(s) along with the blob. Note: this API would potentially be usable inside the impl of existing APIs (code reuse).
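To make the first and last items above concrete, here is a hypothetical signature sketch, by analogy with the existing nb_read_some_with_native_handle() and with boost::asio::async_read(); all names and parameter types here are assumptions, not actual API:

  // Hypothetical; not an existing API.
  template<typename Task_err>
  void async_read_with_native_handle(
    flow::log::Logger* logger_ptr,                 // logging, as in the nb_*() APIs
    Peer_socket* peer_socket,                      // connected local stream socket
    Native_handle* target_payload_hndl,            // receives any in-band native handle
    const util::Blob_mutable& target_payload_blob, // filled entirely ("stubbornly"), a la async_read()
    Task_err&& on_done_func);                      // invoked with an Error_code on completion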

Class ipc::transport::Native_handle_sender

Comparing Blob_sender::send_blob_max_size() (and similar checks in that family of concepts) to test whether the object is in PEER state is easy enough, but perhaps we could have a utility that describes this check more expressively: an in_peer_state() free function or something? It could still use the same technique internally.
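A minimal sketch of such a utility, assuming (per the technique alluded to above) that send_blob_max_size() returns zero unless the object is in PEER state:

  // Hypothetical free function; Sender models Blob_sender or a similar concept.
  template<typename Sender>
  bool in_peer_state(const Sender& snd)
  {
    return snd.send_blob_max_size() != 0; // same check, more expressive name
  }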

In C++20, if/when we upgrade to that standard, Native_handle_sender (and other such doc-only classes) can become an actual concept, formally implemented by the class(es) that today implement it via the "honor system." Currently it is a class #ifdef-ed out of actual compilation but still participating in doc generation. Note that Doxygen (current version as of this writing: 1.9.3) claims to support doc generation from formal C++20 concepts.
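A rough sketch of the idea, with the requirement list abbreviated and member signatures assumed for illustration only:

  #include <concepts>
  #include <cstddef>

  // Hypothetical C++20 concept; the real requirement list would mirror the
  // doc-only class's documented members, which this sketch abbreviates.
  template<typename T>
  concept Native_handle_sender_cpp20 = requires(T t)
  {
    { t.send_meta_blob_max_size() } -> std::convertible_to<std::size_t>;
    // ...plus send_native_handle(), *end_sending(), auto_ping(), etc.
  };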

Class ipc::transport::Native_socket_stream_acceptor
At the moment, if one decides to use a Native_socket_stream_acceptor directly – not really necessary given ipc::session's Channel-opening capabilities – the user must come up with their own naming scheme that avoids name clashes; we could instead supply an ipc::session-facilitated system for providing this service. I.e., ipc::session could either expose a facility for generating the Shared_name absolute_name arg to the Native_socket_stream_acceptor ctor (and the opposing Native_socket_stream::sync_connect() call), or provide some kind of Native_socket_stream_acceptor factory plus the corresponding opposing facility. Long story short: within the ipc::session way of life, literally only one acceptor exists, and it is set up (and named) internally to ipc::session. We could facilitate the creation of more acceptors, if desired, by helping to choose their Shared_names. (An original "paper" design did specify a naming scheme for this.)
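An entirely speculative sketch of the two shapes such a facility could take (none of these names exist today):

  // Option 1: ipc::session generates clash-free names; user owns the acceptor.
  //   Shared_name name = session.another_acceptor_name();      // hypothetical
  //   Native_socket_stream_acceptor acc(logger, name);         // server side
  //   Native_socket_stream conn;
  //   conn.sync_connect(name_derived_by_same_scheme);          // client side
  // Option 2: a factory that creates (and names) the acceptor internally.
  //   auto acc_ptr = session.make_socket_stream_acceptor();    // hypothetical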
Class ipc::transport::struc::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >

Consider adding the optional expectation of a particular Msg_which_in when registering expected responses in struc::Channel::async_request().

struc::Channel should probably be made move-constructible and move-assignable. No concept requires this, unlike with many other classes and class templates in ipc, so it is less essential; but both for consistency and for usability it would be good. It would also make possible some APIs that currently would require the user to explicitly wrap this class in a unique_ptr. For example, imagine a Session::Structured_channel Session::structured_channel_upgrade(Channel_obj&& channel, ...) that constructs a suitably-typed struc::Channel, subsuming the raw channel just opened in that session::Session, and returns it by value. Currently it would need to return unique_ptr<Session::Structured_channel> or something similar; see the sketch below.
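A sketch of the contrast, with function names invented for illustration:

  // Today (no movability): the upgrade API must heap-allocate.
  std::unique_ptr<Session::Structured_channel>
    structured_channel_upgrade_today(Channel_obj&& channel);

  // With movability: construct and return by value.
  Session::Structured_channel
    structured_channel_upgrade_desired(Channel_obj&& channel);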

Member ipc::transport::struc::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >::async_end_sending (Task_err &&on_done_func)
Consider adding a blocking struc::Channel::sync_end_sending(), possibly with a timeout, since running async_end_sending() to completion is recommended (but not required) at the EOL of any struc::Channel. The "blocking" part is good, because it is convenient not to have to handle completion with async-semantics boiler-plate. The "timeout" part is good, because this is a best-effort operation, and in a bad/slow environment it would be important to avoid blocking – even during deinit. For example, the Session impls' dtors perform this op; we do not want those to block for any significant time if at all avoidable. The reason it is a mere to-do currently is that a bug-free opposing struc::Channel should not let would-block persist for any real length of time, so blocking is presumably unlikely. Nevertheless. (A workaround sketch follows.)
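Meanwhile, here is a sketch of approximating the proposed call today, assuming a struc::Channel instance chan whose async_end_sending() completion handler receives an Error_code (the shared_ptr keeps the promise alive if the timeout fires before the handler runs):

  #include <chrono>
  #include <future>
  #include <memory>

  auto done = std::make_shared<std::promise<ipc::Error_code>>();
  auto fut = done->get_future();
  chan.async_end_sending([done](const ipc::Error_code& err) { done->set_value(err); });
  if (fut.wait_for(std::chrono::milliseconds(250)) == std::future_status::ready)
  {
    const auto err = fut.get(); // graceful close finished (err may still be truthy)
  }
  else
  {
    // Would-block persisted: a bug-free opposing channel should not let this
    // happen; proceed with teardown best-effort rather than blocking deinit.
  }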
Member ipc::transport::struc::Channel< Channel_obj, Message_body, Struct_builder_config, Struct_reader_config >::owned_channel_mutable ()
Consider adding struc::Channel::auto_ping() and struc::Channel::idle_timer_run() and, for safety, removing struc::Channel::owned_channel_mutable().
Member ipc::transport::struc::shm::Capnp_message_builder< Shm_arena >::lend (schema::detail::ShmTopSerialization::Builder *capnp_root, session::shm::Arena_to_shm_session_t< Arena > *shm_session)
It would be nice to provide a more-general counterpart to the existing Capnp_message_builder::lend() (in addition to that one, which outputs into a capnp structure): for example, one that outputs a mere Blob. The existing one is suitable for the main use-case, namely internal use by shm::Builder; but Capnp_message_builder is also usable as a capnp::MessageBuilder directly. If a user were to indeed leverage it in that latter capacity, they might want to transmit/store the SHM handle some other way. Note that, as of this writing, the direct-use-by-general-user-as-MessageBuilder use-case is supported "just because" it can be; nothing in particular needed it.
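An entirely speculative shape for such a counterpart (no such member exists today):

  // Hypothetical: alongside lend(capnp_root, shm_session), something like --
  flow::util::Blob lend_blob(session::shm::Arena_to_shm_session_t<Arena>* shm_session);
  // -- returning the encoded SHM handle as a plain blob, which the user could
  // then transmit or store via any mechanism of their choosing.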
Class ipc::util::Default_init_allocator< T, Allocator >
ipc::util::Default_init_allocator should be moved into Flow's flow::util.
Member ipc::util::operator<< (std::ostream &os, const Shared_name &val)
Might Shared_name's operator>> and operator<< being asymmetrical get one into trouble when using Shared_name with boost.program_options (or flow::cfg, which is built on top of it)? Look into it. It may be necessary to make operator<< equal to that of ostream << string after all; though the added niceties of the current << semantics could still be made available via some explicit accessor.
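A minimal repro sketch for that investigation (header path and setup assumed):

  #include <boost/program_options.hpp>
  #include <ipc/util/shared_name.hpp> // assumed header path

  namespace po = boost::program_options;

  int main(int argc, char** argv)
  {
    ipc::util::Shared_name name;
    po::options_description desc("opts");
    desc.add_options()
      ("name",
       // default_value() stringifies via lexical_cast, hence operator<<; parsing
       // uses operator>>. If <<'s output is not >>-parseable, the asymmetry bites.
       po::value(&name)->default_value(ipc::util::Shared_name()),
       "a Shared_name");
    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);
  }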
Member ipc::util::Shared_name::S_MAX_LENGTH
Shared_name::S_MAX_LENGTH currently applies to all shared resource types, but it would be a useful feature to have different limits, depending on OS (or other) restrictions, for particular resource types such as SHM object names versus queue names versus whatever.
Namespace ipc::util::sync_io
Write an example of sync_io-pattern use with an old-school reactor-pattern event loop, using poll() and/or epoll_*(). (A minimal poll()-based sketch follows meanwhile.)
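Pending a proper example, a minimal sketch of the shape such a loop takes; the Event_wait_func signature and handle accessors below are recalled from this namespace's docs and may differ in detail:

  #include <poll.h>
  #include <functional>
  #include <map>
  #include <vector>

  // One outstanding "tell me when this FD is readable/writable" request.
  struct Pending_wait
  {
    short m_events;                    // POLLIN or POLLOUT
    std::function<void()> m_on_active; // fires the object's on_active_ev_func
  };
  std::map<int, Pending_wait> s_waits; // keyed by raw FD; one-shot semantics

  // Passed as the Event_wait_func to a sync_io-pattern object's start_*_ops().
  // Assumption: the object supplies a waitable handle, a direction flag
  // (true = writable, false = readable), and a callback to invoke exactly once
  // when the event is active.
  auto ev_wait_func = [](auto* hndl_of_interest, bool snd_else_rcv, auto&& on_active_ev_func)
  {
    const int fd = hndl_of_interest->native_handle().m_native_handle; // assumption
    s_waits[fd] = Pending_wait{short(snd_else_rcv ? POLLOUT : POLLIN),
                               [task = std::move(on_active_ev_func)]() { (*task)(); }};
  };

  // One iteration of the old-school reactor.
  void poll_once()
  {
    if (s_waits.empty()) { return; }
    std::vector<pollfd> fds;
    for (const auto& [fd, w] : s_waits)
    {
      fds.push_back(pollfd{fd, w.m_events, 0});
    }
    if (::poll(fds.data(), fds.size(), -1 /* block */) <= 0) { return; }
    for (const auto& pfd : fds) // iterate the snapshot: handlers may add new waits
    {
      if (pfd.revents != 0)
      {
        auto wait = std::move(s_waits.at(pfd.fd));
        s_waits.erase(pfd.fd); // one-shot: deregister before firing
        wait.m_on_active();    // object performs its non-blocking work inside
      }
    }
  }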