Flow-IPC 1.0.0
Flow-IPC project: Full implementation reference.
boost::asio::ip::tcp::socket.

The Session dtor awaits (and possibly blocks on) the start of the opposing side's Session dtor before it can proceed. The reason it does this today is explained above this to-do in the code. The blocking is not too bad – if the user's applications are properly written, it will not occur, and the existing logging should make clear why it happens, if it happens – but it is still not ideal. There are a couple of known mitigating approaches, and at least one ticket covering them is likely filed. (Either way, the Graceful_finisher mechanism could then probably be removed.) To summarize them: (1) there can be a mechanism deep inside the SHM-jemalloc code that gives an (arbitrarily long, configurable) grace period for known borrowed objects whose session channels have closed before those objects were returned; this was echan's idea on this topic. Once the grace period is reached, it would act as if they got returned then (so in the above scenario the arena could get jemalloc-deinitialized and all). If the program needs to exit, it could either block until the end of the grace period or... who knows? (2) Something similar could be done about the SHM-internal-use channel: do not close it, but add it to some list of such channels; they would be finally closed upon detection of the other side's Session dtor being reached, in some background thread somewhere. (On second thought, this sounds a lot like Graceful_finisher – but for this particular internal-use channel only. The code here would only get more complex, though maybe not too much more. However, it would resolve the observed Session-dtor-blocking user-visible issue, at least until the delayed-channel-death process had to exit entirely.) Beyond these basic ideas (which could even perhaps be combined) this requires some thinking; it is an interesting problem. In the meantime the existing dtor-cross-process-barrier-of-sorts solution is pretty decent.

The virtual hierarchies Session_server, Server_session, Session_server_impl, and Server_session_impl cooperate internally, as there is some funky stuff going on, particularly Session_server_impl::this_session_srv().
If arena creation reported an Error_code instead of just success-versus-failure, then ipc::session::error::Code::S_SHM_ARENA_CREATION_FAILED should go away.

If arena creation reported an Error_code instead of just success-versus-failure, reporting the problem through an async callback, then ipc::session::error::Code::S_SHM_ARENA_CREATION_FAILED should go away. See the shm::arena_lend::jemalloc::init_shm() body for more discussion.
Arena::Pointer shall be a fancy-pointer, but we could support raw pointers also. Suppose Arena_obj is set up in such a way as to map all processes' locally-dereferenceable pointers to the same SHM location to the same numeric value (by specifying each pool's start as some predetermined numeric value in the huge 64-bit vaddr space, in all processes sharing that SHM pool). Then no address translation is needed, and Arena::Pointer could simply be T*. As of this writing some inner impl details assume it is a fancy-pointer, and that is the only active need we have; but that could be tweaked with a bit of effort.
asio_local_stream_socket additional feature: APIs that can read and write native sockets together with accompanying binary blobs can be extended to handle an arbitrary number of native handles (per call), as opposed to only 0 or 1. The main difficulty here is designing a convenient and stylish, yet performant, API.

At least asio_local_stream_socket::async_read_with_target_func() can be extended to other stream sockets (TCP, etc.). In that case it should be moved to a different namespace, however (perhaps named asio_stream_socket; the existing asio_local_stream_socket could then be moved inside that one and renamed local).
asio_local_stream_socket additional feature: APIs that can read and write native handles together with accompanying binary blobs can be extended to handle scatter/gather semantics for the aforementioned blobs, matching standard boost.asio API functionality.

asio_local_stream_socket additional feature: Non-blocking APIs like nb_read_some_with_native_handle() and nb_write_some_with_native_handle() can gain blocking counterparts, matching standard boost.asio API functionality.
asio_local_stream_socket additional feature: async_read_some_with_native_handle() – async version of the existing nb_read_some_with_native_handle(). Or, another way to put it: the equivalent of boost.asio Peer_socket::async_read_some(), but able to read native handle(s) with the blob. Note: this API would potentially be usable inside the impl of existing APIs (code reuse).

asio_local_stream_socket additional feature: async_read_with_native_handle() – async version of the existing nb_read_some_with_native_handle(), plus the "stubborn" behavior of the built-in async_read() free function. Or, another way to put it: the equivalent of boost.asio async_read<Peer_socket>(), but able to read native handle(s) with the blob. Note: this API would potentially be usable inside the impl of existing APIs (code reuse).
All we really need is: (1) an enum with the codes, like error::Code; (2) an int-to-string message table function, for log messages; and (3) a brief const char* identifying the code set, for log messages. The rest is boiler-plate, but all of it seems to already be acceptably brief; and where something isn't quite so, I can't think of any obvious way to factor it out. Of course a macro-based "meta-language" is always a possibility, as we did in flow::log, but in this case it doesn't seem worth it at all.

In C++20, if/when we upgrade to that, Native_handle_sender (and other such doc-only classes) can become an actual concept, formally implemented by class(es) that today implement it via the "honor system." Currently it is a class #ifdef-ed out from actual compilation but still participating in doc generation. Note that Doxygen (current version as of this writing: 1.9.3) claims to support doc generation from formal C++20 concepts.
Comparing Blob_sender::send_blob_max_size() (and similar checks in that family of concepts) to test whether the object is in PEER state is easy enough, but perhaps we can have a utility that would more expressively describe this check: an in_peer_state() free function or something? It could still use the same technique internally.
Channel-opening capabilities – the user must come up with their own naming scheme that avoids name clashes; we could supply an ipc::session-facilitated system for providing this service instead. I.e., ipc::session could expose a facility for generating the Shared_name absolute_name arg to the Native_socket_stream_acceptor ctor (and the opposing Native_socket_stream::sync_connect() call); alternatively, it could provide some kind of Native_socket_stream_acceptor factory and a corresponding opposing facility. Long story short: within the ipc::session way of life, literally only one acceptor exists, and it is set up (and named) internally to ipc::session. We could provide a way to facilitate the creation of more acceptors, if desired, by helping to choose their Shared_names. (An original "paper" design did specify a naming scheme for this.)

shared_ptr-based ones, possibly focusing on boost.asio socket objects in particular.

Consider adding the optional expectation of a particular Msg_which_in when registering expected responses in struc::Channel::async_request().
struc::Channel should probably be made move-constructible and move-assignable. No concept requires this, unlike with many other classes and class templates in ipc, so it is less essential; but both for consistency and usability it would be good. It would also make possible some APIs that currently would require the user to explicitly wrap this class in a unique_ptr. For example, imagine a Session::Structured_channel Session::structured_channel_upgrade(Channel_obj&& channel, ...) that constructs a suitably-typed struc::Channel, subsuming the raw channel just opened in that session::Session, and returns that guy. Currently it would need to return unique_ptr<Session::Structured_channel> or something.
Add struc::Channel::sync_end_sending(), possibly with a timeout, since running async_end_sending() to completion is recommended (but not required) at the EOL of any struc::Channel. The "blocking" part is good because it is convenient not to have to worry about handling completion with async-semantics boiler-plate. The "timeout" part is good because it is a best-effort operation when in a bad/slow environment, and blocking – even during deinit – would be important to avoid. For example, the Session impls' dtors perform this op; we don't want those to block for any significant time if at all avoidable. The reason it is a mere to-do currently is that a bug-free opposing struc::Channel should not let would-block occur for any real length of time, so blocking is presumably unlikely. Nevertheless.

struc::Channel::auto_ping() and struc::Channel::idle_timer_run() and, for safety, removing struc::Channel::owned_channel_mutable().
friend would have been stylistically acceptable after all? It's so much briefer, and we could simply resolve to access only the protected APIs and not private stuff....

a Blob. The existing one is suitable for the main use-case, which is internal use by shm::Builder; but Capnp_message_builder is also usable as a capnp::MessageBuilder directly. If a user were to indeed leverage it in that latter capacity, they may want to transmit/store the SHM handle some other way. Note that, as of this writing, the direct-use-by-general-user-as-MessageBuilder use-case is supported "just because" it can be; nothing in particular needed it.
Use if constexpr() w/r/t Mq::S_HAS_NATIVE_HANDLE-based compile-time branching: it could save some RAM by eliminating optionals such as the one near this to-do, though the code would likely become wordier.

Consider using m_user_request->m_target_blob (instead of the locally stored m_target_control_blob) to save RAM/a few cycles. Technically, at least, the user should not care if some garbage is temporarily placed there (after PING a real message should arrive and replace it; or else, on error or graceful close, who cares?).
Use if constexpr() w/r/t Mq::S_HAS_NATIVE_HANDLE-based compile-time branching: it could save some RAM by eliminating optionals such as the one near this to-do, though the code would likely become wordier.

vectors, deques (including inside queues).

Peer_socket m_peer_socket synchronous-read ops (read_some()) are actually documented in boost::asio to be thread-safe against concurrently invoked synchronous-write ops (write_some()), as are the OS calls ::recvmsg() and ::sendmsg(); therefore, for a possible perf bump, consider never nullifying Impl::m_peer_socket, eliminating Impl::m_peer_socket_mutex, and letting each direction's logic discover any socket error independently. (But, at the moment, m_peer_socket_mutex also covers its mirror guy, m_ev_wait_hndl_peer_socket, so that must be worked out.)
flow::util.

Does operator>> and operator<< being asymmetrical get one into trouble when using Shared_name with boost.program_options (or flow::cfg, which is built on top of it)? Look into it. It may be necessary to make operator<< equal to that of ostream << string after all; though the added niceties of the current << semantics may still at least be available via some explicit accessor.

Consider exposing more of the mutating API (i.e., making it public). There are pros and cons; the basic pro being that Shared_name is meant to be a very thin wrapper around std::string, so it might make sense to allow in-place modification without supplying some kind of reduced subset of the string API. Suggest doing this to-do if a practical desire comes about.

Research real limits on Shared_name::S_MAX_LENGTH for different real resource types; choose something for S_MAX_LENGTH that leaves enough slack to avoid running into trouble when making actual sys calls, as discussed in the @internal section of the Shared_name doc header on this topic. Explain here how we got to the limit ultimately chosen. The limit as of this writing is 64, but real research is needed.
Shared_name::S_MAX_LENGTH currently applies to all shared-resource types, but it'd be a useful feature to have different limits depending on OS/whatever limitations for particular resource types, such as SHM object names versus queue names versus whatever.
sync_io-pattern use with an old-school reactor-pattern event loop, using poll() and/or epoll_*().

Use the #define TEMPLATE_... / #define CLASS_... technique where it'd be subjectively beneficial.