Flow-IPC 1.0.1
Flow-IPC project: Full implementation reference.
native_socket_stream_impl.hpp
/* Flow-IPC: Core
 * Copyright 2023 Akamai Technologies, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in
 * compliance with the License. You may obtain a copy
 * of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in
 * writing, software distributed under the License is
 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing
 * permissions and limitations under the License. */

/// @file
#pragma once

#include <cstddef>
#include <flow/async/single_thread_task_loop.hpp>

namespace ipc::transport
{

// Types.

/**
 * Internal, non-movable pImpl implementation of the Native_socket_stream class.
 * In and of itself it would have been directly and publicly usable; however, Native_socket_stream adds move semantics,
 * which are essential to cooperation with Native_socket_stream_acceptor and overall consistency with the rest
 * of the ipc::transport API and, arguably, boost.asio API design.
 *
 * @see All discussion of the public API is in the Native_socket_stream doc header; that class forwards to this one.
 *      All discussion of pImpl-related notions is also there. See that doc header first please. Then come back here.
 *
 * Impl design
 * -----------
 * ### Intro / history ###
 * Native_socket_stream::Impl is, one could argue, the core class of this entire library, Flow-IPC.
 * Beyond that, even the closest alternatives (which, in the first place, cannot do some of what it can -- namely the
 * `Blob_stream_mq_*` guys cannot transmit native handles) tend to use many of the same techniques.
 * So this impl design is important -- for performance on behalf of the ipc::transport user and ipc::session
 * internals (which bases its session master channel on a single-`Native_socket_stream` Channel); and as
 * a conceptual template for how various other things are implemented (at least in part).
 *
 * Truthfully, in some cases historically speaking I (ygoldfel) tweaked aspects of the concept APIs it
 * implements (Native_handle_sender, Native_handle_receiver, Blob_sender, Blob_receiver) based on lessons
 * learned in writing this impl. (Yes, it "should" be the other way around -- concept should be maximally
 * useful, impl should fulfill it, period -- but I am just being real.)
 *
 * That said: at some point during the initial development of Flow-IPC, the `sync_io` pattern emerged as a
 * necessary alternative API -- alternative to the async-I/O pattern of the `*_sender` and `*_receiver` concepts.
 * (See util::sync_io doc header. It describes the pattern and the rationale for it.) It was of course possible,
 * at that point, to have the 2 be decoupled: Native_socket_stream (async-I/O pattern) is implemented like so,
 * and sync_io::Native_socket_stream (`sync_io` pattern) is implemented separately like *so*. Historically speaking,
 * Native_socket_stream was already written and working great, with a stable API and insides;
 * but sync_io::Native_socket_stream didn't exist and needed to be written -- in fact, the 1st `sync_io` guy to
 * be written.
 *
 * Naturally I scrapped the idea of simply writing the new guy from scratch and leaving the existing guy as-is.
 * Instead I moved the relevant aspects of the old guy into the new guy, and (re)wrote the formerly-old guy
 * (that's the present class Native_socket_stream::Impl) *in terms of* the new guy. (I then repeated this for
 * its peers Blob_stream_mq_sender -- which is like `*this` but for blobs-only and outgoing-only -- and
 * Blob_stream_mq_receiver, which is (same) but incoming-only.) Any other approach would have resulted in 2x
 * the code and 2x the bug potential and maintenance going forward. (Plus it allowed for the nifty
 * concept of being able to construct a Native_socket_stream from a sync_io::Native_socket_stream "core";
 * this was immediately standardized to all the relevant concepts.) Note -- however -- that as Native_socket_stream
 * underwent this bifurcation into Native_socket_stream and sync_io::Native_socket_stream, its API did not change
 * *at all* (modulo the addition of the `sync_io`-core-subsuming constructor).
 *
 * As a result -- whether that was a good or bad design (I say, good) -- at least the following is true:
 * Understanding the impl of `*this` class means: understanding the `sync_io` core sync_io::Native_socket_stream
 * *as a black box* (!); and then understanding the (actually quite limited) work necessary to build on top of that
 * to provide the desired API (as largely mandated by the Native_handle_sender and Native_handle_receiver concept
 * APIs). Moreover, given a `sync_io::*_sender`, the PEER-state logic for `*_sender` is always the same;
 * and similarly for `*_receiver`. Because of that I factored out this adapter logic into
 * sync_io::Async_adapter_sender and sync_io::Async_adapter_receiver; so 49.5% of Native_socket_stream PEER-state
 * logic forwards to `Async_adapter_sender`, 49.5% to `Async_adapter_receiver`; and accordingly
 * 99% of each of `Blob_stream_mq_sender/receiver` forwards to `Async_adapter_sender/receiver`. If more
 * sender/receiver types were to be written, they could do the same. So only the `sync_io::X` core is different.
 *
 * So... actually... it is pretty straightforward now! Pre-decoupling it was much more of a challenge.
 * Of course, you may want to change its insides; in that case you might need to get into
 * sync_io::Native_socket_stream::Impl. Though, that guy is simpler too, as its mission is much reduced --
 * no more "thread W" or async-handlers to invoke.
 * That said, you'll need to in fact understand how sync_io::Native_socket_stream (as a black box) works, which means
 * getting comfortable with the `sync_io` pattern. (See util::sync_io doc header.)
 *
 * ### Threads and thread nomenclature ###
 * *Thread U* refers to the user "meta-thread" from which the public APIs are invoked after construction.
 * Since ("Thread safety" in Native_socket_stream doc header) the user is to protect against any concurrent API
 * calls on a given `*this`, all access to the APIs can be thought of as coming from a single "thread" U. There is a
 * caveat: User-supplied completion handler callbacks are explicitly said to be invoked from an unspecified
 * thread that is *not* U, and the user is generally explicitly allowed to call public APIs directly from these
 * callbacks. *Thread W* (see below) is the "unspecified thread that is not U" -- the worker thread. Therefore, at
 * the top of each public API body, we write the internal comment "We are in thread U *or* thread W (but always
 * serially)." or similar.
 *
 * Generally speaking the threads are used as follows internally by `*this`:
 *   - User invokes `this->f()` in thread U. We ask, immediately if possible, for `m_sync_io` (the `sync_io` core)
 *     to finish `m_sync_io.f()` as synchronously as it can. If it can, wonderful. If it cannot, it will
 *     (per the `sync_io` pattern) ask to async-wait on an event (readable or writable) on some FD of its choice.
 *     We `.async_wait(F)` on it -- and as a result, on ready event, `F()` executes in thread W.
 *     - In thread W, `F()` eventually executes, so *within* `F()` we tell `m_sync_io` of the ready event
 *       by calling into an `m_sync_io` API.
 *   - Onto thread W, we `post()` user completion handlers, namely from `async_receive_*()` and async_end_sending().
 *     As promised, that's the "unspecified thread" from which we do this.
 *
 * The thread-W `post()`ing does not require locking. It does need to happen in a separate boost.asio task, as
 * otherwise recursive mayhem can occur (we want to do A-B synchronously, but then between A and B they invoke
 * `this->something()`, which starts something else => mayhem).
 *
 * However: the other 2 bullet points (the first one and the sub-bullet under it) do operate on the same data
 * from threads U and W potentially concurrently. (Note: usually it is not necessary to engage thread W,
 * except for receive-ops. A receive-op may indeed commonly encounter would-block, simply because no data have
 * arrived yet. Otherwise: For send-ops, `m_sync_io` will synchronously succeed in each send, including that of
 * the graceful-close due to `*end_sending()`, unless the kernel buffer is full and gives would-block -- only if the
 * opposing side is not reading in a timely fashion. The sparing use of multi-threading, instead completing stuff
 * synchronously whenever humanly possible, is a good property of this design that uses a `sync_io` core.)
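 *
 * To make that flow concrete, here is a minimal sketch of the receive-side shape -- try synchronously,
 * else async-wait in thread W, and always `post()` the completion handler -- using plain boost.asio.
 * All names here (`Receive_sketch`, `try_receive_nonblocking()`, `m_worker_ctx`, `m_fd`) are invented for
 * illustration; this is *not* the actual impl, which delegates this logic to sync_io::Async_adapter_receiver:
 *
 *   @code
 *   class Receive_sketch
 *   {
 *   public:
 *     // Thread U or W: try synchronously; if would-block, arrange for thread W to finish the job.
 *     void async_receive(const util::Blob_mutable& target, flow::async::Task_asio_err_sz&& on_done_func)
 *     {
 *       size_t n = 0;
 *       if (try_receive_nonblocking(target, &n)) // Stand-in for the synchronous m_sync_io path.
 *       {
 *         // Even on synchronous success: post() the handler onto thread W; never invoke it inline.
 *         boost::asio::post(m_worker_ctx,
 *                           [on_done = std::move(on_done_func), n]() mutable { on_done(Error_code(), n); });
 *         return;
 *       }
 *       // Would-block (the common case: no data arrived yet): thread W waits for readability.
 *       m_fd.async_wait(boost::asio::posix::stream_descriptor::wait_read,
 *                       [this, target, on_done = std::move(on_done_func)](const Error_code&) mutable
 *       {
 *         size_t n2 = 0;
 *         try_receive_nonblocking(target, &n2); // Tell the core of the ready event; it finishes the read.
 *         // Post as a separate task, even though we are already in thread W: avoids the recursive mayhem above.
 *         boost::asio::post(m_worker_ctx,
 *                           [on_done = std::move(on_done), n2]() mutable { on_done(Error_code(), n2); });
 *       });
 *     }
 *
 *   private:
 *     bool try_receive_nonblocking(const util::Blob_mutable& target, size_t* n); // false on would-block.
 *
 *     boost::asio::io_context m_worker_ctx; // `.run()`ed by the single thread W.
 *     boost::asio::posix::stream_descriptor m_fd{m_worker_ctx}; // The FD the core asks us to watch.
 *   };
 *   @endcode
 *
 * (A real version would loop on repeated would-blocks and propagate errors; that is elided here.)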
 *
 * That brings us to:
 *
 * ### Locking ###
 * All receive logic is bracketed by a lock, sync_io::Async_adapter_receiver::m_mutex.
 * Lock contention occurs only if more `async_receive_*()` requests are made while one is in progress.
 * This is, we guess, relatively rare.
 *
 * All send logic is similarly bracketed by a different lock, sync_io::Async_adapter_sender::m_mutex. Lock
 * contention occurs only if (1) `send_*()` (or, one supposes, `*end_sending()`, but that's negligible) encounters
 * would-block inside `m_sync_io.send_*()`; *and* (2) more `send_*()` (+ `*end_sending()`) requests are made
 * during this state. (1) should be infrequent-to-non-existent assuming civilized opposing-side receiving behavior.
 * Therefore lock contention should be rare.
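 *
 * A minimal sketch of that bracketing, as it might look in a send-path body (hypothetical shape; the
 * real mutexes and out-queue live inside the `Async_adapter_*` guys, not in `*this` directly):
 *
 *   @code
 *   bool send_blob(const util::Blob_const& blob, Error_code* err_code)
 *   {
 *     std::lock_guard<std::mutex> lock(m_snd_mutex); // m_snd_mutex: invented for this sketch.
 *     // Forward to the core; on would-block it queues the remainder, and the thread-W
 *     // on-writable task (which locks this same m_snd_mutex) flushes it later.
 *     // Receive-ops use a different mutex entirely, so they never contend with us here.
 *     // ...
 *     return true;
 *   }
 *   @endcode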
 *
 * ### Outgoing-direction impl design ###
 * @see sync_io::Async_adapter_sender where all that is encapsulated.
 *
 * ### Incoming-direction impl design ###
 * @see sync_io::Async_adapter_receiver where all that is encapsulated.
 */
class Native_socket_stream::Impl :
  public flow::log::Log_context,
  private boost::noncopyable // And not movable.
{
public:
  // Constructors/destructor.

  /**
   * See Native_socket_stream counterpart.
   *
   * @param logger_ptr
   *        See Native_socket_stream counterpart.
   * @param nickname_str
   *        See Native_socket_stream counterpart.
   */
  explicit Impl(flow::log::Logger* logger_ptr, util::String_view nickname_str);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param logger_ptr
   *        See Native_socket_stream counterpart.
   * @param nickname_str
   *        See Native_socket_stream counterpart.
   * @param native_peer_socket_moved
   *        See Native_socket_stream counterpart.
   */
  explicit Impl(flow::log::Logger* logger_ptr, util::String_view nickname_str,
                Native_handle&& native_peer_socket_moved);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param sync_io_core_in_peer_state_moved
   *        See Native_socket_stream counterpart.
   */
  explicit Impl(sync_io::Native_socket_stream&& sync_io_core_in_peer_state_moved);

  /**
   * See Native_socket_stream counterpart.
   *
   * @internal
   * The unspecified thread from which remaining handlers are potentially invoked, as of this writing,
   * is not the usual thread W. There is rationale discussion and detail in the body, but it seemed prudent to
   * point it out here.
   * @endinternal
   */
  ~Impl();

  // Methods.

  /* The following dead code is intentionally left in; in fact this doc header (or close to it) could live
   * in transport::Native_socket_stream, and an identical signature in this Impl would implement it. Why is it around?
   * Answer: There is currently no public async_connect(), only private; the public-facing reason for this
   * is briefly given in the Native_socket_stream class doc header (TL;DR: there's no point, as locally it's always
   * a quick operation); the internal reason that async_connect() does exist, but not publicly, is given at length
   * in the sync_io::Native_socket_stream::Impl doc header. release() would be quite helpful, if async_connect()
   * were public: As then one could construct an async-I/O-pattern transport::Native_socket_stream; .async_connect()
   * it; then .release() the sync_io-pattern core; and various things elsewhere require sync_io-pattern cores
   * rather than the heavier-weight, thread-assisted async-I/O wrappers like a `*this`. So one could .release() a core
   * and then feed it to whatever needs it. Thing is, as noted elsewhere, there are serious plans to extend
   * the Unix-domain-socket locally-focused Native_socket_stream to a TCP-networked version whose code would mostly
   * be reusing us (a stream socket is a stream socket, in many ways, whether local or TCP). At that point
   * we'd like this release() to be immediately visible to the developer. So that's why this is around. Naturally,
   * its code might diverge over time, but at least as of this writing we suspect it's useful to keep this around,
   * as it doesn't get in the way too much. (Historically, it has not always been dead code and worked nicely.) */
#if 0
  /**
   * In PEER state only, with no prior send or receive ops, returns a sync_io::Native_socket_stream core
   * (as-if just constructed) operating on `*this` underlying low-level transport `Native_handle`; while
   * `*this` becomes as-if default-cted.
   *
   * This can be useful if one desires a `sync_io` core -- e.g., to bundle into a Channel to then feed to
   * a struc::Channel ctor -- after a successful async-I/O-style async_connect() advanced `*this`
   * from NULL to CONNECTING to PEER state.
   *
   * In a sense it's the reverse of `*this` `sync_io`-core-adopting ctor.
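   *
   * Hypothetical usage sketch (illustrative only: async_connect() is currently private, and details are
   * elided; see also the dead-code discussion just above this doc header):
   *
   *   @code
   *   transport::Native_socket_stream stream{...}; // NULL state; the heavier, thread-assisted wrapper.
   *   stream.async_connect(..., [&](const Error_code& err_code)
   *   {
   *     if (!err_code)
   *     {
   *       // PEER state; no send/receive ops have been invoked yet, so release() is legal:
   *       sync_io::Native_socket_stream core = stream.release(); // `stream` is now as-if default-cted.
   *       // ...bundle `core` into a Channel, feed that to a struc::Channel ctor, etc....
   *     }
   *   });
   *   @endcode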
   *
   * Behavior is undefined if `*this` is not in PEER state, or if it is, but you've invoked `async_receive_*()`,
   * `send_*()`, `*end_sending()`, auto_ping(), or idle_timer_run() in the past. Please be careful.
   *
   * @return See above.
   */
  sync_io::Native_socket_stream release();
#endif

  /**
   * See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  const std::string& nickname() const;

  /**
   * See Native_socket_stream counterpart.
   *
   * @param absolute_name
   *        See Native_socket_stream counterpart.
   * @param err_code
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool sync_connect(const Shared_name& absolute_name, Error_code* err_code);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param err_code
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  util::Process_credentials remote_peer_process_credentials(Error_code* err_code) const;

  /**
   * See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  size_t send_meta_blob_max_size() const;

  /**
   * See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  size_t send_blob_max_size() const;

  /**
   * See Native_socket_stream counterpart.
   *
   * @param hndl_or_null
   *        See Native_socket_stream counterpart.
   * @param meta_blob
   *        See Native_socket_stream counterpart.
   * @param err_code
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool send_native_handle(Native_handle hndl_or_null, const util::Blob_const& meta_blob, Error_code* err_code);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param blob
   *        See Native_socket_stream counterpart.
   * @param err_code
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool send_blob(const util::Blob_const& blob, Error_code* err_code);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param on_done_func
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool async_end_sending(flow::async::Task_asio_err&& on_done_func);

  /// See Native_socket_stream counterpart. @return Ditto.
  bool end_sending();

  /**
   * See Native_socket_stream counterpart.
   *
   * @param period
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool auto_ping(util::Fine_duration period);

  /**
   * See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  size_t receive_meta_blob_max_size() const;

  /**
   * See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  size_t receive_blob_max_size() const;

  /**
   * See Native_socket_stream counterpart.
   *
   * @param target_hndl
   *        See Native_socket_stream counterpart.
   * @param target_meta_blob
   *        See Native_socket_stream counterpart.
   * @param on_done_func
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool async_receive_native_handle(Native_handle* target_hndl,
                                   const util::Blob_mutable& target_meta_blob,
                                   flow::async::Task_asio_err_sz&& on_done_func);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param target_blob
   *        See Native_socket_stream counterpart.
   * @param on_done_func
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool async_receive_blob(const util::Blob_mutable& target_blob,
                          flow::async::Task_asio_err_sz&& on_done_func);

  /**
   * See Native_socket_stream counterpart.
   *
   * @param timeout
   *        See Native_socket_stream counterpart.
   * @return See Native_socket_stream counterpart.
   */
  bool idle_timer_run(util::Fine_duration timeout);

private:
  // Constructors.

  /**
   * Core delegated-to ctor: does everything except `m_sync_io.start_*_ops()`.
   * The delegating ctor shall do nothing further if starting in NULL state;
   * else (starting directly in PEER state) it will delegate `m_sync_io.start_receive/send_*_ops()`
   * to #m_snd_sync_io_adapter and #m_rcv_sync_io_adapter, which it will initialize.
   *
   * Therefore, if entering PEER state due to a successful `*_connect()`, #m_snd_sync_io_adapter and
   * #m_rcv_sync_io_adapter will be created at that time.
   *
   * `get_logger()` and nickname() values are obtained from `sync_io_core_moved`.
   *
   * @param sync_io_core_moved
   *        A freshly constructed sync_io::Native_socket_stream subsumed by `*this` (that object
   *        becomes as-if-default-cted). It can be in NULL or PEER state.
   * @param tag
   *        Ctor-selecting tag.
   */
  explicit Impl(sync_io::Native_socket_stream&& sync_io_core_moved, std::nullptr_t tag);

  // Data.

  // General data (both-direction pipes, not connect-ops-related).

  /**
   * Single-thread worker pool for all internal async work in both directions. Referred to as thread W in comments.
   * See doc header impl section for discussion.
   *
   * Ordering: Must be either declared after mutex(es), or `.stop()`ed explicitly in dtor: The thread must be joined
   * before any mutex possibly locked in it destructs.
   *
   * Why is it wrapped in `unique_ptr`? As of this writing the only reason is that release() needs to be able to
   * `move()` it to a temporary stack object before destroying it outright.
   */
  boost::movelib::unique_ptr<flow::async::Single_thread_task_loop> m_worker;

  /**
   * The core `Native_socket_stream` engine, implementing the `sync_io` pattern (see util::sync_io doc header).
   * See our class doc header for an overview of how we use it (the aforementioned `sync_io` doc header talks about
   * the `sync_io` pattern generally).
   *
   * Thus, #m_sync_io is the synchronous engine that we use to perform our work in our asynchronous boost.asio
   * loop running in thread W (#m_worker) while collaborating with user thread(s) a/k/a thread U.
   * (Recall that the user may choose to set up their own event loop/thread(s) --
   * boost.asio-based or otherwise -- and use their own equivalent of an #m_sync_io instead.)
   *
   * ### Order subtlety versus `m_worker` ###
   * When constructing #m_sync_io, we need the `Task_engine` from #m_worker. On the other hand, tasks operating
   * in #m_worker access #m_sync_io. So in the destructor it is important to `m_worker->stop()` explicitly, so that
   * the latter is no longer a factor. Then, when automatic destruction occurs in the opposite order of
   * creation, the fact that #m_sync_io is destroyed before #m_worker has no bad effect.
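   *
   * A minimal sketch of the dtor shape this implies (illustrative; the actual dtor also performs the
   * handler poll-through described in #m_snd_sync_io_adapter's doc header, among other things):
   *
   *   @code
   *   Native_socket_stream::Impl::~Impl()
   *   {
   *     m_worker->stop(); // Thread W is joined; hence no task can touch m_sync_io from now on.
   *     // ...poll() through any queued-up, ready-to-execute handlers (see Async_adapter_* contracts)...
   *     // Automatic member destruction (reverse of declaration order) is now safe:
   *     // the two adapters, then m_sync_io, then m_worker itself.
   *   }
   *   @endcode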
   */
  sync_io::Native_socket_stream m_sync_io;

  // Send-ops data.

  /**
   * Null until PEER state, this handles ~all send-ops logic in that state. sync_io::Async_adapter_sender adapts
   * any sync_io::Native_handle_sender and makes available ~all necessary async-I/O Native_handle_sender APIs.
   * So in PEER state we forward ~everything to this guy.
   *
   * ### Creation ###
   * By its contract, this guy's ctor will handle what it needs to, as long as #m_worker (to which it stores a pointer)
   * has been `.start()`ed by that time, and #m_sync_io (to which it stores... ditto) has been
   * `.replace_event_wait_handles()`ed as required.
   *
   * ### Destruction ###
   * By its contract, this guy's dtor will handle what it needs to, as long as #m_worker (to which it stores a pointer)
   * has been `.stop()`ed by that time, and any queued-up (ready to execute) handlers on it have been
   * `Task_engine::poll()`ed-through by that time as well.
   */
  std::optional<sync_io::Async_adapter_sender<decltype(m_sync_io)>> m_snd_sync_io_adapter;

  // Receive-ops data.

  /**
   * Null until PEER state, this handles ~all receive-ops logic in that state. sync_io::Async_adapter_receiver adapts
   * any sync_io::Native_handle_receiver and makes available ~all necessary async-I/O Native_handle_receiver APIs.
   * So in PEER state we forward ~everything to this guy.
   *
   * ### Creation ###
   * See #m_snd_sync_io_adapter -- same stuff.
   *
   * ### Destruction ###
   * See #m_snd_sync_io_adapter -- same stuff.
   */
  std::optional<sync_io::Async_adapter_receiver<decltype(m_sync_io)>> m_rcv_sync_io_adapter;
}; // class Native_socket_stream::Impl

// Free functions.

/**
 * Prints string representation of the given Native_socket_stream::Impl to the given `ostream`.
 *
 * @relatesalso Native_socket_stream::Impl
 *
 * @param os
 *        Stream to which to write.
 * @param val
 *        Object to serialize.
 * @return `os`.
 */
std::ostream& operator<<(std::ostream& os, const Native_socket_stream::Impl& val);

} // namespace ipc::transport