Flow 1.0.1
Flow project: Full implementation reference.
basic_blob.hpp
1/* Flow
2 * Copyright 2023 Akamai Technologies, Inc.
3 *
4 * Licensed under the Apache License, Version 2.0 (the
5 * "License"); you may not use this file except in
6 * compliance with the License. You may obtain a copy
7 * of the License at
8 *
9 * https://www.apache.org/licenses/LICENSE-2.0
10 *
11 * Unless required by applicable law or agreed to in
12 * writing, software distributed under the License is
13 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
14 * CONDITIONS OF ANY KIND, either express or implied.
15 * See the License for the specific language governing
16 * permissions and limitations under the License. */
17
18/// @file
19#pragma once
20
22#include "flow/log/log.hpp"
23#include <boost/interprocess/smart_ptr/shared_ptr.hpp>
24#include <boost/move/make_unique.hpp>
25#include <optional>
26#include <limits>
27
28namespace flow::util
29{
30
31/**
32 * A hand-optimized and API-tweaked replacement for `vector<uint8_t>`, i.e., buffer of bytes inside an allocated area
33 * of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and
34 * SHM-friendly custom-allocator support.
35 *
36 * @see Blob_with_log_context (and especially aliases #Blob and #Sharing_blob), our non-polymorphic sub-class which
37 * adds some ease of use in exchange for a small perf trade-off. (More info below under "Logging.")
38 * @see #Blob_sans_log_context + #Sharing_blob_sans_log_context, each simply an alias to
39 * `Basic_blob<std::allocator, B>` (with `B = false` or `true` respectively), in a fashion vaguely similar to
40 * what `string` is to `basic_string` (a little). This is much like #Blob/#Sharing_blob, in that it is a
41 * non-template concrete type; but does not take or store a `Logger*`.
42 *
 43 * The rationale for its existence mirrors its essential differences from `vector<uint8_t>`, which are as follows.
 44 * To summarize, though, it exists to guarantee specific performance by reducing implementation uncertainty via
 45 * lower-level operations; and to force the user to explicitly authorize any allocation, ensuring thoughtfully performant use.
46 * Update: Plus, it adds non-prefix-sub-buffer feature, which can be useful for zero-copy deserialization.
47 * Update: Plus, it adds a simple form of garbage-collected memory pools, useful for operating multiple `Basic_blob`s
48 * that share a common over-arching memory area (buffer).
49 * Update: Plus, it adds SHM-friendly custom allocator support. (While all `vector` impls support custom allocators,
50 * only some later versions of gcc `std::vector` work with shared-memory (SHM) allocators and imperfectly at that.
51 * `boost::container::vector` a/k/a `boost::interprocess::vector` is fully SHM-friendly.)
52 *
53 * - It adds a feature over `vector<uint8_t>`: The logical contents `[begin(), end())` can optionally begin
54 * not at the start of the internally allocated buffer but somewhere past it. In other words, the logical buffer
55 * is not necessarily a prefix of the internal allocated buffer. This feature is critical when one wants
56 * to use some sub-buffer of a buffer without reallocating a smaller buffer and copying the sub-buffer into it.
57 * For example, if we read a DATA packet the majority of which is the payload, which begins a few bytes
58 * from the start -- past a short header -- it may be faster to keep passing around the whole thing with move
59 * semantics but use only the payload part, after logically
60 * deserializing it (a/k/a zero-copy deserialization semantics). Of course
61 * one can do this with `vector` as well; but one would need to always remember the prefix length even after
62 * deserializing, at which point such details would be ideally forgotten instead. So this API is significantly
 63 * more pleasant in that case. Moreover, it can then be used generically more easily, alongside other containers. (A short illustrative sketch follows this list.)
64 * - Its performance is guaranteed by internally executing low-level operations such as `memcpy()` directly instead of
65 * hoping that using a higher-level abstraction will ultimately do the same.
66 * - In particular, the iterator types exposed by the API *are* pointers instead of introducing any performance
67 * uncertainty by possibly using wrapper/proxy iterator class.
68 * - In particular, no element or memory area is *ever* initialized to zero(es) or any other particular filler
69 * value(s). (This is surprisingly difficult to avoid with STL containers! Google it. Though, e.g.,
70 * boost.container does provide a `default_init_t` extension to various APIs like `.resize()`.) If an allocation
71 * does occur, the area is left as-is unless user specifies a source memory area from which to copy data.
72 * - Note that I am making no assertion about `vector` being slow; the idea is to guarantee *we* aren't by removing
73 * any *question* about it; it's entirely possible a given `vector` is equally fast, but it cannot be guaranteed by
74 * standard except in terms of complexity guarantees (which is usually pretty good but not everything).
75 * - That said a quick story about `std::vector<uint8_t>` (in gcc-8.3 anyway): I (ygoldfel) once used it with
76 * a custom allocator (which worked in shared memory) and stored a megabytes-long buffer in one. Its
77 * destructor, I noticed, spent milliseconds (with 2022 hardware) -- outside the actual dealloc call.
78 * Reason: It was iterating over every (1-byte) element and invoking its (non-existent/trivial) destructor. It
79 * did not specialize to avoid this, intentionally so according to a comment, when using a custom allocator.
80 * `boost::container::vector<uint8_t>` lacked this problem; but nevertheless it shows generally written
81 * containers can have hidden such perf quirks.
82 * - To help achieve the previous bullet point, as well as to keep the code simple, the class does not parameterize
83 * on element type; it stores unsigned bytes, period (though Basic_blob::value_type is good to use if you need to
84 * refer to that type in code generically).
85 * Perhaps the same could be achieved by specialization, but we don't need the parameterization in the first place.
86 * - Unlike `vector`, it has an explicit state where there is no underlying buffer; in this case zero() is `true`.
 87 * Also in that case, `capacity() == 0` and `size() == 0` (and `start() == 0`). `zero() == true` is the case for a
88 * default-constructed object of this class. The reason for this is I am never sure, at least, what a
89 * default-constructed `vector` looks like internally; a null buffer always seemed like a reasonable starting point
90 * worth guaranteeing explicitly.
91 * - If `!zero()`:
92 * - make_zero() deallocates any allocated buffer and ensures zero() is `true`, as if upon default construction.
93 * - Like `vector`, it keeps an allocated memory chunk of size M, at the start of which is the
94 * logical buffer of size `N <= M`, where `N == size()`, and `M == capacity()`. However, `M >= 1` always.
 95 * - There is the aforementioned added feature wherein the logical buffer begins past the start of the allocated
96 * buffer, namely at index `start()`. In this case `M >= start() + size()`, and the buffer range
97 * is in indices `[start(), start() + size())` of the allocated buffer. By default `start() == 0`, as
98 * in `vector`, but this can be changed via the 2nd, optional, argument to resize().
99 * - Like `vector`, `reserve(Mnew)`, with `Mnew <= M`, does nothing. However, unlike `vector`, the same call is
100 * *illegal* when `Mnew > M >= 1`. However, any reserve() call *is* allowed when zero() is `true`. Therefore,
101 * if the user is intentionally okay with the performance implications of a reallocation, they can call make_zero()
102 * and *then* force the reallocating reserve() call.
103 * - Like `vector`, `resize(Nnew)` merely guarantees post-condition `size() == Nnew`; which means that
104 * it is essentially equivalent to `reserve(Nnew)` followed by setting internal N member to Nnew.
105 * However, remember that resize() therefore keeps all the behaviors of reserve(), including that it cannot
106 * grow the buffer (only allocate it when zero() is `true`).
107 * - If changing `start()` from default, then: `resize(Nnew, Snew)` means `reserve(Nnew + Snew)`, plus saving
108 * internal N and S members.
109 * - The *only* way to allocate is to (directly or indirectly) call `reserve(Mnew)` when `zero() == true`.
 110 * Moreover, *exactly* Mnew elements (bytes) are allocated and no more (unlike with `vector`, where the policy used is
 111 * not known). Moreover, if `reserve(Mnew)` is called indirectly (by another method of the class), the `Mnew` arg
 112 * is set to no greater than the size necessary to complete the operation (again, by contrast, it is unknown what `vector`
113 * does w/r/t capacity policy).
114 * - The rest of the API is common-sense but generally kept to only what has been necessary to date,
115 * in on-demand fashion.
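 *
 * As a brief illustrative sketch of the non-prefix-sub-buffer bullet above (sizes and names hypothetical; loosely
 * following the style of the example further below):
 *
 * ~~~
 * // Suppose a DATA packet = 4-byte header + payload, to be read in full into `packet`:
 * Basic_blob packet(n); // start() == 0, size() == capacity() == n; contents uninitialized.
 * // ... fill [packet.begin(), packet.end()) from the wire ...
 * packet.start_past_prefix(4);
 * // Now [packet.begin(), packet.end()) is just the payload; no copying or (re)allocation has occurred.
 * // The 4 header bytes remain inside the owned buffer but are outside the logical range.
 * ~~~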
116 *
117 * ### Optional, simple garbage-collected shared ownership functionality ###
118 * The following feature was added quite some time after `Blob` was first introduced and matured. However it seamlessly
119 * subsumes all of the above basic functionality with full backwards compatibility. It can also be disabled
120 * (and is by default) by setting #S_SHARING to `false` at compile-time. (This gains back a little bit of perf
 121 * namely by turning an internal `shared_ptr` into a `unique_ptr`.)
122 *
123 * The feature itself is simple: Suppose one has a blob A, constructed or otherwise `resize()`d or `reserve()`d
124 * so as to have `zero() == false`; meaning `capacity() >= 1`. Now suppose one calls the core method of this *pool*
125 * feature: share() which returns a new blob B. B will have the same exact start(), size(), capacity() -- and,
126 * in fact, the pointer `data() - start()` (i.e., the underlying buffer start pointer, buffer being capacity() long).
127 * That is, B now shares the underlying memory buffer with A. Normally, that underlying buffer would be deallocated
128 * when either `A.make_zero()` is called, or A is destructed. Now that it's shared by A and B, however,
129 * the buffer is deallocated only once make_zero() or destruction occurs for *both* A and B. That is, there is an
130 * internal (thread-safe) ref-count that must reach 0.
131 *
132 * Both A and B may now again be share()d into further sharing `Basic_blob`s. This further increments the ref-count of
133 * original buffer; all such `Basic_blob`s C, D, ... must now either make_zero() or destruct, at which point the dealloc
134 * occurs.
135 *
136 * In that way the buffer -- or *pool* -- is *garbage-collected* as a whole, with reserve() (and APIs like resize()
137 * and ctors that call it) initially allocating and setting internal ref-count to 1, share() incrementing it, and
138 * make_zero() and ~Basic_blob() decrementing it (and deallocating when ref-count=0).
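 *
 * A minimal sketch of that life cycle (hypothetical code, in the loose style of the example further below):
 *
 * ~~~
 * Basic_blob a(1024);   // Buffer allocated; internal ref-count = 1.
 * auto b = a.share();   // Same buffer, same start()/size(); ref-count = 2.
 * a.make_zero();        // a drops ownership; ref-count = 1; no deallocation yet.
 * // ... once b.make_zero() runs or b is destroyed, ref-count hits 0, and the 1024-byte buffer is deallocated.
 * ~~~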
139 *
140 * ### Application of shared ownership: Simple pool-of-`Basic_blob`s functionality ###
141 * The other aspect of this feature is its pool-of-`Basic_blob`s application. All of the sharing `Basic_blob`s A, B,
142 * ... retain all the aforementioned features including the ability to use resize(), start_past_prefix_inc(), etc.,
143 * to control the location of the logical sub-range [begin(), end()) within the underlying buffer (pool).
144 * E.g., suppose A was 10 bytes, with `start() = 0` and `size() = capacity() = 10`; then share() B is also that way.
145 * Now `B.start_past_prefix_inc(5); A.resize(5);` makes it so that A = the 1st 5 bytes of the pool,
146 * B the last 5 bytes (and they don't overlap -- can even be concurrently modified safely). In that way A and B
147 * are now independent `Basic_blob`s -- potentially passed, say, to independent TCP-receive calls, each of which reads
148 * up to 5 bytes -- that share an over-arching pool.
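 *
 * In code, the 10-byte scenario just described might look like (illustrative sketch):
 *
 * ~~~
 * Basic_blob a(10);             // Pool of 10 bytes; a covers all of it.
 * auto b = a.share();           // b covers the same 10 bytes.
 * b.start_past_prefix_inc(5);   // b = bytes [5, 10) of the pool.
 * a.resize(5);                  // a = bytes [0, 5) of the pool.
 * // a and b no longer overlap, so they can be written concurrently; the pool is freed once both drop ownership.
 * ~~~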
149 *
150 * The API share_after_split_left() is a convenience operation that splits a `Basic_blob`'s [begin(), end()) area into
151 * 2 areas of specified length, then returns a new Basic_blob representing the first area in the split and
152 * modifies `*this` to represent the remainder (the 2nd area). This simply performs the op described in the preceding
153 * paragraph. share_after_split_right() is similar but acts symmetrically from the right. Lastly
154 * `share_after_split_equally*()` splits a Basic_blob into several equally-sized (except the last one potentially)
155 * sub-`Basic_blob`s of size N, where N is an arg. (It can be thought of as just calling `share_after_split_left(N)`
156 * repeatedly, then returning a sequence of the resulting post-split `Basic_blob`s.)
157 *
158 * To summarize: The `share_after_split*()` APIs are useful to divide (potentially progressively) a pool into
 159 * non-overlapping `Basic_blob`s, while ensuring the pool continues to exist as long as `Basic_blob`s refer
160 * to any part of it (but no later). Meanwhile direct use of share() with resize() and `start_past_prefix*()` allows
161 * for overlapping such sharing `Basic_blob`s.
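 *
 * E.g., progressively carving fixed-size frames off the front of a pool via share_after_split_left()
 * (hypothetical sketch):
 *
 * ~~~
 * Basic_blob pool(24);
 * auto frame1 = pool.share_after_split_left(8); // frame1 = bytes [0, 8); pool's range is now [8, 24).
 * auto frame2 = pool.share_after_split_left(8); // frame2 = bytes [8, 16); pool's range is now [16, 24).
 * // frame1, frame2, pool never overlap; the 24-byte buffer lives until all co-owners drop ownership.
 * ~~~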
162 *
163 * Note that deallocation occurs regardless of which areas of that pool the relevant `Basic_blob`s represent,
164 * and whether they overlap or not (and, for that matter, whether they even together comprise the entire pool or
165 * leave "gaps" in-between). The whole pool is deallocated the moment the last of the co-owning `Basic_blob`s
166 * performs either make_zero() or ~Basic_blob() -- the values of start() and size() at the time are not relevant.
167 *
168 * ### Custom allocator (and SHared Memory) support ###
169 * Like STL containers this one optionally takes a custom allocator type (#Allocator_raw) as a compile-time parameter
170 * instead of using the regular heap (`std::allocator`). Unlike many STL container implementations, including
171 * at least older `std::vector`, it supports SHM-storing allocators without a constant cross-process vaddr scheme.
172 * (Some do support this but with surprising perf flaws when storing raw integers/bytes. boost.container `vector`
173 * has solid support but lacks various other properties of Basic_blob.) While a detailed discussion is outside
174 * our scope here, the main point is internally `*this` stores no raw `value_type*` but rather
175 * `Allocator_raw::pointer` -- which in many cases *is* `value_type*`; but for advanced applications like SHM
176 * it might be a fancy-pointer like `boost::interprocess::offset_ptr<value_type>`. For general education
177 * check out boost.interprocess docs covering storage of STL containers in SHM. (However note that the
178 * allocators provided by that library are only one option even for SHM storage alone; e.g., they are stateful,
179 * and often one would like a stateless -- zero-size -- allocator. Plus there are other limitations to
180 * boost.interprocess SHM support, robust though it is.)
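 *
 * As a purely illustrative sketch (the allocator name is a placeholder, not something provided here), a
 * SHM-oriented instantiation might look like:
 *
 * ~~~
 * // `My_shm_allocator<T>::pointer` may be a fancy-pointer such as boost::interprocess::offset_ptr<T>.
 * using Shm_blob = flow::util::Basic_blob<My_shm_allocator<uint8_t>, true>;
 * ~~~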
181 *
182 * ### Logging ###
183 * When and if `*this` logs, it is with log::Sev::S_TRACE severity or more verbose.
184 *
185 * Unlike many other Flow API classes this one does not derive from log::Log_context nor take a `Logger*` in
186 * ctor (and store it). Instead each API method/ctor/function capable of logging takes an optional
187 * (possibly null) log::Logger pointer. If supplied it's used by that API alone (with some minor async exceptions).
188 * If you would like more typical Flow-style logging API then use our non-polymorphic sub-class Blob_with_log_context
189 * (more likely aliases #Blob, #Sharing_blob). However consider the following first.
190 *
191 * Why this design? Answer:
192 * - Basic_blob is meant to be lean, both in terms of RAM used and processor cycles spent. Storing a `Logger*`
193 * takes some space; and storing it, copying/moving it, etc., takes a little compute. In a low-level
194 * API like Basic_blob this is potentially nice to avoid when not actively needed. (That said the logging
195 * can be extremely useful when debugging and/or profiling RAM use + allocations.)
196 * - This isn't a killer. The original `Blob` (before Basic_blob existed) stored a `Logger*`, and it was fine.
197 * However:
198 * - Storing a `Logger*` is always okay when `*this` itself is stored in regular heap or on the stack.
199 * However, `*this` itself may be stored in SHM; #Allocator_raw parameterization (see above regarding
200 * "Custom allocator") suggests as much (i.e., if the buffer is stored in SHM, we might be too).
201 * In that case `Logger*` does not, usually, make sense. As of this writing `Logger` in process 1
202 * has no relationship with any `Logger` in process 2; and even if the `Logger` were stored in SHM itself,
203 * `Logger` would need to be supplied via an in-SHM fancy-pointer, not `Logger*`, typically. The latter is
204 * a major can of worms and not supported by flow::log in any case as of this writing.
205 * - Therefore, even if we don't care about RAM/perf implications of storing `Logger*` with the blob, at least
206 * in some real applications it makes no sense.
207 *
208 * #Blob/#Sharing_blob provides this support while ensuring #Allocator_raw (no longer a template parameter in its case)
209 * is the vanilla `std::allocator`. The trade-off is as noted just above.
210 *
211 * ### Thread safety ###
 212 * Before share() (or `share_*()`) is called, thread safety is essentially the same as for `vector<uint8_t>`.
213 *
214 * Without `share*()` any two Basic_blob objects refer to separate areas in memory; hence it is safe to access
215 * Basic_blob A concurrently with accessing Basic_blob B in any fashion (read, write).
216 *
217 * However: If 2 `Basic_blob`s A and B co-own a pool, via a `share*()` chain, then concurrent write and read/write
218 * to A and B respectively are thread-safe if and only if their [begin(), end()) ranges don't overlap. Otherwise,
219 * naturally, one would be writing to an area while it is being read simultaneously -- not safe.
220 *
221 * Tip: When working in `share*()` mode, exclusive use of `share_after_split*()` is a great way to guarantee no 2
222 * `Basic_blob`s ever overlap. Meanwhile one must be careful when using share() directly and/or subsequently sliding
223 * the range around via resize(), `start_past_prefix*()`: `A.share()` and A not only (originally) overlap but
224 * simply represent the same area of memory; and resize() and co. can turn a non-overlapping range into an overlapping
225 * one (encroaching on someone else's "territory" within the pool).
226 *
227 * @todo Write a class template, perhaps `Tight_blob<Allocator, bool>`, which would be identical to Basic_blob
228 * but forego the framing features, namely size() and start(), thus storing only the RAII array pointer data()
229 * and capacity(); rewrite Basic_blob in terms of this `Tight_blob`. This simple container type has had some demand
230 * in practice, and Basic_blob can and should be cleanly built on top of it (perhaps even as an IS-A subclass).
231 *
232 * @tparam Allocator
233 * An allocator, with `value_type` equal to our #value_type, per the standard C++1x `Allocator` concept.
234 * In most uses this shall be left at the default `std::allocator<value_type>` which allocates in
235 * standard heap (`new[]`, `delete[]`). A custom allocator may be used instead. SHM-storing allocators,
236 * and generally allocators for which `pointer` is not simply `value_type*` but rather a fancy-pointer
237 * (see cppreference.com) are correctly supported. (Note this may not be the case for your compiler's
238 * `std::vector`.)
239 * @tparam S_SHARING_ALLOWED
240 * If `true`, share() and all derived methods, plus blobs_sharing(), can be instantiated (invoked in compiled
241 * code). If `false` they cannot (`static_assert()` will trip), but the resulting Basic_blob concrete
242 * class will be slightly more performant (internally, a `shared_ptr` becomes instead a `unique_ptr` which
243 * means smaller allocations and no ref-count logic invoked).
244 */
245template<typename Allocator, bool S_SHARING_ALLOWED>
246class Basic_blob
247{
248public:
249 // Types.
250
 251 /// Short-hand for values, which in this case are unsigned bytes.
 252 using value_type = uint8_t;
 253
254 /// Type for index into blob or length of blob or sub-blob.
255 using size_type = std::size_t;
256
257 /// Type for difference of `size_type`s.
258 using difference_type = std::ptrdiff_t;
259
 260 /// Type for iterator pointing into a mutable structure of this type.
 261 using Iterator = value_type *;
 262
263 /// Type for iterator pointing into an immutable structure of this type.
264 using Const_iterator = value_type const *;
265
266 /// Short-hand for the allocator type specified at compile-time. Its element type is our #value_type.
267 using Allocator_raw = Allocator;
268 static_assert(std::is_same_v<typename Allocator_raw::value_type, value_type>,
269 "Allocator template param must be of form A<V> where V is our value_type.");
270
 271 /// For container compliance (hence the irregular capitalization): pointer to element.
 272 using pointer = Iterator;
 273 /// For container compliance (hence the irregular capitalization): pointer to `const` element.
 274 using const_pointer = Const_iterator;
 275 /// For container compliance (hence the irregular capitalization): reference to element.
 276 using reference = value_type&;
 277 /// For container compliance (hence the irregular capitalization): reference to `const` element.
 278 using const_reference = const value_type&;
 279 /// For container compliance (hence the irregular capitalization): #Iterator type.
 280 using iterator = Iterator;
 281 /// For container compliance (hence the irregular capitalization): #Const_iterator type.
 282 using const_iterator = Const_iterator;
 283
284 // Constants.
285
286 /// Value of template parameter `S_SHARING_ALLOWED` (for generic programming).
287 static constexpr bool S_SHARING = S_SHARING_ALLOWED;
288
289 /// Special value indicating an unchanged `size_type` value; such as in resize().
290 static constexpr size_type S_UNCHANGED = size_type(-1); // Same trick as std::string::npos.
291
292 /**
293 * `true` if #Allocator_raw underlying allocator template is simply `std::allocator`; `false`
294 * otherwise.
295 *
296 * Note that if this is `true`, it may be worth using #Blob/#Sharing_blob, instead of its `Basic_blob<std::allocator>`
297 * super-class; at the cost of a marginally larger RAM footprint (an added `Logger*`) you'll get a more convenient
298 * set of logging API knobs (namely `Logger*` stored permanently from construction; and there will be no need to
299 * supply it as arg to subsequent APIs when logging is desired).
300 *
301 * ### Implications of #S_IS_VANILLA_ALLOC being `false` ###
302 * This is introduced in our class doc header. Briefly however:
303 * - The underlying buffer, if any, and possibly some small aux data shall be allocated
304 * via #Allocator_raw, not simply the regular heap's `new[]` and/or `new`.
305 * - They shall be deallocated, if needed, via #Allocator_raw, not simply the regular heap's
306 * `delete[]` and/or `delete`.
307 * - Because storing a pointer to log::Logger may be meaningless when storing in an area allocated
308 * by some custom allocators (particularly SHM-heap ones), we shall not auto-TRACE-log on dealloc.
309 * - This caveat applies only if #S_SHARING is `true`.
310 *
311 * @internal
312 * - (If #S_SHARING)
313 * Accordingly the ref-counted buffer pointer #m_buf_ptr shall be a `boost::interprocess::shared_ptr`
314 * instead of a vanilla `shared_ptr`; the latter may be faster and more full-featured, but it is likely
315 * to internally store a raw `T*`; we need one that stores an `Allocator_raw::pointer` instead;
316 * e.g., a fancy-pointer type (like `boost::interprocess::offset_ptr`) when dealing with
317 * SHM-heaps (typically).
318 * - If #S_IS_VANILLA_ALLOC is `true`, then we revert to the faster/more-mature/full-featured
319 * `shared_ptr`. In particular it is faster (if used with `make_shared()` and similar) by storing
320 * the user buffer and aux data/ref-count in one contiguously-allocated buffer.
321 * - (If #S_SHARING is `false`)
322 * It's a typical `unique_ptr` template either way (because it supports non-raw-pointer storage out of the
323 * box) but:
324 * - A custom deleter is necessary similarly to the above.
325 * - Its `pointer` member alias crucially causes the `unique_ptr` to store
326 * an `Allocator_raw::pointer` instead of a `value_type*`.
327 *
328 * See #Buf_ptr doc header regarding the latter two bullet points.
329 */
330 static constexpr bool S_IS_VANILLA_ALLOC = std::is_same_v<Allocator_raw, std::allocator<value_type>>;
331
332 // Constructors/destructor.
333
334 /**
335 * Constructs blob with `zero() == true`. Note this means no buffer is allocated.
336 *
337 * @param alloc_raw
338 * Allocator to copy and store in `*this` for all buffer allocations/deallocations.
339 * If #Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime,
340 * and by definition it is to equal `Allocator_raw()`.
341 */
 342 Basic_blob(const Allocator_raw& alloc_raw = Allocator_raw());
 343
344 /**
345 * Constructs blob with size() and capacity() equal to the given `size`, and `start() == 0`. Performance note:
346 * elements are not initialized to zero or any other value. A new over-arching buffer (pool) is therefore allocated.
347 *
348 * Corner case note: a post-condition is `zero() == (size() == 0)`. Note, also, that the latter is *not* a universal
349 * invariant (see zero() doc header).
350 *
351 * Formally: If `size >= 1`, then a buffer is allocated; and the internal ownership ref-count is set to 1.
352 *
353 * @param size
354 * A non-negative desired size.
355 * @param logger_ptr
356 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
357 * in the event of buffer dealloc. Null allowed.
358 * @param alloc_raw
359 * Allocator to copy and store in `*this` for all buffer allocations/deallocations.
360 * If #Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime,
361 * and by definition it is to equal `Allocator_raw()`.
362 */
363 explicit Basic_blob(size_type size, log::Logger* logger_ptr = 0, const Allocator_raw& alloc_raw = Allocator_raw());
364
365 /**
366 * Move constructor, constructing a blob exactly internally equal to pre-call `moved_src`, while the latter is
 367 * made to be exactly as if it were just default-constructed (allocator subtleties aside).
368 * Performance: constant-time, at most copying a few scalars.
369 *
370 * @param moved_src
371 * The object whose internals to move to `*this` and replace with a blank-constructed object's internals.
372 * @param logger_ptr
373 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
374 */
375 Basic_blob(Basic_blob&& moved_src, log::Logger* logger_ptr = 0);
376
377 /**
378 * Copy constructor, constructing a blob logically equal to `src`. More formally, guarantees post-condition wherein
379 * `[this->begin(), this->end())` range is equal by value (including length) to `src` equivalent range but no memory
380 * overlap. A post-condition is `capacity() == size()`, and `start() == 0`. Performance: see copying assignment
381 * operator.
382 *
383 * Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we
384 * simply guarantee post-condition `src.empty() == this->empty()`.
385 *
386 * Corner case note 2: post-condition: `this->zero() == this->empty()`
387 * (note `src.zero()` state is not necessarily preserved in `*this`).
388 *
389 * Note: This is `explicit`, which is atypical for a copy constructor, to generate compile errors in hard-to-see
390 * (and often unintentional) instances of copying. Copies of Basic_blob should be quite intentional and explicit.
391 * (One example where one might forget about a copy would be when using a Basic_blob argument without `cref` or
392 * `ref` in a `bind()`; or when capturing by value, not by ref, in a lambda.)
393 *
394 * Formally: If `src.size() >= 1`, then a buffer is allocated; and the internal ownership ref-count is set to 1.
395 *
396 * @param src
397 * Object whose range of bytes of length `src.size()` starting at `src.begin()` is copied into `*this`.
398 * @param logger_ptr
399 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
400 * in the event of buffer dealloc. Null allowed.
401 */
402 explicit Basic_blob(const Basic_blob& src, log::Logger* logger_ptr = 0);
403
404 /**
405 * Destructor that drops `*this` ownership of the allocated internal buffer if any, as by make_zero();
406 * if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also.
407 * Recall that other `Basic_blob`s can only gain co-ownership via `share*()`; hence if one does not use that
408 * feature, the destructor will in fact deallocate the buffer (if any).
409 *
410 * Formally: If `!zero()`, then the internal ownership ref-count is decremented by 1, and if it reaches
411 * 0, then a buffer is deallocated.
412 *
413 * ### Logging ###
414 * This will not log, as it is not possible to pass a `Logger*` to a dtor without storing it (which we avoid
415 * for reasons outlined in class doc header). Use #Blob/#Sharing_blob if it is important to log in this situation
416 * (although there are some minor trade-offs).
417 */
 418 ~Basic_blob();
 419
420 // Methods.
421
422 /**
423 * Move assignment. Allocator subtleties aside and assuming `this != &moved_src` it is equivalent to:
424 * `make_zero(); this->swap(moved_src, logger_ptr)`. (If `this == &moved_src`, this is a no-op.)
425 *
426 * @param moved_src
427 * See swap().
428 * @param logger_ptr
429 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
430 * @return `*this`.
431 */
432 Basic_blob& assign(Basic_blob&& moved_src, log::Logger* logger_ptr = 0);
433
434 /**
435 * Move assignment operator (no logging): equivalent to `assign(std::move(moved_src), nullptr)`.
436 *
437 * @param moved_src
438 * See assign() (move overload).
439 * @return `*this`.
440 */
 441 Basic_blob& operator=(Basic_blob&& moved_src);
 442
443 /**
444 * Copy assignment: assuming `(this != &src) && (!blobs_sharing(*this, src))`,
445 * makes `*this` logically equal to `src`; but behavior undefined if a reallocation would be necessary to do this.
446 * (If `this == &src`, this is a no-op. If not but `blobs_sharing(*this, src) == true`, see "Sharing blobs" below.
447 * This is assumed to not be the case in further discussion.)
448 *
449 * More formally:
450 * no-op if `this == &src`; "Sharing blobs" behavior if not so, but `src` shares buffer with `*this`; otherwise:
451 * Guarantees post-condition wherein `[this->begin(), this->end())` range is equal
452 * by value (including length) to `src` equivalent range but no memory overlap. Post-condition: `start() == 0`;
453 * capacity() either does not change or equals size(). capacity() growth is not allowed: behavior is undefined if
454 * `src.size()` exceeds pre-call `this->capacity()`, unless `this->zero() == true` pre-call. Performance: at most a
455 * memory area of size `src.size()` is copied and some scalars updated; a memory area of that size is allocated only
456 * if required; no ownership drop or deallocation occurs.
457 *
458 * Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we
459 * simply guarantee post-condition `src.empty() == this->empty()`.
460 *
 461 * Corner case note 2: post-condition: if `this->empty() == true` then `this->zero()` has the same value as at entry
462 * to this call. In other words, no deallocation occurs, even if
463 * `this->empty() == true` post-condition holds; at most internally a scalar storing size is assigned 0.
464 * (You may force deallocation in that case via make_zero() post-call, but this means you'll have to intentionally
465 * perform that relatively slow op.)
466 *
467 * As with reserve(), IF pre-condition `zero() == false`, THEN pre-condition `src.size() <= this->capacity()`
468 * must hold, or behavior is undefined (i.e., as noted above, capacity() growth is not allowed except from 0).
469 * Therefore, NO REallocation occurs! However, also as with reserve(), if you want to intentionally allow such a
470 * REallocation, then simply first call make_zero(); then execute the
471 * `assign()` copy as planned. This is an intentional restriction forcing caller to explicitly allow a relatively
472 * slow reallocation op.
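 *
 * Illustrative sketch of that explicit-reallocation pattern (hypothetical code):
 *
 * ~~~
 * if ((!dst.zero()) && (src.size() > dst.capacity()))
 * {
 *   dst.make_zero(); // Explicitly authorize the relatively slow reallocation.
 * }
 * dst = src; // Now the capacity()-growth restriction cannot be violated.
 * ~~~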
473 *
474 * Formally: If `src.size() >= 1`, and `zero() == true`, then a buffer is allocated; and the internal ownership
475 * ref-count is set to 1.
476 *
477 * ### Sharing blobs ###
478 * If `blobs_sharing(*this, src) == true`, meaning the target and source are operating on the same buffer, then
479 * behavior is undefined (assertion may trip). Rationale for this design is as follows. The possibilities were:
480 * -# Undefined behavior/assertion.
481 * -# Just adjust `this->start()` and `this->size()` to match `src`; continue co-owning the underlying buffer;
482 * copy no data.
483 * -# `this->make_zero()` -- losing `*this` ownership, while `src` keeps it -- and then allocate a new buffer
484 * and copy `src` data into it.
485 *
486 * Choosing between these is tough, as this is an odd corner case. 3 is not criminal, but generally no method
487 * ever forces make_zero() behavior, always leaving it to the user to consciously do, so it seems prudent to keep
488 * to that practice (even though this case is a bit different from, say, resize() -- since make_zero() here has
489 * no chance to deallocate anything, only decrement ref-count). 2 is performant and slick but suggests a special
490 * behavior in a corner case; this *feels* slightly ill-advised in a standard copy assignment operator. Therefore
491 * it seems better to crash-and-burn (choice 1), in the same way an attempt to resize()-higher a non-zero() blob would
492 * crash and burn, forcing the user to explicitly execute what they want. After all, 3 is done by simply calling
493 * make_zero() first; and 2 is possible with a simple resize() call; and the blobs_sharing() check is both easy
494 * and performant.
495 *
496 * @warning A post-condition is `start() == 0`; meaning `start()` at entry is ignored and reset to 0; the entire
497 * (co-)owned buffer -- if any -- is potentially used to store the copied values. In particular, if one
498 * plans to work on a sub-blob of a shared pool (see class doc header), then using this assignment op is
499 * not advised. Use emplace_copy() instead; or perform your own copy onto
500 * mutable_buffer().
501 *
502 * @param src
503 * Object whose range of bytes of length `src.size()` starting at `src.begin()` is copied into `*this`.
504 * Behavior is undefined if pre-condition is `!zero()`, and this memory area overlaps at any point with the
505 * memory area of same size in `*this` (unless that size is zero -- a degenerate case).
506 * (This can occur only via the use of `share*()` -- otherwise `Basic_blob`s always refer to separate areas.)
507 * Also behavior undefined if pre-condition is `!zero()`, and `*this` (co-)owned buffer is too short to
 508 * accommodate all `src.size()` bytes (assertion may trip).
509 * @param logger_ptr
510 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
511 * @return `*this`.
512 */
513 Basic_blob& assign(const Basic_blob& src, log::Logger* logger_ptr = 0);
514
515 /**
516 * Copy assignment operator (no logging): equivalent to `assign(src, nullptr)`.
517 *
518 * @param src
519 * See assign() (copy overload).
520 * @return `*this`.
521 */
 522 Basic_blob& operator=(const Basic_blob& src);
 523
524 /**
525 * Swaps the contents of this structure and `other`, or no-op if `this == &other`. Performance: at most this
526 * involves swapping a few scalars which is constant-time.
527 *
528 * @param other
529 * The other structure.
530 * @param logger_ptr
531 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
532 */
533 void swap(Basic_blob& other, log::Logger* logger_ptr = 0);
534
535 /**
536 * Applicable to `!zero()` blobs, this returns an identical Basic_blob that shares (co-owns) `*this` allocated buffer
537 * along with `*this` and any other `Basic_blob`s also sharing it. Behavior is undefined (assertion may trip) if
538 * `zero() == true`: it is nonsensical to co-own nothing; just use the default ctor then.
539 *
540 * The returned Basic_blob is identical in that not only does it share the same memory area (hence same capacity())
541 * but has identical start(), size() (and hence begin() and end()). If you'd like to work on a different
542 * part of the allocated buffer, please consider `share_after_split*()` instead; the pool-of-sub-`Basic_blob`s
543 * paradigm suggested in the class doc header is probably best accomplished using those methods and not share().
544 *
545 * You can also adjust various sharing `Basic_blob`s via resize(), start_past_prefix_inc(), etc., directly -- after
546 * share() returns.
547 *
548 * Formally: Before this returns, the internal ownership ref-count (shared among `*this` and the returned
549 * Basic_blob) is incremented.
550 *
551 * @param logger_ptr
552 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
553 * @return An identical Basic_blob to `*this` that shares the underlying allocated buffer. See above.
554 */
555 Basic_blob share(log::Logger* logger_ptr = 0) const;
556
557 /**
558 * Applicable to `!zero()` blobs, this shifts `this->begin()` by `size` to the right without changing
559 * end(); and returns a Basic_blob containing the shifted-past values that shares (co-owns) `*this` allocated buffer
560 * along with `*this` and any other `Basic_blob`s also sharing it.
561 *
562 * More formally, this is identical to simply `auto b = share(); b.resize(size); start_past_prefix_inc(size);`.
563 *
564 * This is useful when working in the pool-of-sub-`Basic_blob`s paradigm. This and other `share_after_split*()`
565 * methods are usually better to use rather than share() directly (for that paradigm).
566 *
567 * Behavior is undefined (assertion may trip) if `zero() == true`.
568 *
569 * Corner case: If `size > size()`, then it is taken to equal size().
570 *
571 * Degenerate case: If `size` (or size(), whichever is smaller) is 0, then this method is identical to
572 * share(). Probably you don't mean to call share_after_split_left() in that case, but it's your decision.
573 *
574 * Degenerate case: If `size == size()` (and not 0), then `this->empty()` becomes `true` -- though
575 * `*this` continues to share the underlying buffer despite [begin(), end()) becoming empty. Typically this would
576 * only be done as, perhaps, the last iteration of some progressively-splitting loop; but it's your decision.
577 *
578 * Formally: Before this returns, the internal ownership ref-count (shared among `*this` and the returned
579 * Basic_blob) is incremented.
580 *
581 * @param size
582 * Desired size() of the returned Basic_blob; and the number of elements by which `this->begin()` is
583 * shifted right (hence start() is incremented). Any value exceeding size() is taken to equal it.
584 * @param logger_ptr
585 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
586 * @return The split-off-on-the-left Basic_blob that shares the underlying allocated buffer with `*this`. See above.
587 */
 588 Basic_blob share_after_split_left(size_type size, log::Logger* logger_ptr = 0);
 589
590 /**
591 * Identical to share_after_split_left(), except `this->end()` shifts by `size` to the left (instead of
 592 * `this->begin()` to the right), and the split-off Basic_blob contains the *right-most* `size` elements
593 * (instead of the left-most).
594 *
595 * More formally, this is identical to simply
596 * `auto lt_size = size() - size; auto b = share(); resize(lt_size); b.start_past_prefix_inc(lt_size);`.
597 * Cf. share_after_split_left() formal definition and note the symmetry.
598 *
599 * All other characteristics are as written for share_after_split_left().
600 *
601 * @param size
602 * Desired size() of the returned Basic_blob; and the number of elements by which `this->end()` is
603 * shifted left (hence size() is decremented). Any value exceeding size() is taken to equal it.
604 * @param logger_ptr
605 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
606 * @return The split-off-on-the-right Basic_blob that shares the underlying allocated buffer with `*this`. See above.
607 */
 608 Basic_blob share_after_split_right(size_type size, log::Logger* logger_ptr = 0);
 609
610 /**
611 * Identical to successively performing `share_after_split_left(size)` until `this->empty() == true`;
 612 * the resulting `Basic_blob`s are emitted via `emit_blob_func()` callback in the order they're split off from
613 * the left. In other words this partitions a non-zero() `Basic_blob` -- perhaps typically used as a pool --
614 * into equally-sized (except possibly the last one) adjacent sub-`Basic_blob`s.
615 *
616 * A post-condition is that `empty() == true` (`size() == 0`). In addition, if `headless_pool == true`,
617 * then `zero() == true` is also a post-condition; i.e., the pool is "headless": it disappears once all the
618 * resulting sub-`Basic_blob`s drop their ownership (as well as any other co-owning `Basic_blob`s).
619 * Otherwise, `*this` will continue to share the pool despite size() becoming 0. (Of course, even then, one is
620 * free to make_zero() or destroy `*this` -- the former, before returning, is all that `headless_pool == true`
621 * really adds.)
622 *
623 * Behavior is undefined (assertion may trip) if `empty() == true` (including if `zero() == true`, but even if not)
624 * or if `size == 0`.
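 *
 * A brief sketch of direct use with a callback (hypothetical code; see also the emit-to-container wrappers
 * noted just below):
 *
 * ~~~
 * Basic_blob pool(16 * 1024);
 * std::vector<Basic_blob> chunks;
 * chunks.reserve(4);
 * pool.share_after_split_equally(4 * 1024, true, // headless_pool == true: pool.zero() becomes true on return.
 *                                [&](Basic_blob&& blob_moved) { chunks.emplace_back(std::move(blob_moved)); });
 * // chunks.size() == 4; each co-owns the 16 KiB buffer, which is freed when the last co-owner drops ownership.
 * ~~~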
625 *
626 * @see share_after_split_equally_emit_seq() for a convenience wrapper to emit to, say, `vector<Basic_blob>`.
627 * @see share_after_split_equally_emit_ptr_seq() for a convenience wrapper to emit to, say,
628 * `vector<unique_ptr<Basic_blob>>`.
629 *
630 * @tparam Emit_blob_func
631 * A callback compatible with signature `void F(Basic_blob&& blob_moved)`.
632 * @param size
633 * Desired size() of each successive out-Basic_blob, except the last one. Behavior undefined (assertion may
634 * trip) if not positive.
635 * @param headless_pool
636 * Whether to perform `this->make_zero()` just before returning. See above.
637 * @param emit_blob_func
638 * `F` such that `F(std::move(blob))` shall be called with each successive sub-Basic_blob.
639 * @param logger_ptr
640 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
641 */
642 template<typename Emit_blob_func>
643 void share_after_split_equally(size_type size, bool headless_pool, Emit_blob_func&& emit_blob_func,
644 log::Logger* logger_ptr = 0);
645
646 /**
647 * share_after_split_equally() wrapper that places `Basic_blob`s into the given container via
648 * `push_back()`.
649 *
650 * @tparam Blob_container
651 * Something with method compatible with `push_back(Basic_blob&& blob_moved)`.
652 * @param size
653 * See share_after_split_equally().
654 * @param headless_pool
655 * See share_after_split_equally().
656 * @param out_blobs
657 * `out_blobs->push_back()` shall be executed 1+ times.
658 * @param logger_ptr
659 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
660 */
661 template<typename Blob_container>
662 void share_after_split_equally_emit_seq(size_type size, bool headless_pool, Blob_container* out_blobs,
663 log::Logger* logger_ptr = 0);
664
665 /**
666 * share_after_split_equally() wrapper that places `Ptr<Basic_blob>`s into the given container via
667 * `push_back()`, where the type `Ptr<>` is determined via `Blob_ptr_container::value_type`.
668 *
669 * @tparam Blob_ptr_container
670 * Something with method compatible with `push_back(Ptr&& blob_ptr_moved)`,
671 * where `Ptr` is `Blob_ptr_container::value_type`, and `Ptr(new Basic_blob)` can be created.
672 * `Ptr` is to be a smart pointer type such as `unique_ptr<Basic_blob>` or `shared_ptr<Basic_blob>`.
673 * @param size
674 * See share_after_split_equally().
675 * @param headless_pool
676 * See share_after_split_equally().
677 * @param out_blobs
678 * `out_blobs->push_back()` shall be executed 1+ times.
679 * @param logger_ptr
680 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
681 */
682 template<typename Blob_ptr_container>
683 void share_after_split_equally_emit_ptr_seq(size_type size, bool headless_pool, Blob_ptr_container* out_blobs,
684 log::Logger* logger_ptr = 0);
685
686 /**
687 * Replaces logical contents with a copy of the given non-overlapping area anywhere in memory. More formally:
688 * This is exactly equivalent to copy-assignment (`*this = b`), where `const Basic_blob b` owns exactly
689 * the memory area given by `src`. However, note the newly relevant restriction documented for `src` parameter below
690 * (no overlap allowed).
691 *
692 * All characteristics are as written for the copy assignment operator, including "Formally" and the warning.
693 *
694 * @param src
695 * Source memory area. Behavior is undefined if pre-condition is `!zero()`, and this memory area overlaps
696 * at any point with the memory area of same size at `begin()`. Otherwise it can be anywhere at all.
697 * Also behavior undefined if pre-condition is `!zero()`, and `*this` (co-)owned buffer is too short to
 698 * accommodate all `src.size()` bytes (assertion may trip).
699 * @param logger_ptr
700 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
701 * @return Number of elements copied, namely `src.size()`, or simply size().
702 */
703 size_type assign_copy(const boost::asio::const_buffer& src, log::Logger* logger_ptr = 0);
704
705 /**
706 * Copies `src` buffer directly onto equally sized area within `*this` at location `dest`; `*this` must have
 707 * sufficient size() to accommodate all of the data copied. Performance: The only operation performed is a copy from
708 * `src` to `dest` using the fastest reasonably available technique.
709 *
710 * None of the following changes: zero(), empty(), size(), capacity(), begin(), end(); nor the location (or size) of
711 * internally stored buffer.
712 *
713 * @param dest
714 * Destination location within this blob. This must be in `[begin(), end()]`; and,
715 * unless `src.size() == 0`, must not equal end() either.
716 * @param src
717 * Source memory area. Behavior is undefined if this memory area overlaps
718 * at any point with the memory area of same size at `dest` (unless that size is zero -- a degenerate
719 * case). Otherwise it can be anywhere at all, even partially or fully within `*this`.
 720 * Also behavior undefined if `*this` blob is too short to accommodate all `src.size()` bytes
721 * (assertion may trip).
722 * @param logger_ptr
723 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
724 * @return Location in this blob just past the last element copied; `dest` if none copied; in particular end() is a
725 * possible value.
726 */
727 Iterator emplace_copy(Const_iterator dest, const boost::asio::const_buffer& src, log::Logger* logger_ptr = 0);
728
729 /**
730 * The opposite of emplace_copy() in every way, copying a sub-blob onto a target memory area. Note that the size
731 * of that target buffer (`dest.size()`) determines how much of `*this` is copied.
732 *
733 * @param src
734 * Source location within this blob. This must be in `[begin(), end()]`; and,
735 * unless `dest.size() == 0`, must not equal end() either.
736 * @param dest
737 * Destination memory area. Behavior is undefined if this memory area overlaps
738 * at any point with the memory area of same size at `src` (unless that size is zero -- a degenerate
739 * case). Otherwise it can be anywhere at all, even partially or fully within `*this`.
740 * Also behavior undefined if `src + dest.size()` is past end of `*this` blob (assertion may trip).
741 * @param logger_ptr
742 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
743 * @return Location in this blob just past the last element copied; `src` if none copied; end() is a possible value.
744 */
745 Const_iterator sub_copy(Const_iterator src, const boost::asio::mutable_buffer& dest,
746 log::Logger* logger_ptr = 0) const;
747
748 /**
749 * Returns number of elements stored, namely `end() - begin()`. If zero(), this is 0; but if this is 0, then
750 * zero() may or may not be `true`.
751 *
752 * @return See above.
753 */
 754 size_type size() const;
 755
756 /**
757 * Returns the offset between `begin()` and the start of the internally allocated buffer. If zero(), this is 0; but
758 * if this is 0, then zero() may or may not be `true`.
759 *
760 * @return See above.
761 */
 762 size_type start() const;
 763
764 /**
765 * Returns `size() == 0`. If zero(), this is `true`; but if this is `true`, then
766 * zero() may or may not be `true`.
767 *
768 * @return See above.
769 */
770 bool empty() const;
771
772 /**
773 * Returns the number of elements in the internally allocated buffer, which is 1 or more; or 0 if no buffer
774 * is internally allocated. Some formal invariants: `(capacity() == 0) == zero()`; `start() + size() <= capacity()`.
775 *
776 * See important notes on capacity() policy in the class doc header.
777 *
778 * @return See above.
779 */
 780 size_type capacity() const;
 781
782 /**
783 * Returns `false` if a buffer is allocated and owned; `true` otherwise. See important notes on how this relates
784 * to empty() and capacity() in those methods' doc headers. See also other important notes in class doc header.
785 *
786 * Note that zero() is `true` for any default-constructed Basic_blob.
787 *
788 * @return See above.
789 */
790 bool zero() const;
791
792 /**
793 * Ensures that an internal buffer of at least `capacity` elements is allocated and owned; disallows growing an
794 * existing buffer; never shrinks an existing buffer; if a buffer is allocated, it is no larger than `capacity`.
795 *
796 * reserve() may be called directly but should be formally understood to be called by resize(), assign_copy(),
797 * copy assignment operator, copy constructor. In all cases, the value passed to reserve() is exactly the size
798 * needed to perform the particular task -- no more (and no less). As such, reserve() policy is key to knowing
799 * how the class behaves elsewhere. See class doc header for discussion in larger context.
800 *
801 * Performance/behavior: If zero() is true pre-call, `capacity` sized buffer is allocated. Otherwise,
802 * no-op if `capacity <= capacity()` pre-call. Behavior is undefined if `capacity > capacity()` pre-call
803 * (again, unless zero(), meaning `capacity() == 0`). In other words, no deallocation occurs, and an allocation
804 * occurs only if necessary. Growing an existing buffer is disallowed. However, if you want to intentionally
805 * REallocate, then simply first check for `zero() == false` and call make_zero() if that holds; then execute the
806 * `reserve()` as planned. This is an intentional restriction forcing caller to explicitly allow a relatively slow
807 * reallocation op. You'll note a similar suggestion for the other reserve()-using methods/operators.
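 *
 * E.g., intentionally REallocating to a larger buffer (hypothetical sketch):
 *
 * ~~~
 * if (!b.zero())
 * {
 *   b.make_zero();    // Drop ownership (and deallocate, if no other Basic_blob co-owns the buffer).
 * }
 * b.reserve(new_cap); // Allocates exactly new_cap bytes; contents uninitialized.
 * ~~~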
808 *
809 * Formally: If `capacity >= 1`, and `zero() == true`, then a buffer is allocated; and the internal ownership
810 * ref-count is set to 1.
811 *
812 * @param capacity
813 * Non-negative desired minimum capacity.
814 * @param logger_ptr
815 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
816 * in the event of buffer dealloc. Null allowed.
817 */
818 void reserve(size_type capacity, log::Logger* logger_ptr = 0);
819
820 /**
821 * Guarantees post-condition `zero() == true` by dropping `*this` ownership of the allocated internal buffer if any;
822 * if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also. Recall that
823 * other `Basic_blob`s can only gain co-ownership via `share*()`; hence if one does not use that feature, make_zero()
824 * will in fact deallocate the buffer (if any).
825 *
826 * That post-condition can also be thought of as `*this` becoming indistinguishable from a default-constructed
827 * Basic_blob.
828 *
829 * Performance/behavior: Assuming zero() is not already `true`, this will deallocate capacity() sized buffer
830 * and save a null pointer.
831 *
832 * The many operations that involve reserve() in their doc headers will explain importance of this method:
833 * As a rule, no method except make_zero() allows one to request an ownership-drop or deallocation of the existing
834 * buffer, even if this would be necessary for a larger buffer to be allocated. Therefore, if you intentionally want
835 * to allow such an operation, you CAN, but then you MUST explicitly call make_zero() first.
836 *
837 * Formally: If `!zero()`, then the internal ownership ref-count is decremented by 1, and if it reaches
838 * 0, then a buffer is deallocated.
839 *
840 * @param logger_ptr
841 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
842 */
843 void make_zero(log::Logger* logger_ptr = 0);
844
845 /**
846 * Guarantees post-condition `size() == size` and `start() == start`; no values in pre-call range `[begin(), end())`
847 * are changed; any values *added* to that range by the call are not initialized to zero or otherwise.
848 *
849 * From other invariants and behaviors described, you'll realize
850 * this essentially means `reserve(size + start)` followed by saving `size` and `start` into internal size members.
851 * The various implications of this can be deduced by reading the related methods' doc headers. The key is to
852 * understand how reserve() works, including what it disallows (growth in size of an existing buffer).
853 *
854 * Formally: If `size >= 1`, and `zero() == true`, then a buffer is allocated; and the internal ownership
855 * ref-count is set to 1.
856 *
857 * ### Leaving start() unmodified ###
858 * `start` is taken to be the value of arg `start_or_unchanged`; unless the latter is set to special value
859 * #S_UNCHANGED; in which case `start` is taken to equal start(). Since the default is indeed #S_UNCHANGED,
860 * the oft-encountered expression `resize(N)` will adjust only size() and leave start() unmodified -- often the
861 * desired behavior.
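 *
 * Quick illustration (hypothetical sketch):
 *
 * ~~~
 * Basic_blob b(100); // start() == 0, size() == capacity() == 100.
 * b.resize(50);      // size() == 50; start() unchanged (0); no (de)allocation.
 * b.resize(40, 10);  // size() == 40, start() == 10; the logical range is bytes [10, 50) of the buffer.
 * ~~~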
862 *
863 * @param size
864 * Non-negative desired value for size().
865 * @param start_or_unchanged
866 * Non-negative desired value for start(); or special value #S_UNCHANGED. See above.
867 * @param logger_ptr
868 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
869 * in the event of buffer dealloc. Null allowed.
870 */
871 void resize(size_type size, size_type start_or_unchanged = S_UNCHANGED, log::Logger* logger_ptr = 0);
872
873 /**
 874 * Restructures blob to consist of an internally allocated buffer and a `[begin(), end())` range starting at
875 * offset `prefix_size` within that buffer. More formally, it is a simple resize() wrapper that ensures
876 * the internally allocated buffer remains unchanged or, if none is currently large enough to
877 * store `prefix_size` elements, is allocated to be of size `prefix_size`; and that `start() == prefix_size`.
878 *
879 * All of resize()'s behavior, particularly any restrictions about capacity() growth, applies, so in particular
880 * remember you may need to first make_zero() if the internal buffer would need to be REallocated to satisfy the
881 * above requirements.
882 *
883 * In practice, with current reserve() (and thus resize()) restrictions -- which are intentional -- this method is
884 * most useful if you already have a Basic_blob with internally allocated buffer of size *at least*
885 * `n == size() + start()` (and `start() == 0` for simplicity), and you'd like to treat this buffer as containing
886 * no-longer-relevant prefix of length S (which becomes new value for start()) and have size() be readjusted down
 887 * accordingly, while `start() + size() == n` remains unchanged. If the buffer also contains irrelevant data
888 * *past* a certain offset N, you can first make it irrelevant via `resize(N)` (then call `start_past_prefix(S)`
889 * as just described):
890 *
891 * ~~~
892 * Basic_blob b;
893 * // ...
894 * // b now has start() == 0, size() == M.
 895 * // We want all elements outside [S, N) to be irrelevant, where S > 0, N < M.
896 * // (E.g., first S are a frame prefix, while all bytes past N are a frame postfix, and we want just the frame
897 * // without any reallocating or copying.)
898 * b.resize(N);
899 * b.start_past_prefix(S);
900 * // Now, [b.begin(), b.end()) are the frame bytes, and no copying/allocation/deallocation has occurred.
901 * ~~~
902 *
903 * @param prefix_size
904 * Desired prefix length. `prefix_size == 0` is allowed and is a degenerate case equivalent to:
905 * `resize(start() + size(), 0)`.
906 */
907 void start_past_prefix(size_type prefix_size);
908
909 /**
910 * Like start_past_prefix() but shifts the *current* prefix position by the given *incremental* value
911 * (positive or negative). Identical to `start_past_prefix(start() + prefix_size_inc)`.
912 *
 913 * Behavior is undefined for negative `prefix_size_inc` whose magnitude exceeds start() (assertion may trip).
914 *
915 * Behavior is undefined in case of positive `prefix_size_inc` that results in overflow.
916 *
917 * @param prefix_size_inc
918 * Positive, negative (or zero) increment, so that start() is changed to `start() + prefix_size_inc`.
919 */
921
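 /* A sketch of incremental prefix consumption (e.g., a parser peeling off fields one by one); again a
  * plausible illustration only, with template args and optional `Logger*` args omitted:
  *
  *   ~~~
  *   Basic_blob b(64);             // start() == 0, size() == 64.
  *   b.start_past_prefix_inc(4);   // Consumed a 4-byte field: start() == 4, size() == 60.
  *   b.start_past_prefix_inc(8);   // Consumed 8 more: start() == 12, size() == 52.
  *   b.start_past_prefix_inc(-12); // Rewind to the beginning: start() == 0, size() == 64.
  *   ~~~
  */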
922 /**
923 * Equivalent to `resize(0, start())`.
924 *
925 * Note that the value returned by start() will *not* change due to this call. Only size() (and the corresponding
926 * internally stored datum) may change. If one desires to reset start(), use resize() directly (but if one
927 * plans to work on a sub-Basic_blob of a shared pool -- see class doc header -- please think twice first).
928 */
929 void clear();
930
931 /**
932 * Performs the minimal number of operations to make range `[begin(), end())` unchanged except for lacking
933 * sub-range `[first, past_last)`.
934 *
935 * Performance/behavior: At most, this copies the range `[past_last, end())` to area starting at `first`;
936 * and then adjusts internally stored size member.
937 *
938 * @param first
939 * Pointer to first element to erase. It must be dereferenceable, or behavior is undefined (assertion may
940 * trip).
941 * @param past_last
942 * Pointer to one past the last element to erase. If `past_last <= first`, call is a no-op.
943 * @return Iterator equal to `first`. (This matches standard expectation for container `erase()` return value:
944 * iterator to element past the last one erased. In this contiguous sequence that simply equals `first`,
945 * since everything starting with `past_last` slides left onto `first`. In particular:
 946 * If `past_last` equaled `end()` at entry, then the new end() is returned: everything starting with
947 * `first` was erased and thus `first == end()` now. If nothing is erased `first` is still returned.)
948 */
950
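 /* A small erase() sketch (illustrative only; template args omitted): removing the sub-range [2, 5)
  * from an 8-byte blob.
  *
  *   ~~~
  *   Basic_blob b(8);                                 // 8 bytes: indices 0..7.
  *   auto it = b.erase(b.begin() + 2, b.begin() + 5); // Bytes at indices 5..7 slide left onto index 2.
  *   // Now b.size() == 5, and it == b.begin() + 2; no allocation or deallocation occurred.
  *   ~~~
  */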
951 /**
952 * Returns pointer to mutable first element; or end() if empty(). Null is a possible value in the latter case.
953 *
954 * @return Pointer, possibly null.
955 */
957
958 /**
959 * Returns pointer to immutable first element; or end() if empty(). Null is a possible value in the latter case.
960 *
961 * @return Pointer, possibly null.
962 */
964
965 /**
966 * Equivalent to const_begin().
967 *
968 * @return Pointer, possibly null.
969 */
971
972 /**
 973 * Returns pointer one past the mutable last element; empty() is possible, in which case null is a possible value.
974 *
975 * @return Pointer, possibly null.
976 */
978
979 /**
 980 * Returns pointer one past the immutable last element; empty() is possible, in which case null is a possible value.
981 *
982 * @return Pointer, possibly null.
983 */
985
986 /**
987 * Equivalent to const_end().
988 *
989 * @return Pointer, possibly null.
990 */
992
993 /**
994 * Returns reference to immutable first element. Behavior is undefined if empty().
995 *
996 * @return See above.
997 */
998 const value_type& const_front() const;
999
1000 /**
1001 * Returns reference to immutable last element. Behavior is undefined if empty().
1002 *
1003 * @return See above.
1004 */
1005 const value_type& const_back() const;
1006
1007 /**
1008 * Equivalent to const_front().
1009 *
1010 * @return See above.
1011 */
1012 const value_type& front() const;
1013
1014 /**
1015 * Equivalent to const_back().
1016 *
1017 * @return See above.
1018 */
1019 const value_type& back() const;
1020
1021 /**
1022 * Returns reference to mutable first element. Behavior is undefined if empty().
1023 *
1024 * @return See above.
1025 */
1027
1028 /**
1029 * Returns reference to mutable last element. Behavior is undefined if empty().
1030 *
1031 * @return See above.
1032 */
1034
1035 /**
1036 * Equivalent to const_begin().
1037 *
1038 * @return Pointer, possibly null.
1039 */
1040 value_type const * const_data() const;
1041
1042 /**
1043 * Equivalent to begin().
1044 *
1045 * @return Pointer, possibly null.
1046 */
1048
1049 /**
1050 * Synonym of const_begin(). Exists as standard container method (hence the odd formatting).
1051 *
1052 * @return See const_begin().
1053 */
1055
1056 /**
1057 * Synonym of const_end(). Exists as standard container method (hence the odd formatting).
1058 *
1059 * @return See const_end().
1060 */
1062
1063 /**
1064 * Returns `true` if and only if: `this->derefable_iterator(it) || (it == this->const_end())`.
1065 *
1066 * @param it
1067 * Iterator/pointer to check.
1068 * @return See above.
1069 */
1071
1072 /**
1073 * Returns `true` if and only if the given iterator points to an element within this blob's size() elements.
1074 * In particular, this is always `false` if empty(); and also when `it == this->const_end()`.
1075 *
1076 * @param it
1077 * Iterator/pointer to check.
1078 * @return See above.
1079 */
1081
1082 /**
1083 * Convenience accessor returning an immutable boost.asio buffer "view" into the entirety of the blob.
1084 * Equivalent to `const_buffer(const_data(), size())`.
1085 *
1086 * @return See above.
1087 */
1088 boost::asio::const_buffer const_buffer() const;
1089
1090 /**
1091 * Same as const_buffer() but the returned view is mutable.
1092 * Equivalent to `mutable_buffer(data(), size())`.
1093 *
1094 * @return See above.
1095 */
1096 boost::asio::mutable_buffer mutable_buffer();
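 /* A sketch of the boost.asio interop these two accessors enable (the socket `sock` and its type are
  * hypothetical; optional `Logger*` args omitted):
  *
  *   ~~~
  *   Basic_blob b(4096);
  *   const auto n_rcvd = sock.receive(b.mutable_buffer()); // Read directly into the blob's bytes.
  *   b.resize(n_rcvd);                                      // Trim to what was actually received.
  *   sock.send(b.const_buffer());                           // Send it back out; no copying involved.
  *   ~~~
  */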
1097
1098 /**
1099 * Returns a copy of the internally cached #Allocator_raw as set by a constructor or assign() or
1100 * assignment-operator, whichever happened last.
1101 *
1102 * @return See above.
1103 */
1105
1106protected:
1107 // Constants.
1108
1109 /// Our flow::log::Component.
1110 static constexpr Flow_log_component S_LOG_COMPONENT = Flow_log_component::S_UTIL;
1111
1112 // Methods.
1113
1114 /**
1115 * Impl of share_after_split_equally() but capable of emitting not just `*this` type (`Basic_blob<...>`)
1116 * but any sub-class (such as `Blob`/`Sharing_blob`) provided a functor like share_after_split_left() but returning
1117 * an object of that appropriate type.
1118 *
1119 * @tparam Emit_blob_func
 1120 * See share_after_split_equally(); however, it shall take the emitted type, which can
 1121 * be `*this` Basic_blob type or a sub-class.
1122 * @tparam Share_after_split_left_func
1123 * A callback with signature identical to share_after_split_left() but returning
1124 * the same type emitted by `Emit_blob_func`.
1125 * @param size
1126 * See share_after_split_equally().
1127 * @param headless_pool
1128 * See share_after_split_equally().
1129 * @param emit_blob_func
1130 * See `Emit_blob_func`.
1131 * @param logger_ptr
1132 * See share_after_split_equally().
1133 * @param share_after_split_left_func
1134 * See `Share_after_split_left_func`.
1135 */
1136 template<typename Emit_blob_func, typename Share_after_split_left_func>
1138 Emit_blob_func&& emit_blob_func,
1139 log::Logger* logger_ptr,
1140 Share_after_split_left_func&& share_after_split_left_func);
1141
1142private:
1143
1144 // Types.
1145
1146 /**
1147 * Internal deleter functor used if and only if #S_IS_VANILLA_ALLOC is `false` and therefore only with
1148 * #Buf_ptr being `boost::interprocess::shared_ptr` or
1149 * deleter-parameterized `unique_ptr`. Basically its task is to undo the
1150 * `m_alloc_raw.allocate()` call made when allocating a buffer in reserve(); the result of that call is
1151 * passed-to `shared_ptr::reset()` or `unique_ptr::reset()`; as is #m_alloc_raw (for any aux allocation needed,
1152 * but only for `shared_ptr` -- `unique_ptr` needs no aux data); as is
1153 * an instance of this Deleter_raw (to specifically dealloc the buffer when the ref-count reaches 0).
1154 * (In the `unique_ptr` case there is no ref-count per se; or one can think of it as a ref-count that equals 1.)
1155 *
1156 * Note that Deleter_raw is used only to dealloc the buffer actually controlled by the `shared_ptr` group
1157 * or `unique_ptr`. `shared_ptr` will use the #Allocator_raw directly to dealloc aux data. (We guess Deleter_raw
 1158 * is a separate argument to `shared_ptr` to support array deletion; `boost::interprocess::shared_ptr` does not
1159 * provide built-in support for `U[]` as the pointee type; but the deleter can do whatever it wants/needs.)
1160 *
1161 * Note: this is not used except with custom #Allocator_raw. With `std::allocator` the usual default `delete[]`
1162 * behavior is fine.
1163 */
1165 {
1166 public:
1167 // Types.
1168
1169 /**
1170 * Short-hand for the allocator's pointer type, pointing to Basic_blob::value_type.
1171 * This may or may not simply be `value_type*`; in cases including SHM-storing allocators without
1172 * a constant cross-process vaddr scheme it needs to be a fancy-pointer type instead (e.g.,
1173 * `boost::interprocess::offset_ptr<value_type>`).
1174 */
1175 using Pointer_raw = typename std::allocator_traits<Allocator_raw>::pointer;
1176
1177 /// For `boost::interprocess::shared_ptr` and `unique_ptr` compliance (hence the irregular capitalization).
1179
1180 // Constructors/destructor.
1181
1182 /**
1183 * Default ctor: Must never be invoked; suitable only for a null smart-pointer.
 1184 * Without this a `unique_ptr<..., Deleter_raw>` cannot be default-cted.
1185 */
1187
1188 /**
1189 * Constructs deleter by memorizing the allocator (of zero size if #Allocator_raw is stateless, usually)
1190 * used to allocate whatever shall be passed-to `operator()()`; and the size (in # of `value_type`s)
1191 * of the buffer allocated there. The latter is required, at least technically, because
1192 * `Allocator_raw::deallocate()` requires the value count, equal to that when `allocate()` was called,
1193 * to be passed-in. Many allocators probably don't really need this, as array size is typically recorded
1194 * invisibly near the array itself, but formally this is not guaranteed for all allocators.
1195 *
1196 * @param alloc_raw
1197 * Allocator to copy and store.
1198 * @param buf_sz
1199 * See above.
1200 */
1201 explicit Deleter_raw(const Allocator_raw& alloc_raw, size_type buf_sz);
1202
1203 // Methods.
1204
1205 /**
1206 * Deallocates using `Allocator_raw::deallocate()`, passing-in the supplied pointer (to `value_type`) `to_delete`
1207 * and the number of `value_type`s to delete (from ctor).
1208 *
1209 * @param to_delete
1210 * Pointer to buffer to delete as supplied by `shared_ptr` or `unique_ptr` internals.
1211 */
1212 void operator()(Pointer_raw to_delete);
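   /* Conceptually the above boils down to a one-liner along these lines (an illustrative sketch, not a
    * quote of the implementation; `m_buf_sz` names the remembered element count for illustration only):
    *
    *   ~~~
    *   m_alloc_raw->deallocate(to_delete, m_buf_sz); // m_alloc_raw: the optional<Allocator_raw> member below.
    *   ~~~
    */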
1213
1214 private:
1215 // Data.
1216
1217 /**
1218 * See ctor: the allocator that `operator()()` shall use to deallocate. For stateless allocators this
1219 * typically has size zero.
1220 *
1221 * ### What's with `optional<>`? ###
1222 * ...Okay, so actually this has size (whatever `optional` adds, probably a `bool`) + `sizeof(Allocator_raw)`,
1223 * the latter being indeed zero for stateless allocators. Why use `optional<>` though? Well, we only do
1224 * to support stateful allocators which cannot be default-cted; and our own default ctor requires that
1225 * #m_alloc_raw is initialized to *something*... even though it (by default ctor contract) will never be accessed.
1226 *
1227 * It is slightly annoying that we waste the extra space for `optional` internals even when `Allocator_raw`
1228 * is stateless (and it is often stateless!). Plus, when #Buf_ptr is `shared_ptr` instead of `unique_ptr`
1229 * our default ctor is not even needed. Probably some meta-programming thing could be done to avoid even this
1230 * overhead, but in my (ygoldfel) opinion the overhead is so minor, it does not even rise to the level of a to-do.
1231 */
1232 std::optional<Allocator_raw> m_alloc_raw;
1233
1234 /// See ctor and operator()(): the size of the buffer to deallocate.
1236 }; // class Deleter_raw
1237
1238 /**
1239 * The smart-pointer type used for #m_buf_ptr; a custom-allocator-and-SHM-friendly impl and parameterization is
1240 * used if necessary; otherwise a more typical concrete type is used.
1241 *
1242 * The following discussion assumes the more complex case wherein #S_SHARING is `true`. We discuss the simpler
1243 * converse case below that.
1244 *
1245 * Two things affect how #m_buf_ptr shall behave:
1246 * - Which type this resolves-to depending on #S_IS_VANILLA_ALLOC (ultimately #Allocator_raw). This affects
1247 * many key things but most relevantly how it is dereferenced. Namely:
1248 * - Typical `shared_ptr` (used with vanilla allocator) will internally store simply a raw `value_type*`
1249 * and dereference trivially. This, however, will not work with some custom allocators, particularly
1250 * SHM-heap ones (without a constant cross-process vaddr scheme), wherein a raw `T*` meaningful in
1251 * the original process is meaningless in another.
1252 * - `boost::shared_ptr` and `std::shared_ptr` both have custom-allocator support via
1253 * `allocate_shared()` and co. However, as of this writing, they are not SHM-friendly; or another
1254 * way of putting it is they don't support custom allocators fully: `Allocator::pointer` is ignored;
1255 * it is assumed to essentially be raw `value_type*`, in that the `shared_ptr` internally stores
1256 * a raw pointer. boost.interprocess refers to this as the impetus for implementing the following:
1257 * - `boost::interprocess::shared_ptr` (used with custom allocator) will internally store an
1258 * instance of `Allocator_raw::pointer` (to `value_type`) instead. To dereference it, its operators
1259 * such as `*` and `->` (etc.) will execute to properly translate to a raw `T*`.
1260 * The aforementioned `pointer` may simply be `value_type*` again; in which case there is no difference
1261 * to the standard `shared_ptr` situation; but it can instead be a fancy-pointer (actual technical term, yes,
1262 * in cppreference.com et al), in which case some custom code will run to translate some internal
1263 * data members (which have process-agnostic values) inside the fancy-pointer to a raw `T*`.
1264 * For example `boost::interprocess::offset_ptr<value_type>` does this by adding a stored offset to its
1265 * own `this`.
1266 * - How it is reset to a newly allocated buffer in reserve() (when needed).
1267 * - Typical `shared_ptr` is efficiently assigned using a `make_shared()` variant. However, here we store
1268 * a pointer to an array, not a single value (hence `<value_type[]>`); and we specifically want to avoid
1269 * any 0-initialization of the elements (per one of Basic_blob's promises). See reserve() which uses a
1270 * `make_shared()` variant that accomplishes all this.
1271 * - `boost::interprocess::shared_ptr` is reset differently due to a couple of restrictions, as it is made
1272 * to be usable in SHM (SHared Memory), specifically, plus it seems to refrain from tacking on every
1273 * normal `shared_ptr` feature. To wit: 1, `virtual` cannot be used; therefore the deleter type must be
1274 * declared at compile-time. 2, it has no special support for a native-array element-type (`value_type[]`).
1275 * Therefore it leaves that part up to the user: the buffer must be pre-allocated by the user
1276 * (and passed to `.reset()`); there is no `make_shared()` equivalent (which also means somewhat lower
1277 * perf, as aux data and user buffer are separately allocated and stored). Accordingly deletion is left
1278 * to the user, as there is no default deleter; one must be supplied. Thus:
1279 * - See reserve(); it calls `.reset()` as explained here, including using #m_alloc_raw to pre-allocate.
1280 * - See Deleter_raw, the deleter functor type an instance of which is saved by the `shared_ptr` to
1281 * invoke when ref-count reaches 0.
1282 *
1283 * Other than that, it's a `shared_ptr`; it works as usual.
1284 *
1285 * ### Why use typical `shared_ptr` at all? Won't the fancy-allocator-supporting one work for the vanilla case? ###
1286 * Yes, it would work. And there would be less code without this dichotomy (although the differences are,
 1287 * per above, local to this alias definition; and reserve() where it allocates the buffer). There are however
 1288 * reasons to prefer typical `shared_ptr` in the vanilla case (we choose `boost::shared_ptr` over `std::shared_ptr`;
 1289 * that discussion is elsewhere, but basically `boost::shared_ptr` is solid and full-featured/mature, though either
 1290 * choice would've worked). They are at least:
1291 * - It is much more frequently used, preceding and anticipating its acceptance into the STL standard, so
1292 * maturity and performance are likelier.
1293 * - Specifically it supports a perf-enhancing use mode: using `make_shared()` (and similar) instead of
1294 * `.reset(<raw ptr>)` (or similar ctor) replaces 2 allocs (1 for user data, 1 for aux data/ref-count)
1295 * with 1 (for both).
1296 * - If verbose logging in the deleter is desired its `virtual`-based type-erased deleter semantics make that
1297 * quite easy to achieve.
1298 *
1299 * ### The case where #S_SHARING is `false` ###
1300 * Firstly: if so then the method -- share() -- that would *ever* increment `Buf_ptr::use_count()` beyond 1
1301 * is simply not compiled. Therefore using any type of `shared_ptr` is a waste of RAM (on the ref-count)
1302 * and cycles (on aux memory allocation and ref-count math), albeit a minor one. Hence we use `unique_ptr`
1303 * in that case instead. Even so, the above #S_IS_VANILLA_ALLOC dichotomy still applies but is quite a bit
1304 * simpler to handle; it's a degenerate case in a way.
 1305 * - Typical `unique_ptr` already stores `Deleter::pointer` instead of `value_type*`. Therefore
 1306 * we can use it for both cases; in the vanilla case supplying no `Deleter` template param
 1307 * (the default `Deleter` has `pointer = value_type*`); otherwise supplying Deleter_raw whose
1308 * Deleter_raw::pointer comes from `Allocator_raw::pointer`. This also, same as with
1309 * `boost::interprocess::shared_ptr`, takes care of the dealloc upon being nullified or destroyed.
1310 * - As for initialization:
1311 * - With #S_IS_VANILLA_ALLOC at `true`: Similarly to using a special array-friendly `make_shared()` variant,
1312 * we use a special array-friendly `make_unique()` variant.
1313 * - Otherwise: As with `boost::interprocess::shared_ptr` we cannot `make_*()` -- though AFAIK without
1314 * any perf penalty (there is no aux data) -- but reserve() must be quite careful to also
1315 * replace `m_buf_ptr`'s deleter (which `.reset()` does not do... while `boost::interprocess::shared_ptr`
1316 * does).
1317 */
1318 using Buf_ptr = std::conditional_t<S_IS_VANILLA_ALLOC,
1319 std::conditional_t<S_SHARING,
1320 boost::shared_ptr<value_type[]>,
1321 boost::movelib::unique_ptr<value_type[]>>,
1322 std::conditional_t<S_SHARING,
 1323 boost::interprocess::shared_ptr
 1324 <value_type, Allocator_raw, Deleter_raw>,
 1325 boost::movelib::unique_ptr<value_type, Deleter_raw>>>;
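 /* To make the fancy-pointer point concrete -- a standalone sketch (independent of Basic_blob) of how
  * boost::interprocess::offset_ptr, one such `Allocator_raw::pointer` type, dereferences:
  *
  *   ~~~
  *   #include <boost/interprocess/offset_ptr.hpp>
  *   int x = 42;
  *   boost::interprocess::offset_ptr<int> p = &x; // Stores the distance from p's own address to x, not a raw vaddr.
  *   assert(*p == 42);                            // operator*() re-adds that offset to p's address: valid in any mapping.
  *   ~~~
  */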
1326
1327 // Methods.
1328
1329 /**
1330 * The body of swap(), except for the part that swaps (or decides not to swap) #m_alloc_raw. As of this writing
1331 * used by swap() and assign() (move overload) which perform mutually different steps w/r/t #m_alloc_raw.
1332 *
1333 * @param other
1334 * See swap().
1335 * @param logger_ptr
1336 * See swap().
1337 */
1338 void swap_impl(Basic_blob& other, log::Logger* logger_ptr = 0);
1339
1340 /**
1341 * Returns iterator-to-mutable equivalent to given iterator-to-immutable.
1342 *
1343 * @param it
1344 * Self-explanatory. No assumptions are made about valid_iterator() or derefable_iterator() status.
1345 * @return Iterator to same location as `it`.
1346 */
1348
1349 // Data.
1350
1351 /**
1352 * See get_allocator(): copy of the allocator supplied by the user (though, if #Allocator_raw is stateless,
1353 * it is typically defaulted to `Allocator_raw()`), as set by a constructor or assign() or
1354 * assignment-operator, whichever happened last. Used exclusively when allocating and deallocating
1355 * #m_buf_ptr in the *next* reserve() (potentially).
1356 *
1357 * By the rules of `Allocator_aware_container` (see cppreference.com):
1358 * - If `*this` is move-cted: member move-cted from source member counterpart.
1359 * - If `*this` is move-assigned: member move-assigned from source member counterpart if
1360 * `std::allocator_traits<Allocator_raw>::propagate_on_container_move_assignment::value == true` (else untouched).
1361 * - If `*this` is copy-cted: member set to
1362 * `std::allocator_traits<Allocator_raw>::select_on_container_copy_construction()` (pass-in source member
1363 * counterpart).
1364 * - If `*this` is copy-assigned: member copy-assigned if
1365 * `std::allocator_traits<Allocator_raw>::propagate_on_container_copy_assignment::value == true` (else untouched).
1366 * - If `*this` is `swap()`ed: member ADL-`swap()`ed with source member counterpart if
1367 * `std::allocator_traits<Allocator_raw>::propagate_on_container_swap::value == true` (else untouched).
1368 * - Otherwise this is supplied via a non-copy/move ctor arg by user.
1369 *
1370 * ### Specially treated value ###
 1371 * If #Allocator_raw is `std::allocator<value_type>` (as opposed to `something_else<value_type>`), then
1372 * #m_alloc_raw (while guaranteed set to the zero-sized copy of `std::allocator<value_type>()`) is never
1373 * in practice touched (outside of the above-mentioned moves/copies/swaps, though they also do nothing in reality
1374 * for this stateless allocator). This value by definition means we are to allocate on the regular heap;
1375 * and as of this writing for perf/other reasons we choose to use a vanilla
1376 * `*_ptr` with its default alloc-dealloc APIs (which perform `new[]`-`delete[]` respectively); we do not pass-in
1377 * #m_alloc_raw anywhere. See #Buf_ptr doc header for more. If we did pass it in to
1378 * `allocate_shared*()` or `boost::interprocess::shared_ptr::reset` the end result would be functionally
1379 * the same (`std::allocator::[de]allocate()` would get called; these call `new[]`/`delete[]`).
1380 *
1381 * ### Relationship between #m_alloc_raw and the allocator/deleter in #m_buf_ptr ###
1382 * (This is only applicable if #S_IS_VANILLA_ALLOC is `false`.)
1383 * #m_buf_ptr caches #m_alloc_raw internally in its centrally linked data. Ordinarily, then, they compare as equal.
1384 * In the corner case where (1) move-assign or copy-assign or swap() was used on `*this`, *and*
1385 * (2) #Allocator_raw is stateful and *can* compare unequal (e.g., `boost::interprocess::allocator`):
1386 * they may come to compare as unequal. It is, however, not (in our case) particularly important:
1387 * #m_alloc_raw affects the *next* reserve() (potentially); the thing stored in #m_buf_ptr affects the logic when
1388 * the underlying buffer is next deallocated. The two don't depend on each other.
1389 */
1391
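 /* For reference, the propagation rules above can be queried via std::allocator_traits; e.g., for the
  * vanilla allocator (a quick illustrative check, not part of this class):
  *
  *   ~~~
  *   using Traits = std::allocator_traits<std::allocator<value_type>>;
  *   static_assert(Traits::propagate_on_container_move_assignment::value);  // Move-assign carries it over.
  *   static_assert(!Traits::propagate_on_container_copy_assignment::value); // Copy-assign leaves it untouched.
  *   static_assert(Traits::is_always_equal::value);                         // Stateless: instances compare equal.
  *   ~~~
  */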
1392 /**
1393 * Pointer to currently allocated buffer of size #m_capacity; null if and only if `zero() == true`.
1394 * Buffer is auto-freed at destruction; or in make_zero(); but only if by that point any share()-generated
1395 * other `Basic_blob`s have done the same. Otherwise the ref-count is merely decremented.
1396 * In the case of #S_SHARING being `false`, one can think of this ref-count as being always at most 1;
1397 * since share() is not compiled, and as a corollary a `unique_ptr` is used to avoid perf costs.
1398 * Thus make_zero() and dtor always dealloc in that case.
1399 *
1400 * For performance, we never initialize the values in the array to zeroes or otherwise.
1401 * This contrasts with `vector` and most other standard or Boost containers which use an `allocator` to
1402 * allocate any internal buffer, and most allocators default-construct (which means assign 0 in case of `uint8_t`)
1403 * any elements within allocated buffers, immediately upon the actual allocation on heap. As noted in doc header,
1404 * this behavior is surprisingly difficult to avoid (involving custom allocators and such).
1405 */
1407
1408 /// See capacity(); but #m_capacity is meaningless (and containing unknown value) if `!m_buf_ptr` (i.e., zero()).
1410
1411 /// See start(); but #m_start is meaningless (and containing unknown value) if `!m_buf_ptr` (i.e., zero()).
1413
1414 /// See size(); but #m_size is meaningless (and containing unknown value) if `!m_buf_ptr` (i.e., zero()).
1416}; // class Basic_blob
1417
1418// Free functions: in *_fwd.hpp.
1419
1420// Template implementations.
1421
 1422// m_buf_ptr initialized to null pointer. m_capacity and m_size remain uninit (meaningless until m_buf_ptr changes).
1423template<typename Allocator, bool S_SHARING_ALLOWED>
1425 m_alloc_raw(alloc_raw) // Copy allocator; stateless allocator should have size 0 (no-op for the processor).
1426{
1427 // OK.
1428}
1429
1430template<typename Allocator, bool S_SHARING_ALLOWED>
1432 (size_type size, log::Logger* logger_ptr, const Allocator_raw& alloc_raw) :
1433
1434 Basic_blob(alloc_raw) // Delegate.
1435{
1436 resize(size, 0, logger_ptr);
1437}
1438
1439template<typename Allocator, bool S_SHARING_ALLOWED>
1441 // Follow rules established in m_alloc_raw doc header:
1442 m_alloc_raw(std::allocator_traits<Allocator_raw>::select_on_container_copy_construction(src.m_alloc_raw))
1443{
1444 /* What we want to do here, ignoring allocators, is (for concision): `assign(src, logger_ptr);`
1445 * However copy-assignment also must do something different w/r/t m_alloc_raw than what we had to do above
1446 * (again see m_alloc_raw doc header); so just copy/paste the rest of what operator=(copy) would do.
1447 * Skipping most comments therein, as they don't much apply in our case. Code reuse level is all-right;
1448 * and we can skip the `if` from assign(). */
1449 assign_copy(src.const_buffer(), logger_ptr);
1450}
1451
1452template<typename Allocator, bool S_SHARING_ALLOWED>
1454 // Follow rules established in m_alloc_raw doc header:
1455 m_alloc_raw(std::move(moved_src.m_alloc_raw))
1456{
1457 /* Similar to copy ctor above, do the equivalent of assign(move(moved_src), logger_ptr) minus the allocator work.
1458 * That reduces to simply: */
1459 swap_impl(moved_src, logger_ptr);
1460}
1461
1462template<typename Allocator, bool S_SHARING_ALLOWED>
1464
1465template<typename Allocator, bool S_SHARING_ALLOWED>
1468{
1469 if (this != &src)
1470 {
1471 // Take care of the "Sharing blobs" corner case from our doc header. The rationale for this is pointed out there.
1472 if constexpr(S_SHARING)
1473 {
1474 assert(!blobs_sharing(*this, src));
1475 }
1476
1477 // For m_alloc_raw: Follow rules established in m_alloc_raw doc header.
1478 if constexpr(std::allocator_traits<Allocator_raw>::propagate_on_container_copy_assignment::value)
1479 {
1480 m_alloc_raw = src.m_alloc_raw;
1481 /* Let's consider what just happened. Allocator_raw's policy is to, yes, copy m_alloc_raw from
1482 * src to *this; so we did. Now suppose !zero() and !src.zero(); and that old m_alloc_raw != src.m_alloc_raw.
1483 * (E.g., boost::interprocess::allocator<>s with same type but set to different SHM segments S1 and S2 would
1484 * compare unequal.) What needs to happen is *m_buf_ptr buffer must be freed (more accurately, its
1485 * shared_ptr ref_count decremented and thus buffer possibly freed if not share()d); then allocated; then
1486 * contents linear-copied from *src.m_buf_ptr buffer to *m_buf_ptr buffer. assign_copy() below naively
1487 * does all that; but will it work since we've thrown away the old m_alloc_raw? Let's go through it:
1488 * -# Basically m_buf_ptr.reset(<new buffer ptr>) is kinda like m_buf_ptr.reset() followed by
1489 * m_buf_ptr.reset(<new buffer ptr>); the former part is the possible-dealloc. So will it work?
1490 * Yes: shared_ptr<> stores the buffer and aux data (ref-count, allocator, deleter) in one central
1491 * place shared with other shared_ptr<>s in its group. The .reset() dec-refs the ref-count and dissociates
1492 * m_buf_ptr from the central place; if the ref-count is 0, then it also deallocs the buffer and the
1493 * aux data and eliminates the central place... using the allocator/deleter cached in that central
1494 * place itself. Hence the old m_alloc_raw's copy will go in effect when the nullifying .reset() part
1495 * happens.
1496 * -# So then m_buf_ptr is .reset() to the newly allocated buffer which will be allocated by us explicitly
1497 * using m_alloc_raw (which we've replaced just now).
1498 * -# Then the linear copy in assign_copy() is uncontroversial; everything is allocated before this starts. */
1499 }
1500 /* else: Leave m_alloc_raw alone. Everything should be fine once we assign_copy() below: existing m_buf_ptr
1501 * (if not null) will dealloc without any allocator-related disruption/change; then it'll be reset to a new buffer
1502 * with contents linear-copied over. The unchanged m_alloc_raw will be used for the *next* allocating reserve()
1503 * if any. */
1504
1505 // Now to the relatively uncontroversial stuff. To copy the rest we'll just do:
1506
1507 /* resize(N, 0); copy over N bytes. Note that it disallows `N > capacity()` unless zero(), but they can explicitly
1508 * make_zero() before calling us, if they are explicitly OK with the performance cost of the reallocation that will
1509 * trigger. This is all as advertised; and it satisfies the top priority listed just below. */
1510 assign_copy(src.const_buffer(), logger_ptr);
1511
1512 /* ^-- Corner case: Suppose src.size() == 0. The above then reduces to: if (!zero()) { m_size = m_start = 0; }
1513 * (Look inside its source code; you'll see.)
1514 *
1515 * We guarantee certain specific behavior in doc header, and below implements that.
1516 * We will indicate how it does so; but ALSO why those promises are made in the first place (rationale).
1517 *
1518 * In fact, we will proceed under the following priorities, highest to lowest:
1519 * - User CAN switch order of our priorities sufficiently easily.
1520 * - Be as fast as possible, excluding minimizing constant-time operations such as scalar assignments.
1521 * - Use as little memory in *this as possible.
1522 *
1523 * We will NOT attempt to make *this have the same internal structure as src as its own independent virtue.
1524 * That doesn't seem useful and would make things more difficult obviously. Now:
1525 *
1526 * Either src.zero(), or not; but regardless src.size() == 0. Our options are essentially these:
1527 * make_zero(); or resize(0, 0). (We could also perhaps copy src.m_buf_ptr[] and then adjust m_size = 0, but
1528 * this is clearly slower and only gains the thing we specifically pointed out is not a virtue above.)
1529 *
1530 * Let's break down those 2 courses of action, by situation, then:
1531 * - zero() && src.zero(): make_zero() and resize(0, 0) are equivalent; so nothing to decide. Either would be fine.
1532 * - zero() && !src.zero(): Ditto.
1533 * - !zero() && !src.zero(): make_zero() is slower than resize(0, 0); and moreover the latter may mean faster
1534 * operations subsequently, if they subsequently choose to reserve(N) (including resize(N), etc.) to
1535 * N <= capacity(). So resize(0, 0) wins according to the priority order listed above.
1536 * - !zero() && src.zero(): Ditto.
1537 *
1538 * So then we decided: resize(0, 0). And, indeed, resize(0, 0) is equivalent to the above snippet.
1539 * So, we're good. */
1540 } // if (this != &src)
1541
1542 return *this;
1543} // Basic_blob::assign(copy)
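/* A usage sketch of the copy path above (illustrative only: vanilla allocator, template args and optional
 * `Logger*` args omitted; assume the source bytes were filled in meanwhile):
 *
 *   ~~~
 *   Basic_blob src(100), dst(200);
 *   dst = src;        // OK: dst.capacity() == 200 >= 100; linear copy, no allocation; dst.size() == 100 now.
 *   Basic_blob small(10);
 *   // small = src;   // Would trip the no-growth assert in reserve() (10 < 100); instead:
 *   small.make_zero();
 *   small = src;      // Fine: a fresh 100-byte buffer is allocated, then filled.
 *   ~~~
 */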
1544
1545template<typename Allocator, bool S_SHARING_ALLOWED>
1547{
1548 return assign(src);
1549}
1550
1551template<typename Allocator, bool S_SHARING_ALLOWED>
1554{
1555 if (this != &moved_src)
1556 {
1557 // For m_alloc_raw: Follow rules established in m_alloc_raw doc header.
1558 if constexpr(std::allocator_traits<Allocator_raw>::propagate_on_container_move_assignment::value)
1559 {
1560 m_alloc_raw = std::move(moved_src.m_alloc_raw);
1561 /* Let's consider what just happened. Allocator_raw's policy is to, yes, move m_alloc_raw from
1562 * src to *this; so we did -- I guess src.m_alloc_raw is some null-ish empty-ish thing now.
1563 * Now suppose !zero() and !moved_src.zero(); and that old m_alloc_raw != new src.m_alloc_raw.
 1564 * (That is fairly exotic; at the least Allocator_raw must be stateful to begin with.
 1565 * E.g., boost::interprocess::allocator<>s with same type but set to different SHM pools S1 and S2 would
 1566 * compare unequal.) What needs to happen is m_buf_ptr buffer must be freed (more accurately, its
 1567 * shared_ptr ref_count decremented and thus buffer possibly freed if not share()d; plus the ptr nullified); then ideally
1568 * simply swap m_buf_ptr (which will get moved_src.m_buf_ptr old value) and moved_src.m_buf_ptr (which will
1569 * become null). That's what we do below. So will it work?
1570 * -# The m_buf_ptr.reset() below will work fine for the same reason the long comment in assign(copy)
1571 * states that nullifying m_buf_ptr, even with m_alloc_raw already replaced, will still use old m_alloc_raw:
1572 * for it is stored inside the central area linked-to in the m_buf_ptr being nullified.
 1573 * -# The swap is absolutely smooth and fine. And indeed by making that swap we'll have ensured this->m_alloc_raw
1574 * and the allocator stored inside m_buf_ptr are equal. */
1575 }
1576 /* else: Leave m_alloc_raw alone. What does it mean though? Let's consider it. Suppose !zero() and
1577 * !moved_src.zero(), and the two `m_alloc_raw`s do not compare equal (e.g., boost::interprocess::allocator<>s
1578 * with mutually differing SHM-pools). m_buf_ptr.reset() below will work fine: m_alloc_raw is unchanged so no
1579 * controversy. However once m_buf_ptr is moved from moved_src.m_buf_ptr, it will (same reason as above --
1580 * it is cached) keep using old m_alloc_raw; meaning if/when it is .reset() or destroyed the old allocator
1581 * will deallocate. That is in fact what we want. It might seem odd that m_alloc_raw won't match what's
1582 * used for this->m_buf_ptr, but it is fine: m_alloc_raw affects the *next* allocating reserve().
1583 * (And, as usual, if Allocator_raw is stateless, then none of this matters.) */
1584
1585 // Now to the relatively uncontroversial stuff.
1586
1587 make_zero(logger_ptr); // Spoiler alert: it's: if (!zero()) { m_buf_ptr.reset(); }
1588 // So now m_buf_ptr is null; hence the other m_* (other than m_alloc_raw) are meaningless.
1589
1590 swap_impl(moved_src, logger_ptr);
1591 // Now *this is equal to old moved_src; new moved_src is valid and zero(); and nothing was copied -- as advertised.
1592 } // if (this != &moved_src)
1593
1594 return *this;
1595} // Basic_blob::assign(move)
1596
1597template<typename Allocator, bool S_SHARING_ALLOWED>
1599{
1600 return assign(std::move(moved_src));
1601}
1602
1603template<typename Allocator, bool S_SHARING_ALLOWED>
1605{
1606 using std::swap;
1607
1608 if (this != &other)
1609 {
1610 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1611 {
1612 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1613 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] (internal buffer sized [" << capacity() << "]) "
1614 "swapping <=> Blob [" << &other << "] (internal buffer sized "
1615 "[" << other.capacity() << "]).");
1616 }
1617
1618 swap(m_buf_ptr, other.m_buf_ptr);
1619
1620 /* Some compilers in some build configs issue maybe-uninitialized warning here, when `other` is as-if
1621 * default-cted (hence the following three are intentionally uninitialized), particularly with heavy
1622 * auto-inlining by the optimizer. False positive in our case, and in Blob-land we try not to give away perf
1623 * at all so: */
1624#pragma GCC diagnostic push
1625#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
1626#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
1627#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
1628
1629 swap(m_capacity, other.m_capacity); // Meaningless if zero() but harmless.
1630 swap(m_size, other.m_size); // Ditto.
1631 swap(m_start, other.m_start); // Ditto.
1632
1633#pragma GCC diagnostic pop
1634
1635 /* Skip m_alloc_raw: swap() has to do it by itself; we are called from it + move-assign/ctor which require
1636 * mutually different treatment for m_alloc_raw. */
1637 }
1638} // Basic_blob::swap_impl()
1639
1640template<typename Allocator, bool S_SHARING_ALLOWED>
1642{
1643 using std::swap;
1644
1645 // For m_alloc_raw: Follow rules established in m_alloc_raw doc header.
1646 if constexpr(std::allocator_traits<Allocator_raw>::propagate_on_container_swap::value)
1647 {
1648 if (&this->m_alloc_raw != &other.m_alloc_raw) // @todo Is this redundant? Or otherwise unnecessary?
1649 {
1650 swap(m_alloc_raw, other.m_alloc_raw);
1651 }
1652 }
1653 /* else: Leave both `m_alloc_raw`s alone. What does it mean though? Well, see either assign(); the same
1654 * theme applies here: Each m_buf_ptr's cached allocator/deleter will potentially not equal its respective
1655 * m_alloc_raw anymore; but the latter affects only the *next* allocating reserve(); so it is fine.
1656 * That said, to quote cppreference.com: "Note: swapping two containers with unequal allocators if
1657 * propagate_on_container_swap is false is undefined behavior." So, while it will work for us, trying such
1658 * a swap() would be illegal user behavior in any case. */
1659
1660 // Now to the relatively uncontroversial stuff.
1661 swap_impl(other, logger_ptr);
1662}
1663
1664template<typename Allocator, bool S_SHARING_ALLOWED>
1667{
1668 return blob1.swap(blob2, logger_ptr);
1669}
1670
1671template<typename Allocator, bool S_SHARING_ALLOWED>
1673{
1674 static_assert(S_SHARING,
1675 "Do not invoke (and thus instantiate) share() or derived methods unless you set the S_SHARING_ALLOWED "
1676 "template parameter to true. Sharing will be enabled at a small perf cost; see class doc header.");
1677 // Note: The guys that call it will cause the same check to occur, since instantiating them will instantiate us.
1678
1679 assert(!zero()); // As advertised.
1680
1681 Basic_blob sharing_blob(m_alloc_raw, logger_ptr); // Null Basic_blob (let that ctor log via same Logger if any).
1682 assert(!sharing_blob.m_buf_ptr);
1683 sharing_blob.m_buf_ptr = m_buf_ptr;
1684 // These are currently (as of this writing) uninitialized (possibly garbage).
1685 sharing_blob.m_capacity = m_capacity;
1686 sharing_blob.m_start = m_start;
1687 sharing_blob.m_size = m_size;
1688
1689 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1690 {
1691 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1693 ("Blob [" << this << "] shared with new Blob [" << &sharing_blob << "]; ref-count incremented.");
1694 }
1695
1696 return sharing_blob;
1697}
1698
1699template<typename Allocator, bool S_SHARING_ALLOWED>
1702{
1703 if (lt_size > size())
1704 {
1705 lt_size = size();
1706 }
1707
1708 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1709 {
1710 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1711 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] shall be shared with new Blob, splitting off the first "
1712 "[" << lt_size << "] values into that one and leaving the remaining "
1713 "[" << (size() - lt_size) << "] in this one.");
1714 }
1715
1716 auto sharing_blob = share(logger_ptr); // sharing_blob sub-Basic_blob is equal to *this sub-Basic_blob. Adjust:
1717 sharing_blob.resize(lt_size); // Note: sharing_blob.start() remains unchanged.
1718 start_past_prefix_inc(lt_size);
1719
1720 return sharing_blob;
1721}
1722
1723template<typename Allocator, bool S_SHARING_ALLOWED>
1726{
1727 if (rt_size > size())
1728 {
1729 rt_size = size();
1730 }
1731
1732 const auto lt_size = size() - rt_size;
1733 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1734 {
1735 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1736 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] shall be shared with new Blob, splitting off "
1737 "the last [" << rt_size << "] values into that one and leaving the "
1738 "remaining [" << lt_size << "] in this one.");
1739 }
1740
1741 auto sharing_blob = share(logger_ptr); // sharing_blob sub-Basic_blob is equal to *this sub-Basic_blob. Adjust:
1742 resize(lt_size); // Note: start() remains unchanged.
1743 sharing_blob.start_past_prefix_inc(lt_size);
1744
1745 return sharing_blob;
1746}
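/* A sketch tying share() and the two split operations together (S_SHARING_ALLOWED=true assumed; template
 * args and optional `Logger*` args omitted):
 *
 *   ~~~
 *   Basic_blob whole(100);                        // One underlying 100-byte buffer.
 *   auto hdr = whole.share_after_split_left(10);  // hdr: bytes [0, 10); whole: now bytes [10, 100).
 *   auto trl = whole.share_after_split_right(5);  // trl: bytes [95, 100); whole: now bytes [10, 95).
 *   // All three co-own the same buffer; it is deallocated only once the last co-owner make_zero()s or dies.
 *   ~~~
 */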
1747
1748template<typename Allocator, bool S_SHARING_ALLOWED>
1749template<typename Emit_blob_func, typename Share_after_split_left_func>
1751 (size_type size, bool headless_pool, Emit_blob_func&& emit_blob_func, log::Logger* logger_ptr,
1752 Share_after_split_left_func&& share_after_split_left_func)
1753{
1754 assert(size != 0);
1755 assert(!empty());
1756
1757 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1758 {
1759 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1760 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] of size [" << this->size() << "] shall be split into "
1761 "adjacent sharing sub-Blobs of size [" << size << "] each "
1762 "(last one possibly smaller).");
1763 }
1764
1765 do
1766 {
1767 emit_blob_func(share_after_split_left_func(size, logger_ptr)); // share_after_split_left_func() logs plenty.
1768 }
1769 while (!empty());
1770
1771 if (headless_pool)
1772 {
1773 make_zero(logger_ptr);
1774 }
1775} // Basic_blob::share_after_split_equally_impl()
1776
1777template<typename Allocator, bool S_SHARING_ALLOWED>
1778template<typename Emit_blob_func>
1780 Emit_blob_func&& emit_blob_func,
1781 log::Logger* logger_ptr)
1782{
1783 share_after_split_equally_impl(size, headless_pool, std::move(emit_blob_func), logger_ptr,
1784 [this](size_type lt_size, log::Logger* logger_ptr) -> Basic_blob
1785 {
1786 return share_after_split_left(lt_size, logger_ptr);
1787 });
1788}
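/* A pool-splitting sketch using the functor overload just defined (S_SHARING_ALLOWED=true assumed; the
 * container and lambda are the caller's choice):
 *
 *   ~~~
 *   Basic_blob pool(4096);                     // One big up-front allocation.
 *   std::vector<Basic_blob> chunks;
 *   pool.share_after_split_equally(1024, true, // headless_pool == true: `pool` itself ends up zero().
 *                                  [&](Basic_blob&& b) { chunks.push_back(std::move(b)); },
 *                                  nullptr);
 *   // chunks now holds 4 sharing blobs of 1024 bytes each, all co-owning the original 4096-byte buffer.
 *   ~~~
 */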
1789
1790template<typename Allocator, bool S_SHARING_ALLOWED>
1791template<typename Blob_container>
1793 (size_type size, bool headless_pool, Blob_container* out_blobs_ptr, log::Logger* logger_ptr)
1794{
1795 // If changing this please see Blob_with_log_context::<same method>().
1796
1797 assert(out_blobs_ptr);
1798 share_after_split_equally(size, headless_pool, [&](Basic_blob&& blob_moved)
1799 {
1800 out_blobs_ptr->push_back(std::move(blob_moved));
1801 }, logger_ptr);
1802}
1803
1804template<typename Allocator, bool S_SHARING_ALLOWED>
1805template<typename Blob_ptr_container>
1807 bool headless_pool,
1808 Blob_ptr_container* out_blobs_ptr,
1809 log::Logger* logger_ptr)
1810{
1811 // If changing this please see Blob_with_log_context::<same method>().
1812
1813 // By documented requirements this should be, like, <...>_ptr<Basic_blob>.
1814 using Ptr = typename Blob_ptr_container::value_type;
1815
1816 assert(out_blobs_ptr);
1817
1818 share_after_split_equally(size, headless_pool, [&](Basic_blob&& blob_moved)
1819 {
1820 out_blobs_ptr->push_back(Ptr(new Basic_blob(std::move(blob_moved))));
1821 }, logger_ptr);
1822}
1823
1824template<typename Allocator, bool S_SHARING_ALLOWED>
1827{
1828 static_assert(S_SHARING_ALLOWED,
1829 "blobs_sharing() would only make sense on `Basic_blob`s with S_SHARING_ALLOWED=true. "
1830 "Even if we were to allow this to instantiate (compile) it would always return false.");
1831
 1832 return ((!blob1.zero()) && (!blob2.zero())) // Can't co-own a buffer if it doesn't own a buffer.
1833 && ((&blob1 == &blob2) // Same object => same buffer.
1834 // Only share() (as of this writing) can lead to the underlying buffer's start ptr being identical.
1835 || ((blob1.begin() - blob1.start())
1836 == (blob2.begin() - blob2.start())));
1837 // @todo Maybe throw in assert(blob1.capacity() == blob2.capacity()), if `true` is being returned.
1838}
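/* A quick sketch of what the above predicate reports (S_SHARING_ALLOWED=true assumed; `Logger*` omitted):
 *
 *   ~~~
 *   Basic_blob a(100);
 *   auto b = a.share();           // Same underlying buffer, same ref-count group.
 *   assert(blobs_sharing(a, b));  // True: co-owners, even if their [start(), size()) windows differ.
 *   Basic_blob c(100);
 *   assert(!blobs_sharing(a, c)); // False: separately allocated buffer.
 *   ~~~
 */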
1839
1840template<typename Allocator, bool S_SHARING_ALLOWED>
1842{
1843 return zero() ? 0 : m_size; // Note that zero() may or may not be true if we return 0.
1844}
1845
1846template<typename Allocator, bool S_SHARING_ALLOWED>
1848{
1849 return zero() ? 0 : m_start; // Note that zero() may or may not be true if we return 0.
1850}
1851
1852template<typename Allocator, bool S_SHARING_ALLOWED>
1854{
1855 return size() == 0; // Note that zero() may or may not be true if we return true.
1856}
1857
1858template<typename Allocator, bool S_SHARING_ALLOWED>
1860{
 1861 return zero() ? 0 : m_capacity; // Note that (!zero()) <=> we return non-zero. (m_capacity >= 1 if !zero().)
1862}
1863
1864template<typename Allocator, bool S_SHARING_ALLOWED>
1866{
1867 return !m_buf_ptr;
1868}
1869
1870template<typename Allocator, bool S_SHARING_ALLOWED>
1872{
1873 using boost::make_shared_noinit;
1874 using boost::shared_ptr;
1875 using std::numeric_limits;
1876
1877 /* As advertised do not allow enlarging existing buffer. They can call make_zero() though (but must do so consciously
1878 * hence considering the performance impact). */
1879 assert(zero() || ((new_capacity <= m_capacity) && (m_capacity > 0)));
1880
1881 /* OK, but what if new_capacity < m_capacity? Then post-condition (see below) is already satisfied, and it's fastest
1882 * to do nothing. If user believes lower memory use is higher-priority, they can explicitly call make_zero() first
1883 * but must make conscious decision to do so. */
1884
1885 if (zero() && (new_capacity != 0))
1886 {
1887 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1888 {
1889 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1890 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] "
1891 "allocating internal buffer sized [" << new_capacity << "].");
1892 }
1893
1894 if (new_capacity <= size_type(numeric_limits<std::ptrdiff_t>::max())) // (See explanation near bottom of method.)
1895 {
1896 /* Time to (1) allocate the buffer; (2) save the pointer; (3) ensure it is deallocated at the right time
1897 * and with the right steps. Due to Allocator_raw support this is a bit more complex than usual. Please
1898 * (1) see class doc header "Custom allocator" section; and (2) read Buf_ptr alias doc header for key background;
1899 * then come back here. */
1900
1901 if constexpr(S_IS_VANILLA_ALLOC)
1902 {
1903 /* In this case they specified std::allocator, so we are to just allocate/deallocate in regular heap using
1904 * new[]/delete[]. Hence we don't need to even use actual std::allocator; we know by definition it would
1905 * use new[]/delete[]. So simply use typical ..._ptr initialization. Caveats are unrelated to allocators:
1906 * - For some extra TRACE-logging -- if enabled! -- use an otherwise-vanilla logging deleter.
1907 * - Unnecessary in case of unique_ptr: dealloc always occurs in make_zero() or dtor and can be logged
1908 * there.
1909 * - If doing so (note it implies we've given up on performance) we cannot, and do not, use
1910 * make_shared*(); the use of custom deleter requires the .reset() form of init. */
1911
1912 /* If TRACE currently disabled, then skip the custom deleter that logs about dealloc. (TRACE may be enabled
1913 * by that point; but, hey, that is life.) This is for perf. */
1914 if constexpr(S_SHARING)
1915 {
1916 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1917 {
1918 /* This ensures delete[] call when m_buf_ptr ref-count reaches 0.
1919 * As advertised, for performance, the memory is NOT initialized. */
1920 m_buf_ptr.reset(new value_type[new_capacity],
1921 // Careful! *this might be gone if some other share()ing obj is the one that 0s ref-count.
1922 [logger_ptr, original_blob = this, new_capacity]
1923 (value_type* buf_ptr)
1924 {
1925 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1926 FLOW_LOG_TRACE("Deallocating internal buffer sized [" << new_capacity << "] originally allocated by "
1927 "Blob [" << original_blob << "]; note that Blob may now be gone and furthermore another "
1928 "Blob might live at that address now. A message immediately preceding this one should "
1929 "indicate the last Blob to give up ownership of the internal buffer.");
1930 // Finally just do what the default one would've done, as we've done our custom thing (logging).
1931 delete [] buf_ptr;
1932 });
1933 }
1934 else // if (!should_log()): No logging deleter; just delete[] it.
1935 {
1936 /* This executes `new value_type[new_capacity]` and ensures delete[] when m_buf_ptr ref-count reaches 0.
1937 * As advertised, for performance, the memory is NOT initialized. */
1938 m_buf_ptr = make_shared_noinit<value_type[]>(new_capacity);
1939 }
1940 } // if constexpr(S_SHARING)
1941 else // if constexpr(!S_SHARING)
1942 {
1943 m_buf_ptr = boost::movelib::make_unique_definit<value_type[]>(new_capacity);
1944 // Again -- the logging in make_zero() (and Blob_with_log_context dtor) is sufficient.
1945 }
1946 } // if constexpr(S_IS_VANILLA_ALLOC)
1947 else // if constexpr(!S_IS_VANILLA_ALLOC)
1948 {
1949 /* Fancy (well, potentially) allocator time. Again, if you've read the Buf_ptr and Deleter_raw doc headers,
1950 * you'll know what's going on. */
1951
1952 if constexpr(S_SHARING)
1953 {
1954 m_buf_ptr.reset(m_alloc_raw.allocate(new_capacity), // Raw-allocate via Allocator_raw! No value-init occurs.
1955
1956 /* Let them allocate aux data (ref count block) via Allocator_raw::allocate()
1957 * (and dealloc it -- ref count block -- via Allocator_raw::deallocate())!
1958 * Have them store internal ptr bits as `Allocator_raw::pointer`s, not
1959 * necessarily raw `value_type*`s! */
1960 m_alloc_raw,
1961
1962 /* When the time comes to dealloc, invoke this guy like: D(<the ptr>)! It'll
1963 * perform m_alloc_raw.deallocate(<what .allocate() returned>, n).
1964 * Since only we happen to know the size of how much we actually allocated, we pass that info
1965 * into the Deleter_raw as well, as it needs to know the `n` to pass to
1966 * m_alloc_raw.deallocate(p, n). */
1967 Deleter_raw(m_alloc_raw, new_capacity));
1968 /* Note: Unlike the S_IS_VANILLA_ALLOC=true case above, here we omit any attempt to log at the time
1969 * of dealloc, even if the verbosity is currently set high enough. It is not practical to achieve:
1970 * Recall that the assumptions we take for granted when dealing with std::allocator/regular heap
1971 * may no longer apply when dealing with an arbitrary allocator/potentially SHM-heap. To be able
1972 * to log at dealloc time, the Deleter_raw we create would need to store a Logger*. Sure, we
1973 * could pass-in logger_ptr and Deleter_raw could store it; but now recall that we do not
1974 * store a Logger* in `*this` and why: because (see class doc header) doing so does not play well
1975 * in some custom-allocator situations, particularly when operating in SHM-heap. That is why
1976 * we take an optional Logger* as an arg to every possibly-logging API (we can't guarantee, if
1977 * S_IS_VANILLA_ALLOC=false, that a Logger* can meaningfully be stored in likely-Allocator-stored *this).
1978 * For that same reason we cannot pass it to the Deleter_raw functor; m_buf_ptr (whose bits are in
1979 * *this) will save a copy of that Deleter_raw and hence *this will end up storing the Logger* which
1980 * (as noted) may be nonsensical. (With S_IS_VANILLA_ALLOC=true, though, it's safe to store it; and
1981 * since deleter would only fire at dealloc time, it doesn't present a new perf problem -- since TRACE
 1982 * log level already concedes bad perf -- which is the 2nd reason (see class doc header) for why
1983 * we don't generally record Logger* but rather take it as an arg to each logging API.)
1984 *
1985 * Anyway, long story short, we don't log on dealloc in this case, b/c we can't, and the worst that'll
1986 * happen as a result of that decision is: deallocs won't be trace-logged when a custom allocator
1987 * is enabled at compile-time. That price is okay to pay. */
1988 } // if constexpr(S_SHARING)
1989 else // if constexpr(!S_SHARING)
1990 {
1991 /* Conceptually it's quite similar to the S_SHARING case where we do shared_ptr::reset() above.
1992 * However there is an API difference that is subtle yet real (albeit only for stateful Allocator_raw):
1993 * Current m_alloc_raw was used to allocate *m_buf_ptr, so it must be used also to dealloc it.
1994 * unique_ptr::reset() does *not* take a new Deleter_raw; hence if we used it (alone) here it would retain
 1995 * the m_alloc_raw from construction time -- and if that does not equal the current m_alloc_raw => trouble in make_zero()
1996 * or dtor.
1997 *
1998 * Anyway, to beat that, we can either manually overwrite get_deleter() (<-- non-const ref);
1999 * or we can assign via unique_ptr move-ct. The latter is certainly pithier and prettier,
2000 * but the former might be a bit faster. (Caution! Recall m_buf_ptr is null currently. If it were not
2001 * we would need to explicitly nullify it before the get_deleter() assignment.) */
2002 m_buf_ptr.get_deleter() = Deleter_raw(m_alloc_raw, new_capacity);
2003 m_buf_ptr.reset(m_alloc_raw.allocate(new_capacity));
2004 } // else if constexpr(!S_SHARING)
2005 } // else if constexpr(!S_IS_VANILLA_ALLOC)
2006 } // if (new_capacity <= numeric_limits<std::ptrdiff_t>::max()) // (See explanation just below.)
2007 else
2008 {
2009 assert(false && "Enormous or corrupt new_capacity?!");
2010 }
2011 /* ^-- Explanation of the strange if/else:
2012 * In some gcc versions in some build configs, particularly with aggressive auto-inlining optimization,
2013 * a warning like this can be triggered (observed, as of this writing, only in the movelib::make_unique_definit()
2014 * branch above, but to be safe we're covering all the branches with our if/else work-around):
2015 * argument 1 value ‘18446744073709551608’ exceeds maximum object size
2016 * 9223372036854775807 [-Werror=alloc-size-larger-than=]
2017 * This occurs due to (among other things) inlining from above our frame down into the boost::movelib call
2018 * we make (and potentially the other allocating calls in the various branches above);
2019 * plus allegedly the C++ front-end supplying the huge value during the diagnostics pass.
2020 * No such huge value (which is 0xFFFFFFFFFFFFFFF8) is actually passed-in at run-time nor mentioned anywhere
2021 * in our code, here or in the unit-test(s) triggering the auto-inlining triggering the warning. So:
2022 *
2023 * The warning is wholly inaccurate. This situation is known in the gcc issue database; for example
2024 * see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85783 and related (linked) tickets.
2025 * The question was how to work around it; I admit that the discussion in that ticket (and friends) at times
2026 * gets into topics so obscure and gcc-internal as to be indecipherable to me (ygoldfel).
2027 * Since I don't seem to be doing anything wrong above (though: @todo *Maybe* it has something to do with
2028 * lacking `nothrow`? Would need investigation, nothrow could be good anyway...), the top work-arounds would be
2029 * perhaps: 1, pragma-away the alloc-size-larger-than warning; 2, use a compiler-placating
2030 * explicit `if (new_capacity < ...limit...)` branch. (2) was suggested in the above ticket by a
 2031 * gcc person. Not wanting to give up even a tiny bit of perf, I attempted the pragma way (1); but at least gcc-13
2032 * has some bug which makes the pragma get ignored. So I reverted to (2) by default.
2033 * @todo Revisit this. Should skip workaround unless gcc; + possibly solve it some more elegant way; look into the
2034 * nothrow thing the ticket discussion briefly mentions (but might be irrelevant). */
2035
2036 m_capacity = new_capacity;
2037 m_size = 0; // Went from zero() to !zero(); so m_size went from meaningless to meaningful and must be set.
2038 m_start = 0; // Ditto for m_start.
2039
2040 assert(!zero());
2041 // This is the only path (other than swap()) that assigns to m_capacity; note m_capacity >= 1.
2042 }
2043 /* else { !zero(): Since new_capacity <= m_capacity, m_capacity is already large enough; no change needed.
2044 * zero() && (new_capacity == 0): Since 0-capacity wanted, we can continue being zero(), as that's enough. } */
2045
2046 assert(capacity() >= new_capacity); // Promised post-condition.
2047} // Basic_blob::reserve()
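/* A sketch of the no-growth contract enforced above (vanilla allocator; template args and optional
 * `Logger*` args omitted):
 *
 *   ~~~
 *   Basic_blob b;       // zero() == true.
 *   b.reserve(256);     // Allocates: capacity() == 256, size() == 0.
 *   b.reserve(64);      // No-op: 64 <= capacity(); the buffer is untouched.
 *   // b.reserve(512);  // Would trip the assert: growing an existing buffer is intentionally disallowed.
 *   b.make_zero();      // Explicitly give up the buffer (dealloc -- or ref-count decrement if share()d).
 *   b.reserve(512);     // Now fine: a fresh 512-byte allocation.
 *   ~~~
 */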
2048
2049template<typename Allocator, bool S_SHARING_ALLOWED>
2051 log::Logger* logger_ptr)
2052{
2053 auto& new_start = new_start_or_unchanged;
2054 if (new_start == S_UNCHANGED)
2055 {
2056 new_start = start();
2057 }
2058
2059 const size_type min_capacity = new_start + new_size;
2060
2061 // Sanity checks/input checks (depending on how you look at it).
2062 assert(min_capacity >= new_size);
2063 assert(min_capacity >= new_start);
2064
2065 /* Ensure there is enough space for new_size starting at new_start. Note, in particular, this disallows
2066 * enlarging non-zero() buffer.
2067 * (If they want, they can explicitly call make_zero() first. But they must do so consciously, so that they're
2068 * forced to consider the performance impact of such an action.) Also note that zero() continues to be true
2069 * if was true. */
2070 reserve(min_capacity, logger_ptr);
2071 assert(capacity() >= min_capacity);
2072
2073 if (!zero())
2074 {
2075 m_size = new_size;
2076 m_start = new_start;
2077 }
2078 // else { zero(): m_size is meaningless; size() == 0, as desired. }
2079
2080 assert(size() == new_size);
2081 assert(start() == new_start);
2082} // Basic_blob::resize()
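/* A brief sketch of the resize() semantics just implemented; assumes #Blob_sans_log_context, with
 * illustrative numbers only:
 *
 *   flow::util::Blob_sans_log_context b;
 *   b.resize(100);     // From zero(): allocates; now size() == 100, start() == 0, capacity() == 100.
 *   b.resize(50);      // Shrinks the logical size only; start() and capacity() unchanged.
 *   b.resize(60, 40);  // size() == 60, start() == 40: OK, since 40 + 60 <= capacity().
 *   // b.resize(200);  // Disallowed while !zero(): enlarging requires an explicit make_zero() first.
 */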
2083
2084template<typename Allocator, bool S_SHARING_ALLOWED>
2085void Basic_blob<Allocator, S_SHARING_ALLOWED>::start_past_prefix(size_type prefix_size)
2086{
2087 resize(((start() + size()) > prefix_size)
2088 ? (start() + size() - prefix_size)
2089 : 0,
2090 prefix_size); // It won't log, as it cannot allocate, so no need to pass through a Logger*.
2091 // Sanity check: `prefix_size == 0` translates to: resize(start() + size(), 0), as advertised.
2092}
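/* Sketch: exposing only a packet's payload in-place via start_past_prefix(); the 8-byte header size and
 * the #Blob_sans_log_context alias are assumptions for illustration:
 *
 *   flow::util::Blob_sans_log_context pkt(1024); // Whole packet, header included.
 *   // ...fill the 1024 bytes with wire data...
 *   pkt.start_past_prefix(8);                    // begin() now points at the payload; end() is unchanged.
 *   pkt.start_past_prefix(0);                    // Restores the full range, i.e., resize(start() + size(), 0).
 */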
2093
2094template<typename Allocator, bool S_SHARING_ALLOWED>
2095void Basic_blob<Allocator, S_SHARING_ALLOWED>::start_past_prefix_inc(difference_type prefix_size_inc)
2096{
2097 assert((prefix_size_inc >= 0) || (start() >= size_type(-prefix_size_inc)));
2098 start_past_prefix(start() + prefix_size_inc);
2099}
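/* Continuing the sketch above, start_past_prefix_inc() merely shifts the prefix boundary relative to the
 * current start(); e.g., consuming a further (hypothetical) 4-byte sub-header and then backing up:
 *
 *   pkt.start_past_prefix_inc(4);  // start() grows by 4; end() unchanged.
 *   pkt.start_past_prefix_inc(-4); // Legal, since start() >= 4 at this point: re-exposes those 4 bytes.
 */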
2100
2101template<typename Allocator, bool S_SHARING_ALLOWED>
2102void Basic_blob<Allocator, S_SHARING_ALLOWED>::clear()
2103{
2104 // Note: start() remains unchanged (as advertised). resize(0, 0) can be used if that is unacceptable.
2105 resize(0); // It won't log, as it cannot allocate, so no need to pass through a Logger*.
2106 // Note corner case: zero() remains true if was true (and false if was false).
2107}
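/* Note the contrast with make_zero(): clear() merely sets size() to 0 while keeping the allocated buffer
 * (and start()) intact, so refilling up to the old capacity needs no new allocation. Sketch, given some
 * !zero() blob b:
 *
 *   b.clear();                           // size() == 0; capacity(), start(), zero() all unchanged.
 *   b.resize(b.capacity() - b.start());  // Reuses the existing buffer; no allocation, hence no logging.
 */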
2108
2109template<typename Allocator, bool S_SHARING_ALLOWED>
2110void Basic_blob<Allocator, S_SHARING_ALLOWED>::make_zero(log::Logger* logger_ptr)
2111{
2112 /* Could also write more elegantly: `swap(Basic_blob());`, but the following is a bit optimized (while equivalent);
2113 * logs better. */
2114 if (!zero())
2115 {
2116 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2117 {
2118 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2119 if constexpr(S_SHARING_ALLOWED)
2120 {
2121 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] giving up ownership of internal buffer sized "
2122 "[" << capacity() << "]; deallocation will immediately follow if no sharing "
2123 "`Blob`s remain; else ref-count merely decremented.");
2124 }
2125 else
2126 {
2127 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] deallocating internal buffer sized "
2128 "[" << capacity() << "].");
2129 }
2130 }
2131
2132 m_buf_ptr.reset();
2133 } // if (!zero())
2134} // Basic_blob::make_zero()
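/* Sketch of make_zero() interacting with sharing (S_SHARING_ALLOWED == true); assumes the
 * #Sharing_blob_sans_log_context alias and an illustrative size:
 *
 *   flow::util::Sharing_blob_sans_log_context a(256);
 *   auto b = a.share();  // a and b now co-own the same 256-byte buffer.
 *   a.make_zero();       // a drops ownership; the buffer survives, since b still references it.
 *   b.make_zero();       // Last owner gone: the buffer is deallocated at this point.
 */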
2135
2136template<typename Allocator, bool S_SHARING_ALLOWED>
2137typename Basic_blob<Allocator, S_SHARING_ALLOWED>::size_type
2138 Basic_blob<Allocator, S_SHARING_ALLOWED>::assign_copy(const boost::asio::const_buffer& src,
2139 log::Logger* logger_ptr)
2140{
2141 const size_type n = src.size();
2142
2143 /* Either just set m_start = 0 and decrease/keep-constant (m_start + m_size) = n; or allocate exactly n-sized buffer
2144 * and set m_start = 0, m_size = n.
2145 * As elsewhere, the latter case requires that zero() be true currently (but they can force that with make_zero()). */
2146 resize(n, 0); // It won't log, as it cannot allocate, so no need to pass through a Logger*.
2147
2148 // Performance: Basically equals: memcpy(data(), src.data(), src.size()).
2149 emplace_copy(const_begin(), src, logger_ptr);
2150
2151 // Corner case: n == 0. Above is equivalent to: if (!zero()) { m_size = m_start = 0; }. That behavior is advertised.
2152
2153 return n;
2154}
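/* Sketch: one-call copy-in from any contiguous memory area; the std::string source is just an example,
 * and the #Blob_sans_log_context alias is assumed:
 *
 *   const std::string msg("hello");
 *   flow::util::Blob_sans_log_context b;          // zero() == true.
 *   b.assign_copy(boost::asio::buffer(msg));      // Allocates msg.size() bytes and copies them in.
 *   assert(b.size() == msg.size());
 */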
2155
2156template<typename Allocator, bool S_SHARING_ALLOWED>
2157typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Iterator
2158 Basic_blob<Allocator, S_SHARING_ALLOWED>::emplace_copy(Const_iterator dest, const boost::asio::const_buffer& src,
2159 log::Logger* logger_ptr)
2160{
2161 using std::memcpy;
2162
2163 // Performance: with assert()s eliminated and values inlined, the below boils down to: memcpy(dest, src.data(), src.size());
2164
2165 assert(valid_iterator(dest));
2166
2167 const Iterator dest_it = iterator_sans_const(dest);
2168 const size_type n = src.size(); // Note the entire source buffer is copied over.
2169
2170 if (n != 0)
2171 {
2172 const auto src_data = static_cast<Const_iterator>(src.data());
2173
2174 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2175 {
2176 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2177 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] copying "
2178 "memory area [" << static_cast<const void*>(src_data) << "] sized "
2179 "[" << n << "] to internal buffer at offset [" << (dest - const_begin()) << "].");
2180 }
2181
2182 assert(derefable_iterator(dest_it)); // Input check.
2183 assert(difference_type(n) <= (const_end() - dest)); // Input check. (As advertised, we don't "correct" `n`.)
2184
2185 // Ensure no overlap by user.
2186 assert(((dest_it + n) <= src_data) || ((src_data + n) <= dest_it));
2187
2188 /* Some compilers in some build configs issue stringop-overflow warning here, when optimizer heavily auto-inlines:
2189 * error: ‘memcpy’ specified bound between 9223372036854775808 and 18446744073709551615
2190 * exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
2191 * This occurs due to (among other things) inlining from above our frame down into the std::memcpy() call
2192 * we make; plus allegedly the C++ front-end supplying the huge values during the diagnostics pass.
2193 * No such huge values (which are 0x800000000000000F, 0xFFFFFFFFFFFFFFFF, 0x7FFFFFFFFFFFFFFF, respectively)
2194 * are actually passed in at run-time, nor are they mentioned anywhere
2195 * in our code, here or in the unit-test(s) whose auto-inlining triggers the warning. So:
2196 *
2197 * The warning is wholly inaccurate in a way reminiscent of the situation in reserve() with a somewhat
2198 * similar comment. In this case, however, a pragma does properly work, so we use that approach instead of
2199 * a run-time check/assert() which would give away a bit of perf. */
2200#pragma GCC diagnostic push
2201#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
2202#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
2203#pragma GCC diagnostic ignored "-Wstringop-overflow"
2204#pragma GCC diagnostic ignored "-Wrestrict" // Another similar bogus one pops up after pragma-ing away preceding one.
2205
2206 /* Likely linear-time in `n` but hopefully optimized. Could use a C++ construct, but I've seen that be slower
2207 * than a direct memcpy() call in practice, at least with Linux gcc. Could use boost.asio buffer_copy(), which
2208 * as of this writing does do memcpy(), but the following is an absolute guarantee of best performance, so better
2209 * safe than sorry (hence this whole Basic_blob class's existence, at least in part). */
2210 memcpy(dest_it, src_data, n);
2211
2212#pragma GCC diagnostic pop
2213 }
2214
2215 return dest_it + n;
2216} // Basic_blob::emplace_copy()
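/* Sketch: emplace_copy() pastes a source buffer into an already-sized blob at a chosen offset; no
 * allocation occurs, and the caller guarantees fit and non-overlap. The 4-byte value and offsets are
 * hypothetical; #Blob_sans_log_context assumed:
 *
 *   flow::util::Blob_sans_log_context b(128);
 *   const std::array<uint8_t, 4> magic{ 0xDE, 0xAD, 0xBE, 0xEF };
 *   b.emplace_copy(b.begin() + 10, boost::asio::buffer(magic)); // Bytes [10, 14) of b now hold `magic`.
 */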
2217
2218template<typename Allocator, bool S_SHARING_ALLOWED>
2219typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Const_iterator
2220 Basic_blob<Allocator, S_SHARING_ALLOWED>::sub_copy(Const_iterator src, const boost::asio::mutable_buffer& dest,
2221 log::Logger* logger_ptr) const
2222{
2223 // Code similar to emplace_copy(). Therefore keeping comments light.
2224
2225 using std::memcpy;
2226
2227 assert(valid_iterator(src));
2228
2229 const size_type n = dest.size(); // Note the entire destination buffer is filled.
2230 if (n != 0)
2231 {
2232 const auto dest_data = static_cast<Iterator>(dest.data());
2233
2234 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2235 {
2236 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2237 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] copying to "
2238 "memory area [" << static_cast<const void*>(dest_data) << "] sized "
2239 "[" << n << "] from internal buffer offset [" << (src - const_begin()) << "].");
2240 }
2241
2242 assert(derefable_iterator(src));
2243 assert(difference_type(n) <= (const_end() - src)); // Can't copy from beyond end of *this blob.
2244
2245 assert(((src + n) <= dest_data) || ((dest_data + n) <= src));
2246
2247 // See explanation for the pragma in emplace_copy(). While warning not yet observed here, preempting it.
2248#pragma GCC diagnostic push
2249#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
2250#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
2251#pragma GCC diagnostic ignored "-Wstringop-overflow"
2252#pragma GCC diagnostic ignored "-Wrestrict"
2253 memcpy(dest_data, src, n);
2254#pragma GCC diagnostic pop
2255 }
2256
2257 return src + n;
2258}
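/* Sketch: sub_copy() is the mirror image, copying out of the blob into caller-owned memory; continuing
 * the emplace_copy() sketch above:
 *
 *   std::array<uint8_t, 4> out{};
 *   b.sub_copy(b.const_begin() + 10, boost::asio::buffer(out)); // Fills all 4 bytes of `out`.
 */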
2259
2260template<typename Allocator, bool S_SHARING_ALLOWED>
2261typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Iterator
2262 Basic_blob<Allocator, S_SHARING_ALLOWED>::erase(Const_iterator first, Const_iterator past_last)
2263{
2264 using std::memmove;
2265
2266 assert(derefable_iterator(first)); // Input check.
2267 assert(valid_iterator(past_last)); // Input check.
2268
2269 const Iterator dest = iterator_sans_const(first);
2270
2271 if (past_last > first) // (Note: `past_last < first` allowed, not illegal.)
2272 {
2273 const auto n_moved = size_type(const_end() - past_last);
2274
2275 if (n_moved != 0)
2276 {
2277 // See explanation for the pragma in emplace_copy(). While warning not yet observed here, preempting it.
2278#pragma GCC diagnostic push
2279#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
2280#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
2281#pragma GCC diagnostic ignored "-Wstringop-overflow"
2282#pragma GCC diagnostic ignored "-Wrestrict"
2283 memmove(dest, iterator_sans_const(past_last), n_moved); // Cannot use memcpy() due to possible overlap.
2284#pragma GCC diagnostic pop
2285 }
2286 // else { The erased range [first, past_last) extends to end(): it's sufficient to just update m_size: }
2287
2288 m_size -= (past_last - first);
2289 // m_capacity does not change, as we advertised minimal operations possible to achieve result.
2290 } // if (past_last > first)
2291 // else if (past_last <= first) { Nothing to do. }
2292
2293 return dest;
2294} // Basic_blob::erase()
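/* Sketch: erase() closes a gap in place with a single memmove(); capacity() and start() are untouched.
 * Sizes are illustrative; #Blob_sans_log_context assumed:
 *
 *   flow::util::Blob_sans_log_context b(10);  // Elements at offsets 0..9.
 *   b.erase(b.begin() + 2, b.begin() + 5);    // Removes bytes [2, 5); former bytes [5, 10) shift left by 3.
 *   assert(b.size() == 7);
 */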
2295
2296template<typename Allocator, bool S_SHARING_ALLOWED>
2297const typename Basic_blob<Allocator, S_SHARING_ALLOWED>::value_type&
2298 Basic_blob<Allocator, S_SHARING_ALLOWED>::const_front() const
2299{
2300 assert(!empty());
2301 return *const_begin();
2302}
2303
2304template<typename Allocator, bool S_SHARING_ALLOWED>
2305const typename Basic_blob<Allocator, S_SHARING_ALLOWED>::value_type&
2306 Basic_blob<Allocator, S_SHARING_ALLOWED>::const_back() const
2307{
2308 assert(!empty());
2309 return const_end()[-1];
2310}
2311
2312template<typename Allocator, bool S_SHARING_ALLOWED>
2313typename Basic_blob<Allocator, S_SHARING_ALLOWED>::value_type&
2314 Basic_blob<Allocator, S_SHARING_ALLOWED>::front()
2315{
2316 assert(!empty());
2317 return *begin();
2318}
2319
2320template<typename Allocator, bool S_SHARING_ALLOWED>
2321typename Basic_blob<Allocator, S_SHARING_ALLOWED>::value_type&
2322 Basic_blob<Allocator, S_SHARING_ALLOWED>::back()
2323{
2324 assert(!empty());
2325 return end()[-1];
2326}
2327
2328template<typename Allocator, bool S_SHARING_ALLOWED>
2329const typename Basic_blob<Allocator, S_SHARING_ALLOWED>::value_type&
2330 Basic_blob<Allocator, S_SHARING_ALLOWED>::front() const
2331{
2332 return const_front();
2333}
2334
2335template<typename Allocator, bool S_SHARING_ALLOWED>
2336const typename Basic_blob<Allocator, S_SHARING_ALLOWED>::value_type&
2337 Basic_blob<Allocator, S_SHARING_ALLOWED>::back() const
2338{
2339 return const_back();
2340}
2341
2342template<typename Allocator, bool S_SHARING_ALLOWED>
2345{
2346 return const_cast<Basic_blob*>(this)->begin();
2347}
2348
2349template<typename Allocator, bool S_SHARING_ALLOWED>
2352{
2353 if (zero())
2354 {
2355 return 0;
2356 }
2357 // else
2358
2359 /* m_buf_ptr.get() is value_type* when Buf_ptr = regular shared_ptr; but possibly Some_fancy_ptr<value_type>
2360 * when Buf_ptr = boost::interprocess::shared_ptr<value_type, Allocator_raw>, namely when
2361 * Allocator_raw::pointer = Some_fancy_ptr<value_type> and not simply value_type* again. We need value_type*.
2362 * Fancy-pointer is not really an officially-defined concept (offset_ptr<> is an example of one).
2363 * Anyway the following works for both cases, but there are a bunch of different things we could write.
2364 * Since it's just this one location where we need to do this, I do not care too much, and the following
2365 * cheesy thing -- &(*p) -- is OK.
2366 *
2367 * @todo In C++20 can replace this with std::to_address(). Or can implement our own (copy cppreference.com impl). */
2368
2369 const auto raw_or_fancy_buf_ptr = m_buf_ptr.get();
2370 return &(*raw_or_fancy_buf_ptr) + m_start;
2371}
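/* For reference, with C++20 the fancy-pointer dance above could indeed become the one-liner from the @todo
 * (requires <memory>); std::to_address() handles raw and fancy pointers alike:
 *
 *   return std::to_address(m_buf_ptr.get()) + m_start;
 */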
2372
2373template<typename Allocator, bool S_SHARING_ALLOWED>
2374typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Const_iterator
2375 Basic_blob<Allocator, S_SHARING_ALLOWED>::const_end() const
2376{
2377 return zero() ? const_begin() : (const_begin() + size());
2378}
2379
2380template<typename Allocator, bool S_SHARING_ALLOWED>
2381typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Iterator
2382 Basic_blob<Allocator, S_SHARING_ALLOWED>::end()
2383{
2384 return zero() ? begin() : (begin() + size());
2385}
2386
2387template<typename Allocator, bool S_SHARING_ALLOWED>
2390{
2391 return const_begin();
2392}
2393
2394template<typename Allocator, bool S_SHARING_ALLOWED>
2397{
2398 return const_begin();
2399}
2400
2401template<typename Allocator, bool S_SHARING_ALLOWED>
2404{
2405 return const_end();
2406}
2407
2408template<typename Allocator, bool S_SHARING_ALLOWED>
2411{
2412 return const_end();
2413}
2414
2415template<typename Allocator, bool S_SHARING_ALLOWED>
2418{
2419 return const_begin();
2420}
2421
2422template<typename Allocator, bool S_SHARING_ALLOWED>
2425{
2426 return begin();
2427}
2428
2429template<typename Allocator, bool S_SHARING_ALLOWED>
2430bool Basic_blob<Allocator, S_SHARING_ALLOWED>::valid_iterator(Const_iterator it) const
2431{
2432 return empty() ? (it == const_end())
2433 : in_closed_range(const_begin(), it, const_end());
2434}
2435
2436template<typename Allocator, bool S_SHARING_ALLOWED>
2437bool Basic_blob<Allocator, S_SHARING_ALLOWED>::derefable_iterator(Const_iterator it) const
2438{
2439 return empty() ? false
2440 : in_closed_open_range(const_begin(), it, const_end());
2441}
2442
2443template<typename Allocator, bool S_SHARING_ALLOWED>
2444typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Iterator
2445 Basic_blob<Allocator, S_SHARING_ALLOWED>::iterator_sans_const(Const_iterator it)
2446{
2447 return const_cast<value_type*>(it); // Can be done without const_cast<> but might as well save some cycles.
2448}
2449
2450template<typename Allocator, bool S_SHARING_ALLOWED>
2451boost::asio::const_buffer Basic_blob<Allocator, S_SHARING_ALLOWED>::const_buffer() const
2452{
2453 using boost::asio::const_buffer;
2454 return const_buffer(const_data(), size());
2455}
2456
2457template<typename Allocator, bool S_SHARING_ALLOWED>
2458boost::asio::mutable_buffer Basic_blob<Allocator, S_SHARING_ALLOWED>::mutable_buffer()
2459{
2460 using boost::asio::mutable_buffer;
2461 return mutable_buffer(data(), size());
2462}
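/* Sketch: these views plug straight into boost.asio primitives; `sock` and `bytes_in` are hypothetical
 * (a connected socket and a vector<uint8_t>, say), and `b` is any Basic_blob:
 *
 *   boost::asio::buffer_copy(b.mutable_buffer(), boost::asio::buffer(bytes_in)); // Copy into the blob.
 *   sock.send(b.const_buffer());                                                 // Ship its contents out.
 */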
2463
2464template<typename Allocator, bool S_SHARING_ALLOWED>
2465typename Basic_blob<Allocator, S_SHARING_ALLOWED>::Allocator_raw
2466 Basic_blob<Allocator, S_SHARING_ALLOWED>::get_allocator() const
2467{
2468 return m_alloc_raw;
2469}
2470
2471template<typename Allocator, bool S_SHARING_ALLOWED>
2472Basic_blob<Allocator, S_SHARING_ALLOWED>::Deleter_raw::Deleter_raw(const Allocator_raw& alloc_raw, size_type buf_sz) :
2473 /* Copy the allocator; a stateless allocator should have size 0 (essentially a no-op for the processor in that case... except
2474 * that the optional<> now records it has a value). */
2475 m_alloc_raw(std::in_place, alloc_raw),
2476 m_buf_sz(buf_sz) // We store a T*, where T is a trivial-deleter PoD, but we delete an array of Ts: this many.
2477{
2478 // OK.
2479}
2480
2481template<typename Allocator, bool S_SHARING_ALLOWED>
2482Basic_blob<Allocator, S_SHARING_ALLOWED>::Deleter_raw::Deleter_raw() :
2483 m_buf_sz(0)
2484{
2485 /* This ctor is never invoked (see this ctor's doc header). It can be left `= default;`, but some gcc versions
2486 * then complain m_buf_sz may be used uninitialized (not true but such is life). */
2487}
2488
2489template<typename Allocator, bool S_SHARING_ALLOWED>
2490void Basic_blob<Allocator, S_SHARING_ALLOWED>::Deleter_raw::operator()(Pointer_raw to_delete)
2491{
2492 // No need to invoke dtor: Allocator_raw::value_type is Basic_blob::value_type, a boring int type with no real dtor.
2493
2494 // Free the raw buffer at location to_delete; which we know is m_buf_sz `value_type`s long.
2495 m_alloc_raw->deallocate(to_delete, m_buf_sz);
2496}
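/* The stored m_buf_sz matters because an allocator's deallocate() must be given the same element count
 * that allocate() received; generic illustration with a hypothetical Allocator_raw `a` and count `n`:
 *
 *   auto p = a.allocate(n);   // n value_type elements.
 *   // ...buffer in use...
 *   a.deallocate(p, n);       // Same n -- exactly what operator()() above does via m_alloc_raw and m_buf_sz.
 */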
2497
2498} // namespace flow::util