Flow 2.0.0
Flow project: Full implementation reference.
basic_blob.hpp
1/* Flow
2 * Copyright 2023 Akamai Technologies, Inc.
3 *
4 * Licensed under the Apache License, Version 2.0 (the
5 * "License"); you may not use this file except in
6 * compliance with the License. You may obtain a copy
7 * of the License at
8 *
9 * https://www.apache.org/licenses/LICENSE-2.0
10 *
11 * Unless required by applicable law or agreed to in
12 * writing, software distributed under the License is
13 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
14 * CONDITIONS OF ANY KIND, either express or implied.
15 * See the License for the specific language governing
16 * permissions and limitations under the License. */
17
18/// @file
19#pragma once
20
22#include "flow/log/log.hpp"
23#include <boost/interprocess/smart_ptr/shared_ptr.hpp>
24#include <boost/move/make_unique.hpp>
25#include <boost/compressed_pair.hpp>
26#include <optional>
27#include <limits>
28#include <cstring>
29
30namespace flow::util
31{
32
33// Types.
34
35/**
36 * Tag type used at least in Basic_blob and Blob_with_log_context to specify that an allocated buffer be zeroed.
37 *
38 * @see util::CLEAR_ON_ALLOC, the value to pass-in to relevant APIs such as Basic_blob::resize().
39 */
41
42/**
43 * A hand-optimized and API-tweaked replacement for `vector<uint8_t>`, i.e., buffer of bytes inside an allocated area
44 * of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and
45 * SHM-friendly custom-allocator support.
46 *
47 * @see Blob_with_log_context (and especially aliases #Blob and #Sharing_blob), our non-polymorphic sub-class which
48 * adds some ease of use in exchange for a small perf trade-off. (More info below under "Logging.")
49 * @see #Blob_sans_log_context + #Sharing_blob_sans_log_context, each simply an alias to
50 * `Basic_blob<std::allocator, B>` (with `B = false` or `true` respectively), in a fashion vaguely similar to
51 * what `string` is to `basic_string` (a little). This is much like #Blob/#Sharing_blob, in that it is a
52 * non-template concrete type; but does not take or store a `Logger*`.
53 *
54 * The rationale for its existence mirrors its essential differences from `vector<uint8_t>` which are as follows.
55 * To summarize, though, it exists to guarantee specific performance by reducing implementation uncertainty via
56 * lower-level operations; and to force the user to explicitly authorize any allocation to ensure thoughtfully performant use.
57 * Update: Plus, it adds non-prefix-sub-buffer feature, which can be useful for zero-copy deserialization.
58 * Update: Plus, it adds a simple form of garbage-collected memory pools, useful for operating multiple `Basic_blob`s
59 * that share a common over-arching memory area (buffer).
60 * Update: Plus, it adds SHM-friendly custom allocator support. (While all `vector` impls support custom allocators,
61 * only some later versions of gcc `std::vector` work with shared-memory (SHM) allocators and imperfectly at that.
62 * `boost::container::vector` a/k/a `boost::interprocess::vector` is fully SHM-friendly.)
63 *
64 * - It adds a feature over `vector<uint8_t>`: The logical contents `[begin(), end())` can optionally begin
65 * not at the start of the internally allocated buffer but somewhere past it. In other words, the logical buffer
66 * is not necessarily a prefix of the internal allocated buffer. This feature is critical when one wants
67 * to use some sub-buffer of a buffer without reallocating a smaller buffer and copying the sub-buffer into it.
68 * For example, if we read a DATA packet the majority of which is the payload, which begins a few bytes
69 * from the start -- past a short header -- it may be faster to keep passing around the whole thing with move
70 * semantics but use only the payload part, after logically
71 * deserializing it (a/k/a zero-copy deserialization semantics). Of course
72 * one can do this with `vector` as well; but one would need to always remember the prefix length even after
73 * deserializing, at which point such details would be ideally forgotten instead. So this API is significantly
74 * more pleasant in that case. Moreover it can then be used generically more easily, alongside other containers.
75 * - Its performance is guaranteed by internally executing low-level operations such as `memcpy()` directly instead of
76 * hoping that using a higher-level abstraction will ultimately do the same.
77 * - In particular, the iterator types exposed by the API *are* pointers instead of introducing any performance
78 * uncertainty by possibly using wrapper/proxy iterator class.
79 * - In particular (unless explicitly requested via optional Clear_on_alloc tag)
80 * no element or memory area is *ever* initialized to zero(es) or any other particular filler
81 * value(s). (This is surprisingly difficult to avoid with STL containers! Google it. Though, e.g.,
82 * boost.container does provide a `default_init_t` extension to various APIs like `.resize()`.) If an allocation
83 * does occur, the area is left as-is unless user specifies a source memory area from which to copy data.
84 * - However, if you *do* desire the zeroing of memory immediately upon allocation, you may request it
85 * via Clear_on_alloc tag arg to size-taking ctor, resize(), or reserve(). This is in many cases faster
86 * than an explicit `memset()` or `fill_n()` of your own; so do use it; it is not mere syntactic sugar. (See the sketch just after this list.)
87 * - Note that I am making no assertion about `vector` being slow; the idea is to guarantee *we* aren't by removing
88 * any *question* about it; it's entirely possible a given `vector` is equally fast, but it cannot be guaranteed by
89 * standard except in terms of complexity guarantees (which is usually pretty good but not everything).
90 * - That said a quick story about `std::vector<uint8_t>` (in gcc-8.3 anyway): I (ygoldfel) once used it with
91 * a custom allocator (which worked in shared memory) and stored a megabytes-long buffer in one. Its
92 * destructor, I noticed, spent milliseconds (with 2022 hardware) -- outside the actual dealloc call.
93 * Reason: It was iterating over every (1-byte) element and invoking its (non-existent/trivial) destructor. It
94 * did not specialize to avoid this, intentionally so according to a comment, when using a custom allocator.
95 * `boost::container::vector<uint8_t>` lacked this problem; but nevertheless it shows generally written
96 * containers can have hidden such perf quirks.
97 * - To help achieve the previous bullet point, as well as to keep the code simple, the class does not parameterize
98 * on element type; it stores unsigned bytes, period (though Basic_blob::value_type is good to use if you need to
99 * refer to that type in code generically).
100 * Perhaps the same could be achieved by specialization, but we don't need the parameterization in the first place.
101 * - Unlike `vector`, it has an explicit state where there is no underlying buffer; in this case zero() is `true`.
102 * Also in that case, `capacity() == 0` and `size() == 0` (and `start() == 0`). `zero() == true` is the case on
103 * default-constructed object of this class. The reason for this is I am never sure, at least, what a
104 * default-constructed `vector` looks like internally; a null buffer always seemed like a reasonable starting point
105 * worth guaranteeing explicitly.
106 * - If `!zero()`:
107 * - make_zero() deallocates any allocated buffer and ensures zero() is `true`, as if upon default construction.
108 * - Like `vector`, it keeps an allocated memory chunk of size M, at the start of which is the
109 * logical buffer of size `N <= M`, where `N == size()`, and `M == capacity()`. However, `M >= 1` always.
110 * - There is the aforementioned added feature wherein the logical buffer begins to the right of the allocated
111 * buffer, namely at index `start()`. In this case `M >= start() + size()`, and the buffer range
112 * is in indices `[start(), start() + size())` of the allocated buffer. By default `start() == 0`, as
113 * in `vector`, but this can be changed via the 2nd, optional, argument to resize().
114 * - Like `vector`, `reserve(Mnew)`, with `Mnew <= M`, does nothing. However, unlike `vector`, the same call is
115 * *illegal* when `Mnew > M >= 1`. However, any reserve() call *is* allowed when zero() is `true`. Therefore,
116 * if the user is intentionally okay with the performance implications of a reallocation, they can call make_zero()
117 * and *then* force the reallocating reserve() call.
118 * - Like `vector`, `resize(Nnew)` merely guarantees post-condition `size() == Nnew`; which means that
119 * it is essentially equivalent to `reserve(Nnew)` followed by setting internal N member to Nnew.
120 * However, remember that resize() therefore keeps all the behaviors of reserve(), including that it cannot
121 * grow the buffer (only allocate it when zero() is `true`).
122 * - If changing `start()` from default, then: `resize(Nnew, Snew)` means `reserve(Nnew + Snew)`, plus saving
123 * internal N and S members.
124 * - The *only* way to allocate is to (directly or indirectly) call `reserve(Mnew)` when `zero() == true`.
125 * Moreover, *exactly* Mnew bytes are allocated and no more (unlike with `vector`, where the policy used is
126 * not known). Moreover, if `reserve(Mnew)` is called indirectly (by another method of the class), `Mnew` arg
127 * is set to no greater than the size necessary to complete the operation (again, by contrast, it is unknown what `vector`
128 * does w/r/t capacity policy).
129 * - The rest of the API is common-sense but generally kept to only what has been necessary to date,
130 * in on-demand fashion.
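 *
 * To illustrate a few of the above points, a minimal sketch (using the #Blob_sans_log_context alias and the
 * util::CLEAR_ON_ALLOC tag value documented in this file; it is a sketch only, not an exhaustive recipe):
 *
 *   @code
 *   flow::util::Blob_sans_log_context blob{1024}; // Allocates exactly 1024 bytes; contents NOT zero-filled.
 *   // blob.resize(2048); // Would be undefined behavior: growing an existing buffer is disallowed.
 *   blob.make_zero();     // Explicitly drop (here: deallocate) the buffer...
 *   blob.resize(2048);    // ...making this allocation legal; again contents are NOT zero-filled.
 *
 *   // If zero-filled contents upon allocation are desired, the tag form is typically faster than a manual memset():
 *   flow::util::Blob_sans_log_context z_blob{2048, flow::util::CLEAR_ON_ALLOC};
 *   @endcode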
131 *
132 * ### Optional, simple garbage-collected shared ownership functionality ###
133 * The following feature was added quite some time after `Blob` was first introduced and matured. However it seamlessly
134 * subsumes all of the above basic functionality with full backwards compatibility. It can also be disabled
135 * (and is by default) by setting #S_SHARING to `false` at compile-time. (This gains back a little bit of perf
136 * namely by turning an internal `shared_ptr` into a `unique_ptr`.)
137 *
138 * The feature itself is simple: Suppose one has a blob A, constructed or otherwise `resize()`d or `reserve()`d
139 * so as to have `zero() == false`; meaning `capacity() >= 1`. Now suppose one calls the core method of this *pool*
140 * feature: share() which returns a new blob B. B will have the same exact start(), size(), capacity() -- and,
141 * in fact, the pointer `data() - start()` (i.e., the underlying buffer start pointer, buffer being capacity() long).
142 * That is, B now shares the underlying memory buffer with A. Normally, that underlying buffer would be deallocated
143 * when either `A.make_zero()` is called, or A is destructed. Now that it's shared by A and B, however,
144 * the buffer is deallocated only once make_zero() or destruction occurs for *both* A and B. That is, there is an
145 * internal (thread-safe) ref-count that must reach 0.
146 *
147 * Both A and B may now again be share()d into further sharing `Basic_blob`s. This further increments the ref-count of
148 * original buffer; all such `Basic_blob`s C, D, ... must now either make_zero() or destruct, at which point the dealloc
149 * occurs.
150 *
151 * In that way the buffer -- or *pool* -- is *garbage-collected* as a whole, with reserve() (and APIs like resize()
152 * and ctors that call it) initially allocating and setting internal ref-count to 1, share() incrementing it, and
153 * make_zero() and ~Basic_blob() decrementing it (and deallocating when ref-count=0).
154 *
155 * ### Application of shared ownership: Simple pool-of-`Basic_blob`s functionality ###
156 * The other aspect of this feature is its pool-of-`Basic_blob`s application. All of the sharing `Basic_blob`s A, B,
157 * ... retain all the aforementioned features including the ability to use resize(), start_past_prefix_inc(), etc.,
158 * to control the location of the logical sub-range [begin(), end()) within the underlying buffer (pool).
159 * E.g., suppose A was 10 bytes, with `start() = 0` and `size() = capacity() = 10`; then share() B is also that way.
160 * Now `B.start_past_prefix_inc(5); A.resize(5);` makes it so that A = the 1st 5 bytes of the pool,
161 * B the last 5 bytes (and they don't overlap -- can even be concurrently modified safely). In that way A and B
162 * are now independent `Basic_blob`s -- potentially passed, say, to independent TCP-receive calls, each of which reads
163 * up to 5 bytes -- that share an over-arching pool.
164 *
165 * The API share_after_split_left() is a convenience operation that splits a `Basic_blob`'s [begin(), end()) area into
166 * 2 areas of specified length, then returns a new Basic_blob representing the first area in the split and
167 * modifies `*this` to represent the remainder (the 2nd area). This simply performs the op described in the preceding
168 * paragraph. share_after_split_right() is similar but acts symmetrically from the right. Lastly
169 * `share_after_split_equally*()` splits a Basic_blob into several equally-sized (except the last one potentially)
170 * sub-`Basic_blob`s of size N, where N is an arg. (It can be thought of as just calling `share_after_split_left(N)`
171 * repeatedly, then returning a sequence of the resulting post-split `Basic_blob`s.)
172 *
173 * To summarize: The `share_after_split*()` APIs are useful to divide (potentially progressively) a pool into
174 * non-overlapping `Basic_blob`s, while ensuring the pool continues to exist as long as some `Basic_blob` refers
175 * to any part of it (but no longer). Meanwhile direct use of share() with resize() and `start_past_prefix*()` allows
176 * for overlapping such sharing `Basic_blob`s.
177 *
178 * Note that deallocation occurs regardless of which areas of that pool the relevant `Basic_blob`s represent,
179 * and whether they overlap or not (and, for that matter, whether they even together make up the entire pool or
180 * leave "gaps" in-between). The whole pool is deallocated the moment the last of the co-owning `Basic_blob`s
181 * performs either make_zero() or ~Basic_blob() -- the values of start() and size() at the time are not relevant.
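 *
 * A brief sketch of the above, using the #Sharing_blob_sans_log_context alias (i.e., `SHARING = true`):
 *
 *   @code
 *   flow::util::Sharing_blob_sans_log_context pool{10}; // One 10-byte allocation: the pool.
 *   auto part = pool.share_after_split_left(5);         // `part` is now bytes [0, 5) of the pool...
 *   // ...while `pool` itself now represents bytes [5, 10); the two ranges do not overlap.
 *   // The single 10-byte buffer is deallocated only once *both* `part` and `pool` (and any further
 *   // share()-ers) have performed make_zero() or been destroyed.
 *   @endcode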
182 *
183 * ### Custom allocator (and SHared Memory) support ###
184 * Like STL containers this one optionally takes a custom allocator type (#Allocator_raw) as a compile-time parameter
185 * instead of using the regular heap (`std::allocator`). Unlike many STL container implementations, including
186 * at least older `std::vector`, it supports SHM-storing allocators without a constant cross-process vaddr scheme.
187 * (Some do support this but with surprising perf flaws when storing raw integers/bytes. boost.container `vector`
188 * has solid support but lacks various other properties of Basic_blob.) While a detailed discussion is outside
189 * our scope here, the main point is internally `*this` stores no raw `value_type*` but rather
190 * `Allocator_raw::pointer` -- which in many cases *is* `value_type*`; but for advanced applications like SHM
191 * it might be a fancy-pointer like `boost::interprocess::offset_ptr<value_type>`. For general education
192 * check out boost.interprocess docs covering storage of STL containers in SHM. (However note that the
193 * allocators provided by that library are only one option even for SHM storage alone; e.g., they are stateful,
194 * and often one would like a stateless -- zero-size -- allocator. Plus there are other limitations to
195 * boost.interprocess SHM support, robust though it is.)
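 *
 * For instance, a hypothetical sketch -- `Shm_allocator` stands in for whatever SHM-aware allocator type one
 * might use (it is not part of Flow); its `value_type` must be our #value_type, and its `pointer` may be a
 * fancy-pointer such as `boost::interprocess::offset_ptr<value_type>`:
 *
 *   @code
 *   using Shm_blob = flow::util::Basic_blob<Shm_allocator, true>;
 *   Shm_blob blob{1024, nullptr, Shm_allocator{shm_arena}}; // `shm_arena` is likewise hypothetical.
 *   @endcode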
196 *
197 * @note In the somewhat-exotic case wherein #Allocator_raw is stateful (therefore not `std::allocator` default),
198 * such that it is possible for two objects of that type to value-compare as not-equal, the following rules
199 * apply. Propagation of allocators via move-ct, copy-ct, move-assign, copy-assign, or swap follows standard
200 * rules (see cppreference.com or the like for those). This is normal. However the following is different
201 * from at least some standard containers and derivatives (e.g., boost.container ones), at least potentially:
202 * Even if an aforementioned op *did* propagate the allocator object from the source `Basic_blob`,
203 * any existing buffer (meaning `!zero()`) shall be deallocated using the same allocator object that allocated
204 * it originally (hence in this scenario there are now 2 allocator objects stored in `*this`). A reallocation
205 * with the new allocator object will *not* be forced. (Among other considerations this means that original
206 * allocator's resources -- the source pool or whatever it is -- must stay alive until the deallocation does
207 * occur according to the simple above-documented rules of when that must happen.) The rationale is that
208 * Basic_blob is biased toward simple, predictable behavior w/r/t deallocs and allocs occurring, even in the
209 * face of exotic get_allocator() changes.
210 *
211 * ### Logging ###
212 * When and if `*this` logs, it is with log::Sev::S_TRACE severity or more verbose.
213 *
214 * Unlike many other Flow API classes this one does not derive from log::Log_context nor take a `Logger*` in
215 * ctor (and store it). Instead each API method/ctor/function capable of logging takes an optional
216 * (possibly null) log::Logger pointer. If supplied it's used by that API alone (with some minor async exceptions).
217 * If you would like more typical Flow-style logging API then use our non-polymorphic sub-class Blob_with_log_context
218 * (more likely aliases #Blob, #Sharing_blob). However consider the following first.
219 *
220 * Why this design? Answer:
221 * - Basic_blob is meant to be lean, both in terms of RAM used and processor cycles spent. Storing a `Logger*`
222 * takes some space; and storing it, copying/moving it, etc., takes a little compute. In a low-level
223 * API like Basic_blob this is potentially nice to avoid when not actively needed. (That said the logging
224 * can be extremely useful when debugging and/or profiling RAM use + allocations.)
225 * - This isn't a killer. The original `Blob` (before Basic_blob existed) stored a `Logger*`, and it was fine.
226 * However:
227 * - Storing a `Logger*` is always okay when `*this` itself is stored in regular heap or on the stack.
228 * However, `*this` itself may be stored in SHM; #Allocator_raw parameterization (see above regarding
229 * "Custom allocator") suggests as much (i.e., if the buffer is stored in SHM, we might be too).
230 * In that case `Logger*` does not, usually, make sense. As of this writing `Logger` in process 1
231 * has no relationship with any `Logger` in process 2; and even if the `Logger` were stored in SHM itself,
232 * `Logger` would need to be supplied via an in-SHM fancy-pointer, not `Logger*`, typically. The latter is
233 * a major can of worms and not supported by flow::log in any case as of this writing.
234 * - Therefore, even if we don't care about RAM/perf implications of storing `Logger*` with the blob, at least
235 * in some real applications it makes no sense.
236 *
237 * #Blob/#Sharing_blob provides this support while ensuring #Allocator_raw (no longer a template parameter in its case)
238 * is the vanilla `std::allocator`. The trade-off is as noted just above.
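 *
 * To make the per-call logging style concrete, a small sketch (`logger` being some `flow::log::Logger*` one
 * already has; null is allowed):
 *
 *   @code
 *   flow::util::Blob_sans_log_context blob{1024, logger}; // This ctor may TRACE-log via `logger`.
 *   blob.make_zero(logger);                               // Ditto for the dealloc performed here.
 *   blob.reserve(2048);                                   // No `Logger*` supplied: this call simply does not log.
 *   @endcode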
239 *
240 * ### Thread safety ###
241 * Before share() (or `share_*()`) is called: Essentially: Thread safety is the same as for `vector<uint8_t>`.
242 *
243 * Without `share*()` any two Basic_blob objects refer to separate areas in memory; hence it is safe to access
244 * Basic_blob A concurrently with accessing Basic_blob B in any fashion (read, write).
245 *
246 * However: If 2 `Basic_blob`s A and B co-own a pool, via a `share*()` chain, then concurrent write and read/write
247 * to A and B respectively are thread-safe if and only if their [begin(), end()) ranges don't overlap. Otherwise,
248 * naturally, one would be writing to an area while it is being read simultaneously -- not safe.
249 *
250 * Tip: When working in `share*()` mode, exclusive use of `share_after_split*()` is a great way to guarantee no 2
251 * `Basic_blob`s ever overlap. Meanwhile one must be careful when using share() directly and/or subsequently sliding
252 * the range around via resize(), `start_past_prefix*()`: `A.share()` and A not only (originally) overlap but
253 * simply represent the same area of memory; and resize() and co. can turn a non-overlapping range into an overlapping
254 * one (encroaching on someone else's "territory" within the pool).
255 *
256 * @todo Write a class template, perhaps `Tight_blob<Allocator, bool>`, which would be identical to Basic_blob
257 * but forego the framing features, namely size() and start(), thus storing only the RAII array pointer data()
258 * and capacity(); rewrite Basic_blob in terms of this `Tight_blob`. This simple container type has had some demand
259 * in practice, and Basic_blob can and should be cleanly built on top of it (perhaps even as an IS-A subclass).
260 *
261 * @tparam Allocator
262 * An allocator, with `value_type` equal to our #value_type, per the standard C++1x `Allocator` concept.
263 * In most uses this shall be left at the default `std::allocator<value_type>` which allocates in
264 * standard heap (`new[]`, `delete[]`). A custom allocator may be used instead. SHM-storing allocators,
265 * and generally allocators for which `pointer` is not simply `value_type*` but rather a fancy-pointer
266 * (see cppreference.com) are correctly supported. (Note this may not be the case for your compiler's
267 * `std::vector`.)
268 * @tparam SHARING
269 * If `true`, share() and all derived methods, plus blobs_sharing(), can be instantiated (invoked in compiled
270 * code). If `false` they cannot (`static_assert()` will trip), but the resulting Basic_blob concrete
271 * class will be slightly more performant (internally, a `shared_ptr` becomes instead a `unique_ptr` which
272 * means smaller allocations and no ref-count logic invoked).
273 */
274template<typename Allocator, bool SHARING>
275class Basic_blob
276{
277public:
278 // Types.
279
280 /// Short-hand for values, which in this case are unsigned bytes.
281 using value_type = uint8_t;
282
283 /// Type for index into blob or length of blob or sub-blob.
284 using size_type = std::size_t;
285
286 /// Type for difference of `size_type`s.
287 using difference_type = std::ptrdiff_t;
288
289 /// Type for iterator pointing into a mutable structure of this type.
290 using Iterator = value_type *;
291
292 /// Type for iterator pointing into an immutable structure of this type.
293 using Const_iterator = value_type const *;
294
295 /// Short-hand for the allocator type specified at compile-time. Its element type is our #value_type.
296 using Allocator_raw = Allocator;
297 static_assert(std::is_same_v<typename Allocator_raw::value_type, value_type>,
298 "Allocator template param must be of form A<V> where V is our value_type.");
299
300 /// For container compliance (hence the irregular capitalization): pointer to element.
301 using pointer = Iterator;
302 /// For container compliance (hence the irregular capitalization): pointer to `const` element.
303 using const_pointer = Const_iterator;
304 /// For container compliance (hence the irregular capitalization): reference to element.
305 using reference = value_type&;
306 /// For container compliance (hence the irregular capitalization): reference to `const` element.
307 using const_reference = const value_type&;
308 /// For container compliance (hence the irregular capitalization): #Iterator type.
309 using iterator = Iterator;
310 /// For container compliance (hence the irregular capitalization): #Const_iterator type.
311 using const_iterator = Const_iterator;
312
313 // Constants.
314
315 /// Value of template parameter `SHARING` (for generic programming).
316 static constexpr bool S_SHARING = SHARING;
317
318 /// Special value indicating an unchanged `size_type` value; such as in resize().
319 static constexpr size_type S_UNCHANGED = size_type(-1); // Same trick as std::string::npos.
320
321 /**
322 * `true` if #Allocator_raw underlying allocator template is simply `std::allocator`; `false`
323 * otherwise.
324 *
325 * Note that if this is `true`, it may be worth using #Blob/#Sharing_blob, instead of its `Basic_blob<std::allocator>`
326 * super-class; at the cost of a marginally larger RAM footprint (an added `Logger*`) you'll get a more convenient
327 * set of logging API knobs (namely `Logger*` stored permanently from construction; and there will be no need to
328 * supply it as arg to subsequent APIs when logging is desired).
329 *
330 * ### Implications of #S_IS_VANILLA_ALLOC being `false` ###
331 * This is introduced in our class doc header. Briefly however:
332 * - The underlying buffer, if any, and possibly some small aux data shall be allocated
333 * via #Allocator_raw, not simply the regular heap's `new[]` and/or `new`.
334 * - They shall be deallocated, if needed, via #Allocator_raw, not simply the regular heap's
335 * `delete[]` and/or `delete`.
336 * - Because storing a pointer to log::Logger may be meaningless when storing in an area allocated
337 * by some custom allocators (particularly SHM-heap ones), we shall not auto-TRACE-log on dealloc.
338 * - This caveat applies only if #S_SHARING is `true`.
339 *
340 * @internal
341 * - (If #S_SHARING)
342 * Accordingly the ref-counted buffer pointer buf_ptr() shall be a `boost::interprocess::shared_ptr`
343 * instead of a vanilla `shared_ptr`; the latter may be faster and more full-featured, but it is likely
344 * to internally store a raw `T*`; we need one that stores an `Allocator_raw::pointer` instead;
345 * e.g., a fancy-pointer type (like `boost::interprocess::offset_ptr`) when dealing with
346 * SHM-heaps (typically).
347 * - If #S_IS_VANILLA_ALLOC is `true`, then we revert to the faster/more-mature/full-featured
348 * `shared_ptr`. In particular it is faster (if used with `make_shared()` and similar) by storing
349 * the user buffer and aux data/ref-count in one contiguously-allocated buffer.
350 * - (If #S_SHARING is `false`)
351 * It's a typical `unique_ptr` template either way (because it supports non-raw-pointer storage out of the
352 * box) but:
353 * - A custom deleter is necessary similarly to the above.
354 * - Its `pointer` member alias crucially causes the `unique_ptr` to store
355 * an `Allocator_raw::pointer` instead of a `value_type*`.
356 *
357 * See #Buf_ptr doc header regarding the latter two bullet points.
358 */
359 static constexpr bool S_IS_VANILLA_ALLOC = std::is_same_v<Allocator_raw, std::allocator<value_type>>;
360
361 // Constructors/destructor.
362
363 /**
364 * Constructs blob with `zero() == true`. Note this means no buffer is allocated.
365 *
366 * @param alloc_raw_src
367 * Allocator to copy and store in `*this` for all buffer allocations/deallocations.
368 * If #Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime,
369 * and by definition it is to equal `Allocator_raw{}`.
370 */
371 Basic_blob(const Allocator_raw& alloc_raw_src = {});
372
373 /**
374 * Constructs blob with size() and capacity() equal to the given `size`, and `start() == 0`. Performance note:
375 * elements are not initialized to zero or any other value. A new over-arching buffer (pool) is therefore allocated.
376 *
377 * @see a similar ctor that takes Clear_on_alloc tag arg, if you *do* want the elements to be zero-initialized.
378 * Doing so is often faster than your own explicit `memset(X.data(), 0, X.size())` (or similar).
379 *
380 * Corner case note: a post-condition is `zero() == (size() == 0)`. Note, also, that the latter is *not* a universal
381 * invariant (see zero() doc header).
382 *
383 * Formally: If `size >= 1`, then a buffer is allocated; and the internal ownership ref-count is set to 1.
384 *
385 * @param size
386 * A non-negative desired size.
387 * @param logger_ptr
388 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
389 * in the event of buffer dealloc. Null allowed.
390 * @param alloc_raw_src
391 * Allocator to copy and store in `*this` for all buffer allocations/deallocations.
392 * If #Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime,
393 * and by definition it is to equal `Allocator_raw{}`.
394 */
395 explicit Basic_blob(size_type size, log::Logger* logger_ptr = nullptr,
396 const Allocator_raw& alloc_raw_src = {});
397
398 /**
399 * Identical to similar-sig ctor except, if `size > 0`, all `size` elements are performantly initialized to zero.
400 *
401 * Using this ctor form, instead of using the non-init one followed by your own explicit
402 * `memset(X.data(), 0, X.size())` (or similar), is likely to be significantly faster in at least some cases.
403 * It is *not* mere syntactic sugar.
404 *
405 * @see resize() and reserve() also have Clear_on_alloc forms.
406 *
407 * @param coa_tag
408 * API-choosing tag util::CLEAR_ON_ALLOC.
409 * @param size
410 * See similar ctor.
411 * @param logger_ptr
412 * See similar ctor.
413 * @param alloc_raw_src
414 * See similar ctor.
415 */
416 explicit Basic_blob(size_type size, Clear_on_alloc coa_tag, log::Logger* logger_ptr = nullptr,
417 const Allocator_raw& alloc_raw_src = {});
418
419 /**
420 * Move constructor, constructing a blob exactly internally equal to pre-call `moved_src`, while the latter is
421 * made to be exactly as if it were just constructed as `Basic_blob{nullptr}` (allocator subtleties aside).
422 * Performance: constant-time, at most copying a few scalars.
423 *
424 * @note It is important this be `noexcept`, if a copying counterpart to us exists in this class; otherwise
425 * (e.g.) `vector<Basic_blob>` will, on realloc, default to copying `*this`es around instead of moving:
426 * a terrible (in its stealthiness) perf loss.
427 *
428 * @param moved_src
429 * The object whose internals to move to `*this` and replace with a blank-constructed object's internals.
430 * @param logger_ptr
431 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
432 */
433 Basic_blob(Basic_blob&& moved_src, log::Logger* logger_ptr = nullptr) noexcept;
434
435 /**
436 * Copy constructor, constructing a blob logically equal to `src`. More formally, guarantees post-condition wherein
437 * `[this->begin(), this->end())` range is equal by value (including length) to `src` equivalent range but no memory
438 * overlap. A post-condition is `capacity() == size()`, and `start() == 0`. Performance: see copying assignment
439 * operator.
440 *
441 * Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we
442 * simply guarantee post-condition `src.empty() == this->empty()`.
443 *
444 * Corner case note 2: post-condition: `this->zero() == this->empty()`
445 * (note `src.zero()` state is not necessarily preserved in `*this`).
446 *
447 * Note: This is `explicit`, which is atypical for a copy constructor, to generate compile errors in hard-to-see
448 * (and often unintentional) instances of copying. Copies of Basic_blob should be quite intentional and explicit.
449 * (One example where one might forget about a copy would be when using a Basic_blob argument without `cref` or
450 * `ref` in a `bind()`; or when capturing by value, not by ref, in a lambda.)
451 *
452 * Formally: If `src.size() >= 1`, then a buffer is allocated; and the internal ownership ref-count is set to 1.
453 *
454 * @param src
455 * Object whose range of bytes of length `src.size()` starting at `src.begin()` is copied into `*this`.
456 * @param logger_ptr
457 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
458 * in the event of buffer dealloc. Null allowed.
459 */
460 explicit Basic_blob(const Basic_blob& src, log::Logger* logger_ptr = nullptr);
461
462 /**
463 * Destructor that drops `*this` ownership of the allocated internal buffer if any, as by make_zero();
464 * if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also.
465 * Recall that other `Basic_blob`s can only gain co-ownership via `share*()`; hence if one does not use that
466 * feature, the destructor will in fact deallocate the buffer (if any).
467 *
468 * Formally: If `!zero()`, then the internal ownership ref-count is decremented by 1, and if it reaches
469 * 0, then a buffer is deallocated.
470 *
471 * ### Logging ###
472 * This will not log, as it is not possible to pass a `Logger*` to a dtor without storing it (which we avoid
473 * for reasons outlined in class doc header). Use #Blob/#Sharing_blob if it is important to log in this situation
474 * (although there are some minor trade-offs).
475 */
476 ~Basic_blob();
477
478 // Methods.
479
480 /**
481 * Move assignment. Allocator subtleties aside and assuming `this != &moved_src` it is equivalent to:
482 * `make_zero(); this->swap(moved_src, logger_ptr)`. (If `this == &moved_src`, this is a no-op.)
483 *
484 * @param moved_src
485 * See swap().
486 * @param logger_ptr
487 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
488 * @return `*this`.
489 */
490 Basic_blob& assign(Basic_blob&& moved_src, log::Logger* logger_ptr = nullptr) noexcept;
491
492 /**
493 * Move assignment operator (no logging): equivalent to `assign(std::move(moved_src), nullptr)`.
494 *
495 * @note It is important this be `noexcept`, if a copying counterpart to us exists in this class; otherwise
496 * (e.g.) `vector<Basic_blob>` will, on realloc, default to copying `*this`es around instead of moving:
497 * a terrible (in its stealthiness) perf loss.
498 *
499 * @param moved_src
500 * See assign() (move overload).
501 * @return `*this`.
502 */
503 Basic_blob& operator=(Basic_blob&& moved_src) noexcept;
504
505 /**
506 * Copy assignment: assuming `(this != &src) && (!blobs_sharing(*this, src))`,
507 * makes `*this` logically equal to `src`; but behavior undefined if a reallocation would be necessary to do this.
508 * (If `this == &src`, this is a no-op. If not but `blobs_sharing(*this, src) == true`, see "Sharing blobs" below.
509 * This is assumed to not be the case in further discussion.)
510 *
511 * More formally:
512 * no-op if `this == &src`; "Sharing blobs" behavior if not so, but `src` shares buffer with `*this`; otherwise:
513 * Guarantees post-condition wherein `[this->begin(), this->end())` range is equal
514 * by value (including length) to `src` equivalent range but no memory overlap. Post-condition: `start() == 0`;
515 * capacity() either does not change or equals size(). capacity() growth is not allowed: behavior is undefined if
516 * `src.size()` exceeds pre-call `this->capacity()`, unless `this->zero() == true` pre-call. Performance: at most a
517 * memory area of size `src.size()` is copied and some scalars updated; a memory area of that size is allocated only
518 * if required; no ownership drop or deallocation occurs.
519 *
520 * Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we
521 * simply guarantee post-condition `src.empty() == this->empty()`.
522 *
523 * Corner case note 2: post-condition: if `this->empty() == true` then `this->zero()` has the same value as at entry
524 * to this call. In other words, no deallocation occurs, even if
525 * `this->empty() == true` post-condition holds; at most internally a scalar storing size is assigned 0.
526 * (You may force deallocation in that case via make_zero() post-call, but this means you'll have to intentionally
527 * perform that relatively slow op.)
528 *
529 * As with reserve(), IF pre-condition `zero() == false`, THEN pre-condition `src.size() <= this->capacity()`
530 * must hold, or behavior is undefined (i.e., as noted above, capacity() growth is not allowed except from 0).
531 * Therefore, NO REallocation occurs! However, also as with reserve(), if you want to intentionally allow such a
532 * REallocation, then simply first call make_zero(); then execute the
533 * `assign()` copy as planned. This is an intentional restriction forcing caller to explicitly allow a relatively
534 * slow reallocation op.
535 *
536 * Formally: If `src.size() >= 1`, and `zero() == true`, then a buffer is allocated; and the internal ownership
537 * ref-count is set to 1.
538 *
539 * ### Sharing blobs ###
540 * If `blobs_sharing(*this, src) == true`, meaning the target and source are operating on the same buffer, then
541 * behavior is undefined (assertion may trip). Rationale for this design is as follows. The possibilities were:
542 * -# Undefined behavior/assertion.
543 * -# Just adjust `this->start()` and `this->size()` to match `src`; continue co-owning the underlying buffer;
544 * copy no data.
545 * -# `this->make_zero()` -- losing `*this` ownership, while `src` keeps it -- and then allocate a new buffer
546 * and copy `src` data into it.
547 *
548 * Choosing between these is tough, as this is an odd corner case. 3 is not criminal, but generally no method
549 * ever forces make_zero() behavior, always leaving it to the user to consciously do, so it seems prudent to keep
550 * to that practice (even though this case is a bit different from, say, resize() -- since make_zero() here has
551 * no chance to deallocate anything, only decrement ref-count). 2 is performant and slick but suggests a special
552 * behavior in a corner case; this *feels* slightly ill-advised in a standard copy assignment operator. Therefore
553 * it seems better to crash-and-burn (choice 1), in the same way an attempt to resize()-higher a non-zero() blob would
554 * crash and burn, forcing the user to explicitly execute what they want. After all, 3 is done by simply calling
555 * make_zero() first; and 2 is possible with a simple resize() call; and the blobs_sharing() check is both easy
556 * and performant.
557 *
558 * @warning A post-condition is `start() == 0`; meaning `start()` at entry is ignored and reset to 0; the entire
559 * (co-)owned buffer -- if any -- is potentially used to store the copied values. In particular, if one
560 * plans to work on a sub-blob of a shared pool (see class doc header), then using this assignment op is
561 * not advised. Use emplace_copy() instead; or perform your own copy onto
562 * mutable_buffer().
563 *
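 * For example (sketch), to intentionally allow a growing copy:
 *
 *   @code
 *   flow::util::Blob_sans_log_context dst{16};
 *   flow::util::Blob_sans_log_context src_blob{64};
 *   // dst.assign(src_blob); // Would be undefined behavior: 64 > dst.capacity(), and growth is disallowed.
 *   dst.make_zero();         // Explicitly authorize the (re)allocation...
 *   dst.assign(src_blob);    // ...now fine: exactly 64 bytes are allocated and copied.
 *   @endcode
 *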
564 * @param src
565 * Object whose range of bytes of length `src.size()` starting at `src.begin()` is copied into `*this`.
566 * Behavior is undefined if pre-condition is `!zero()`, and this memory area overlaps at any point with the
567 * memory area of same size in `*this` (unless that size is zero -- a degenerate case).
568 * (This can occur only via the use of `share*()` -- otherwise `Basic_blob`s always refer to separate areas.)
569 * Also behavior undefined if pre-condition is `!zero()`, and `*this` (co-)owned buffer is too short to
570 * accommodate all `src.size()` bytes (assertion may trip).
571 * @param logger_ptr
572 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
573 * @return `*this`.
574 */
575 Basic_blob& assign(const Basic_blob& src, log::Logger* logger_ptr = nullptr);
576
577 /**
578 * Copy assignment operator (no logging): equivalent to `assign(src, nullptr)`.
579 *
580 * @param src
581 * See assign() (copy overload).
582 * @return `*this`.
583 */
584 Basic_blob& operator=(const Basic_blob& src);
585
586 /**
587 * Swaps the contents of this structure and `other`, or no-op if `this == &other`. Performance: at most this
588 * involves swapping a few scalars which is constant-time.
589 *
590 * @param other
591 * The other structure.
592 * @param logger_ptr
593 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
594 */
595 void swap(Basic_blob& other, log::Logger* logger_ptr = nullptr) noexcept;
596
597 /**
598 * Applicable to `!zero()` blobs, this returns an identical Basic_blob that shares (co-owns) `*this` allocated buffer
599 * along with `*this` and any other `Basic_blob`s also sharing it. Behavior is undefined (assertion may trip) if
600 * `zero() == true`: it is nonsensical to co-own nothing; just use the default ctor then.
601 *
602 * The returned Basic_blob is identical in that not only does it share the same memory area (hence same capacity())
603 * but has identical start(), size() (and hence begin() and end()). If you'd like to work on a different
604 * part of the allocated buffer, please consider `share_after_split*()` instead; the pool-of-sub-`Basic_blob`s
605 * paradigm suggested in the class doc header is probably best accomplished using those methods and not share().
606 *
607 * You can also adjust various sharing `Basic_blob`s via resize(), start_past_prefix_inc(), etc., directly -- after
608 * share() returns.
609 *
610 * Formally: Before this returns, the internal ownership ref-count (shared among `*this` and the returned
611 * Basic_blob) is incremented.
612 *
613 * @param logger_ptr
614 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
615 * @return An identical Basic_blob to `*this` that shares the underlying allocated buffer. See above.
616 */
617 Basic_blob share(log::Logger* logger_ptr = nullptr) const;
618
619 /**
620 * Applicable to `!zero()` blobs, this shifts `this->begin()` by `size` to the right without changing
621 * end(); and returns a Basic_blob containing the shifted-past values that shares (co-owns) `*this` allocated buffer
622 * along with `*this` and any other `Basic_blob`s also sharing it.
623 *
624 * More formally, this is identical to simply `auto b = share(); b.resize(size); start_past_prefix_inc(size);`.
625 *
626 * This is useful when working in the pool-of-sub-`Basic_blob`s paradigm. This and other `share_after_split*()`
627 * methods are usually better to use rather than share() directly (for that paradigm).
628 *
629 * Behavior is undefined (assertion may trip) if `zero() == true`.
630 *
631 * Corner case: If `size > size()`, then it is taken to equal size().
632 *
633 * Degenerate case: If `size` (or size(), whichever is smaller) is 0, then this method is identical to
634 * share(). Probably you don't mean to call share_after_split_left() in that case, but it's your decision.
635 *
636 * Degenerate case: If `size == size()` (and not 0), then `this->empty()` becomes `true` -- though
637 * `*this` continues to share the underlying buffer despite [begin(), end()) becoming empty. Typically this would
638 * only be done as, perhaps, the last iteration of some progressively-splitting loop; but it's your decision.
639 *
640 * Formally: Before this returns, the internal ownership ref-count (shared among `*this` and the returned
641 * Basic_blob) is incremented.
642 *
643 * @param size
644 * Desired size() of the returned Basic_blob; and the number of elements by which `this->begin()` is
645 * shifted right (hence start() is incremented). Any value exceeding size() is taken to equal it.
646 * @param logger_ptr
647 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
648 * @return The split-off-on-the-left Basic_blob that shares the underlying allocated buffer with `*this`. See above.
649 */
650 Basic_blob share_after_split_left(size_type size, log::Logger* logger_ptr = nullptr);
651
652 /**
653 * Identical to share_after_split_left(), except `this->end()` shifts by `size` to the left (instead of
654 * `this->begin()` to the right), and the split-off Basic_blob contains the *right-most* `size` elements
655 * (instead of the left-most).
656 *
657 * More formally, this is identical to simply
658 * `auto lt_size = size() - size; auto b = share(); resize(lt_size); b.start_past_prefix_inc(lt_size);`.
659 * Cf. share_after_split_left() formal definition and note the symmetry.
660 *
661 * All other characteristics are as written for share_after_split_left().
662 *
663 * @param size
664 * Desired size() of the returned Basic_blob; and the number of elements by which `this->end()` is
665 * shifted left (hence size() is decremented). Any value exceeding size() is taken to equal it.
666 * @param logger_ptr
667 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
668 * @return The split-off-on-the-right Basic_blob that shares the underlying allocated buffer with `*this`. See above.
669 */
670 Basic_blob share_after_split_right(size_type size, log::Logger* logger_ptr = nullptr);
671
672 /**
673 * Identical to successively performing `share_after_split_left(size)` until `this->empty() == true`;
674 * the resulting `Basic_blob`s are emitted via `emit_blob_func()` callback in the order they're split off from
675 * the left. In other words this partitions a non-zero() `Basic_blob` -- perhaps typically used as a pool --
676 * into equally-sized (except possibly the last one) adjacent sub-`Basic_blob`s.
677 *
678 * A post-condition is that `empty() == true` (`size() == 0`). In addition, if `headless_pool == true`,
679 * then `zero() == true` is also a post-condition; i.e., the pool is "headless": it disappears once all the
680 * resulting sub-`Basic_blob`s drop their ownership (as well as any other co-owning `Basic_blob`s).
681 * Otherwise, `*this` will continue to share the pool despite size() becoming 0. (Of course, even then, one is
682 * free to make_zero() or destroy `*this` -- the former, before returning, is all that `headless_pool == true`
683 * really adds.)
684 *
685 * Behavior is undefined (assertion may trip) if `empty() == true` (including if `zero() == true`, but even if not)
686 * or if `size == 0`.
687 *
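 * For instance, a sketch partitioning a 10-byte pool into 4 + 4 + 2 byte sub-blobs:
 *
 *   @code
 *   flow::util::Sharing_blob_sans_log_context pool{10};
 *   std::vector<flow::util::Sharing_blob_sans_log_context> parts;
 *   pool.share_after_split_equally(4, true, // headless_pool == true: `pool.zero()` is true upon return.
 *                                  [&](auto&& blob_moved) { parts.emplace_back(std::move(blob_moved)); });
 *   // parts[0] and parts[1] are 4 bytes each; parts[2] is 2 bytes; all co-own the one underlying buffer.
 *   @endcode
 *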
688 * @see share_after_split_equally_emit_seq() for a convenience wrapper to emit to, say, `vector<Basic_blob>`.
689 * @see share_after_split_equally_emit_ptr_seq() for a convenience wrapper to emit to, say,
690 * `vector<unique_ptr<Basic_blob>>`.
691 *
692 * @tparam Emit_blob_func
693 * A callback compatible with signature `void F(Basic_blob&& blob_moved)`.
694 * @param size
695 * Desired size() of each successive out-Basic_blob, except the last one. Behavior undefined (assertion may
696 * trip) if not positive.
697 * @param headless_pool
698 * Whether to perform `this->make_zero()` just before returning. See above.
699 * @param emit_blob_func
700 * `F` such that `F(std::move(blob))` shall be called with each successive sub-Basic_blob.
701 * @param logger_ptr
702 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
703 */
704 template<typename Emit_blob_func>
705 void share_after_split_equally(size_type size, bool headless_pool, Emit_blob_func&& emit_blob_func,
706 log::Logger* logger_ptr = nullptr);
707
708 /**
709 * share_after_split_equally() wrapper that places `Basic_blob`s into the given container via
710 * `push_back()`.
711 *
712 * @tparam Blob_container
713 * Something with method compatible with `push_back(Basic_blob&& blob_moved)`.
714 * @param size
715 * See share_after_split_equally().
716 * @param headless_pool
717 * See share_after_split_equally().
718 * @param out_blobs
719 * `out_blobs->push_back()` shall be executed 1+ times.
720 * @param logger_ptr
721 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
722 */
723 template<typename Blob_container>
724 void share_after_split_equally_emit_seq(size_type size, bool headless_pool, Blob_container* out_blobs,
725 log::Logger* logger_ptr = nullptr);
726
727 /**
728 * share_after_split_equally() wrapper that places `Ptr<Basic_blob>`s into the given container via
729 * `push_back()`, where the type `Ptr<>` is determined via `Blob_ptr_container::value_type`.
730 *
731 * @tparam Blob_ptr_container
732 * Something with method compatible with `push_back(Ptr&& blob_ptr_moved)`,
733 * where `Ptr` is `Blob_ptr_container::value_type`, and `Ptr{new Basic_blob}` can be created.
734 * `Ptr` is to be a smart pointer type such as `unique_ptr<Basic_blob>` or `shared_ptr<Basic_blob>`.
735 * @param size
736 * See share_after_split_equally().
737 * @param headless_pool
738 * See share_after_split_equally().
739 * @param out_blobs
740 * `out_blobs->push_back()` shall be executed 1+ times.
741 * @param logger_ptr
742 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
743 */
744 template<typename Blob_ptr_container>
745 void share_after_split_equally_emit_ptr_seq(size_type size, bool headless_pool, Blob_ptr_container* out_blobs,
746 log::Logger* logger_ptr = nullptr);
747
748 /**
749 * Replaces logical contents with a copy of the given non-overlapping area anywhere in memory. More formally:
750 * This is exactly equivalent to copy-assignment (`*this = b`), where `const Basic_blob b` owns exactly
751 * the memory area given by `src`. However, note the newly relevant restriction documented for `src` parameter below
752 * (no overlap allowed).
753 *
754 * All characteristics are as written for the copy assignment operator, including "Formally" and the warning.
755 *
756 * @param src
757 * Source memory area. Behavior is undefined if pre-condition is `!zero()`, and this memory area overlaps
758 * at any point with the memory area of same size at `begin()`. Otherwise it can be anywhere at all.
759 * Also behavior undefined if pre-condition is `!zero()`, and `*this` (co-)owned buffer is too short to
760 * accommodate all `src.size()` bytes (assertion may trip).
761 * @param logger_ptr
762 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
763 * @return Number of elements copied, namely `src.size()`, or simply size().
764 */
765 size_type assign_copy(const boost::asio::const_buffer& src, log::Logger* logger_ptr = nullptr);
766
767 /**
768 * Copies `src` buffer directly onto equally sized area within `*this` at location `dest`; `*this` must have
769 * sufficient size() to accommodate all of the data copied. Performance: The only operation performed is a copy from
770 * `src` to `dest` using the fastest reasonably available technique.
771 *
772 * None of the following changes: zero(), empty(), size(), capacity(), begin(), end(); nor the location (or size) of
773 * internally stored buffer.
774 *
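 * E.g., a sketch (assuming the usual `boost::asio::buffer()` helper) copying a 4-byte header into the front of
 * an already-sized blob:
 *
 *   @code
 *   flow::util::Blob_sans_log_context blob{64};
 *   const std::array<uint8_t, 4> hdr{1, 2, 3, 4};
 *   auto it = blob.emplace_copy(blob.begin(), boost::asio::buffer(hdr)); // it == blob.begin() + 4.
 *   @endcode
 *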
775 * @param dest
776 * Destination location within this blob. This must be in `[begin(), end()]`; and,
777 * unless `src.size() == 0`, must not equal end() either.
778 * @param src
779 * Source memory area. Behavior is undefined if this memory area overlaps
780 * at any point with the memory area of same size at `dest` (unless that size is zero -- a degenerate
781 * case). Otherwise it can be anywhere at all, even partially or fully within `*this`.
782 * Also behavior undefined if `*this` blob is too short to accommodate all `src.size()` bytes
783 * (assertion may trip).
784 * @param logger_ptr
785 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
786 * @return Location in this blob just past the last element copied; `dest` if none copied; in particular end() is a
787 * possible value.
788 */
789 Iterator emplace_copy(Const_iterator dest, const boost::asio::const_buffer& src, log::Logger* logger_ptr = nullptr);
790
791 /**
792 * The opposite of emplace_copy() in every way, copying a sub-blob onto a target memory area. Note that the size
793 * of that target buffer (`dest.size()`) determines how much of `*this` is copied.
794 *
795 * @param src
796 * Source location within this blob. This must be in `[begin(), end()]`; and,
797 * unless `dest.size() == 0`, must not equal end() either.
798 * @param dest
799 * Destination memory area. Behavior is undefined if this memory area overlaps
800 * at any point with the memory area of same size at `src` (unless that size is zero -- a degenerate
801 * case). Otherwise it can be anywhere at all, even partially or fully within `*this`.
802 * Also behavior undefined if `src + dest.size()` is past end of `*this` blob (assertion may trip).
803 * @param logger_ptr
804 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
805 * @return Location in this blob just past the last element copied; `src` if none copied; end() is a possible value.
806 */
807 Const_iterator sub_copy(Const_iterator src, const boost::asio::mutable_buffer& dest,
808 log::Logger* logger_ptr = nullptr) const;
809
810 /**
811 * Returns number of elements stored, namely `end() - begin()`. If zero(), this is 0; but if this is 0, then
812 * zero() may or may not be `true`.
813 *
814 * @return See above.
815 */
816 size_type size() const;
817
818 /**
819 * Returns the offset between `begin()` and the start of the internally allocated buffer. If zero(), this is 0; but
820 * if this is 0, then zero() may or may not be `true`.
821 *
822 * @return See above.
823 */
824 size_type start() const;
825
826 /**
827 * Returns `size() == 0`. If zero(), this is `true`; but if this is `true`, then
828 * zero() may or may not be `true`.
829 *
830 * @return See above.
831 */
832 bool empty() const;
833
834 /**
835 * Returns the number of elements in the internally allocated buffer, which is 1 or more; or 0 if no buffer
836 * is internally allocated. Some formal invariants: `(capacity() == 0) == zero()`; `start() + size() <= capacity()`.
837 *
838 * See important notes on capacity() policy in the class doc header.
839 *
840 * @return See above.
841 */
842 size_type capacity() const;
843
844 /**
845 * Returns `false` if a buffer is allocated and owned; `true` otherwise. See important notes on how this relates
846 * to empty() and capacity() in those methods' doc headers. See also other important notes in class doc header.
847 *
848 * Note that zero() is `true` for any default-constructed Basic_blob.
849 *
850 * @return See above.
851 */
852 bool zero() const;
853
854 /**
855 * Ensures that an internal buffer of at least `capacity` elements is allocated and owned; disallows growing an
856 * existing buffer; never shrinks an existing buffer; if a buffer is allocated, it is no larger than `capacity`.
857 *
858 * @see a similar overload that takes Clear_on_alloc tag arg, if you *do* want the elements to be zero-initialized.
859 * Doing so is often faster than your own explicit `memset(X.data(), 0, X.size())` (or similar).
860 *
861 * reserve() may be called directly but should be formally understood to be called by resize(), assign_copy(),
862 * copy assignment operator, copy constructor. In all cases, the value passed to reserve() is exactly the size
863 * needed to perform the particular task -- no more (and no less). As such, reserve() policy is key to knowing
864 * how the class behaves elsewhere. See class doc header for discussion in larger context.
865 *
866 * Performance/behavior: If zero() is true pre-call, `capacity` sized buffer is allocated. Otherwise,
867 * no-op if `capacity <= capacity()` pre-call. Behavior is undefined if `capacity > capacity()` pre-call
868 * (again, unless zero(), meaning `capacity() == 0`). In other words, no deallocation occurs, and an allocation
869 * occurs only if necessary. Growing an existing buffer is disallowed. However, if you want to intentionally
870 * REallocate, then simply first check for `zero() == false` and call make_zero() if that holds; then execute the
871 * `reserve()` as planned. This is an intentional restriction forcing caller to explicitly allow a relatively slow
872 * reallocation op. You'll note a similar suggestion for the other reserve()-using methods/operators.
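 *
 * E.g. (sketch):
 *
 *   @code
 *   flow::util::Blob_sans_log_context blob{128}; // capacity() == 128.
 *   blob.reserve(64);      // No-op: 64 <= capacity().
 *   // blob.reserve(256);  // Would be undefined behavior: growth of an existing buffer is disallowed.
 *   blob.make_zero();
 *   blob.reserve(256);     // OK: zero() was true; exactly 256 bytes are allocated.
 *   @endcode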
873 *
874 * Formally: If `capacity >= 1`, and `zero() == true`, then a buffer is allocated; and the internal ownership
875 * ref-count is set to 1.
876 *
877 * @param capacity
878 * Non-negative desired minimum capacity.
879 * @param logger_ptr
880 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
881 * in the event of buffer dealloc. Null allowed.
882 */
883 void reserve(size_type capacity, log::Logger* logger_ptr = nullptr);
884
885 /**
886 * Identical to similar-sig overload except, if a `capacity`-sized buffer is allocated, then all `capacity` elements are
887 * performantly initialized to zero.
888 *
889 * Using this overload, instead of using the non-init one followed by your own explicit
890 * `memset(X.data(), 0, X.size())` (or similar), is likely to be significantly faster in at least some cases.
891 * It is *not* mere syntactic sugar.
892 *
893 * @see resize() and `size`-taking ctor also have Clear_on_alloc forms.
894 *
895 * @param coa_tag
896 * API-choosing tag util::CLEAR_ON_ALLOC.
897 * @param capacity
898 * See similar overload.
899 * @param logger_ptr
900 * See similar overload.
901 */
902 void reserve(size_type capacity, Clear_on_alloc coa_tag, log::Logger* logger_ptr = nullptr);
903
904 /**
905 * Guarantees post-condition `zero() == true` by dropping `*this` ownership of the allocated internal buffer if any;
906 * if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also. Recall that
907 * other `Basic_blob`s can only gain co-ownership via `share*()`; hence if one does not use that feature, make_zero()
908 * will in fact deallocate the buffer (if any).
909 *
910 * That post-condition can also be thought of as `*this` becoming indistinguishable from a default-constructed
911 * Basic_blob.
912 *
913 * Performance/behavior: Assuming zero() is not already `true`, this will deallocate the capacity()-sized buffer
914 * (unless it is still co-owned via share()) and store a null pointer.
915 *
916 * The many operations that involve reserve() in their doc headers will explain importance of this method:
917 * As a rule, no method except make_zero() allows one to request an ownership-drop or deallocation of the existing
918 * buffer, even if this would be necessary for a larger buffer to be allocated. Therefore, if you intentionally want
919 * to allow such an operation, you CAN, but then you MUST explicitly call make_zero() first.
920 *
921 * Formally: If `!zero()`, then the internal ownership ref-count is decremented by 1, and if it reaches
922 * 0, then a buffer is deallocated.
923 *
924 * @param logger_ptr
925 * The Logger implementation to use in *this* routine (synchronously) only. Null allowed.
926 */
927 void make_zero(log::Logger* logger_ptr = nullptr);
928
929 /**
930 * Guarantees post-condition `size() == size` and `start() == start`; no values in pre-call range `[begin(), end())`
931 * are changed; any values *added* to that range by the call are not initialized to zero or otherwise.
932 *
933 * From other invariants and behaviors described, you'll realize
934 * this essentially means `reserve(size + start)` followed by saving `size` and `start` into internal size members.
935 * The various implications of this can be deduced by reading the related methods' doc headers. The key is to
936 * understand how reserve() works, including what it disallows (growth in size of an existing buffer).
937 *
938 * Formally: If `size >= 1`, and `zero() == true`, then a buffer is allocated; and the internal ownership
939 * ref-count is set to 1.
940 *
941 * @see a similar overload that takes Clear_on_alloc tag arg, if you *do* want the elements to be zero-initialized.
942 * Doing so is often faster than your own explicit `memset(X.data(), 0, X.size())` (or similar).
943 *
944 * ### Leaving start() unmodified ###
945 * `start` is taken to be the value of arg `start_or_unchanged`; unless the latter is set to special value
946 * #S_UNCHANGED; in which case `start` is taken to equal start(). Since the default is indeed #S_UNCHANGED,
947 * the oft-encountered expression `resize(N)` will adjust only size() and leave start() unmodified -- often the
948 * desired behavior.
949 *
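 * For instance (an illustrative sketch; `b` is a hypothetical Basic_blob with sufficient capacity()):
 *
 * ~~~
 * b.resize(10);    // size() becomes 10; start() remains whatever it was.
 * b.resize(10, 0); // size() becomes 10; start() is reset to 0.
 * ~~~
 *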
950 * @param size
951 * Non-negative desired value for size().
952 * @param start_or_unchanged
953 * Non-negative desired value for start(); or special value #S_UNCHANGED. See above.
954 * @param logger_ptr
955 * The Logger implementation to use in *this* routine (synchronously) or asynchronously when TRACE-logging
956 * in the event of buffer dealloc. Null allowed.
957 */
958 void resize(size_type size, size_type start_or_unchanged = S_UNCHANGED, log::Logger* logger_ptr = nullptr);
959
960 /**
961 * Identical to the similar-sig overload except, if a buffer ends up being allocated, then all `size` elements are
962 * performantly initialized to zero.
963 *
964 * Using this overload, instead of using the non-init one followed by your own explicit
965 * `memset(X.data(), 0, X.size())` (or similar), is likely to be significantly faster in at least some cases.
966 * It is *not* mere syntactic sugar.
967 *
968 * @see reserve() and `size`-taking ctor also have Clear_on_alloc forms.
969 *
970 * @param coa_tag
971 * API-choosing tag util::CLEAR_ON_ALLOC.
972 * @param size
973 * See similar overload.
974 * @param start_or_unchanged
975 * See similar overload.
976 * @param logger_ptr
977 * See similar overload.
978 */
979 void resize(size_type size, Clear_on_alloc coa_tag,
980             size_type start_or_unchanged = S_UNCHANGED, log::Logger* logger_ptr = nullptr);
981
982 /**
983 * Restructures blob to consist of an internally allocated buffer and a `[begin(), end())` range starting at
984 * offset `prefix_size` within that buffer. More formally, it is a simple resize() wrapper that ensures
985 * the internally allocated buffer remains unchanged or, if none is currently large enough to
986 * store `prefix_size` elements, is allocated to be of size `prefix_size`; and that `start() == prefix_size`.
987 *
988 * All of resize()'s behavior, particularly any restrictions about capacity() growth, applies, so in particular
989 * remember you may need to first make_zero() if the internal buffer would need to be REallocated to satisfy the
990 * above requirements.
991 *
992 * In practice, with current reserve() (and thus resize()) restrictions -- which are intentional -- this method is
993 * most useful if you already have a Basic_blob with internally allocated buffer of size *at least*
994 * `n == size() + start()` (and `start() == 0` for simplicity), and you'd like to treat this buffer as containing
995 * no-longer-relevant prefix of length S (which becomes new value for start()) and have size() be readjusted down
996 * accordingly, while `start() + size() == n` remains unchanged. If the buffer also contains irrelevant data
997 * *past* a certain offset N, you can first make it irrelevant via `resize(N)` (then call `start_past_prefix(S)`
998 * as just described):
999 *
1000 * ~~~
1001 * Basic_blob b;
1002 * // ...
1003 * // b now has start() == 0, size() == M.
1004 * // We want all elements outside [S, N] to be irrelevant, where S > 0, N < M.
1005 * // (E.g., first S are a frame prefix, while all bytes past N are a frame postfix, and we want just the frame
1006 * // without any reallocating or copying.)
1007 * b.resize(N);
1008 * b.start_past_prefix(S);
1009 * // Now, [b.begin(), b.end()) are the frame bytes, and no copying/allocation/deallocation has occurred.
1010 * ~~~
1011 *
1012 * @param prefix_size
1013 * Desired prefix length. `prefix_size == 0` is allowed and is a degenerate case equivalent to:
1014 * `resize(start() + size(), 0)`.
1015 */
1016 void start_past_prefix(size_type prefix_size);
1017
1018 /**
1019 * Like start_past_prefix() but shifts the *current* prefix position by the given *incremental* value
1020 * (positive or negative). Identical to `start_past_prefix(start() + prefix_size_inc)`.
1021 *
1022 * Behavior is undefined for negative `prefix_size_inc` whose magnitude exceeds start() (assertion may trip).
1023 *
1024 * Behavior is undefined in case of positive `prefix_size_inc` that results in overflow.
1025 *
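 * For instance (illustrative): with `start() == 5`, `start_past_prefix_inc(3)` yields `start() == 8`,
 * while `start_past_prefix_inc(-2)` yields `start() == 3`; in each case size() is adjusted in the
 * opposite direction, so that `start() + size()` remains constant.
 *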
1026 * @param prefix_size_inc
1027 * Positive, negative (or zero) increment, so that start() is changed to `start() + prefix_size_inc`.
1028 */
1029 void start_past_prefix_inc(difference_type prefix_size_inc);
1030
1031 /**
1032 * Equivalent to `resize(0, start())`.
1033 *
1034 * Note that the value returned by start() will *not* change due to this call. Only size() (and the corresponding
1035 * internally stored datum) may change. If one desires to reset start(), use resize() directly (but if one
1036 * plans to work on a sub-Basic_blob of a shared pool -- see class doc header -- please think twice first).
1037 */
1038 void clear();
1039
1040 /**
1041 * Performs the minimal number of operations to make range `[begin(), end())` unchanged except for lacking
1042 * sub-range `[first, past_last)`.
1043 *
1044 * Performance/behavior: At most, this copies the range `[past_last, end())` to area starting at `first`;
1045 * and then adjusts internally stored size member.
1046 *
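 * Illustrative sketch (`b` being a hypothetical Basic_blob holding 10 bytes):
 *
 * ~~~
 * // Erase the 3 elements at offsets [2, 5) of [begin(), end()).
 * const auto it = b.erase(b.begin() + 2, b.begin() + 5);
 * // b.size() is now 7; `it` equals `b.begin() + 2`; no allocation, deallocation, or capacity() change occurred.
 * ~~~
 *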
1047 * @param first
1048 * Pointer to first element to erase. It must be dereferenceable, or behavior is undefined (assertion may
1049 * trip). Corollary: invoking `erase()` when `empty() == true` is undefined behavior.
1050 * @param past_last
1051 * Pointer to one past the last element to erase. If `past_last <= first`, call is a no-op.
1052 * @return Iterator equal to `first`. (This matches standard expectation for container `erase()` return value:
1053 * iterator to element past the last one erased. In this contiguous sequence that simply equals `first`,
1054 * since everything starting with `past_last` slides left onto `first`. In particular:
1055 * If `past_last` equaled `end()` at entry, then the new end() is returned: everything starting with
1056 * `first` was erased, and thus `first == end()` now. If nothing is erased, `first` is still returned.)
1057 */
1059
1060 /**
1061 * Returns pointer to mutable first element; or end() if empty(). Null is a possible value in the latter case.
1062 *
1063 * @return Pointer, possibly null.
1064 */
1066
1067 /**
1068 * Returns pointer to immutable first element; or end() if empty(). Null is a possible value in the latter case.
1069 *
1070 * @return Pointer, possibly null.
1071 */
1073
1074 /**
1075 * Equivalent to const_begin().
1076 *
1077 * @return Pointer, possibly null.
1078 */
1080
1081 /**
1082 * Returns pointer one past mutable last element; empty() is possible. Null is a possible value in the latter case.
1083 *
1084 * @return Pointer, possibly null.
1085 */
1087
1088 /**
1089 * Returns pointer one past immutable last element; empty() is possible. Null is a possible value in the latter case.
1090 *
1091 * @return Pointer, possibly null.
1092 */
1094
1095 /**
1096 * Equivalent to const_end().
1097 *
1098 * @return Pointer, possibly null.
1099 */
1101
1102 /**
1103 * Returns reference to immutable first element. Behavior is undefined if empty().
1104 *
1105 * @return See above.
1106 */
1107 const value_type& const_front() const;
1108
1109 /**
1110 * Returns reference to immutable last element. Behavior is undefined if empty().
1111 *
1112 * @return See above.
1113 */
1114 const value_type& const_back() const;
1115
1116 /**
1117 * Equivalent to const_front().
1118 *
1119 * @return See above.
1120 */
1121 const value_type& front() const;
1122
1123 /**
1124 * Equivalent to const_back().
1125 *
1126 * @return See above.
1127 */
1128 const value_type& back() const;
1129
1130 /**
1131 * Returns reference to mutable first element. Behavior is undefined if empty().
1132 *
1133 * @return See above.
1134 */
1135 value_type& front();
1136
1137 /**
1138 * Returns reference to mutable last element. Behavior is undefined if empty().
1139 *
1140 * @return See above.
1141 */
1142 value_type& back();
1143
1144 /**
1145 * Equivalent to const_begin().
1146 *
1147 * @return Pointer, possibly null.
1148 */
1149 value_type const * const_data() const;
1150
1151 /**
1152 * Equivalent to begin().
1153 *
1154 * @return Pointer, possibly null.
1155 */
1156 value_type* data();
1157
1158 /**
1159 * Synonym of const_begin(). Exists as standard container method (hence the odd formatting).
1160 *
1161 * @return See const_begin().
1162 */
1164
1165 /**
1166 * Synonym of const_end(). Exists as standard container method (hence the odd formatting).
1167 *
1168 * @return See const_end().
1169 */
1171
1172 /**
1173 * Returns `true` if and only if: `this->derefable_iterator(it) || (it == this->const_end())`.
1174 *
1175 * @param it
1176 * Iterator/pointer to check.
1177 * @return See above.
1178 */
1180
1181 /**
1182 * Returns `true` if and only if the given iterator points to an element within this blob's size() elements.
1183 * In particular, this is always `false` if empty(); and also when `it == this->const_end()`.
1184 *
1185 * @param it
1186 * Iterator/pointer to check.
1187 * @return See above.
1188 */
1190
1191 /**
1192 * Convenience accessor returning an immutable boost.asio buffer "view" into the entirety of the blob.
1193 * Equivalent to `const_buffer(const_data(), size())`.
1194 *
1195 * @return See above.
1196 */
1197 boost::asio::const_buffer const_buffer() const;
1198
1199 /**
1200 * Same as const_buffer() but the returned view is mutable.
1201 * Equivalent to `mutable_buffer(data(), size())`.
1202 *
1203 * @return See above.
1204 */
1205 boost::asio::mutable_buffer mutable_buffer();
1206
1207 /**
1208 * Returns a copy of the internally cached #Allocator_raw as set by a constructor or assign() or
1209 * assignment-operator, whichever happened last.
1210 *
1211 * @return See above.
1212 */
1213 Allocator_raw get_allocator() const;
1214
1215protected:
1216 // Constants.
1217
1218 /// Our flow::log::Component.
1220
1221 // Methods.
1222
1223 /**
1224 * Impl of share_after_split_equally() but capable of emitting not just `*this` type (`Basic_blob<...>`)
1225 * but any sub-class (such as `Blob`/`Sharing_blob`) provided a functor like share_after_split_left() but returning
1226 * an object of that appropriate type.
1227 *
1228 * @tparam Emit_blob_func
1229 * See share_after_split_equally(); however it is to take the type to emit, which can
1230 * be `*this` Basic_blob or a sub-class.
1231 * @tparam Share_after_split_left_func
1232 * A callback with signature identical to share_after_split_left() but returning
1233 * the same type emitted by `Emit_blob_func`.
1234 * @param size
1235 * See share_after_split_equally().
1236 * @param headless_pool
1237 * See share_after_split_equally().
1238 * @param emit_blob_func
1239 * See `Emit_blob_func`.
1240 * @param logger_ptr
1241 * See share_after_split_equally().
1242 * @param share_after_split_left_func
1243 * See `Share_after_split_left_func`.
1244 */
1245 template<typename Emit_blob_func, typename Share_after_split_left_func>
1247 Emit_blob_func&& emit_blob_func,
1248 log::Logger* logger_ptr,
1249 Share_after_split_left_func&& share_after_split_left_func);
1250
1251private:
1252
1253 // Types.
1254
1255 /**
1256 * Internal deleter functor used if and only if #S_IS_VANILLA_ALLOC is `false` and therefore only with
1257 * #Buf_ptr being `boost::interprocess::shared_ptr` or
1258 * deleter-parameterized `unique_ptr`. Basically its task is to undo the
1259 * `alloc_raw().allocate()` call made when allocating a buffer in reserve(); the result of that call is
1260 * passed-to `shared_ptr::reset()` or `unique_ptr::reset()`; as is alloc_raw() (for any aux allocation needed,
1261 * but only for `shared_ptr` -- `unique_ptr` needs no aux data); as is
1262 * an instance of this Deleter_raw (to specifically dealloc the buffer when the ref-count reaches 0).
1263 * (In the `unique_ptr` case there is no ref-count per se; or one can think of it as a ref-count that equals 1.)
1264 *
1265 * Note that Deleter_raw is used only to dealloc the buffer actually controlled by the `shared_ptr` group
1266 * or `unique_ptr`. `shared_ptr` will use the #Allocator_raw directly to dealloc aux data. (We guess Deleter_raw
1267 * is a separate argument to `shared_ptr` to support array deletion; `boost::interprocess::shared_ptr` does not
1268 * provide built-in support for `U[]` as the pointee type; but the deleter can do whatever it wants/needs.)
1269 *
1270 * Note: this is not used except with custom #Allocator_raw. With `std::allocator` the usual default `delete[]`
1271 * behavior is fine.
1272 *
1273 * ### How to delete using it ###
1274 * `operator()()` gets invoked by smart-pointer machinery; the pointer to delete is passed to it as an arg.
1275 * So we need not memorize it ourselves.
1276 *
1277 * ### How to initialize ###
1278 * Before `operator()()` will work, it has to be loaded with the buffer size and allocator (both items needed
1279 * by that operator in addition to the pointer itself). There are 3 ways to do this:
1280 * - Construct via 2-arg ctor that takes those values. As usual the allocator object is to be copied (if it's
1281 * even stateful; else that's a no-op).
1282 * - Move-construct from already-initialized Deleter_raw.
1283 * - First, default-construct. Then, move-assign from an existing already-initialized other Deleter_raw.
1284 *
1285 * For ~brevity we won't fully enumerate who uses these and when; but mainly wanted to
1286 * draw your attention to that last possibility. Basic_blob::reserve_impl(), in the non-sharing, non-vanilla-alloc
1287 * case, when it does need to allocate `buf_ptr()`, will exercise that use-case. E.g., Basic_blob
1288 * default-ctor would default-ct a Deleter_raw; then the next `reserve_impl()`
1289 * would 2-arg-ct Deleter_raw and then move-assign it onto the default-cted Deleter_raw.
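 *
 * Illustrative sketch of that last pattern (`alloc` and `n` being hypothetical stand-ins for the allocator
 * and element count involved):
 *
 * ~~~
 * Deleter_raw d; // Default-cted: fine only while the owning smart-pointer stays null.
 * d = Deleter_raw{alloc, n}; // Loaded: operator()() can now deallocate an n-element buffer via a copy of `alloc`.
 * ~~~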
1290 */
1291 class Deleter_raw
1292 {
1293 public:
1294 // Types.
1295
1296 /**
1297 * Short-hand for the allocator's pointer type, pointing to Basic_blob::value_type.
1298 * This may or may not simply be `value_type*`; in cases including SHM-storing allocators without
1299 * a constant cross-process vaddr scheme it needs to be a fancy-pointer type instead (e.g.,
1300 * `boost::interprocess::offset_ptr<value_type>`).
1301 */
1302 using Pointer_raw = typename std::allocator_traits<Allocator_raw>::pointer;
1303
1304 /// For `boost::interprocess::shared_ptr` and `unique_ptr` compliance (hence the irregular capitalization).
1305 using pointer = Pointer_raw;
1306
1307 // Constructors/destructor.
1308
1309 /**
1310 * Default ctor: Deleter must never be invoked to delete anything in this state; suitable only for a null
1311 * smart-pointer. Without this a `unique_ptr<..., Deleter_raw>` cannot be default-cted.
1312 */
1313 Deleter_raw();
1314
1315 /**
1316 * Constructs deleter by memorizing the allocator (of zero size if #Allocator_raw is stateless, usually)
1317 * used to allocate whatever shall be passed-to `operator()()`; and the size (in # of `value_type`s)
1318 * of the buffer allocated there. The latter is required, at least technically, because
1319 * `Allocator_raw::deallocate()` requires the value count, equal to that when `allocate()` was called,
1320 * to be passed-in. Many allocators probably don't really need this, as array size is typically recorded
1321 * invisibly near the array itself, but formally this is not guaranteed for all allocators.
1322 *
1323 * @param alloc_raw_src
1324 * Allocator to copy and store.
1325 * @param buf_sz
1326 * See above.
1327 */
1328 explicit Deleter_raw(const Allocator_raw& alloc_raw_src, size_type buf_sz);
1329
1330 /**
1331 * Move-construction which may be required when we are used in `unique_ptr`. This is equivalent to
1332 * default-construction followed by move-assignment. See move-assignment operator doc header regarding
1333 * why we are defining both of these.
1334 *
1335 * @param moved_src
1336 * Moved guy. For cleanliness it becomes as-if default-cted.
1337 */
1338 Deleter_raw(Deleter_raw&& moved_src);
1339
1340 /**
1341 * Copy-construction which may be required when we are used in `boost::interprocess::shared_ptr` which
1342 * as of this writing requires copyable deleter in its `.reset()` and other places.
1343 *
1344 * @param src
1345 * Copied guy.
1346 */
1347 Deleter_raw(const Deleter_raw& src);
1348
1349 // Methods.
1350
1351 /**
1352 * Move-assignment which is required when we are used in `unique_ptr`. User might invoke move-construction
1353 * or move-assignment of the Basic_blob; this reduces to Basic_blob::assign() (move overload); which will
1354 * do a swap -- that ultimately will move the stored Deleter_raw up to a few times.
1355 *
1356 * As of this writing we also manually overwrite `.get_deleter()` in one case in Basic_blob::reserve_impl();
1357 * so this is useful for that too.
1358 *
1359 * @param moved_src
1360 * Moved guy. For cleanliness it becomes as-if default-cted (unless it is the same object as `*this`).
1361 * @return `*this`.
1362 */
1363 Deleter_raw& operator=(Deleter_raw&& moved_src);
1364
1365 /**
1366 * Copy-assignment which is required when we are used in `boost::interprocess::shared_ptr` which
1367 * as of this writing requires copyable deleter in its `.reset()` and other places.
1368 *
1369 * @param src
1370 * Copied guy.
1371 * @return `*this`.
1372 */
1373 Deleter_raw& operator=(const Deleter_raw& src);
1374
1375 /**
1376 * Deallocates using `Allocator_raw::deallocate()`, passing-in the supplied pointer (to `value_type`) `to_delete`
1377 * and the number of `value_type`s to delete (from ctor).
1378 *
1379 * @param to_delete
1380 * Pointer to buffer to delete as supplied by `shared_ptr` or `unique_ptr` internals.
1381 */
1382 void operator()(Pointer_raw to_delete);
1383
1384 private:
1385 // Data.
1386
1387 /**
1388 * See ctor: the allocator that `operator()()` shall use to deallocate. For stateless allocators this
1389 * typically has size zero.
1390 *
1391 * ### What's with `optional<>`? ###
1392 * ...Okay, so actually this has size (whatever `optional` adds, probably a `bool`) + `sizeof(Allocator_raw)`,
1393 * the latter being indeed zero for stateless allocators. Why use `optional<>` though? Two reasons at least:
1394 * - Stateful allocators often cannot be default-cted; and our own default ctor requires that
1395 * #m_alloc_raw is initialized to *something*... even though it (by default ctor contract) will never be
1396 * accessed via `operator()()` in that form. Bottom line is a null smart-pointer needs a default-cted `*this`
1397 * for at least some smart-pointer types (namely `unique_ptr` at least; probably not `shared_ptr`).
1398 * - Yay, `optional` can be uninitialized.
1399 * - Some allocators (such as `boost::interprocess::allocator`) are not assignable, only
1400 * copy-constructible, while we need to be move-assignable (see doc header for move-assignment; spoiler alert:
1401 * Basic_blob::reserve_impl() may need that).
1402 * - Yay, `optional<T>` has `.emplace()` which will construct (including copy-construct) a `T`.
1403 *
1404 * It is slightly annoying that we waste the extra space for `optional` internals even when `Allocator_raw`
1405 * is stateless (and it is often stateless!). Plus, when #Buf_ptr is `shared_ptr` instead of `unique_ptr`
1406 * these bullet points probably do not apply. Probably some meta-programming thing could be done to avoid even this
1407 * overhead, but in my (ygoldfel) opinion the overhead is so minor, it does not even rise to the level of a to-do.
1408 */
1409 std::optional<Allocator_raw> m_alloc_raw;
1410
1411 /// See ctor and `operator()()`: the size of the buffer to deallocate.
1413 }; // class Deleter_raw
1414
1415 /**
1416 * The smart-pointer type used for buf_ptr(); a custom-allocator-and-SHM-friendly impl and parameterization is
1417 * used if necessary; otherwise a more typical concrete type is used.
1418 *
1419 * The following discussion assumes the more complex case wherein #S_SHARING is `true`. We discuss the simpler
1420 * converse case below that.
1421 *
1422 * Two things affect how buf_ptr() shall behave:
1423 * - Which type this resolves-to depending on #S_IS_VANILLA_ALLOC (ultimately #Allocator_raw). This affects
1424 * many key things but most relevantly how it is dereferenced. Namely:
1425 * - Typical `shared_ptr` (used with vanilla allocator) will internally store simply a raw `value_type*`
1426 * and dereference trivially. This, however, will not work with some custom allocators, particularly
1427 * SHM-heap ones (without a constant cross-process vaddr scheme), wherein a raw `T*` meaningful in
1428 * the original process is meaningless in another.
1429 * - `boost::shared_ptr` and `std::shared_ptr` both have custom-allocator support via
1430 * `allocate_shared()` and co. However, as of this writing, they are not SHM-friendly; or another
1431 * way of putting it is they don't support custom allocators fully: `Allocator::pointer` is ignored;
1432 * it is assumed to essentially be raw `value_type*`, in that the `shared_ptr` internally stores
1433 * a raw pointer. boost.interprocess refers to this as the impetus for implementing the following:
1434 * - `boost::interprocess::shared_ptr` (used with custom allocator) will internally store an
1435 * instance of `Allocator_raw::pointer` (to `value_type`) instead. To dereference it, its operators
1436 * such as `*` and `->` (etc.) will execute to properly translate to a raw `T*`.
1437 * The aforementioned `pointer` may simply be `value_type*` again; in which case there is no difference
1438 * to the standard `shared_ptr` situation; but it can instead be a fancy-pointer (actual technical term, yes,
1439 * in cppreference.com et al), in which case some custom code will run to translate some internal
1440 * data members (which have process-agnostic values) inside the fancy-pointer to a raw `T*`.
1441 * For example `boost::interprocess::offset_ptr<value_type>` does this by adding a stored offset to its
1442 * own `this`.
1443 * - How it is reset to a newly allocated buffer in reserve() (when needed).
1444 * - Typical `shared_ptr` is efficiently assigned using a `make_shared()` variant. However, here we store
1445 * a pointer to an array, not a single value (hence `<value_type[]>`); and we specifically want to avoid
1446 * any 0-initialization of the elements (per one of Basic_blob's promises). See reserve() which uses a
1447 * `make_shared()` variant that accomplishes all this.
1448 * - `boost::interprocess::shared_ptr` is reset differently due to a couple of restrictions, as it is made
1449 * to be usable in SHM (SHared Memory), specifically, plus it seems to refrain from tacking on every
1450 * normal `shared_ptr` feature. To wit: 1, `virtual` cannot be used; therefore the deleter type must be
1451 * declared at compile-time. 2, it has no special support for a native-array element-type (`value_type[]`).
1452 * Therefore it leaves that part up to the user: the buffer must be pre-allocated by the user
1453 * (and passed to `.reset()`); there is no `make_shared()` equivalent (which also means somewhat lower
1454 * perf, as aux data and user buffer are separately allocated and stored). Accordingly deletion is left
1455 * to the user, as there is no default deleter; one must be supplied. Thus:
1456 * - See reserve(); it calls `.reset()` as explained here, including using alloc_raw() to pre-allocate.
1457 * - See Deleter_raw, the deleter functor type an instance of which is saved by the `shared_ptr` to
1458 * invoke when ref-count reaches 0.
1459 *
1460 * Other than that, it's a `shared_ptr`; it works as usual.
1461 *
1462 * ### Why use typical `shared_ptr` at all? Won't the fancy-allocator-supporting one work for the vanilla case? ###
1463 * Yes, it would work. And there would be less code without this dichotomy (although the differences are,
1464 * per above, local to this alias definition; and reserve() where it allocates buffer). There are however reasons
1465 * why typical `shared_ptr` (we choose `boost::shared_ptr` over `std::shared_ptr`; that discussion is elsewhere,
1466 * but basically `boost::shared_ptr` is solid and full-featured/mature, though either choice would've worked).
1467 * They are at least:
1468 * - It is much more frequently used, preceding and anticipating its acceptance into the STL standard, so
1469 * maturity and performance are likelier.
1470 * - Specifically it supports a perf-enhancing use mode: using `make_shared()` (and similar) instead of
1471 * `.reset(<raw ptr>)` (or similar ctor) replaces 2 allocs (1 for user data, 1 for aux data/ref-count)
1472 * with 1 (for both).
1473 * - If verbose logging in the deleter is desired, its `virtual`-based type-erased deleter semantics make that
1474 * quite easy to achieve.
1475 *
1476 * ### The case where #S_SHARING is `false` ###
1477 * Firstly: if so then the method -- share() -- that would *ever* increment `Buf_ptr::use_count()` beyond 1
1478 * is simply not compiled. Therefore using any type of `shared_ptr` is a waste of RAM (on the ref-count)
1479 * and cycles (on aux memory allocation and ref-count math), albeit a minor one. Hence we use `unique_ptr`
1480 * in that case instead. Even so, the above #S_IS_VANILLA_ALLOC dichotomy still applies but is quite a bit
1481 * simpler to handle; it's a degenerate case in a way.
1482 * - Typical `unique_ptr` already stores `Deleter::pointer` instead of `value_type*`. Therefore
1483 * we can use it for both cases; in the vanilla case supplying no `Deleter` template param
1484 * (the default `Deleter` has `pointer = value_type*`); otherwise supplying Deleter_raw whose
1485 * Deleter_raw::pointer comes from `Allocator_raw::pointer`. This also, same as with
1486 * `boost::interprocess::shared_ptr`, takes care of the dealloc upon being nullified or destroyed.
1487 * - As for initialization:
1488 * - With #S_IS_VANILLA_ALLOC at `true`: Similarly to using a special array-friendly `make_shared()` variant,
1489 * we use a special array-friendly `make_unique()` variant.
1490 * - Otherwise: As with `boost::interprocess::shared_ptr` we cannot `make_*()` -- though AFAIK without
1491 * any perf penalty (there is no aux data) -- but reserve() must be quite careful to also
1492 * replace `buf_ptr()`'s deleter (which `.reset()` does not do... while `boost::interprocess::shared_ptr`
1493 * does).
1494 */
1495 using Buf_ptr = std::conditional_t<S_IS_VANILLA_ALLOC,
1496 std::conditional_t<S_SHARING,
1497 boost::shared_ptr<value_type[]>,
1498 boost::movelib::unique_ptr<value_type[]>>,
1499 std::conditional_t<S_SHARING,
1500 boost::interprocess::shared_ptr
1502 boost::movelib::unique_ptr<value_type, Deleter_raw>>>;
1503
1504 // Methods.
1505
1506 /**
1507 * For convenience/expressiveness, the pointer-to-main-buf for `*this`. The actual datum is inside a
1508 * `compressed_pair` for reasons explained elsewhere, and it is annoying/ugly having to specify that detail
1509 * all over the code. So one can just use this reference-returning accessor.
1510 *
1511 * ### Documentation for the datum referred-to by the return value ###
1512 *
1513 * Pointer to currently allocated buffer of size #m_capacity; null if and only if `zero() == true`.
1514 * Buffer is auto-freed at destruction; or in make_zero(); but only if by that point any share()-generated
1515 * other `Basic_blob`s have done the same. Otherwise the ref-count is merely decremented.
1516 * In the case of #S_SHARING being `false`, one can think of this ref-count as being always at most 1;
1517 * since share() is not compiled, and as a corollary a `unique_ptr` is used to avoid perf costs.
1518 * Thus make_zero() and dtor always dealloc in that case.
1519 *
1520 * For performance, we never initialize the values in the array to zeroes or otherwise.
1521 * This contrasts with `vector` and most other standard or Boost containers which use an `allocator` to
1522 * allocate any internal buffer, and most allocators default-construct (which means assign 0 in case of `uint8_t`)
1523 * any elements within allocated buffers, immediately upon the actual allocation on heap. As noted in doc header,
1524 * this behavior is surprisingly difficult to avoid (involving custom allocators and such).
1525 *
1526 * @return See above.
1527 */
1528 Buf_ptr& buf_ptr();
1529
1530 /**
1531 * Ref-to-immutable counterpart to the other overload.
1532 * @return See above.
1533 */
1534 const Buf_ptr& buf_ptr() const;
1535
1536 /**
1537 * For convenience/expressiveness, the allocator object for `*this`. The actual datum is inside a
1538 * `compressed_pair` for reasons explained elsewhere, and it is annoying/ugly having to specify that detail
1539 * all over the code. So one can just use this reference-returning accessor.
1540 *
1541 * ### Documentation for the datum referred-to by the return value ###
1542 *
1543 * Copy of the allocator supplied by the user (though, if #Allocator_raw is stateless,
1544 * it is typically defaulted to `Allocator_raw{}`), as set by a constructor or assign() or
1545 * assignment-operator, whichever happened last. Used exclusively when allocating and deallocating
1546 * buf_ptr() in the *next* reserve() (potentially).
1547 *
1548 * By the rules of `Allocator_aware_container` (see cppreference.com):
1549 * - If `*this` is move-cted: datum move-cted from source datum counterpart.
1550 * - If `*this` is move-assigned: datum move-assigned from source datum counterpart if
1551 * `std::allocator_traits<Allocator_raw>::propagate_on_container_move_assignment::value == true` (else untouched).
1552 * - If `*this` is copy-cted: datum set to
1553 * `std::allocator_traits<Allocator_raw>::select_on_container_copy_construction()` (pass-in source datum
1554 * counterpart).
1555 * - If `*this` is copy-assigned: datum copy-assigned if
1556 * `std::allocator_traits<Allocator_raw>::propagate_on_container_copy_assignment::value == true` (else untouched).
1557 * - If `*this` is `swap()`ed: datum ADL-`swap()`ed with source datum counterpart if
1558 * `std::allocator_traits<Allocator_raw>::propagate_on_container_swap::value == true` (else untouched).
1559 * - Otherwise this is supplied via a non-copy/move ctor arg by user.
1560 *
1561 * ### Specially treated value ###
1562 * If #Allocator_raw is `std::allocator<value_type>` (as opposed to `something_else<value_type>`), then
1563 * this datum (while guaranteed set to the zero-sized copy of `std::allocator<value_type>()`) is never
1564 * in practice touched (outside of the above-mentioned moves/copies/swaps, though they also do nothing in reality
1565 * for this stateless allocator). This value by definition means we are to allocate on the regular heap;
1566 * and as of this writing for perf/other reasons we choose to use a vanilla
1567 * `*_ptr` with its default alloc-dealloc APIs (which perform `new[]`-`delete[]` respectively); we do not pass-in
1568 * alloc_raw() anywhere. See #Buf_ptr doc header for more. If we did pass it in to
1569 * `allocate_shared*()` or `boost::interprocess::shared_ptr::reset` the end result would be functionally
1570 * the same (`std::allocator::[de]allocate()` would get called; these call `new[]`/`delete[]`).
1571 *
1572 * ### Relationship between this datum and the allocator/deleter in `buf_ptr()` ###
1573 * (This is only applicable if #S_IS_VANILLA_ALLOC is `false`.)
1574 * buf_ptr() caches this datum internally in its centrally linked data. Ordinarily, then, they compare as equal.
1575 * In the corner case where (1) move-assign or copy-assign or swap() was used on `*this`, *and*
1576 * (2) #Allocator_raw is stateful and *can* compare unequal (e.g., `boost::interprocess::allocator`):
1577 * they may come to compare as unequal. It is, however, not (in our case) particularly important:
1578 * this datum affects the *next* reserve() (potentially); the thing stored in buf_ptr() affects the logic when
1579 * the underlying buffer is next deallocated. The two don't depend on each other.
1580 *
1581 * @return See above.
1582 */
1583 Allocator_raw& alloc_raw();
1584
1585 /**
1586 * Ref-to-immutable counterpart to the other overload.
1587 * @return See above.
1588 */
1589 const Allocator_raw& alloc_raw() const;
1590
1591 /**
1592 * Implements reserve() overloads.
1593 *
1594 * @param clear_on_alloc
1595 * Whether the Clear_on_alloc overload or the other one was called.
1596 * @param capacity
1597 * See reserve().
1598 * @param logger_ptr
1599 * See reserve().
1600 */
1601 void reserve_impl(size_type capacity, bool clear_on_alloc, log::Logger* logger_ptr);
1602
1603 /**
1604 * Implements resize() overloads.
1605 *
1606 * @param clear_on_alloc
1607 * Whether the Clear_on_alloc overload or the other one was called.
1608 * @param size
1609 * See resize().
1610 * @param start_or_unchanged
1611 * See resize().
1612 * @param logger_ptr
1613 * See resize().
1614 */
1615 void resize_impl(size_type size, bool clear_on_alloc, size_type start_or_unchanged, log::Logger* logger_ptr);
1616
1617 /**
1618 * The body of swap(), except for the part that swaps (or decides not to swap) alloc_raw(). As of this writing
1619 * used by swap() and assign() (move overload) which perform mutually different steps w/r/t alloc_raw().
1620 *
1621 * @param other
1622 * See swap().
1623 * @param logger_ptr
1624 * See swap().
1625 */
1626 void swap_impl(Basic_blob& other, log::Logger* logger_ptr) noexcept;
1627
1628 /**
1629 * Returns iterator-to-mutable equivalent to given iterator-to-immutable.
1630 *
1631 * @param it
1632 * Self-explanatory. No assumptions are made about valid_iterator() or derefable_iterator() status.
1633 * @return Iterator to same location as `it`.
1634 */
1636
1637 // Data.
1638
1639 /**
1640 * Combined -- to enable empty base-class optimization (EBO) -- storage for the two data items, refs to which are
1641 * returned by alloc_raw() and buf_ptr() respectively.
1642 *
1643 * @see alloc_raw() and buf_ptr() doc headers for actual documentation for these two important items (especially
1644 * buf_ptr()).
1645 *
1646 * ### Rationale ###
1647 * Please look into EBO to grok this. That aside -- just think of `alloc_raw()` as essentially an
1648 * `m_alloc_raw` data member, `buf_ptr()` as an `m_buf_ptr` data member. They are only stored in this pair thingie
1649 * due to an obscure, but perf-affecting, C++ technicality. The aforementioned ref-returning accessors avoid having
1650 * to write `m_alloc_and_buf_ptr.second` and `.first` all over the place.
1651 */
1652 boost::compressed_pair<Allocator_raw, Buf_ptr> m_alloc_and_buf_ptr;
1653
1654 /// See capacity(); but #m_capacity is meaningless (and containing unknown value) if `!buf_ptr()` (i.e., zero()).
1655 size_type m_capacity;
1656
1657 /// See start(); but #m_start is meaningless (and containing unknown value) if `!buf_ptr()` (i.e., zero()).
1658 size_type m_start;
1659
1660 /// See size(); but #m_size is meaningless (and containing unknown value) if `!buf_ptr()` (i.e., zero()).
1661 size_type m_size;
1662}; // class Basic_blob
1663
1664// Free functions: in *_fwd.hpp.
1665
1666// Template implementations.
1667
1668// buf_ptr() initialized to null pointer. m_capacity and m_size remain uninit (meaningless until buf_ptr() changes).
1669template<typename Allocator, bool SHARING>
1671 m_alloc_and_buf_ptr(alloc_raw_src), // Copy allocator; stateless alloc should have size 0 (no-op for the processor).
1672 m_capacity(0), // Not necessary, but some compilers will warn in some situations. Fine; it's cheap enough.
1673 m_start(0), // Ditto.
1674 m_size(0) // Ditto.
1675{
1676 // OK.
1677}
1678
1679template<typename Allocator, bool SHARING>
1681 (size_type size, log::Logger* logger_ptr, const Allocator_raw& alloc_raw_src) :
1682
1683 Basic_blob(alloc_raw_src) // Delegate.
1684{
1685 resize(size, 0, logger_ptr);
1686}
1687
1688template<typename Allocator, bool SHARING>
1690 (size_type size, Clear_on_alloc coa_tag, log::Logger* logger_ptr, const Allocator_raw& alloc_raw_src) :
1691
1692 Basic_blob(alloc_raw_src) // Delegate.
1693{
1694 resize(size, coa_tag, 0, logger_ptr);
1695}
1696
1697template<typename Allocator, bool SHARING>
1699 // Follow rules established in alloc_raw() doc header. This is compatible with the delegated-to ctor.
1700 Basic_blob(std::allocator_traits<Allocator_raw>::select_on_container_copy_construction(src.alloc_raw()))
1701{
1702 /* What we want to do here, ignoring allocators, is (for concision): `assign(src, logger_ptr);`
1703 * However copy-assignment also must do something different w/r/t alloc_raw() than what we had to do above
1704 * (again see alloc_raw() doc header); so just copy/paste the rest of what operator=(copy) would do.
1705 * Skipping most comments therein, as they don't much apply in our case. Code reuse level is all-right;
1706 * and we can skip the `if` from assign(). */
1707 assign_copy(src.const_buffer(), logger_ptr);
1708}
1709
1710template<typename Allocator, bool SHARING>
1712 // Follow rules established in alloc_raw() doc header:
1713 m_alloc_and_buf_ptr(std::move(moved_src.alloc_raw())),
1714 m_capacity(0), // See comment in first delegated ctor above.
1715 m_start(0), // Ditto.
1716 m_size(0) // Ditto
1717{
1718 /* Similar to copy ctor above, do the equivalent of assign(move(moved_src), logger_ptr) minus the allocator work.
1719 * That reduces to simply: */
1720 swap_impl(moved_src, logger_ptr);
1721}
1722
1723template<typename Allocator, bool SHARING>
1725
1726template<typename Allocator, bool SHARING>
1729{
1730 if (this != &src)
1731 {
1732 // Take care of the "Sharing blobs" corner case from our doc header. The rationale for this is pointed out there.
1733 if constexpr(S_SHARING)
1734 {
1735 assert(!blobs_sharing(*this, src));
1736 }
1737
1738 // For alloc_raw(): Follow rules established in alloc_raw() doc header.
1739 if constexpr(std::allocator_traits<Allocator_raw>::propagate_on_container_copy_assignment::value)
1740 {
1741 alloc_raw() = src.alloc_raw(); // No copy-assignment for some allocators, but then p_o_c_c_a would be false.
1742 }
1743 // else: Leave alloc_raw() alone.
1744
1745 /* Either way: for stateful (not-always-equal) allocators: the allocator used to dealloc buf_ptr() (if
1746 * buf_ptr() not null) is cached inside buf_ptr(). New alloc_raw(), even if it was changed, is relevant only to
1747 * the future allocation(s) if any. */
1748
1749 // Now to the relatively uncontroversial stuff. To copy the rest we'll just do:
1750
1751 /* resize(N, 0); copy over N bytes. Note that it disallows `N > capacity()` unless zero(), but they can explicitly
1752 * make_zero() before calling us, if they are explicitly OK with the performance cost of the reallocation that will
1753 * trigger. This is all as advertised; and it satisfies the top priority listed just below. */
1754 assign_copy(src.const_buffer(), logger_ptr);
1755
1756 /* ^-- Corner case: Suppose src.size() == 0. The above then reduces to: if (!zero()) { m_size = m_start = 0; }
1757 * (Look inside its source code; you'll see.)
1758 *
1759 * We guarantee certain specific behavior in doc header, and below implements that.
1760 * We will indicate how it does so; but ALSO why those promises are made in the first place (rationale).
1761 *
1762 * In fact, we will proceed under the following priorities, highest to lowest:
1763 * - User CAN switch order of our priorities sufficiently easily.
1764 * - Be as fast as possible, excluding minimizing constant-time operations such as scalar assignments.
1765 * - Use as little memory in *this as possible.
1766 *
1767 * We will NOT attempt to make *this have the same internal structure as src as its own independent virtue.
1768 * That doesn't seem useful and would make things more difficult obviously. Now:
1769 *
1770 * Either src.zero(), or not; but regardless src.size() == 0. Our options are essentially these:
1771 * make_zero(); or resize(0, 0). (We could also perhaps copy src.buf_ptr()[] and then adjust m_size = 0, but
1772 * this is clearly slower and only gains the thing we specifically pointed out is not a virtue above.)
1773 *
1774 * Let's break down those 2 courses of action, by situation, then:
1775 * - zero() && src.zero(): make_zero() and resize(0, 0) are equivalent; so nothing to decide. Either would be fine.
1776 * - zero() && !src.zero(): Ditto.
1777 * - !zero() && !src.zero(): make_zero() is slower than resize(0, 0); and moreover the latter may mean faster
1778 * operations subsequently, if they subsequently choose to reserve(N) (including resize(N), etc.) to
1779 * N <= capacity(). So resize(0, 0) wins according to the priority order listed above.
1780 * - !zero() && src.zero(): Ditto.
1781 *
1782 * So then we decided: resize(0, 0). And, indeed, resize(0, 0) is equivalent to the above snippet.
1783 * So, we're good. */
1784 } // if (this != &src)
1785
1786 return *this;
1787} // Basic_blob::assign(copy)
1788
1789template<typename Allocator, bool SHARING>
1791{
1792 return assign(src);
1793}
1794
1795template<typename Allocator, bool SHARING>
1798{
1799 if (this != &moved_src)
1800 {
1801 // For alloc_raw(): Follow rules established in alloc_raw() doc header.
1802 if constexpr(std::allocator_traits<Allocator_raw>::propagate_on_container_move_assignment::value)
1803 {
1804 alloc_raw() = std::move(moved_src.alloc_raw()); // Similar comment here as for assign() copy overload.
1805 }
1806 // else: Leave alloc_raw() alone.
1807
1808 /* Either way: for stateful (not-always-equal) allocators: the allocator used to dealloc buf_ptr() (if
1809 * moved_src.buf_ptr() not null) is cached inside moved_src.buf_ptr() and will be swap_impl()ed into
1810 * our buf_ptr() as part of the deleter. New alloc_raw(), even if it was changed, is relevant only to
1811 * the future allocation(s) if any. */
1812
1813 // Now to the relatively uncontroversial stuff.
1814
1815 make_zero(logger_ptr); // Spoiler alert: it's: if (!zero()) { buf_ptr().reset(); }
1816 // So now buf_ptr() is null; hence the other m_* (other than alloc_raw()) are meaningless.
1817
1818 swap_impl(moved_src, logger_ptr);
1819 /* Now *this is equal to old moved_src; new moved_src is valid and zero(); and nothing was copied -- as advertised.
1820 * swap_impl() does not touch alloc_raw() or moved_src.alloc_raw(), and we handled that already. */
1821 } // if (this != &moved_src)
1822
1823 return *this;
1824} // Basic_blob::assign(move)
1825
1826template<typename Allocator, bool SHARING>
1829{
1830 return assign(std::move(moved_src));
1831}
1832
1833template<typename Allocator, bool SHARING>
1835{
1836 using std::swap;
1837
1838 /* As of this writing move-ct and move-assign both use us as the core of what needs to happen; so the below code
1839 * has a particularly high responsibility of correctness and performance. */
1840
1841 if (this != &other)
1842 {
1843 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1844 {
1845 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1846 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] (internal buffer sized [" << capacity() << "]) "
1847 "swapping <=> Blob [" << &other << "] (internal buffer sized "
1848 "[" << other.capacity() << "]).");
1849 }
1850
1851 /* The following looks simple, and that's great. Just realize that buf_ptr() refers to a smart-pointer of one
1852 * of several types, and (see reserve_impl()) a custom deleter of type Deleter_raw may be stored therein. When
1853 * the smart-pointer is of a shared_ptr<> variety, that doesn't complicate anything, probably, as the deleter is
1854 * in the control block, so a swap just swaps a pair of ctl-block pointers, and that's that. However when it is
1855 * a unique_ptr, then Deleter_raw has to be move-assignable and probably
1856 * move-ctible; the swap (*) will then do a move-ct of a Deleter_raw followed by 2 move-assigns of them (probably).
1857 * So, as of this writing, we specifically made Deleter_raw move-ctible and move-assignable carefully; and
1858 * disabled any copy-ct or copy-assign for cleanliness and determinism.
1859 *
1860 * (*) We could make a swap(Deleter_raw&, Deleter_raw&) as well, which would be directly invoked via ADL-lookup
1861 * in the following statement; but let's leave well-enough alone and leave it as std::swap() and move-ct/assigns.
1862 * As of this writing reserve_impl() uses Deleter_raw move-assignment also, anyway. */
1863 swap(buf_ptr(), other.buf_ptr());
1864
1865 /* Some compilers in some build configs issue maybe-uninitialized warning here, when `other` is as-if
1866 * default-cted (hence the following three are intentionally uninitialized), particularly with heavy
1867 * auto-inlining by the optimizer. False positive in our case, and in Blob-land we try not to give away perf
1868 * at all so: */
1869#pragma GCC diagnostic push
1870#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
1871#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
1872#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
1873
1874 swap(m_capacity, other.m_capacity); // Meaningless if zero() but harmless.
1875 swap(m_size, other.m_size); // Ditto.
1876 swap(m_start, other.m_start); // Ditto.
1877
1878#pragma GCC diagnostic pop
1879
1880 /* Skip alloc_raw(): swap() has to do it by itself; we are called from it + move-assign/ctor which require
1881 * mutually different treatment for alloc_raw(). */
1882 }
1883} // Basic_blob::swap_impl()
1884
1885template<typename Allocator, bool SHARING>
1887{
1888 using std::swap;
1889
1890 // For alloc_raw(): Follow rules established in m_alloc_and_buf_ptr doc header.
1891 if constexpr(std::allocator_traits<Allocator_raw>::propagate_on_container_swap::value)
1892 {
1893 if (&alloc_raw() != &other.alloc_raw()) // @todo Is this redundant? Or otherwise unnecessary?
1894 {
1895 swap(alloc_raw(), other.alloc_raw());
1896 }
1897 }
1898 /* else: Leave both `alloc_raw()`s alone. What does it mean though? Well, see either assign(); the same
1899 * theme applies here: Each buf_ptr()'s cached allocator/deleter will potentially not equal its respective
1900 * alloc_raw() anymore; but the latter affects only the *next* allocating reserve(); so it is fine.
1901 * That said, to quote cppreference.com: "Note: swapping two containers with unequal allocators if
1902 * propagate_on_container_swap is false is undefined behavior." So, while it will work for us, trying such
1903 * a swap() would be illegal user behavior in any case. */
1904
1905 // Now to the relatively uncontroversial stuff.
1906 swap_impl(other, logger_ptr);
1907}
1908
1909template<typename Allocator, bool SHARING>
1911 Basic_blob<Allocator, SHARING>& blob2, log::Logger* logger_ptr) noexcept
1912{
1913 return blob1.swap(blob2, logger_ptr);
1914}
1915
1916template<typename Allocator, bool SHARING>
1918{
1919 static_assert(S_SHARING,
1920 "Do not invoke (and thus instantiate) share() or derived methods unless you set the SHARING "
1921 "template parameter to true. Sharing will be enabled at a small perf cost; see class doc header.");
1922 // Note: The guys that call it will cause the same check to occur, since instantiating them will instantiate us.
1923
1924 assert(!zero()); // As advertised.
1925
1926 Basic_blob sharing_blob{alloc_raw(), logger_ptr}; // Null Basic_blob (let that ctor log via same Logger if any).
1927 assert(!sharing_blob.buf_ptr());
1928 sharing_blob.buf_ptr() = buf_ptr();
1929 // These are currently (as of this writing) uninitialized (possibly garbage).
1930 sharing_blob.m_capacity = m_capacity;
1931 sharing_blob.m_start = m_start;
1932 sharing_blob.m_size = m_size;
1933
1934 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1935 {
1936 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1937 FLOW_LOG_TRACE_WITHOUT_CHECKING
1938 ("Blob [" << this << "] shared with new Blob [" << &sharing_blob << "]; ref-count incremented.");
1939 }
1940
1941 return sharing_blob;
1942}
1943
1944template<typename Allocator, bool SHARING>
1947{
1948 if (lt_size > size())
1949 {
1950 lt_size = size();
1951 }
1952
1953 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1954 {
1955 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1956 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] shall be shared with new Blob, splitting off the first "
1957 "[" << lt_size << "] values into that one and leaving the remaining "
1958 "[" << (size() - lt_size) << "] in this one.");
1959 }
1960
1961 auto sharing_blob = share(logger_ptr); // sharing_blob sub-Basic_blob is equal to *this sub-Basic_blob. Adjust:
1962 sharing_blob.resize(lt_size); // Note: sharing_blob.start() remains unchanged.
1963 start_past_prefix_inc(lt_size);
1964
1965 return sharing_blob;
1966}
1967
1968template<typename Allocator, bool SHARING>
1971{
1972 if (rt_size > size())
1973 {
1974 rt_size = size();
1975 }
1976
1977 const auto lt_size = size() - rt_size;
1978 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
1979 {
1980 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
1981 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] shall be shared with new Blob, splitting off "
1982 "the last [" << rt_size << "] values into that one and leaving the "
1983 "remaining [" << lt_size << "] in this one.");
1984 }
1985
1986 auto sharing_blob = share(logger_ptr); // sharing_blob sub-Basic_blob is equal to *this sub-Basic_blob. Adjust:
1987 resize(lt_size); // Note: start() remains unchanged.
1988 sharing_blob.start_past_prefix_inc(lt_size);
1989
1990 return sharing_blob;
1991}
1992
1993template<typename Allocator, bool SHARING>
1994template<typename Emit_blob_func, typename Share_after_split_left_func>
1996 (size_type size, bool headless_pool, Emit_blob_func&& emit_blob_func, log::Logger* logger_ptr,
1997 Share_after_split_left_func&& share_after_split_left_func)
1998{
1999 assert(size != 0);
2000 assert(!empty());
2001
2002 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2003 {
2004 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2005 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] of size [" << this->size() << "] shall be split into "
2006 "adjacent sharing sub-Blobs of size [" << size << "] each "
2007 "(last one possibly smaller).");
2008 }
2009
2010 do
2011 {
2012 emit_blob_func(share_after_split_left_func(size, logger_ptr)); // share_after_split_left_func() logs plenty.
2013 }
2014 while (!empty());
2015
2016 if (headless_pool)
2017 {
2018 make_zero(logger_ptr);
2019 }
2020} // Basic_blob::share_after_split_equally_impl()
2021
2022template<typename Allocator, bool SHARING>
2023template<typename Emit_blob_func>
2025 Emit_blob_func&& emit_blob_func,
2026 log::Logger* logger_ptr)
2027{
2028 share_after_split_equally_impl(size, headless_pool, std::move(emit_blob_func), logger_ptr,
2029 [this](size_type lt_size, log::Logger* logger_ptr) -> Basic_blob
2030 {
2031 return share_after_split_left(lt_size, logger_ptr);
2032 });
2033}
2034
2035template<typename Allocator, bool SHARING>
2036template<typename Blob_container>
2038 (size_type size, bool headless_pool, Blob_container* out_blobs_ptr, log::Logger* logger_ptr)
2039{
2040 // If changing this please see Blob_with_log_context::<same method>().
2041
2042 assert(out_blobs_ptr);
2043 share_after_split_equally(size, headless_pool, [&](Basic_blob&& blob_moved)
2044 {
2045 out_blobs_ptr->push_back(std::move(blob_moved));
2046 }, logger_ptr);
2047}
2048
2049template<typename Allocator, bool SHARING>
2050template<typename Blob_ptr_container>
2052 bool headless_pool,
2053 Blob_ptr_container* out_blobs_ptr,
2054 log::Logger* logger_ptr)
2055{
2056 // If changing this please see Blob_with_log_context::<same method>().
2057
2058 // By documented requirements this should be, like, <...>_ptr<Basic_blob>.
2059 using Ptr = typename Blob_ptr_container::value_type;
2060
2061 assert(out_blobs_ptr);
2062
2063 share_after_split_equally(size, headless_pool, [&](Basic_blob&& blob_moved)
2064 {
2065 out_blobs_ptr->push_back(Ptr{new Basic_blob{std::move(blob_moved)}});
2066 }, logger_ptr);
2067}
2068
2069template<typename Allocator, bool SHARING>
2071 const Basic_blob<Allocator, SHARING>& blob2)
2072{
2073 static_assert(SHARING,
2074 "blobs_sharing() would only make sense on `Basic_blob`s with SHARING=true. "
2075 "Even if we were to allow this to instantiate (compile) it would always return false.");
2076
2077 return ((!blob1.zero()) && (!blob2.zero())) // Can't co-own a buffer if doesn't own a buffer.
2078 && ((&blob1 == &blob2) // Same object => same buffer.
2079 // Only share() (as of this writing) can lead to the underlying buffer's start ptr being identical.
2080 || ((blob1.begin() - blob1.start())
2081 == (blob2.begin() - blob2.start())));
2082 // @todo Maybe throw in assert(blob1.capacity() == blob2.capacity()), if `true` is being returned.
2083}
2084
2085template<typename Allocator, bool SHARING>
2087{
2088 return zero() ? 0 : m_size; // Note that zero() may or may not be true if we return 0.
2089}
2090
2091template<typename Allocator, bool SHARING>
2093{
2094 return zero() ? 0 : m_start; // Note that zero() may or may not be true if we return 0.
2095}
2096
2097template<typename Allocator, bool SHARING>
2099{
2100 return size() == 0; // Note that zero() may or may not be true if we return true.
2101}
2102
2103template<typename Allocator, bool SHARING>
2105{
2106 return zero() ? 0 : m_capacity; // Note that !zero() <=> we return non-zero. (m_capacity >= 1 if !zero().)
2107}
2108
2109template<typename Allocator, bool SHARING>
2111{
2112 return !buf_ptr();
2113}
2114
2115template<typename Allocator, bool SHARING>
2117{
2118 reserve_impl(new_capacity, false, logger_ptr);
2119}
2120
2121template<typename Allocator, bool SHARING>
2123{
2124 reserve_impl(new_capacity, true, logger_ptr);
2125}
2126
2127template<typename Allocator, bool SHARING>
2128void Basic_blob<Allocator, SHARING>::reserve_impl(size_type new_capacity, bool clear_on_alloc, log::Logger* logger_ptr)
2129{
2130 using boost::make_shared_noinit;
2131 using boost::make_shared;
2132 using boost::shared_ptr;
2133 using std::numeric_limits;
2134 using std::memset;
2135
2136 /* As advertised do not allow enlarging existing buffer. They can call make_zero() though (but must do so consciously
2137 * hence considering the performance impact). */
2138 assert((zero() || ((new_capacity <= m_capacity) && (m_capacity > 0)))
2139 && "Basic_blob intentionally disallows reserving N>M>0, where M is current capacity. make_zero() first.");
2140
2141 /* OK, but what if new_capacity < m_capacity? Then post-condition (see below) is already satisfied, and it's fastest
2142 * to do nothing. If user believes lower memory use is higher-priority, they can explicitly call make_zero() first
2143 * but must make conscious decision to do so. */
2144
2145 if (zero() && (new_capacity != 0))
2146 {
2147 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2148 {
2149 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2150 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] "
2151 "allocating internal buffer sized [" << new_capacity << "]; "
2152 "will zero-fill? = [" << clear_on_alloc << "].");
2153 }
2154
2155 if (new_capacity <= size_type(numeric_limits<difference_type>::max())) // (See explanation near bottom of method.)
2156 {
2157 /* Time to (1) allocate the buffer; (2) save the pointer; (3) ensure it is deallocated at the right time
2158 * and with the right steps. Due to Allocator_raw support this is a bit more complex than usual. Please
2159 * (1) see class doc header "Custom allocator" section; and (2) read Buf_ptr alias doc header for key background;
2160 * then come back here. */
2161
2162 if constexpr(S_IS_VANILLA_ALLOC)
2163 {
2164 /* In this case they specified std::allocator, so we are to just allocate/deallocate in regular heap using
2165 * new[]/delete[]. Hence we don't need to even use actual std::allocator; we know by definition it would
2166 * use new[]/delete[]. So simply use typical ..._ptr initialization. Caveats are unrelated to allocators:
2167 * - For some extra TRACE-logging -- if enabled! -- use an otherwise-vanilla logging deleter.
2168 * - Unnecessary in case of unique_ptr: dealloc always occurs in make_zero() or dtor and can be logged
2169 * there.
2170 * - If doing so (note it implies we've given up on performance) we cannot, and do not, use
2171 * make_shared*(); the use of custom deleter requires the .reset() form of init. */
2172
2173 /* If TRACE is currently disabled, then skip the custom deleter that logs about dealloc. (TRACE may become enabled
2174 * by dealloc time; but, hey, that is life.) This is for perf. */
2175 if constexpr(S_SHARING)
2176 {
2177 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2178 {
2179 /* This ensures delete[] call when buf_ptr() ref-count reaches 0.
2180 * As advertised, for performance, the memory is NOT initialized unless so instructed. */
2181 buf_ptr().reset(clear_on_alloc ? (new value_type[new_capacity]()) : (new value_type[new_capacity]),
2182 // Careful! *this might be gone if some other share()ing obj is the one that 0s ref-count.
2183 [logger_ptr, original_blob = this, new_capacity]
2184 (value_type* buf_ptr_to_delete)
2185 {
2186 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2187 FLOW_LOG_TRACE("Deallocating internal buffer sized [" << new_capacity << "] originally allocated by "
2188 "Blob [" << original_blob << "]; note that Blob may now be gone and furthermore another "
2189 "Blob might live at that address now. A message immediately preceding this one should "
2190 "indicate the last Blob to give up ownership of the internal buffer.");
2191 // Finally just do what the default one would've done, as we've done our custom thing (logging).
2192 delete [] buf_ptr_to_delete;
2193 });
2194 }
2195 else // if (!should_log()): No logging deleter; just delete[] it.
2196 {
2197 /* This executes `new value_type[new_capacity]()` or `new value_type[new_capacity]` respectively and ensures
2198 * delete[] when buf_ptr() ref-count reaches 0. As advertised, for perf, memory is NOT initialized unless clear_on_alloc. */
2199 buf_ptr() = clear_on_alloc ? make_shared<value_type[]>(new_capacity)
2200 : make_shared_noinit<value_type[]>(new_capacity);
2201 }
2202 } // if constexpr(S_SHARING)
2203 else // if constexpr(!S_SHARING)
2204 {
2205 buf_ptr() = clear_on_alloc ? boost::movelib::make_unique<value_type[]>(new_capacity)
2206 : boost::movelib::make_unique_definit<value_type[]>(new_capacity);
2207 // Again -- the logging in make_zero() (and Blob_with_log_context dtor) is sufficient.
2208 }
2209 } // if constexpr(S_IS_VANILLA_ALLOC)
2210 else // if constexpr(!S_IS_VANILLA_ALLOC)
2211 {
2212 /* Fancy (well, potentially) allocator time. Again, if you've read the Buf_ptr and Deleter_raw doc headers,
2213 * you'll know what's going on. */
2214
2215 // Raw-allocate via Allocator_raw! No value-init occurs... but see below.
2216 const auto ptr = alloc_raw().allocate(new_capacity);
2217
2218 if constexpr(S_SHARING)
2219 {
2220 buf_ptr().reset(ptr,
2221
2222 /* Let them allocate aux data (ref count block) via Allocator_raw::allocate()
2223 * (and dealloc it -- ref count block -- via Allocator_raw::deallocate())!
2224 * Have them store internal ptr bits as `Allocator_raw::pointer`s, not
2225 * necessarily raw `value_type*`s! */
2226 alloc_raw(),
2227
2228 /* When the time comes to dealloc, invoke this guy like: D(<the ptr>)! It'll
2229 * perform alloc_raw().deallocate(<what .allocate() returned>, n).
2230 * Since only we happen to know the size of how much we actually allocated, we pass that info
2231 * into the Deleter_raw as well, as it needs to know the `n` to pass to
2232 * alloc_raw().deallocate(p, n). */
2233 Deleter_raw{alloc_raw(), new_capacity});
2234 /* Note: Unlike the S_IS_VANILLA_ALLOC=true case above, here we omit any attempt to log at the time
2235 * of dealloc, even if the verbosity is currently set high enough. It is not practical to achieve:
2236 * Recall that the assumptions we take for granted when dealing with std::allocator/regular heap
2237 * may no longer apply when dealing with an arbitrary allocator/potentially SHM-heap. To be able
2238 * to log at dealloc time, the Deleter_raw we create would need to store a Logger*. Sure, we
2239 * could pass-in logger_ptr and Deleter_raw could store it; but now recall that we do not
2240 * store a Logger* in `*this` and why: because (see class doc header) doing so does not play well
2241 * in some custom-allocator situations, particularly when operating in SHM-heap. That is why
2242 * we take an optional Logger* as an arg to every possibly-logging API (we can't guarantee, if
2243 * S_IS_VANILLA_ALLOC=false, that a Logger* can meaningfully be stored in likely-Allocator-stored *this).
2244 * For that same reason we cannot pass it to the Deleter_raw functor; buf_ptr() (whose bits are in
2245 * *this) will save a copy of that Deleter_raw and hence *this will end up storing the Logger* which
2246 * (as noted) may be nonsensical. (With S_IS_VANILLA_ALLOC=true, though, it's safe to store it; and
2247 * since deleter would only fire at dealloc time, it doesn't present a new perf problem -- since TRACE
2248 * log level already concedes bad perf -- which is the 2nd reason (see class doc header) for why
2249 * we don't generally record Logger* but rather take it as an arg to each logging API.)
2250 *
2251 * Anyway, long story short, we don't log on dealloc in this case, b/c we can't, and the worst that'll
2252 * happen as a result of that decision is: deallocs won't be trace-logged when a custom allocator
2253 * is enabled at compile-time. That price is okay to pay. */
2254 } // if constexpr(S_SHARING)
2255 else // if constexpr(!S_SHARING)
2256 {
2257 /* Conceptually it's quite similar to the S_SHARING case where we do shared_ptr::reset() above.
2258 * However there is an API difference that is subtle yet real (albeit only for stateful Allocator_raw):
2259 * Current alloc_raw() was used to allocate *(buf_ptr()), so it must be used also to dealloc it.
2260 * unique_ptr::reset() does *not* take a new Deleter_raw; hence if we used it (alone) here it would retain
2261 * the alloc_raw() from ction (or possibly last assignment) time -- and if that does not equal current
2262 * m_alloc => trouble in make_zero() or dtor.
2263 *
2264 * Anyway, to beat that, we can manually overwrite get_deleter() (<-- non-const ref).
2265 * This will require a Deleter_raw move-assignment to exist (and it does, carefully and explicitly, as
2266 * of this writing; at least for it to be used here).
2267 * (Also: Caution! Recall buf_ptr() is null currently. If it were not
2268 * we would need to explicitly nullify it before the get_deleter() assignment.) */
2269 buf_ptr().get_deleter() = Deleter_raw{alloc_raw(), new_capacity};
2270 buf_ptr().reset(ptr);
2271 } // else if constexpr(!S_SHARING)
2272
2273 if (clear_on_alloc)
2274 {
2275 memset(ptr.get(), 0, new_capacity);
2276 /* Perf discussion: That is obviously correct functionally; but can it be done faster? Why do we ask?
2277 * Answer: See the S_IS_VANILLA_ALLOC case. Notice we use operations, when clear_on_alloc, that
2278 * will allocate-and-clear "at the same time." (More specifically, really, the make_shared()s and
2279 * the make_unique()s (as opposed to make_shared_noinit() or make_unique_definit()) promise to perform
2280 * `new T[]()` (as opposed to `new T[]`) which does in fact clear-while-allocating. Is that faster though?
2281 * Actually yes; empirically we've seen it be ~20% faster for a 64Ki buffer when comparing
2282 * `new T[N]()` versus `p = new T[N]; memset(p, 0, N)`; and theoretically perhaps `new T[]()` ends up
2283 * as `calloc()`, which in glibc might be clever -- making use of mmap()-ed areas being pre-zeroed,
2284 * knowing when a page is being dirtied by the calloc(), and so forth.) Here, though, we are not doing
2285 * any such thing; we simply A.allocate(N) via allocator (not std::allocator; possibly SHM-aware) --
2286 * then zero it. The user could even do it themselves, in this case making clear_on_alloc syntactic
2287 * sugar at best. So could we do better, like we did in the S_IS_VANILLA_ALLOC=true case above?
2288 * Answer: Well... I (ygoldfel) think... no, not per se. Not here at least. We do have to use the
2289 * Allocator_raw, and nothing in the C++1x or C++17 Allocator concept docs suggests it is possible to
2290 * ask it to allocate-and-clear. We only did so for S_IS_VANILLA_ALLOC=true, because we know
2291 * std::allocator by definition does heap new/delete; so we can call such things ourselves and not actually
2292 * mention the Allocator_raw; it is used more as a binary determinant of S_IS_VANILLA_ALLOC=true.
2293 * So by contract, since there's no way to alloc-and-zero at the same time, if we are told to
2294 * clear_on_alloc, then we have to memset(); no choice.
2295 *
2296 * @todo However the code (as of this writing at least in Flow-IPC's SHM-related code including
2297 * ipc::transport::struc::shm::Capnp_message_builder) that uses Basic_blob and *can* possibly guarantee
2298 * that Allocator_raw::allocate(N) will pre-zero the N bytes as-needed -- such code could
2299 * (1) specify clear_on_alloc=false and (2) explicitly guarantee .allocate() will alloc-and-zero.
2300 * As of this writing that is in Flow-IPC (not Flow), a sister/dependent component that shares Flow's DNA;
2301 * and in particular in that case we've got:
2302 * - SHM-classic: Ultimately it leverages boost::interprocess::managed_shared_memory; look into it
2303 * whether there's some kind of sped-up alloc-and-zero hook available.
2304 * - SHM-jemalloc: That one has a home-grown jemalloc extension; it might be possible to use some kind of
2305 * knob(s) to ensure an alloc-and-zero is performed. MALLOCX_ZERO flag to mallocx() will do it, and
2306 * docs suggest care is taken to do this performantly; perhaps this mode can be set/unset through some
2307 * kind of thread-local config system -- as long as it is not slow -- not sure. As of this writing
2308 * there's already a mandatory activator-context object, so maybe it could be merely extended with
2309 * this knob. Not sure... it's doable though.
2310 * - Either way: The user that does this must be careful; zeroing in *all* allocs would probably be bad, as
2311 * many situations do not require it; it should only be done when actually desired.
2312 * To be clear: all that is out of Basic_blob's purview; so really this to-do should be elsewhere arguably;
2313 * but it's a closely related project, so here is better than nowhere. Plus it provides some non-obvious
2314 * context. */
2315 } // if (clear_on_alloc)
2316 } // else if constexpr(!S_IS_VANILLA_ALLOC)
2317 } // if (new_capacity <= numeric_limits<difference_type>::max()) // (See explanation just below.)
2318 else
2319 {
2320 assert(false && "Enormous or corrupt new_capacity?!");
2321 }
2322 /* ^-- Explanation of the strange if/else:
2323 * In some gcc versions in some build configs, particularly with aggressive auto-inlining optimization,
2324 * a warning like this can be triggered (observed, as of this writing, only in the movelib::make_unique_definit()
2325 * branch above, but to be safe we're covering all the branches with our if/else work-around):
2326 * argument 1 value ‘18446744073709551608’ exceeds maximum object size
2327 * 9223372036854775807 [-Werror=alloc-size-larger-than=]
2328 * This occurs due to (among other things) inlining from above our frame down into the boost::movelib call
2329 * we make (and potentially the other allocating calls in the various branches above);
2330 * plus allegedly the C++ front-end supplying the huge value during the diagnostics pass.
2331 * No such huge value (which is 0xFFFFFFFFFFFFFFF8) is actually passed-in at run-time nor mentioned anywhere
2332 * in our code, here or in the unit-test(s) triggering the auto-inlining triggering the warning. So:
2333 *
2334 * The warning is wholly inaccurate. This situation is known in the gcc issue database; for example
2335 * see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85783 and related (linked) tickets.
2336 * The question was how to work around it; I admit that the discussion in that ticket (and friends) at times
2337 * gets into topics so obscure and gcc-internal as to be indecipherable to me (ygoldfel).
2338 * Since I don't seem to be doing anything wrong above (though: @todo *Maybe* it has something to do with
2339 * lacking `nothrow`? Would need investigation, nothrow could be good anyway...), the top work-arounds would be
2340 * perhaps: 1, pragma-away the alloc-size-larger-than warning; 2, use a compiler-placating
2341 * explicit `if (new_capacity < ...limit...)` branch. (2) was suggested in the above ticket by a
2342 * gcc person. Not wanting to give up even a tiny bit of perf, I attempted the pragma way (1); but at least gcc-13
2343 * has some bug which makes the pragma get ignored. So I reverted to (2) by default.
2344 * @todo Revisit this. Should skip workaround unless gcc; + possibly solve it some more elegant way; look into the
2345 * nothrow thing the ticket discussion briefly mentions (but might be irrelevant). */
2346
2347 m_capacity = new_capacity;
2348 m_size = 0; // Went from zero() to !zero(); so m_size went from meaningless to meaningful and must be set.
2349 m_start = 0; // Ditto for m_start.
2350
2351 assert(!zero());
2352 // This is the only path (other than swap()) that assigns to m_capacity; note m_capacity >= 1.
2353 }
2354 /* else { !zero(): Since new_capacity <= m_capacity, m_capacity is already large enough; no change needed.
2355 * zero() && (new_capacity == 0): Since 0-capacity wanted, we can continue being zero(), as that's enough. } */
2356
2357 assert(capacity() >= new_capacity); // Promised post-condition.
2358} // Basic_blob::reserve_impl()
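/* A minimal sketch of the no-growth rule enforced in reserve_impl() above (vanilla-allocator instantiation;
 * values are illustrative only):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b;
 *   b.reserve(1024);    // zero() was true: allocates a 1024-byte buffer; capacity() == 1024.
 *   b.reserve(512);     // No-op: the requested capacity is already available.
 *   // b.reserve(2048); // Would assert(): growing a !zero() blob is intentionally disallowed.
 *   b.make_zero();      // Consciously drop the buffer first...
 *   b.reserve(2048);    // ...after which the larger allocation is allowed. */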
2359
2360template<typename Allocator, bool SHARING>
2361void Basic_blob<Allocator, SHARING>::resize(size_type new_size, size_type new_start_or_unchanged,
2362 log::Logger* logger_ptr)
2363{
2364 resize_impl(new_size, false, new_start_or_unchanged, logger_ptr);
2365}
2366
2367template<typename Allocator, bool SHARING>
2369 size_type new_start_or_unchanged, log::Logger* logger_ptr)
2370{
2371 resize_impl(new_size, true, new_start_or_unchanged, logger_ptr);
2372}
2373
2374template<typename Allocator, bool SHARING>
2375void Basic_blob<Allocator, SHARING>::resize_impl(size_type new_size, bool clear_on_alloc,
2376 size_type new_start_or_unchanged, log::Logger* logger_ptr)
2377{
2378 auto& new_start = new_start_or_unchanged;
2379 if (new_start == S_UNCHANGED)
2380 {
2381 new_start = start();
2382 }
2383
2384 const size_type min_capacity = new_start + new_size;
2385
2386 /* Ensure there is enough space for new_size starting at new_start. Note, in particular, this disallows
2387 * enlarging non-zero() buffer.
2388 * (If they want, they can explicitly call make_zero() first. But they must do so consciously, so that they're
2389 * forced to consider the performance impact of such an action.) Also note that zero() continues to be true
2390 * if was true. */
2391 reserve_impl(min_capacity, clear_on_alloc, logger_ptr);
2392 assert(capacity() >= min_capacity);
2393
2394 if (!zero())
2395 {
2396 m_size = new_size;
2397 m_start = new_start;
2398 }
2399 // else { zero(): m_size, m_start are meaningless; size() and start() == 0, as desired. }
2400
2401 assert(size() == new_size);
2402 assert(start() == new_start);
2403} // Basic_blob::resize_impl()
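/* A minimal sketch of the resulting resize() semantics (vanilla-allocator instantiation; values illustrative):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b;
 *   b.resize(100);      // zero() was true: allocates exactly 100 bytes; size() == 100, start() == 0.
 *   b.resize(50);       // Shrinks the logical range only; capacity() stays 100.
 *   b.resize(100);      // Fine again: start() + size() == 100 <= capacity().
 *   // b.resize(101);   // Would assert(): would require enlarging a !zero() buffer (make_zero() first). */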
2404
2405template<typename Allocator, bool SHARING>
2406void Basic_blob<Allocator, SHARING>::start_past_prefix(size_type prefix_size)
2407{
2408 resize(((start() + size()) > prefix_size)
2409 ? (start() + size() - prefix_size)
2410 : 0,
2411 prefix_size); // It won't log, as it cannot allocate, so no need to pass-through a Logger*.
2412 // Sanity check: `prefix_size == 0` translates to: resize(start() + size(), 0), as advertised.
2413}
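/* A minimal sketch of the non-prefix-sub-buffer feature (values illustrative; e.g., skipping a fixed-size
 * header without copying or reallocating the payload):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b{100};   // [begin(), end()) spans all 100 bytes.
 *   b.start_past_prefix(10);       // Now start() == 10, size() == 90; end() is unchanged; no copying occurred.
 *   b.start_past_prefix_inc(-10);  // Shift the boundary back: start() == 0, size() == 100 again. */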
2414
2415template<typename Allocator, bool SHARING>
2416void Basic_blob<Allocator, SHARING>::start_past_prefix_inc(difference_type prefix_size_inc)
2417{
2418 assert((prefix_size_inc >= 0) || (start() >= size_type(-prefix_size_inc)));
2419 start_past_prefix(start() + prefix_size_inc);
2420}
2421
2422template<typename Allocator, bool SHARING>
2423void Basic_blob<Allocator, SHARING>::clear()
2424{
2425 // Note: start() remains unchanged (as advertised). resize(0, 0) can be used if that is unacceptable.
2426 resize(0); // It won't log, as it cannot allocate, so no need to pass-through a Logger*.
2427 // Note corner case: zero() remains true if was true (and false if was false).
2428}
2429
2430template<typename Allocator, bool SHARING>
2431void Basic_blob<Allocator, SHARING>::make_zero(log::Logger* logger_ptr)
2432{
2433 /* Could also write this more elegantly as `swap(Basic_blob{});`, but the following is a bit more optimized (while
2434 * equivalent) and logs better. */
2435 if (!zero())
2436 {
2437 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2438 {
2439 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2440 if constexpr(SHARING)
2441 {
2442 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] giving up ownership of internal buffer sized "
2443 "[" << capacity() << "]; deallocation will immediately follow if no sharing "
2444 "`Blob`s remain; else ref-count merely decremented.");
2445 }
2446 else
2447 {
2448 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] deallocating internal buffer sized "
2449 "[" << capacity() << "].");
2450 }
2451 }
2452
2453 buf_ptr().reset();
2454 } // if (!zero())
2455} // Basic_blob::make_zero()
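/* A minimal sketch contrasting clear() and make_zero() (vanilla-allocator instantiation; values illustrative):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b{256};
 *   b.clear();       // size() == 0, but the 256-byte buffer is kept: capacity() == 256, zero() == false.
 *   b.resize(256);   // Cheap: no allocation needed; the buffer was retained.
 *   b.make_zero();   // Drops buffer ownership: zero() == true, capacity() == 0.
 *   b.resize(256);   // Allocates anew (this is the call that performs -- and would TRACE-log -- an allocation). */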
2456
2457template<typename Allocator, bool SHARING>
2458typename Basic_blob<Allocator, SHARING>::size_type
2459 Basic_blob<Allocator, SHARING>::assign_copy(const boost::asio::const_buffer& src, log::Logger* logger_ptr)
2460{
2461 const size_type n = src.size();
2462
2463 /* Either just set m_start = 0 and decrease/keep-constant (m_start + m_size) = n; or allocate exactly n-sized buffer
2464 * and set m_start = 0, m_size = n.
2465 * As elsewhere, the latter case requires that zero() be true currently (but they can force that with make_zero()). */
2466 resize(n, 0, logger_ptr);
2467
2468 // Performance: Basically equals: memcpy(data(), src.data(), src.size()).
2469 emplace_copy(const_begin(), src, logger_ptr);
2470
2471 // Corner case: n == 0. Above is equivalent to: if (!zero()) { m_size = m_start = 0; }. That behavior is advertised.
2472
2473 return n;
2474}
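/* A minimal sketch of assign_copy() (vanilla-allocator instantiation; source data illustrative):
 *
 *   const std::array<uint8_t, 3> src{{0x01, 0x02, 0x03}};
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b;        // zero() == true.
 *   const auto n = b.assign_copy(boost::asio::buffer(src));          // Allocates exactly 3 bytes and copies them.
 *   assert((n == 3) && (b.size() == 3) && (b.start() == 0)); */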
2475
2476template<typename Allocator, bool SHARING>
2477typename Basic_blob<Allocator, SHARING>::Iterator
2478 Basic_blob<Allocator, SHARING>::emplace_copy(Const_iterator dest, const boost::asio::const_buffer& src,
2479 log::Logger* logger_ptr)
2480{
2481 using std::memcpy;
2482
2483 // Performance: assert()s eliminated and values inlined, below boils down to: memcpy(dest, src.data(), src.size());
2484
2485 assert(valid_iterator(dest));
2486
2487 const Iterator dest_it = iterator_sans_const(dest);
2488 const size_type n = src.size(); // Note the entire source buffer is copied over.
2489
2490 if (n != 0)
2491 {
2492 const auto src_data = static_cast<Const_iterator>(src.data());
2493
2494 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2495 {
2496 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2497 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] copying "
2498 "memory area [" << static_cast<const void*>(src_data) << "] sized "
2499 "[" << n << "] to internal buffer at offset [" << (dest - const_begin()) << "].");
2500 }
2501
2502 assert(derefable_iterator(dest_it)); // Input check.
2503 assert(difference_type(n) <= (const_end() - dest)); // Input check. (As advertised, we don't "correct" `n`.)
2504
2505 // Ensure no overlap by user.
2506 assert(((dest_it + n) <= src_data) || ((src_data + n) <= dest_it));
2507
2508 /* Some compilers in some build configs issue stringop-overflow warning here, when optimizer heavily auto-inlines:
2509 * error: ‘memcpy’ specified bound between 9223372036854775808 and 18446744073709551615
2510 * exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
2511 * This occurs due to (among other things) inlining from above our frame down into the std::memcpy() call
2512 * we make; plus allegedly the C++ front-end supplying the huge values during the diagnostics pass.
2513 * No such huge values (which are 0x800000000000000F, 0xFFFFFFFFFFFFFFFF, 0x7FFFFFFFFFFFFFFF, respectively)
2514 * are actually passed-in at run-time nor mentioned anywhere
2515 * in our code, here or in the unit-test(s) triggering the auto-inlining triggering the warning. So:
2516 *
2517 * The warning is wholly inaccurate in a way reminiscent of the situation in reserve() with a somewhat
2518 * similar comment. In this case, however, a pragma does properly work, so we use that approach instead of
2519 * a run-time check/assert() which would give away a bit of perf. */
2520#pragma GCC diagnostic push
2521#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
2522#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
2523#pragma GCC diagnostic ignored "-Wstringop-overflow"
2524#pragma GCC diagnostic ignored "-Wrestrict" // Another similar bogus one pops up after pragma-ing away preceding one.
2525
2526 /* Likely linear-time in `n` but hopefully optimized. Could use a C++ construct, but I've seen that be slower
2527 * than a direct memcpy() call in practice, at least in a Linux gcc. Could use boost.asio buffer_copy(), which
2528 * as of this writing does do memcpy(), but the following is an absolute guarantee of best performance, so better
2529 * safe than sorry (hence this whole Basic_blob class's existence, at least in part). */
2530 memcpy(dest_it, src_data, n);
2531
2532#pragma GCC diagnostic pop
2533 }
2534
2535 return dest_it + n;
2536} // Basic_blob::emplace_copy()
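/* A minimal sketch of emplace_copy() (values illustrative; *this must already have room at dest):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b{8};     // 8-byte logical buffer.
 *   const std::array<uint8_t, 4> hdr{{0xDE, 0xAD, 0xBE, 0xEF}};
 *   // Copy the 4 source bytes onto b's offsets [2, 6); returns iterator just past the copied area.
 *   const auto past_it = b.emplace_copy(b.const_begin() + 2, boost::asio::buffer(hdr));
 *   assert(past_it == b.begin() + 6); */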
2537
2538template<typename Allocator, bool SHARING>
2539typename Basic_blob<Allocator, SHARING>::Const_iterator
2540 Basic_blob<Allocator, SHARING>::sub_copy(Const_iterator src, const boost::asio::mutable_buffer& dest,
2541 log::Logger* logger_ptr) const
2542{
2543 // Code similar to emplace_copy(). Therefore keeping comments light.
2544
2545 using std::memcpy;
2546
2547 assert(valid_iterator(src));
2548
2549 const size_type n = dest.size(); // Note the entire destination buffer is filled.
2550 if (n != 0)
2551 {
2552 const auto dest_data = static_cast<Iterator>(dest.data());
2553
2554 if (logger_ptr && logger_ptr->should_log(log::Sev::S_TRACE, S_LOG_COMPONENT))
2555 {
2556 FLOW_LOG_SET_CONTEXT(logger_ptr, S_LOG_COMPONENT);
2557 FLOW_LOG_TRACE_WITHOUT_CHECKING("Blob [" << this << "] copying to "
2558 "memory area [" << static_cast<const void*>(dest_data) << "] sized "
2559 "[" << n << "] from internal buffer offset [" << (src - const_begin()) << "].");
2560 }
2561
2562 assert(derefable_iterator(src));
2563 assert(difference_type(n) <= (const_end() - src)); // Can't copy from beyond end of *this blob.
2564
2565 assert(((src + n) <= dest_data) || ((dest_data + n) <= src));
2566
2567 // See explanation for the pragma in emplace_copy(). While warning not yet observed here, preempting it.
2568#pragma GCC diagnostic push
2569#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
2570#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
2571#pragma GCC diagnostic ignored "-Wstringop-overflow"
2572#pragma GCC diagnostic ignored "-Wrestrict"
2573 memcpy(dest_data, src, n);
2574#pragma GCC diagnostic pop
2575 }
2576
2577 return src + n;
2578}
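/* A minimal sketch of sub_copy(), the mirror image of emplace_copy() (values illustrative):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b{8};
 *   uint8_t out[4];
 *   // Fill the entire 4-byte `out` area from b's bytes at offsets [2, 6); returns iterator past the read area.
 *   const auto past_it = b.sub_copy(b.const_begin() + 2, boost::asio::buffer(out));
 *   assert(past_it == b.const_begin() + 6); */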
2579
2580template<typename Allocator, bool SHARING>
2581typename Basic_blob<Allocator, SHARING>::Iterator
2582 Basic_blob<Allocator, SHARING>::erase(Const_iterator first, Const_iterator past_last)
2583{
2584 using std::memmove;
2585
2586 assert(derefable_iterator(first)); // Input check.
2587 assert(valid_iterator(past_last)); // Input check.
2588
2589 const Iterator dest = iterator_sans_const(first);
2590
2591 if (past_last > first) // (Note: `past_last <= first` allowed, not illegal.)
2592 {
2593 const auto n_moved = size_type(const_end() - past_last);
2594
2595 if (n_moved != 0)
2596 {
2597 // See explanation for the pragma in emplace_copy(). While warning not yet observed here, preempting it.
2598#pragma GCC diagnostic push
2599#pragma GCC diagnostic ignored "-Wpragmas" // For older versions, where the following does not exist/cannot be disabled.
2600#pragma GCC diagnostic ignored "-Wunknown-warning-option" // (Similarly for clang.)
2601#pragma GCC diagnostic ignored "-Wstringop-overflow"
2602#pragma GCC diagnostic ignored "-Wrestrict"
2603 memmove(dest, iterator_sans_const(past_last), n_moved); // Cannot use memcpy() due to possible overlap.
2604#pragma GCC diagnostic pop
2605 }
2606 // else { Everything past end() is to be erased: it's sufficient to just update m_size: }
2607
2608 m_size -= (past_last - first);
2609 // m_capacity does not change, as we advertised minimal operations possible to achieve result.
2610 } // if (past_last > first)
2611 // else if (past_last <= first) { Nothing to do. }
2612
2613 return dest;
2614} // Basic_blob::erase()
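/* A minimal sketch of erase() (values illustrative): removing a middle range shifts the tail left via memmove();
 * start() and capacity() stay put, only size() shrinks:
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b{10};    // size() == 10.
 *   b.erase(b.const_begin() + 2, b.const_begin() + 5);               // Remove the 3 elements at offsets [2, 5).
 *   assert((b.size() == 7) && (b.capacity() == 10)); */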
2615
2616template<typename Allocator, bool SHARING>
2617const typename Basic_blob<Allocator, SHARING>::value_type&
2618 Basic_blob<Allocator, SHARING>::const_front() const
2619{
2620 assert(!empty());
2621 return *const_begin();
2622}
2623
2624template<typename Allocator, bool SHARING>
2625const typename Basic_blob<Allocator, SHARING>::value_type&
2626 Basic_blob<Allocator, SHARING>::const_back() const
2627{
2628 assert(!empty());
2629 return const_end()[-1];
2630}
2631
2632template<typename Allocator, bool SHARING>
2633typename Basic_blob<Allocator, SHARING>::value_type&
2634 Basic_blob<Allocator, SHARING>::front()
2635{
2636 assert(!empty());
2637 return *begin();
2638}
2639
2640template<typename Allocator, bool SHARING>
2641typename Basic_blob<Allocator, SHARING>::value_type&
2642 Basic_blob<Allocator, SHARING>::back()
2643{
2644 assert(!empty());
2645 return end()[-1];
2646}
2647
2648template<typename Allocator, bool SHARING>
2649const typename Basic_blob<Allocator, SHARING>::value_type&
2650 Basic_blob<Allocator, SHARING>::front() const
2651{
2652 return const_front();
2653}
2654
2655template<typename Allocator, bool SHARING>
2656const typename Basic_blob<Allocator, SHARING>::value_type&
2657 Basic_blob<Allocator, SHARING>::back() const
2658{
2659 return const_back();
2660}
2661
2662template<typename Allocator, bool SHARING>
2663typename Basic_blob<Allocator, SHARING>::Const_iterator
2664 Basic_blob<Allocator, SHARING>::const_begin() const
2665{
2666 return const_cast<Basic_blob*>(this)->begin();
2667}
2668
2669template<typename Allocator, bool SHARING>
2670typename Basic_blob<Allocator, SHARING>::Iterator
2671 Basic_blob<Allocator, SHARING>::begin()
2672{
2673 if (zero())
2674 {
2675 return nullptr;
2676 }
2677 // else
2678
2679 /* buf_ptr().get() is value_type* when Buf_ptr = regular shared_ptr; but possibly Some_fancy_ptr<value_type>
2680 * when Buf_ptr = boost::interprocess::shared_ptr<value_type, Allocator_raw>, namely when
2681 * Allocator_raw::pointer = Some_fancy_ptr<value_type> and not simply value_type* again. We need value_type*.
2682 * Fancy-pointer is not really an officially-defined concept (offset_ptr<> is an example of one).
2683 * Anyway the following works for both cases, but there are a bunch of different things we could write.
2684 * Since it's just this one location where we need to do this, I do not care too much, and the following
2685 * cheesy thing -- &(*p) -- is OK.
2686 *
2687 * @todo In C++20 can replace this with std::to_address(). Or can implement our own (copy cppreference.com impl). */
2688
2689 const auto raw_or_fancy_buf_ptr = buf_ptr().get();
2690 return &(*raw_or_fancy_buf_ptr) + m_start;
2691}
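/* For reference, a C++20 formulation of the fancy-pointer unwrapping above (not used here, as noted in the
 * to-do; std::to_address() from <memory> handles raw and fancy pointers alike):
 *
 *   return std::to_address(buf_ptr().get()) + m_start; */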
2692
2693template<typename Allocator, bool SHARING>
2694typename Basic_blob<Allocator, SHARING>::Const_iterator
2695 Basic_blob<Allocator, SHARING>::const_end() const
2696{
2697 return zero() ? const_begin() : (const_begin() + size());
2698}
2699
2700template<typename Allocator, bool SHARING>
2701typename Basic_blob<Allocator, SHARING>::Iterator
2702 Basic_blob<Allocator, SHARING>::end()
2703{
2704 return zero() ? begin() : (begin() + size());
2705}
2706
2707template<typename Allocator, bool SHARING>
2710{
2711 return const_begin();
2712}
2713
2714template<typename Allocator, bool SHARING>
2717{
2718 return const_begin();
2719}
2720
2721template<typename Allocator, bool SHARING>
2724{
2725 return const_end();
2726}
2727
2728template<typename Allocator, bool SHARING>
2731{
2732 return const_end();
2733}
2734
2735template<typename Allocator, bool SHARING>
2738{
2739 return const_begin();
2740}
2741
2742template<typename Allocator, bool SHARING>
2745{
2746 return begin();
2747}
2748
2749template<typename Allocator, bool SHARING>
2750bool Basic_blob<Allocator, SHARING>::valid_iterator(Const_iterator it) const
2751{
2752 return empty() ? (it == const_end())
2753 : in_closed_range(const_begin(), it, const_end());
2754}
2755
2756template<typename Allocator, bool SHARING>
2757bool Basic_blob<Allocator, SHARING>::derefable_iterator(Const_iterator it) const
2758{
2759 return empty() ? false
2760 : in_closed_open_range(const_begin(), it, const_end());
2761}
2762
2763template<typename Allocator, bool SHARING>
2764typename Basic_blob<Allocator, SHARING>::Iterator
2765 Basic_blob<Allocator, SHARING>::iterator_sans_const(Const_iterator it)
2766{
2767 return const_cast<value_type*>(it); // Can be done without const_cast<> but might as well save some cycles.
2768}
2769
2770template<typename Allocator, bool SHARING>
2771boost::asio::const_buffer Basic_blob<Allocator, SHARING>::const_buffer() const
2772{
2773 return boost::asio::const_buffer{const_data(), size()};
2774}
2775
2776template<typename Allocator, bool SHARING>
2777boost::asio::mutable_buffer Basic_blob<Allocator, SHARING>::mutable_buffer()
2778{
2779 return boost::asio::mutable_buffer{data(), size()};
2780}
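/* A minimal sketch of the boost.asio "view" accessors (illustrative; the views alias the blob's bytes, so they
 * must not outlive the blob or survive a buffer-invalidating call such as make_zero()):
 *
 *   flow::util::Basic_blob<std::allocator<uint8_t>, false> b{64};
 *   boost::asio::mutable_buffer in_view = b.mutable_buffer();        // Covers [begin(), end()).
 *   boost::asio::const_buffer out_view = b.const_buffer();           // Same bytes, read-only.
 *   // E.g., sock.receive(in_view); sock.send(out_view);             // `sock` being some hypothetical socket. */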
2781
2782template<typename Allocator, bool SHARING>
2783typename Basic_blob<Allocator, SHARING>::Allocator_raw
2784 Basic_blob<Allocator, SHARING>::get_allocator() const
2785{
2786 return alloc_raw();
2787}
2788
2789template<typename Allocator, bool SHARING>
2790typename Basic_blob<Allocator, SHARING>::Buf_ptr& Basic_blob<Allocator, SHARING>::buf_ptr()
2791{
2792 return m_alloc_and_buf_ptr.second();
2793}
2794
2795template<typename Allocator, bool SHARING>
2796const typename Basic_blob<Allocator, SHARING>::Buf_ptr&
2797 Basic_blob<Allocator, SHARING>::buf_ptr() const
2798{
2799 return const_cast<Basic_blob*>(this)->buf_ptr();
2800}
2801
2802template<typename Allocator, bool SHARING>
2803typename Basic_blob<Allocator, SHARING>::Allocator_raw& Basic_blob<Allocator, SHARING>::alloc_raw()
2804{
2805 return m_alloc_and_buf_ptr.first();
2806}
2807
2808template<typename Allocator, bool SHARING>
2809const typename Basic_blob<Allocator, SHARING>::Allocator_raw&
2810 Basic_blob<Allocator, SHARING>::alloc_raw() const
2811{
2812 return const_cast<Basic_blob*>(this)->alloc_raw();
2813}
2814
2815
2816template<typename Allocator, bool SHARING>
2817Basic_blob<Allocator, SHARING>::Deleter_raw::Deleter_raw() :
2818 m_buf_sz(0)
2819{
2820 /* It can be left `= default;`, but some gcc versions then complain m_buf_sz may be used uninitialized (not true but
2821 * such is life). */
2822}
2823
2824template<typename Allocator, bool SHARING>
2825Basic_blob<Allocator, SHARING>::Deleter_raw::Deleter_raw(const Allocator_raw& alloc_raw_src, size_type buf_sz) :
2826 /* Copy allocator; a stateless allocator should have size 0 (no-op for the processor in that case... except
2827 * the optional<> registering it has-a-value). */
2828 m_alloc_raw(std::in_place, alloc_raw_src),
2829 m_buf_sz(buf_sz) // Smart-ptr stores a T*, where T is a trivial-deleter PoD, but we delete an array of Ts: this many.
2830{
2831 // OK.
2832}
2833
2834template<typename Allocator, bool SHARING>
2835Basic_blob<Allocator, SHARING>::Deleter_raw::Deleter_raw(Deleter_raw&& moved_src)
2836{
2837 /* We advertised our action is as-if we default-ct, then move-assign. While we skipped delegating to default-ctor,
2838 * the only difference is that would've initialized m_buf_sz; but the following will just overwrite it anyway. So
2839 * we can in fact move-assign now, and that's it. */
2840 operator=(std::move(moved_src));
2841}
2842
2843/* Auto-generated copy-ct should be fine; the only conceivable source of trouble might be Allocator_raw copy-ction,
2844 * but that must exist for all allocators. */
2845template<typename Allocator, bool SHARING>
2846Basic_blob<Allocator, SHARING>::Deleter_raw::Deleter_raw(const Deleter_raw& src) = default;
2847
2848template<typename Allocator, bool SHARING>
2849typename Basic_blob<Allocator, SHARING>::Deleter_raw&
2850 Basic_blob<Allocator, SHARING>::Deleter_raw::operator=(Deleter_raw&& moved_src)
2851{
2852 using std::swap;
2853
2854 if (this != &moved_src) // @todo Maybe assert() on this, since our uses are so locked-down?
2855 {
2856 m_buf_sz = 0;
2857 swap(m_buf_sz, moved_src.m_buf_sz);
2858
2859 /* That's that for m_buf_sz; that leaves m_alloc_raw. That is trickier than one might think; a swap
2860 * or explicitly copy- or move-assigning it will work with many allocators, but some are not assignable at all
2861 * (for example boost::interprocess::allocator which is stateful). (There are good reasons for that having to
2862 * do with propagate_on_container_*_assignment, but never mind; our task here is simpler than those worries.)
2863 * Bottom line is, every allocator is copy-constructible, and we store m_alloc_raw as an optional<>, so
2864 * we can simulate an assignment via destroy (if needed) + copy-construction, namely using optional::emplace().
2865 *
2866 * Plus, arguably an optimization: it is very common they're the same allocator by-value (e.g., stateless
2867 * allocators of the same class are always mutually equal, period); in which case can no-op. */
2868 if (!moved_src.m_alloc_raw)
2869 {
2870 // Another corner case. @todo Maybe assert() on this not being the case, since our uses are so locked-down?
2871 m_alloc_raw.reset();
2872 // m_alloc_raw has been as-if copied over; and moved_src's guy is already as-if default-cted, as promised.
2873 }
2874 else
2875 {
2876 const auto& src_alloc_raw = *moved_src.m_alloc_raw;
2877 if ((!m_alloc_raw) || (*m_alloc_raw != src_alloc_raw))
2878 {
2879 m_alloc_raw.emplace(src_alloc_raw); // Destroy if needed; then copy-construct.
2880 }
2881 // else { m_alloc_raw is already as-if copied from moved_src.m_alloc_raw: the aforementioned optimization. }
2882
2883 // m_alloc_raw has been copied over; as promised reset moved_src's guy to as-if-default-cted.
2884 moved_src.m_alloc_raw.reset();
2885 }
2886 } // if (this != &moved_src)
2887
2888 return *this;
2889} // Basic_blob::Deleter_raw::operator=(&&)
2890
2891template<typename Allocator, bool SHARING>
2892typename Basic_blob<Allocator, SHARING>::Deleter_raw&
2893 Basic_blob<Allocator, SHARING>::Deleter_raw::operator=(const Deleter_raw& src)
2894{
2895 /* Ideally we'd just use `= default;`, but that might not compile, when Allocator_raw has no copy-assignment
2896 * (as noted elsewhere this is entirely possible). So basically perform a simpler version of the move-assignment
2897 * impl. Keeping comments light; please see move-assignment impl. */
2898
2899 if (this != &src)
2900 {
2901 m_buf_sz = src.m_buf_sz;
2902
2903 if (!src.m_alloc_raw)
2904 {
2905 m_alloc_raw.reset();
2906 }
2907 else
2908 {
2909 const auto& src_alloc_raw = *src.m_alloc_raw;
2910 if ((!m_alloc_raw) || (*m_alloc_raw != src_alloc_raw))
2911 {
2912 m_alloc_raw.emplace(src_alloc_raw); // Having to do this for some `Allocator_raw`s is why we can't `= default;`.
2913 }
2914 }
2915 } // if (this != &src)
2916
2917 return *this;
2918} // Basic_blob::Deleter_raw::operator=(const&)
2919
2920template<typename Allocator, bool SHARING>
2921void Basic_blob<Allocator, SHARING>::Deleter_raw::operator()(Pointer_raw to_delete)
2922{
2923 // No need to invoke dtor: Allocator_raw::value_type is Basic_blob::value_type, a boring int type with no real dtor.
2924
2925 // Free the raw buffer at location to_delete; which we know is m_buf_sz `value_type`s long.
2926 m_alloc_raw->deallocate(to_delete, m_buf_sz);
2927}
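/* A minimal sketch of an allocator usable as Allocator_raw with the machinery above (illustrative only: a
 * trivial heap-backed allocator providing what Deleter_raw and Buf_ptr rely on -- allocate(), deallocate(p, n),
 * copy-construction, equality; a realistic use would instead supply a SHM-aware allocator such as
 * boost::interprocess::allocator):
 *
 *   template<typename T>
 *   struct Trivial_allocator // Hypothetical name.
 *   {
 *     using value_type = T;
 *     Trivial_allocator() = default;
 *     template<typename U> Trivial_allocator(const Trivial_allocator<U>&) {}
 *     T* allocate(std::size_t n) { return static_cast<T*>(::operator new(n * sizeof(T))); }
 *     void deallocate(T* p, std::size_t) { ::operator delete(p); }
 *     bool operator==(const Trivial_allocator&) const { return true; }
 *     bool operator!=(const Trivial_allocator&) const { return false; }
 *   };
 *   // flow::util::Basic_blob<Trivial_allocator<uint8_t>, true> b{64};  // Exercises the custom-allocator path. */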
2928
2929} // namespace flow::util