Flow 1.0.0
Flow project: Public API.
A hand-optimized and API-tweaked replacement for vector<uint8_t>, i.e., buffer of bytes inside an allocated area of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and SHM-friendly custom-allocator support.
More...
#include <basic_blob.hpp>
Public Types | |
using | value_type = uint8_t |
Short-hand for values, which in this case are unsigned bytes. | |
using | size_type = std::size_t |
Type for index into blob or length of blob or sub-blob. | |
using | difference_type = std::ptrdiff_t |
Type for difference of size_type values. | |
using | Iterator = value_type * |
Type for iterator pointing into a mutable structure of this type. | |
using | Const_iterator = value_type const * |
Type for iterator pointing into an immutable structure of this type. | |
using | Allocator_raw = Allocator |
Short-hand for the allocator type specified at compile-time. Its element type is our value_type. | |
using | pointer = Iterator |
For container compliance (hence the irregular capitalization): pointer to element. | |
using | const_pointer = Const_iterator |
For container compliance (hence the irregular capitalization): pointer to const element. | |
using | reference = value_type & |
For container compliance (hence the irregular capitalization): reference to element. | |
using | const_reference = const value_type & |
For container compliance (hence the irregular capitalization): reference to const element. | |
using | iterator = Iterator |
For container compliance (hence the irregular capitalization): Iterator type. | |
using | const_iterator = Const_iterator |
For container compliance (hence the irregular capitalization): Const_iterator type. | |
Public Member Functions | |
Basic_blob (const Allocator_raw &alloc_raw=Allocator_raw()) | |
Constructs blob with zero() == true . More... | |
Basic_blob (size_type size, log::Logger *logger_ptr=0, const Allocator_raw &alloc_raw=Allocator_raw()) | |
Constructs blob with size() and capacity() equal to the given size , and start() == 0 . More... | |
Basic_blob (Basic_blob &&moved_src, log::Logger *logger_ptr=0) | |
Move constructor, constructing a blob exactly internally equal to pre-call moved_src , while the latter is made to be exactly as if it were just constructed as Basic_blob(nullptr) (allocator subtleties aside). More... | |
Basic_blob (const Basic_blob &src, log::Logger *logger_ptr=0) | |
Copy constructor, constructing a blob logically equal to src . More... | |
~Basic_blob () | |
Destructor that drops *this ownership of the allocated internal buffer if any, as by make_zero(); if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also. More... | |
Basic_blob & | assign (Basic_blob &&moved_src, log::Logger *logger_ptr=0) |
Move assignment. More... | |
Basic_blob & | operator= (Basic_blob &&moved_src) |
Move assignment operator (no logging): equivalent to assign(std::move(moved_src), nullptr) . More... | |
Basic_blob & | assign (const Basic_blob &src, log::Logger *logger_ptr=0) |
Copy assignment: assuming (this != &src) && (!blobs_sharing(*this, src)) , makes *this logically equal to src ; but behavior undefined if a reallocation would be necessary to do this. More... | |
Basic_blob & | operator= (const Basic_blob &src) |
Copy assignment operator (no logging): equivalent to assign(src, nullptr) . More... | |
void | swap (Basic_blob &other, log::Logger *logger_ptr=0) |
Swaps the contents of this structure and other , or no-op if this == &other . More... | |
Basic_blob | share (log::Logger *logger_ptr=0) const |
Applicable to !zero() blobs, this returns an identical Basic_blob that shares (co-owns) *this allocated buffer along with *this and any other Basic_blob s also sharing it. More... | |
Basic_blob | share_after_split_left (size_type size, log::Logger *logger_ptr=0) |
Applicable to !zero() blobs, this shifts this->begin() by size to the right without changing end(); and returns a Basic_blob containing the shifted-past values that shares (co-owns) *this allocated buffer along with *this and any other Basic_blob s also sharing it. More... | |
Basic_blob | share_after_split_right (size_type size, log::Logger *logger_ptr=0) |
Identical to share_after_split_left(), except this->end() shifts by size to the left (instead of this->begin() to the right), and the split-off Basic_blob contains the right-most size elements (instead of the left-most). More... | |
template<typename Emit_blob_func > | |
void | share_after_split_equally (size_type size, bool headless_pool, Emit_blob_func &&emit_blob_func, log::Logger *logger_ptr=0) |
Identical to successively performing share_after_split_left(size) until this->empty() == true ; the resulting Basic_blob s are emitted via emit_blob_func() callback in the order they're split off from the left. More... | |
template<typename Blob_container > | |
void | share_after_split_equally_emit_seq (size_type size, bool headless_pool, Blob_container *out_blobs, log::Logger *logger_ptr=0) |
share_after_split_equally() wrapper that places Basic_blob s into the given container via push_back() . More... | |
template<typename Blob_ptr_container > | |
void | share_after_split_equally_emit_ptr_seq (size_type size, bool headless_pool, Blob_ptr_container *out_blobs, log::Logger *logger_ptr=0) |
share_after_split_equally() wrapper that places Ptr<Basic_blob> s into the given container via push_back() , where the type Ptr<> is determined via Blob_ptr_container::value_type . More... | |
size_type | assign_copy (const boost::asio::const_buffer &src, log::Logger *logger_ptr=0) |
Replaces logical contents with a copy of the given non-overlapping area anywhere in memory. More... | |
Iterator | emplace_copy (Const_iterator dest, const boost::asio::const_buffer &src, log::Logger *logger_ptr=0) |
Copies src buffer directly onto equally sized area within *this at location dest ; *this must have sufficient size() to accommodate all of the data copied. More... | |
Const_iterator | sub_copy (Const_iterator src, const boost::asio::mutable_buffer &dest, log::Logger *logger_ptr=0) const |
The opposite of emplace_copy() in every way, copying a sub-blob onto a target memory area. More... | |
size_type | size () const |
Returns number of elements stored, namely end() - begin() . More... | |
size_type | start () const |
Returns the offset between begin() and the start of the internally allocated buffer. More... | |
bool | empty () const |
Returns size() == 0 . More... | |
size_type | capacity () const |
Returns the number of elements in the internally allocated buffer, which is 1 or more; or 0 if no buffer is internally allocated. More... | |
bool | zero () const |
Returns false if a buffer is allocated and owned; true otherwise. More... | |
void | reserve (size_type capacity, log::Logger *logger_ptr=0) |
Ensures that an internal buffer of at least capacity elements is allocated and owned; disallows growing an existing buffer; never shrinks an existing buffer; if a buffer is allocated, it is no larger than capacity . More... | |
void | make_zero (log::Logger *logger_ptr=0) |
Guarantees post-condition zero() == true by dropping *this ownership of the allocated internal buffer if any; if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also. More... | |
void | resize (size_type size, size_type start_or_unchanged=S_UNCHANGED, log::Logger *logger_ptr=0) |
Guarantees post-condition size() == size and start() == start_or_unchanged (start() is unchanged if S_UNCHANGED is passed); no values in pre-call range [begin(), end()) are changed; any values added to that range by the call are not initialized to zero or otherwise. More... | |
void | start_past_prefix (size_type prefix_size) |
Restructures blob to consist of an internally allocated buffer and a [begin(), end()) range starting at offset prefix_size within that buffer. More... | |
void | start_past_prefix_inc (difference_type prefix_size_inc) |
Like start_past_prefix() but shifts the current prefix position by the given incremental value (positive or negative). More... | |
void | clear () |
Equivalent to resize(0, start()) . More... | |
Iterator | erase (Const_iterator first, Const_iterator past_last) |
Performs the minimal number of operations to make range [begin(), end()) unchanged except for lacking sub-range [first, past_last) . More... | |
Iterator | begin () |
Returns pointer to mutable first element; or end() if empty(). More... | |
Const_iterator | const_begin () const |
Returns pointer to immutable first element; or end() if empty(). More... | |
Const_iterator | begin () const |
Equivalent to const_begin(). More... | |
Iterator | end () |
Returns pointer one past mutable last element; empty() is possible. More... | |
Const_iterator | const_end () const |
Returns pointer one past immutable last element; empty() is possible. More... | |
Const_iterator | end () const |
Equivalent to const_end(). More... | |
const value_type & | const_front () const |
Returns reference to immutable first element. More... | |
const value_type & | const_back () const |
Returns reference to immutable last element. More... | |
const value_type & | front () const |
Equivalent to const_front(). More... | |
const value_type & | back () const |
Equivalent to const_back(). More... | |
value_type & | front () |
Returns reference to mutable first element. More... | |
value_type & | back () |
Returns reference to mutable last element. More... | |
value_type const * | const_data () const |
Equivalent to const_begin(). More... | |
value_type * | data () |
Equivalent to begin(). More... | |
Const_iterator | cbegin () const |
Synonym of const_begin(). More... | |
Const_iterator | cend () const |
Synonym of const_end(). More... | |
bool | valid_iterator (Const_iterator it) const |
Returns true if and only if: this->derefable_iterator(it) || (it == this->const_end()) . More... | |
bool | derefable_iterator (Const_iterator it) const |
Returns true if and only if the given iterator points to an element within this blob's size() elements. More... | |
boost::asio::const_buffer | const_buffer () const |
Convenience accessor returning an immutable boost.asio buffer "view" into the entirety of the blob. More... | |
boost::asio::mutable_buffer | mutable_buffer () |
Same as const_buffer() but the returned view is mutable. More... | |
Allocator_raw | get_allocator () const |
Returns a copy of the internally cached Allocator_raw as set by a constructor or assign() or assignment-operator, whichever happened last. More... | |
Static Public Attributes | |
static constexpr bool | S_SHARING = S_SHARING_ALLOWED |
Value of template parameter S_SHARING_ALLOWED (for generic programming). | |
static constexpr size_type | S_UNCHANGED = size_type(-1) |
Special value indicating an unchanged size_type value; such as in resize(). | |
static constexpr bool | S_IS_VANILLA_ALLOC = std::is_same_v<Allocator_raw, std::allocator<value_type>> |
true if Allocator_raw underlying allocator template is simply std::allocator ; false otherwise. More... | |
Protected Member Functions | |
template<typename Emit_blob_func , typename Share_after_split_left_func > | |
void | share_after_split_equally_impl (size_type size, bool headless_pool, Emit_blob_func &&emit_blob_func, log::Logger *logger_ptr, Share_after_split_left_func &&share_after_split_left_func) |
Impl of share_after_split_equally() but capable of emitting not just *this type (Basic_blob<...> ) but any sub-class (such as Blob /Sharing_blob ) provided a functor like share_after_split_left() but returning an object of that appropriate type. More... | |
Static Protected Attributes | |
static constexpr Flow_log_component | S_LOG_COMPONENT = Flow_log_component::S_UTIL |
Our flow::log::Component. | |
Related Functions | |
(Note that these are not member functions.) | |
template<typename Allocator , bool S_SHARING_ALLOWED> | |
bool | blobs_sharing (const Basic_blob< Allocator, S_SHARING_ALLOWED > &blob1, const Basic_blob< Allocator, S_SHARING_ALLOWED > &blob2) |
Returns true if and only if both given objects are not zero() == true , and they either co-own a common underlying buffer, or are the same object. More... | |
template<typename Allocator , bool S_SHARING_ALLOWED> | |
void | swap (Basic_blob< Allocator, S_SHARING_ALLOWED > &blob1, Basic_blob< Allocator, S_SHARING_ALLOWED > &blob2, log::Logger *logger_ptr=0) |
Equivalent to blob1.swap(blob2) . More... | |
A hand-optimized and API-tweaked replacement for vector<uint8_t>, i.e., buffer of bytes inside an allocated area of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and SHM-friendly custom-allocator support.

There exist concrete aliases equal to Basic_blob<std::allocator, B> (with B = false or true respectively), in a fashion vaguely similar to what string is to basic_string (a little). Such an alias is much like Blob/Sharing_blob, in that it is a non-template concrete type; but it does not take or store a Logger*.

The rationale for its existence mirrors its essential differences from vector<uint8_t>, which are listed below. To summarize, though: it exists to guarantee specific performance by reducing implementation uncertainty via lower-level operations; and to force the user to explicitly authorize any allocation, to ensure thoughtfully performant use. Update: Plus, it adds the non-prefix-sub-buffer feature, which can be useful for zero-copy deserialization. Update: Plus, it adds a simple form of garbage-collected memory pools, useful for operating multiple Basic_blobs that share a common over-arching memory area (buffer). Update: Plus, it adds SHM-friendly custom-allocator support. (While all vector impls support custom allocators, only some later versions of gcc std::vector work with shared-memory (SHM) allocators, and imperfectly at that. boost::container::vector a/k/a boost::interprocess::vector is fully SHM-friendly.)
Main differences from vector<uint8_t>:

- The logical contents [begin(), end()) can optionally begin not at the start of the internally allocated buffer but somewhere past it. In other words, the logical buffer is not necessarily a prefix of the internal allocated buffer. This feature is critical when one wants to use some sub-buffer of a buffer without reallocating a smaller buffer and copying the sub-buffer into it. For example, if we read a DATA packet the majority of which is the payload, which begins a few bytes from the start – past a short header – it may be faster to keep passing around the whole thing with move semantics but use only the payload part, after logically deserializing it (a/k/a zero-copy deserialization semantics). Of course one can do this with vector as well; but one would need to always remember the prefix length even after deserializing, at which point such details would ideally be forgotten instead. So this API is significantly more pleasant in that case. Moreover it can then be used generically more easily, alongside other containers.
- When copying does occur, it is via memcpy() directly, instead of hoping that using a higher-level abstraction will ultimately do the same.
- Elements are not initialized to zero or any other value (contrast with vector, short of tricks like Boost's default_init_t extension to various APIs like .resize()). If an allocation does occur, the area is left as-is unless the user specifies a source memory area from which to copy data.
- This is not a claim that a given vector is slow; the idea is to guarantee we aren't, by removing any question about it. It's entirely possible a given vector is equally fast, but that cannot be guaranteed by the standard except in terms of complexity guarantees (which are usually pretty good but not everything).
- A cautionary tale about std::vector<uint8_t> (in gcc-8.3 anyway): I (ygoldfel) once used it with a custom allocator (which worked in shared memory) and stored a megabytes-long buffer in one. Its destructor, I noticed, spent milliseconds (with 2022 hardware) – outside the actual dealloc call. Reason: it was iterating over every (1-byte) element and invoking its (non-existent/trivial) destructor. It did not specialize to avoid this, intentionally so according to a comment, when using a custom allocator. boost::container::vector<uint8_t> lacked this problem; but nevertheless it shows generally written containers can have such hidden perf quirks.
- Unlike vector, it has an explicit state where there is no underlying buffer; in this case zero() is true. Also in that case capacity() == 0 and size() == 0 (and start() == 0). zero() == true is the case on a default-constructed object of this class. The reason for this: I am never sure, at least, what a default-constructed vector looks like internally; a null buffer always seemed like a reasonable starting point worth guaranteeing explicitly. Conversely, make_zero() returns a blob to that state, making zero() true, as if upon default construction.
- Like vector, when !zero(), it keeps an allocated memory chunk of size M, at the start of which is the logical buffer of size N <= M, where N == size() and M == capacity(). However, M >= 1 always. Unlike vector, the logical buffer may instead begin at offset start(); in this case M >= start() + size(), and the buffer range occupies indices [start(), start() + size()) of the allocated buffer. By default start() == 0, as in vector, but this can be changed via the 2nd, optional, argument to resize().
- Like vector, reserve(Mnew), with Mnew <= M, does nothing. However, unlike vector, the same call is illegal when Mnew > M >= 1. However, any reserve() call is allowed when zero() is true. Therefore, if the user is intentionally okay with the performance implications of a reallocation, they can call make_zero() and then force the reallocating reserve() call.
- Like vector, resize(Nnew) merely guarantees post-condition size() == Nnew; which means it is essentially equivalent to reserve(Nnew) followed by setting the internal N member to Nnew. However, remember that resize() therefore keeps all the behaviors of reserve(), including that it cannot grow the buffer (it can only allocate one when zero() is true). If also changing start() from its default, then resize(Nnew, Snew) means reserve(Nnew + Snew), plus saving the internal N and S members.
- Allocation therefore occurs only via reserve(Mnew) when zero() == true. Moreover, exactly Mnew elements are allocated and no more (unlike with vector, where the policy used is not known). Moreover, if reserve(Mnew) is called indirectly (by another method of the class), the Mnew arg is set to no greater than the size necessary to complete the operation (again, by contrast, it is unknown what vector does w/r/t capacity policy).

The following feature was added quite some time after Blob was first introduced and matured. However, it seamlessly subsumes all of the above basic functionality with full backwards compatibility. It can also be disabled (and is by default) by setting S_SHARING to false at compile-time. (This gains back a little bit of perf, namely by turning an internal shared_ptr into a unique_ptr.)
The feature itself is simple: Suppose one has a blob A, constructed or otherwise resize()d or reserve()d so as to have zero() == false, meaning capacity() >= 1. Now suppose one calls the core method of this pool feature, share(), which returns a new blob B. B will have the same exact start(), size(), capacity() – and, in fact, the same pointer data() - start() (i.e., the underlying buffer start pointer, the buffer being capacity() long). That is, B now shares the underlying memory buffer with A. Normally, that underlying buffer would be deallocated when either A.make_zero() is called, or A is destructed. Now that it's shared by A and B, however, the buffer is deallocated only once make_zero() or destruction occurs for both A and B. That is, there is an internal (thread-safe) ref-count that must reach 0.
Both A and B may now again be share()d into further sharing Basic_blobs. This further increments the ref-count of the original buffer; all such Basic_blobs C, D, ... must now either make_zero() or destruct, at which point the dealloc occurs.
In that way the buffer – or pool – is garbage-collected as a whole, with reserve() (and APIs like resize() and ctors that call it) initially allocating and setting internal ref-count to 1, share() incrementing it, and make_zero() and ~Basic_blob() decrementing it (and deallocating when ref-count=0).
Pool-of-Basic_blobs functionality

The other aspect of this feature is its pool-of-Basic_blobs application. All of the sharing Basic_blobs A, B, ... retain all the aforementioned features, including the ability to use resize(), start_past_prefix_inc(), etc., to control the location of the logical sub-range [begin(), end()) within the underlying buffer (pool). E.g., suppose A was 10 bytes, with start() == 0 and size() == capacity() == 10; then share()d B is also that way. Now B.start_past_prefix_inc(5); A.resize(5); makes it so that A = the first 5 bytes of the pool, B = the last 5 bytes (and they don't overlap – they can even be concurrently modified safely). In that way A and B are now independent Basic_blobs – potentially passed, say, to independent TCP-receive calls, each of which reads up to 5 bytes – that share an over-arching pool.
The API share_after_split_left() is a convenience operation that splits a Basic_blob's [begin(), end()) area into 2 areas of specified length, then returns a new Basic_blob representing the first area in the split and modifies *this to represent the remainder (the 2nd area). This simply performs the op described in the preceding paragraph. share_after_split_right() is similar but acts symmetrically from the right. Lastly, share_after_split_equally*() splits a Basic_blob into several equally-sized (except possibly the last) sub-Basic_blobs of size N, where N is an arg. (It can be thought of as just calling share_after_split_left(N) repeatedly, then returning a sequence of the resulting post-split Basic_blobs.)
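The ref-counted co-ownership behind share() and share_after_split_left() can be sketched with std::shared_ptr. SharedBlob below is an invented standalone model, not a Flow type, but the co-ownership and deallocation rules mirror the description above:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Illustrative model: co-owning blobs hold a shared_ptr to one pool; the
// pool is freed only when the last co-owner releases it (make_zero() or
// destruction), regardless of which sub-ranges the co-owners represent.
struct SharedBlob
{
  std::shared_ptr<std::vector<std::uint8_t>> pool; // the ref-counted pool
  std::size_t start = 0;
  std::size_t size = 0;

  static SharedBlob make(std::size_t n)
  {
    return SharedBlob{std::make_shared<std::vector<std::uint8_t>>(n), 0, n};
  }

  // share(): new blob, same pool, same [start, start + size) range; the
  // copy of the shared_ptr bumps the ref-count.
  SharedBlob share() const { return *this; }

  // share_after_split_left(n): split off the left-most n elements into a
  // new co-owning blob; *this shrinks to the remainder.
  SharedBlob split_left(std::size_t n)
  {
    SharedBlob left{pool, start, n};
    start += n;
    size -= n;
    return left;
  }

  void make_zero()
  {
    pool.reset(); // drop co-ownership; dealloc happens at ref-count 0
    start = size = 0;
  }
};
```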
To summarize: The share_after_split*() APIs are useful to divide (potentially progressively) a pool into non-overlapping Basic_blobs, while ensuring the pool continues to exist while any Basic_blob refers to any part of it (but no later). Meanwhile direct use of share() with resize() and start_past_prefix*() allows for overlapping such sharing Basic_blobs.
Note that deallocation occurs regardless of which areas of the pool the relevant Basic_blobs represent, and whether they overlap or not (and, for that matter, whether they even together comprise the entire pool or leave "gaps" in between). The whole pool is deallocated the moment the last of the co-owning Basic_blobs performs either make_zero() or ~Basic_blob(); the values of start() and size() at that time are not relevant.
Like STL containers, this one optionally takes a custom allocator type (Allocator_raw) as a compile-time parameter instead of using the regular heap (std::allocator). Unlike many STL container implementations, including at least older std::vector, it supports SHM-storing allocators without a constant cross-process vaddr scheme. (Some do support this but with surprising perf flaws when storing raw integers/bytes. boost.container vector has solid support but lacks various other properties of Basic_blob.) While a detailed discussion is outside our scope here, the main point is that internally *this stores no raw value_type* but rather Allocator_raw::pointer – which in many cases is value_type*; but for advanced applications like SHM it might be a fancy-pointer like boost::interprocess::offset_ptr<value_type>. For general education check out the boost.interprocess docs covering storage of STL containers in SHM. (However, note that the allocators provided by that library are only one option even for SHM storage alone; e.g., they are stateful, and often one would like a stateless – zero-size – allocator. Plus there are other limitations to boost.interprocess SHM support, robust though it is.)
When and if *this logs, it is with log::Sev::S_TRACE severity or more verbose.
Unlike many other Flow API classes, this one does not derive from log::Log_context nor take a Logger* in ctor (and store it). Instead each API method/ctor/function capable of logging takes an optional (possibly null) log::Logger pointer. If supplied, it is used by that API alone (with some minor async exceptions). If you would like a more typical Flow-style logging API, then use our non-polymorphic sub-class Blob_with_log_context (more likely its aliases Blob, Sharing_blob). However consider the following first.
Why this design? Answer:

- A stored Logger* takes some space; and storing it, copying/moving it, etc., takes a little compute. In a low-level API like Basic_blob this is potentially nice to avoid when not actively needed. (That said, the logging can be extremely useful when debugging and/or profiling RAM use + allocations.)
- The original Blob (before Basic_blob existed) stored a Logger*, and it was fine. However: storing a Logger* is always okay when *this itself is stored in regular heap or on the stack. *this itself may, though, be stored in SHM; the Allocator_raw parameterization (see above regarding "custom allocator") suggests as much (i.e., if the buffer is stored in SHM, we might be too). In that case a Logger* does not, usually, make sense. As of this writing, a Logger in process 1 has no relationship with any Logger in process 2; and even if the Logger were stored in SHM itself, it would need to be supplied via an in-SHM fancy-pointer, not a Logger*, typically. The latter is a major can of worms and not supported by flow::log in any case as of this writing. So while one might like to store a Logger* with the blob, at least in some real applications it makes no sense.

Blob/Sharing_blob provides this support while ensuring Allocator_raw (no longer a template parameter in its case) is the vanilla std::allocator. The trade-off is as noted just above.
Before share() (or share_*()) is called: essentially, thread safety is the same as for vector<uint8_t>.
Without share*(), any two Basic_blob objects refer to separate areas in memory; hence it is safe to access Basic_blob A concurrently with accessing Basic_blob B in any fashion (read, write).
However: If 2 Basic_blobs A and B co-own a pool, via a share*() chain, then concurrent write and read/write to A and B respectively are thread-safe if and only if their [begin(), end()) ranges don't overlap. Otherwise, naturally, one would be writing to an area while it is being read simultaneously – not safe.
Tip: When working in share*() mode, exclusive use of share_after_split*() is a great way to guarantee no 2 Basic_blobs ever overlap. Meanwhile one must be careful when using share() directly and/or subsequently sliding the range around via resize() and start_past_prefix*(): A.share() and A not only (originally) overlap but simply represent the same area of memory; and resize() and co. can turn a non-overlapping range into an overlapping one (encroaching on someone else's "territory" within the pool).
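The safety rule above reduces to interval disjointness on the [start(), start() + size()) offsets within the shared pool. A minimal helper (hypothetical, not part of the Basic_blob API) makes it concrete:

```cpp
#include <cassert>
#include <cstddef>

// Concurrent write to one blob plus any access to another blob sharing the
// same pool is safe iff their [start, start + size) ranges are disjoint.
bool concurrent_access_safe(std::size_t start1, std::size_t size1,
                            std::size_t start2, std::size_t size2)
{
  // Disjoint iff one range ends at or before the other begins.
  return (start1 + size1 <= start2) || (start2 + size2 <= start1);
}
```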
To-do: Add Tight_blob<Allocator, bool>, which would be identical to Basic_blob but forego the framing features, namely size() and start(), thus storing only the RAII array pointer data() and capacity(); rewrite Basic_blob in terms of this Tight_blob. This simple container type has had some demand in practice, and Basic_blob can and should be cleanly built on top of it (perhaps even as an IS-A subclass).

Template parameters:

Allocator | An allocator, with value_type equal to our value_type, per the standard C++1x Allocator concept. In most uses this shall be left at the default std::allocator<value_type>, which allocates in standard heap (new[], delete[]). A custom allocator may be used instead. SHM-storing allocators, and generally allocators for which pointer is not simply value_type* but rather a fancy-pointer (see cppreference.com), are correctly supported. (Note this may not be the case for your compiler's std::vector.) |
S_SHARING_ALLOWED | If true, share() and all derived methods, plus blobs_sharing(), can be instantiated (invoked in compiled code). If false they cannot (a static_assert() will trip), but the resulting Basic_blob concrete class will be slightly more performant (internally, a shared_ptr becomes instead a unique_ptr, which means smaller allocations and no ref-count logic invoked). |
Basic_blob(const Allocator_raw& alloc_raw = Allocator_raw())
Constructs blob with zero() == true.
Note this means no buffer is allocated.
alloc_raw | Allocator to copy and store in *this for all buffer allocations/deallocations. If Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime, and by definition it is to equal Allocator_raw() . |
explicit Basic_blob(size_type size, log::Logger* logger_ptr = 0, const Allocator_raw& alloc_raw = Allocator_raw())

Constructs blob with size() and capacity() equal to the given size, and start() == 0.
Performance note: elements are not initialized to zero or any other value. A new over-arching buffer (pool) is therefore allocated.
Corner case note: a post-condition is zero() == (size() == 0). Note, also, that the latter is not a universal invariant (see zero() doc header).
Formally: If size >= 1, then a buffer is allocated; and the internal ownership ref-count is set to 1.
size | A non-negative desired size. |
logger_ptr | The Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed. |
alloc_raw | Allocator to copy and store in *this for all buffer allocations/deallocations. If Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime, and by definition it is to equal Allocator_raw() . |
Basic_blob(Basic_blob&& moved_src, log::Logger* logger_ptr = 0)
Move constructor, constructing a blob exactly internally equal to pre-call moved_src, while the latter is made to be exactly as if it were just constructed as Basic_blob(nullptr) (allocator subtleties aside).
Performance: constant-time, at most copying a few scalars.
moved_src | The object whose internals to move to *this and replace with a blank-constructed object's internals. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
explicit Basic_blob(const Basic_blob& src, log::Logger* logger_ptr = 0)
Copy constructor, constructing a blob logically equal to src.
More formally: guarantees post-condition wherein the [this->begin(), this->end()) range is equal by value (including length) to the equivalent range in src, but with no memory overlap. A post-condition is capacity() == size(), and start() == 0. Performance: see copying assignment operator.

Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we simply guarantee post-condition src.empty() == this->empty().

Corner case note 2: post-condition: this->zero() == this->empty() (note the src.zero() state is not necessarily preserved in *this).
Note: This is explicit, which is atypical for a copy constructor, to generate compile errors in hard-to-see (and often unintentional) instances of copying. Copies of Basic_blob should be quite intentional and explicit. (One example where one might forget about a copy would be when using a Basic_blob argument without cref or ref in a bind(); or when capturing by value, not by ref, in a lambda.)
Formally: If src.size() >= 1, then a buffer is allocated; and the internal ownership ref-count is set to 1.
src | Object whose range of bytes of length src.size() starting at src.begin() is copied into *this . |
logger_ptr | The Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed. |
~Basic_blob() = default
Destructor that drops *this ownership of the allocated internal buffer if any, as by make_zero(); if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also.
Recall that other Basic_blobs can only gain co-ownership via share*(); hence if one does not use that feature, the destructor will in fact deallocate the buffer (if any).
Formally: If !zero(), then the internal ownership ref-count is decremented by 1; and if it reaches 0, then the buffer is deallocated.
This will not log, as it is not possible to pass a Logger* to a dtor without storing it (which we avoid for reasons outlined in the class doc header). Use Blob/Sharing_blob if it is important to log in this situation (although there are some minor trade-offs).
Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::assign | ( | Basic_blob< Allocator, S_SHARING_ALLOWED > && | moved_src, |
log::Logger * | logger_ptr = 0 |
||
) |
Move assignment.
Allocator subtleties aside, and assuming this != &moved_src
, it is equivalent to: make_zero(); this->swap(moved_src, logger_ptr)
. (If this == &moved_src
, this is a no-op.)
moved_src | See swap(). |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
*this
. Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::assign | ( | const Basic_blob< Allocator, S_SHARING_ALLOWED > & | src, |
log::Logger * | logger_ptr = 0 |
||
) |
Copy assignment: assuming (this != &src) && (!blobs_sharing(*this, src))
, makes *this
logically equal to src
; but behavior undefined if a reallocation would be necessary to do this.
(If this == &src
, this is a no-op. If not but blobs_sharing(*this, src) == true
, see "Sharing blobs" below. This is assumed to not be the case in further discussion.)
More formally: no-op if this == &src
; "Sharing blobs" behavior if not so, but src
shares buffer with *this
; otherwise: Guarantees post-condition wherein [this->begin(), this->end())
range is equal by value (including length) to src
equivalent range but no memory overlap. Post-condition: start() == 0
; capacity() either does not change or equals size(). capacity() growth is not allowed: behavior is undefined if src.size()
exceeds pre-call this->capacity()
, unless this->zero() == true
pre-call. Performance: at most a memory area of size src.size()
is copied and some scalars updated; a memory area of that size is allocated only if required; no ownership drop or deallocation occurs.
Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we simply guarantee post-condition src.empty() == this->empty()
.
Corner case note 2: post-condition: if this->empty() == true
then this->zero()
has the same value as at entry to this call. In other words, no deallocation occurs, even if this->empty() == true
post-condition holds; at most internally a scalar storing size is assigned 0. (You may force deallocation in that case via make_zero() post-call, but this means you'll have to intentionally perform that relatively slow op.)
As with reserve(), IF pre-condition zero() == false
, THEN pre-condition src.size() <= this->capacity()
must hold, or behavior is undefined (i.e., as noted above, capacity() growth is not allowed except from 0). Therefore, NO REallocation occurs! However, also as with reserve(), if you want to intentionally allow such a REallocation, then simply first call make_zero(); then execute the assign()
copy as planned. This is an intentional restriction forcing caller to explicitly allow a relatively slow reallocation op.
Formally: If src.size() >= 1
, and zero() == true
, then a buffer is allocated; and the internal ownership ref-count is set to 1.
If blobs_sharing(*this, src) == true
, meaning the target and source are operating on the same buffer, then behavior is undefined (assertion may trip). Rationale for this design is as follows. The possibilities were:
1. Disallow it: behavior undefined (assertion may trip).
2. Adjust this->start() and this->size() to match src; continue co-owning the underlying buffer; copy no data.
3. this->make_zero() – losing *this ownership, while src keeps it – and then allocate a new buffer and copy src data into it.
Choosing between these is tough, as this is an odd corner case. 3 is not criminal, but generally no method ever forces make_zero() behavior, always leaving it to the user to consciously do, so it seems prudent to keep to that practice (even though this case is a bit different from, say, resize() – since make_zero() here has no chance to deallocate anything, only decrement ref-count). 2 is performant and slick but suggests a special behavior in a corner case; this feels slightly ill-advised in a standard copy assignment operator. Therefore it seems better to crash-and-burn (choice 1), in the same way an attempt to resize()-higher a non-zero() blob would crash and burn, forcing the user to explicitly execute what they want. After all, 3 is achieved by simply calling make_zero() first; and 2 is possible with a simple resize() call; and the blobs_sharing() check is both easy and performant.
start() == 0
; meaning start()
at entry is ignored and reset to 0; the entire (co-)owned buffer – if any – is potentially used to store the copied values. In particular, if one plans to work on a sub-blob of a shared pool (see class doc header), then using this assignment op is not advised. Use emplace_copy() instead; or perform your own copy onto mutable_buffer().src | Object whose range of bytes of length src.size() starting at src.begin() is copied into *this . Behavior is undefined if pre-condition is !zero() , and this memory area overlaps at any point with the memory area of same size in *this (unless that size is zero – a degenerate case). (This can occur only via the use of share*() – otherwise Basic_blob s always refer to separate areas.) Also behavior undefined if pre-condition is !zero() , and *this (co-)owned buffer is too short to accomodate all src.size() bytes (assertion may trip). |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
*this
. Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::assign_copy | ( | const boost::asio::const_buffer & | src, |
log::Logger * | logger_ptr = 0 |
||
) |
Replaces logical contents with a copy of the given non-overlapping area anywhere in memory.
More formally: This is exactly equivalent to copy-assignment (*this = b
), where const Basic_blob b
owns exactly the memory area given by src
. However, note the newly relevant restriction documented for src
parameter below (no overlap allowed).
All characteristics are as written for the copy assignment operator, including "Formally" and the warning.
src | Source memory area. Behavior is undefined if pre-condition is !zero() , and this memory area overlaps at any point with the memory area of same size at begin() . Otherwise it can be anywhere at all. Also behavior undefined if pre-condition is !zero() , and *this (co-)owned buffer is too short to accommodate all src.size() bytes (assertion may trip). |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
src.size()
, or simply size(). Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::back |
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::back |
Equivalent to const_back().
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::begin |
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::begin |
Equivalent to const_begin().
Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::capacity |
Returns the number of elements in the internally allocated buffer, which is 1 or more; or 0 if no buffer is internally allocated.
Some formal invariants: (capacity() == 0) == zero()
; start() + size() <= capacity()
.
See important notes on capacity() policy in the class doc header.
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::cbegin |
Synonym of const_begin().
Exists as standard container method (hence the odd formatting).
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::cend |
Synonym of const_end().
Exists as standard container method (hence the odd formatting).
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::clear |
Equivalent to resize(0, start())
.
Note that the value returned by start() will not change due to this call. Only size() (and the corresponding internally stored datum) may change. If one desires to reset start(), use resize() directly (but if one plans to work on a sub-Basic_blob of a shared pool – see class doc header – please think twice first).
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_back |
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_begin |
boost::asio::const_buffer flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_buffer |
Convenience accessor returning an immutable boost.asio buffer "view" into the entirety of the blob.
Equivalent to const_buffer(const_data(), size())
.
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const * flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_data |
Equivalent to const_begin().
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_end |
Returns pointer one past immutable last element; empty() is possible.
Null is a possible value in the latter case.
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_front |
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type * flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::data |
Equivalent to begin().
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::derefable_iterator | ( | Const_iterator | it | ) | const |
Returns true
if and only if the given iterator points to an element within this blob's size() elements.
In particular, this is always false
if empty(); and also when it == this->const_end()
.
it | Iterator/pointer to check. |
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::emplace_copy | ( | Const_iterator | dest, |
const boost::asio::const_buffer & | src, | ||
log::Logger * | logger_ptr = 0 |
||
) |
Copies src
buffer directly onto equally sized area within *this
at location dest
; *this
must have sufficient size() to accommodate all of the data copied.
Performance: The only operation performed is a copy from src
to dest
using the fastest reasonably available technique.
None of the following changes: zero(), empty(), size(), capacity(), begin(), end(); nor the location (or size) of internally stored buffer.
dest | Destination location within this blob. This must be in [begin(), end()] ; and, unless src.size() == 0 , must not equal end() either. |
src | Source memory area. Behavior is undefined if this memory area overlaps at any point with the memory area of same size at dest (unless that size is zero – a degenerate case). Otherwise it can be anywhere at all, even partially or fully within *this . Also behavior undefined if *this blob is too short to accommodate all src.size() bytes (assertion may trip). |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
dest
if none copied; in particular end() is a possible value. bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::empty |
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::end |
Returns pointer one past mutable last element; empty() is possible.
Null is a possible value in the latter case.
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::end |
Equivalent to const_end().
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::erase | ( | Const_iterator | first, |
Const_iterator | past_last | ||
) |
Performs the minimal number of operations to make range [begin(), end())
unchanged except for lacking sub-range [first, past_last)
.
Performance/behavior: At most, this copies the range [past_last, end())
to area starting at first
; and then adjusts internally stored size member.
first | Pointer to first element to erase. It must be dereferenceable, or behavior is undefined (assertion may trip). |
past_last | Pointer to one past the last element to erase. If past_last <= first , call is a no-op. |
first
. (This matches standard expectation for container erase()
return value: iterator to element past the last one erased. In this contiguous sequence that simply equals first
, since everything starting with past_last
slides left onto first
. In particular: If past_last
equaled end()
at entry, then the new end() is returned: everything starting with first
was erased and thus first == end()
now. If nothing is erased, first
is still returned.) Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::front |
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::front |
Equivalent to const_front().
Basic_blob< Allocator, S_SHARING_ALLOWED >::Allocator_raw flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::get_allocator |
Returns a copy of the internally cached Allocator_raw as set by a constructor or assign() or assignment-operator, whichever happened last.
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::make_zero | ( | log::Logger * | logger_ptr = 0 | ) |
Guarantees post-condition zero() == true
by dropping *this
ownership of the allocated internal buffer if any; if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also.
Recall that other Basic_blob
s can only gain co-ownership via share*()
; hence if one does not use that feature, make_zero() will in fact deallocate the buffer (if any).
That post-condition can also be thought of as *this
becoming indistinguishable from a default-constructed Basic_blob.
Performance/behavior: Assuming zero() is not already true
, this will deallocate the capacity()-sized buffer and store a null pointer.
The many operations that involve reserve() in their doc headers will explain importance of this method: As a rule, no method except make_zero() allows one to request an ownership-drop or deallocation of the existing buffer, even if this would be necessary for a larger buffer to be allocated. Therefore, if you intentionally want to allow such an operation, you CAN, but then you MUST explicitly call make_zero() first.
Formally: If !zero()
, then the internal ownership ref-count is decremented by 1, and if it reaches 0, then a buffer is deallocated.
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
boost::asio::mutable_buffer flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::mutable_buffer |
Same as const_buffer() but the returned view is mutable.
Equivalent to mutable_buffer(data(), size())
.
Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::operator= | ( | Basic_blob< Allocator, S_SHARING_ALLOWED > && | moved_src | ) |
Move assignment operator (no logging): equivalent to assign(std::move(moved_src), nullptr)
.
moved_src | See assign() (move overload). |
*this
. Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::operator= | ( | const Basic_blob< Allocator, S_SHARING_ALLOWED > & | src | ) |
Copy assignment operator (no logging): equivalent to assign(src, nullptr)
.
src | See assign() (copy overload). |
*this
. void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::reserve | ( | size_type | capacity, |
log::Logger * | logger_ptr = 0 |
||
) |
Ensures that an internal buffer of at least capacity
elements is allocated and owned; disallows growing an existing buffer; never shrinks an existing buffer; if a buffer is allocated, it is no larger than capacity
.
reserve() may be called directly but should be formally understood to be called by resize(), assign_copy(), copy assignment operator, copy constructor. In all cases, the value passed to reserve() is exactly the size needed to perform the particular task – no more (and no less). As such, reserve() policy is key to knowing how the class behaves elsewhere. See class doc header for discussion in larger context.
Performance/behavior: If zero() is true pre-call, capacity
sized buffer is allocated. Otherwise, no-op if capacity <= capacity()
pre-call. Behavior is undefined if capacity > capacity()
pre-call (again, unless zero(), meaning capacity() == 0
). In other words, no deallocation occurs, and an allocation occurs only if necessary. Growing an existing buffer is disallowed. However, if you want to intentionally REallocate, then simply first check for zero() == false
and call make_zero() if that holds; then execute the reserve()
as planned. This is an intentional restriction forcing caller to explicitly allow a relatively slow reallocation op. You'll note a similar suggestion for the other reserve()-using methods/operators.
Formally: If capacity >= 1
, and zero() == true
, then a buffer is allocated; and the internal ownership ref-count is set to 1.
capacity | Non-negative desired minimum capacity. |
logger_ptr | The Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed. |
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::resize | ( | size_type | size, |
size_type | start_or_unchanged = S_UNCHANGED , |
||
log::Logger * | logger_ptr = 0 |
||
) |
Guarantees post-condition size() == size
and start() == start
; no values in pre-call range [begin(), end())
are changed; any values added to that range by the call are not initialized to zero or otherwise.
From other invariants and behaviors described, you'll realize this essentially means reserve(size + start)
followed by saving size
and start
into internal size members. The various implications of this can be deduced by reading the related methods' doc headers. The key is to understand how reserve() works, including what it disallows (growth in size of an existing buffer).
Formally: If size >= 1
, and zero() == true
, then a buffer is allocated; and the internal ownership ref-count is set to 1.
start
is taken to be the value of arg start_or_unchanged
; unless the latter is set to special value S_UNCHANGED; in which case start
is taken to equal start(). Since the default is indeed S_UNCHANGED, the oft-encountered expression resize(N)
will adjust only size() and leave start() unmodified – often the desired behavior.
size | Non-negative desired value for size(). |
start_or_unchanged | Non-negative desired value for start(); or special value S_UNCHANGED. See above. |
logger_ptr | The Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed. |
Basic_blob< Allocator, S_SHARING_ALLOWED > flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share | ( | log::Logger * | logger_ptr = 0 | ) | const |
Applicable to !zero()
blobs, this returns an identical Basic_blob that shares (co-owns) *this
allocated buffer along with *this
and any other Basic_blob
s also sharing it.
Behavior is undefined (assertion may trip) if zero() == true
: it is nonsensical to co-own nothing; just use the default ctor then.
The returned Basic_blob is identical in that not only does it share the same memory area (hence same capacity()) but has identical start(), size() (and hence begin() and end()). If you'd like to work on a different part of the allocated buffer, please consider share_after_split*()
instead; the pool-of-sub-Basic_blob
s paradigm suggested in the class doc header is probably best accomplished using those methods and not share().
You can also adjust various sharing Basic_blob
s via resize(), start_past_prefix_inc(), etc., directly – after share() returns.
Formally: Before this returns, the internal ownership ref-count (shared among *this
and the returned Basic_blob) is incremented.
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
*this
that shares the underlying allocated buffer. See above. void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally | ( | size_type | size, |
bool | headless_pool, | ||
Emit_blob_func && | emit_blob_func, | ||
log::Logger * | logger_ptr = 0 |
||
) |
Identical to successively performing share_after_split_left(size)
until this->empty() == true
; the resulting Basic_blob
s are emitted via emit_blob_func()
callback in the order they're split off from the left.
In other words this partitions a non-zero() Basic_blob
– perhaps typically used as a pool – into equally-sized (except possibly the last one) adjacent sub-Basic_blob
s.
A post-condition is that empty() == true
(size() == 0
). In addition, if headless_pool == true
, then zero() == true
is also a post-condition; i.e., the pool is "headless": it disappears once all the resulting sub-Basic_blob
s drop their ownership (as well as any other co-owning Basic_blob
s). Otherwise, *this
will continue to share the pool despite size() becoming 0. (Of course, even then, one is free to make_zero() or destroy *this
– the former, before returning, is all that headless_pool == true
really adds.)
Behavior is undefined (assertion may trip) if empty() == true
(including if zero() == true
, but even if not) or if size == 0
.
To emit the resulting sub-blobs into a container directly, see the wrappers share_after_split_equally_emit_seq() (for containers of Basic_blob, e.g., vector<Basic_blob>
) and share_after_split_equally_emit_ptr_seq() (for containers of smart pointers, e.g., vector<unique_ptr<Basic_blob>>
).
.Emit_blob_func | A callback compatible with signature void F(Basic_blob&& blob_moved) . |
size | Desired size() of each successive out-Basic_blob, except the last one. Behavior undefined (assertion may trip) if not positive. |
headless_pool | Whether to perform this->make_zero() just before returning. See above. |
emit_blob_func | F such that F(std::move(blob)) shall be called with each successive sub-Basic_blob. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally_emit_ptr_seq | ( | size_type | size, |
bool | headless_pool, | ||
Blob_ptr_container * | out_blobs, | ||
log::Logger * | logger_ptr = 0 |
||
) |
share_after_split_equally() wrapper that places Ptr<Basic_blob>
s into the given container via push_back()
, where the type Ptr<>
is determined via Blob_ptr_container::value_type
.
Blob_ptr_container | Something with method compatible with push_back(Ptr&& blob_ptr_moved) , where Ptr is Blob_ptr_container::value_type , and Ptr(new Basic_blob) can be created. Ptr is to be a smart pointer type such as unique_ptr<Basic_blob> or shared_ptr<Basic_blob> . |
size | See share_after_split_equally(). |
headless_pool | See share_after_split_equally(). |
out_blobs | out_blobs->push_back() shall be executed 1+ times. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally_emit_seq | ( | size_type | size, |
bool | headless_pool, | ||
Blob_container * | out_blobs, | ||
log::Logger * | logger_ptr = 0 |
||
) |
share_after_split_equally() wrapper that places Basic_blob
s into the given container via push_back()
.
Blob_container | Something with method compatible with push_back(Basic_blob&& blob_moved) . |
size | See share_after_split_equally(). |
headless_pool | See share_after_split_equally(). |
out_blobs | out_blobs->push_back() shall be executed 1+ times. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
|
protected |
Impl of share_after_split_equally() but capable of emitting not just *this
type (Basic_blob<...>
) but any sub-class (such as Blob
/Sharing_blob
) provided a functor like share_after_split_left() but returning an object of that appropriate type.
Emit_blob_func | See share_after_split_equally(); however it is to take the type to emit which can be *this Basic_blob or a sub-class. |
Share_after_split_left_func | A callback with signature identical to share_after_split_left() but returning the same type emitted by Emit_blob_func . |
size | See share_after_split_equally(). |
headless_pool | See share_after_split_equally(). |
emit_blob_func | See Emit_blob_func . |
logger_ptr | See share_after_split_equally(). |
share_after_split_left_func | See Share_after_split_left_func . |
Basic_blob< Allocator, S_SHARING_ALLOWED > flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_left | ( | size_type | size, |
log::Logger * | logger_ptr = 0 |
||
) |
Applicable to !zero()
blobs, this shifts this->begin()
by size
to the right without changing end(); and returns a Basic_blob containing the shifted-past values that shares (co-owns) *this
allocated buffer along with *this
and any other Basic_blob
s also sharing it.
More formally, this is identical to simply auto b = share(); b.resize(size); start_past_prefix_inc(size);
.
This is useful when working in the pool-of-sub-Basic_blob
s paradigm. This and other share_after_split*()
methods are usually better to use rather than share() directly (for that paradigm).
Behavior is undefined (assertion may trip) if zero() == true
.
Corner case: If size > size()
, then it is taken to equal size().
Degenerate case: If size
(or size(), whichever is smaller) is 0, then this method is identical to share(). Probably you don't mean to call share_after_split_left() in that case, but it's your decision.
Degenerate case: If size == size()
(and not 0), then this->empty()
becomes true
– though *this
continues to share the underlying buffer despite [begin(), end()) becoming empty. Typically this would only be done as, perhaps, the last iteration of some progressively-splitting loop; but it's your decision.
Formally: Before this returns, the internal ownership ref-count (shared among *this
and the returned Basic_blob) is incremented.
size | Desired size() of the returned Basic_blob; and the number of elements by which this->begin() is shifted right (hence start() is incremented). Any value exceeding size() is taken to equal it. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
*this
. See above. Basic_blob< Allocator, S_SHARING_ALLOWED > flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_right | ( | size_type | size, |
log::Logger * | logger_ptr = 0 |
||
) |
Identical to share_after_split_left(), except this->end()
shifts by size
to the left (instead of this->begin() to the right), and the split-off Basic_blob contains the right-most size
elements (instead of the left-most).
More formally, this is identical to simply auto lt_size = size() - size; auto b = share(); resize(lt_size); b.start_past_prefix_inc(lt_size);
. Cf. share_after_split_left() formal definition and note the symmetry.
All other characteristics are as written for share_after_split_left().
size | Desired size() of the returned Basic_blob; and the number of elements by which this->end() is shifted left (hence size() is decremented). Any value exceeding size() is taken to equal it. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
*this
. See above. Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::size |
Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::start |
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::start_past_prefix | ( | size_type | prefix_size | ) |
Restructures blob to consist of an internally allocated buffer and a [begin(), end())
range starting at offset prefix_size
within that buffer.
More formally, it is a simple resize() wrapper that ensures the internally allocated buffer remains unchanged or, if none is currently large enough to store prefix_size
elements, is allocated to be of size prefix_size
; and that start() == prefix_size
.
All of resize()'s behavior, particularly any restrictions about capacity() growth, applies, so in particular remember you may need to first make_zero() if the internal buffer would need to be REallocated to satisfy the above requirements.
In practice, with current reserve() (and thus resize()) restrictions – which are intentional – this method is most useful if you already have a Basic_blob with internally allocated buffer of size at least n == size() + start()
(and start() == 0
for simplicity), and you'd like to treat this buffer as containing no-longer-relevant prefix of length S (which becomes new value for start()) and have size() be readjusted down accordingly, while start() + size() == n
remains unchanged. If the buffer also contains irrelevant data past a certain offset N, you can first make it irrelevant via resize(N)
(then call start_past_prefix(S)
as just described):
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::start_past_prefix_inc | ( | difference_type | prefix_size_inc | ) |
Like start_past_prefix() but shifts the current prefix position by the given incremental value (positive or negative).
Identical to start_past_prefix(start() + prefix_size_inc)
.
Behavior is undefined for negative prefix_size_inc
whose magnitude exceeds start() (assertion may trip).
Behavior is undefined in case of positive prefix_size_inc
that results in overflow.
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::sub_copy | ( | Const_iterator | src, |
const boost::asio::mutable_buffer & | dest, | ||
log::Logger * | logger_ptr = 0 |
||
) | const |
The opposite of emplace_copy() in every way, copying a sub-blob onto a target memory area.
Note that the size of that target buffer (dest.size()
) determines how much of *this
is copied.
src | Source location within this blob. This must be in [begin(), end()] ; and, unless dest.size() == 0 , must not equal end() either. |
dest | Destination memory area. Behavior is undefined if this memory area overlaps at any point with the memory area of same size at src (unless that size is zero – a degenerate case). Otherwise it can be anywhere at all, even partially or fully within *this . Also behavior undefined if src + dest.size() is past end of *this blob (assertion may trip). |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
src
if none copied; end() is a possible value. void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::swap | ( | Basic_blob< Allocator, S_SHARING_ALLOWED > & | other, |
log::Logger * | logger_ptr = 0 |
||
) |
Swaps the contents of this structure and other
, or no-op if this == &other
.
Performance: at most this involves swapping a few scalars which is constant-time.
other | The other structure. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::valid_iterator | ( | Const_iterator | it | ) | const |
Returns true
if and only if: this->derefable_iterator(it) || (it == this->const_end())
.
it | Iterator/pointer to check. |
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::zero |
Returns false
if a buffer is allocated and owned; true
otherwise.
See important notes on how this relates to empty() and capacity() in those methods' doc headers. See also other important notes in class doc header.
Note that zero() is true
for any default-constructed Basic_blob.
|
related |
Returns true
if and only if both given objects are not zero() == true
, and they either co-own a common underlying buffer, or are the same object.
Note: by the nature of Basic_blob::share(), a true
returned value is orthogonal to whether Basic_blob::start() and Basic_blob::size() values are respectively equal; true
may be returned even if their [begin()
, end()
) ranges don't overlap at all – as long as the allocated buffer is co-owned by the 2 Basic_blob
s.
If &blob1 != &blob2
, true
indicates blob1
was obtained from blob2
via a chain of Basic_blob::share() (or wrapper thereof) calls, or vice versa.
blob1 | Object. |
blob2 | Object. |
blob1
and blob2
both operate on the same underlying buffer.
|
related |
Equivalent to blob1.swap(blob2)
.
blob1 | Object. |
blob2 | Object. |
logger_ptr | The Logger implementation to use in this routine (synchronously) only. Null allowed. |
|
staticconstexpr |
true
if Allocator_raw underlying allocator template is simply std::allocator
; false
otherwise.
Note that if this is true
, it may be worth using Blob/Sharing_blob, instead of its Basic_blob<std::allocator>
super-class; at the cost of a marginally larger RAM footprint (an added Logger*
) you'll get a more convenient set of logging API knobs (namely Logger*
stored permanently from construction; and there will be no need to supply it as arg to subsequent APIs when logging is desired).
This is introduced in our class doc header. Briefly, however, when this is true
:
Allocation occurs via new[]
and/or new
.
Deallocation occurs via delete[]
and/or delete
.