Flow 1.0.2
Flow project: Public API.
flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED > Class Template Reference

A hand-optimized and API-tweaked replacement for vector<uint8_t>, i.e., buffer of bytes inside an allocated area of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and SHM-friendly custom-allocator support. More...

#include <basic_blob.hpp>

Public Types

using value_type = uint8_t
 Short-hand for values, which in this case are unsigned bytes.
 
using size_type = std::size_t
 Type for index into blob or length of blob or sub-blob.
 
using difference_type = std::ptrdiff_t
 Type for difference of size_types.
 
using Iterator = value_type *
 Type for iterator pointing into a mutable structure of this type.
 
using Const_iterator = value_type const *
 Type for iterator pointing into an immutable structure of this type.
 
using Allocator_raw = Allocator
 Short-hand for the allocator type specified at compile-time. Its element type is our value_type.
 
using pointer = Iterator
 For container compliance (hence the irregular capitalization): pointer to element.
 
using const_pointer = Const_iterator
 For container compliance (hence the irregular capitalization): pointer to const element.
 
using reference = value_type &
 For container compliance (hence the irregular capitalization): reference to element.
 
using const_reference = const value_type &
 For container compliance (hence the irregular capitalization): reference to const element.
 
using iterator = Iterator
 For container compliance (hence the irregular capitalization): Iterator type.
 
using const_iterator = Const_iterator
 For container compliance (hence the irregular capitalization): Const_iterator type.
 

Public Member Functions

 Basic_blob (const Allocator_raw &alloc_raw=Allocator_raw())
 Constructs blob with zero() == true. More...
 
 Basic_blob (size_type size, log::Logger *logger_ptr=0, const Allocator_raw &alloc_raw=Allocator_raw())
 Constructs blob with size() and capacity() equal to the given size, and start() == 0. More...
 
 Basic_blob (Basic_blob &&moved_src, log::Logger *logger_ptr=0)
 Move constructor, constructing a blob exactly internally equal to pre-call moved_src, while the latter is made to be exactly as if it were just constructed as Basic_blob(nullptr) (allocator subtleties aside). More...
 
 Basic_blob (const Basic_blob &src, log::Logger *logger_ptr=0)
 Copy constructor, constructing a blob logically equal to src. More...
 
 ~Basic_blob ()
 Destructor that drops *this ownership of the allocated internal buffer if any, as by make_zero(); if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also. More...
 
Basic_blob & assign (Basic_blob &&moved_src, log::Logger *logger_ptr=0)
 Move assignment. More...
 
Basic_blob & operator= (Basic_blob &&moved_src)
 Move assignment operator (no logging): equivalent to assign(std::move(moved_src), nullptr). More...
 
Basic_blob & assign (const Basic_blob &src, log::Logger *logger_ptr=0)
 Copy assignment: assuming (this != &src) && (!blobs_sharing(*this, src)), makes *this logically equal to src; but behavior undefined if a reallocation would be necessary to do this. More...
 
Basic_blob & operator= (const Basic_blob &src)
 Copy assignment operator (no logging): equivalent to assign(src, nullptr). More...
 
void swap (Basic_blob &other, log::Logger *logger_ptr=0)
 Swaps the contents of this structure and other, or no-op if this == &other. More...
 
Basic_blob share (log::Logger *logger_ptr=0) const
 Applicable to !zero() blobs, this returns an identical Basic_blob that shares (co-owns) *this allocated buffer along with *this and any other Basic_blobs also sharing it. More...
 
Basic_blob share_after_split_left (size_type size, log::Logger *logger_ptr=0)
 Applicable to !zero() blobs, this shifts this->begin() by size to the right without changing end(); and returns a Basic_blob containing the shifted-past values that shares (co-owns) *this allocated buffer along with *this and any other Basic_blobs also sharing it. More...
 
Basic_blob share_after_split_right (size_type size, log::Logger *logger_ptr=0)
 Identical to share_after_split_left(), except this->end() shifts by size to the left (instead of this->begin() to the right), and the split-off Basic_blob contains the right-most size elements (instead of the left-most). More...
 
template<typename Emit_blob_func >
void share_after_split_equally (size_type size, bool headless_pool, Emit_blob_func &&emit_blob_func, log::Logger *logger_ptr=0)
 Identical to successively performing share_after_split_left(size) until this->empty() == true; the resulting Basic_blobs are emitted via emit_blob_func() callback in the order they're split off from the left. More...
 
template<typename Blob_container >
void share_after_split_equally_emit_seq (size_type size, bool headless_pool, Blob_container *out_blobs, log::Logger *logger_ptr=0)
 share_after_split_equally() wrapper that places Basic_blobs into the given container via push_back(). More...
 
template<typename Blob_ptr_container >
void share_after_split_equally_emit_ptr_seq (size_type size, bool headless_pool, Blob_ptr_container *out_blobs, log::Logger *logger_ptr=0)
 share_after_split_equally() wrapper that places Ptr<Basic_blob>s into the given container via push_back(), where the type Ptr<> is determined via Blob_ptr_container::value_type. More...
 
size_type assign_copy (const boost::asio::const_buffer &src, log::Logger *logger_ptr=0)
 Replaces logical contents with a copy of the given non-overlapping area anywhere in memory. More...
 
Iterator emplace_copy (Const_iterator dest, const boost::asio::const_buffer &src, log::Logger *logger_ptr=0)
 Copies src buffer directly onto equally sized area within *this at location dest; *this must have sufficient size() to accommodate all of the data copied. More...
 
Const_iterator sub_copy (Const_iterator src, const boost::asio::mutable_buffer &dest, log::Logger *logger_ptr=0) const
 The opposite of emplace_copy() in every way, copying a sub-blob onto a target memory area. More...
 
size_type size () const
 Returns number of elements stored, namely end() - begin(). More...
 
size_type start () const
 Returns the offset between begin() and the start of the internally allocated buffer. More...
 
bool empty () const
 Returns size() == 0. More...
 
size_type capacity () const
 Returns the number of elements in the internally allocated buffer, which is 1 or more; or 0 if no buffer is internally allocated. More...
 
bool zero () const
 Returns false if a buffer is allocated and owned; true otherwise. More...
 
void reserve (size_type capacity, log::Logger *logger_ptr=0)
 Ensures that an internal buffer of at least capacity elements is allocated and owned; disallows growing an existing buffer; never shrinks an existing buffer; if a buffer is allocated, it is no larger than capacity. More...
 
void make_zero (log::Logger *logger_ptr=0)
 Guarantees post-condition zero() == true by dropping *this ownership of the allocated internal buffer if any; if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also. More...
 
void resize (size_type size, size_type start_or_unchanged=S_UNCHANGED, log::Logger *logger_ptr=0)
 Guarantees post-condition size() == size and start() == start; no values in pre-call range [begin(), end()) are changed; any values added to that range by the call are not initialized to zero or otherwise. More...
 
void start_past_prefix (size_type prefix_size)
 Restructures blob to consist of an internally allocated buffer and a [begin(), end()) range starting at offset prefix_size within that buffer. More...
 
void start_past_prefix_inc (difference_type prefix_size_inc)
 Like start_past_prefix() but shifts the current prefix position by the given incremental value (positive or negative). More...
 
void clear ()
 Equivalent to resize(0, start()). More...
 
Iterator erase (Const_iterator first, Const_iterator past_last)
 Performs the minimal number of operations to make range [begin(), end()) unchanged except for lacking sub-range [first, past_last). More...
 
Iterator begin ()
 Returns pointer to mutable first element; or end() if empty(). More...
 
Const_iterator const_begin () const
 Returns pointer to immutable first element; or end() if empty(). More...
 
Const_iterator begin () const
 Equivalent to const_begin(). More...
 
Iterator end ()
 Returns pointer one past mutable last element; empty() is possible. More...
 
Const_iterator const_end () const
 Returns pointer one past immutable last element; empty() is possible. More...
 
Const_iterator end () const
 Equivalent to const_end(). More...
 
const value_type & const_front () const
 Returns reference to immutable first element. More...
 
const value_type & const_back () const
 Returns reference to immutable last element. More...
 
const value_type & front () const
 Equivalent to const_front(). More...
 
const value_type & back () const
 Equivalent to const_back(). More...
 
value_type & front ()
 Returns reference to mutable first element. More...
 
value_type & back ()
 Returns reference to mutable last element. More...
 
value_type const * const_data () const
 Equivalent to const_begin(). More...
 
value_type * data ()
 Equivalent to begin(). More...
 
Const_iterator cbegin () const
 Synonym of const_begin(). More...
 
Const_iterator cend () const
 Synonym of const_end(). More...
 
bool valid_iterator (Const_iterator it) const
 Returns true if and only if: this->derefable_iterator(it) || (it == this->const_end()). More...
 
bool derefable_iterator (Const_iterator it) const
 Returns true if and only if the given iterator points to an element within this blob's size() elements. More...
 
boost::asio::const_buffer const_buffer () const
 Convenience accessor returning an immutable boost.asio buffer "view" into the entirety of the blob. More...
 
boost::asio::mutable_buffer mutable_buffer ()
 Same as const_buffer() but the returned view is mutable. More...
 
Allocator_raw get_allocator () const
 Returns a copy of the internally cached Allocator_raw as set by a constructor or assign() or assignment-operator, whichever happened last. More...
 

Static Public Attributes

static constexpr bool S_SHARING = S_SHARING_ALLOWED
 Value of template parameter S_SHARING_ALLOWED (for generic programming).
 
static constexpr size_type S_UNCHANGED = size_type(-1)
 Special value indicating an unchanged size_type value; such as in resize().
 
static constexpr bool S_IS_VANILLA_ALLOC = std::is_same_v<Allocator_raw, std::allocator<value_type>>
 true if Allocator_raw underlying allocator template is simply std::allocator; false otherwise. More...
 

Protected Member Functions

template<typename Emit_blob_func , typename Share_after_split_left_func >
void share_after_split_equally_impl (size_type size, bool headless_pool, Emit_blob_func &&emit_blob_func, log::Logger *logger_ptr, Share_after_split_left_func &&share_after_split_left_func)
 Impl of share_after_split_equally() but capable of emitting not just *this type (Basic_blob<...>) but any sub-class (such as Blob/Sharing_blob) provided a functor like share_after_split_left() but returning an object of that appropriate type. More...
 

Static Protected Attributes

static constexpr Flow_log_component S_LOG_COMPONENT = Flow_log_component::S_UTIL
 Our flow::log::Component.
 

Related Functions

(Note that these are not member functions.)

template<typename Allocator , bool S_SHARING_ALLOWED>
bool blobs_sharing (const Basic_blob< Allocator, S_SHARING_ALLOWED > &blob1, const Basic_blob< Allocator, S_SHARING_ALLOWED > &blob2)
 Returns true if and only if both given objects have zero() == false, and they either co-own a common underlying buffer or are the same object. More...
 
template<typename Allocator , bool S_SHARING_ALLOWED>
void swap (Basic_blob< Allocator, S_SHARING_ALLOWED > &blob1, Basic_blob< Allocator, S_SHARING_ALLOWED > &blob2, log::Logger *logger_ptr=0)
 Equivalent to blob1.swap(blob2). More...
 

Detailed Description

template<typename Allocator, bool S_SHARING_ALLOWED>
class flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >

A hand-optimized and API-tweaked replacement for vector<uint8_t>, i.e., buffer of bytes inside an allocated area of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and SHM-friendly custom-allocator support.

See also
Blob_with_log_context (and especially aliases Blob and Sharing_blob), our non-polymorphic sub-class which adds some ease of use in exchange for a small perf trade-off. (More info below under "Logging.")
Blob_sans_log_context + Sharing_blob_sans_log_context, each simply an alias to Basic_blob<std::allocator, B> (with B = false or true respectively), in a fashion vaguely similar to what string is to basic_string (a little). This is much like Blob/Sharing_blob, in that it is a non-template concrete type; but does not take or store a Logger*.

The rationale for its existence mirrors its essential differences from vector<uint8_t> which are as follows. To summarize, though, it exists to guarantee specific performance by reducing implementation uncertainty via lower-level operations; and forces the user to explicitly authorize any allocation to ensure thoughtfully performant use. Update: Plus, it adds non-prefix-sub-buffer feature, which can be useful for zero-copy deserialization. Update: Plus, it adds a simple form of garbage-collected memory pools, useful for operating multiple Basic_blobs that share a common over-arching memory area (buffer). Update: Plus, it adds SHM-friendly custom allocator support. (While all vector impls support custom allocators, only some later versions of gcc std::vector work with shared-memory (SHM) allocators and imperfectly at that. boost::container::vector a/k/a boost::interprocess::vector is fully SHM-friendly.)

Optional, simple garbage-collected shared ownership functionality

The following feature was added quite some time after Blob was first introduced and matured. However, it seamlessly subsumes all of the above basic functionality with full backwards compatibility. It can also be disabled (and is by default) by setting S_SHARING to false at compile-time. (This gains back a little bit of perf, namely by turning an internal shared_ptr into a unique_ptr.)

The feature itself is simple: Suppose one has a blob A, constructed or otherwise resize()d or reserve()d so as to have zero() == false; meaning capacity() >= 1. Now suppose one calls the core method of this pool feature: share() which returns a new blob B. B will have the same exact start(), size(), capacity() – and, in fact, the pointer data() - start() (i.e., the underlying buffer start pointer, buffer being capacity() long). That is, B now shares the underlying memory buffer with A. Normally, that underlying buffer would be deallocated when either A.make_zero() is called, or A is destructed. Now that it's shared by A and B, however, the buffer is deallocated only once make_zero() or destruction occurs for both A and B. That is, there is an internal (thread-safe) ref-count that must reach 0.

Both A and B may now again be share()d into further sharing Basic_blobs. This further increments the ref-count of original buffer; all such Basic_blobs C, D, ... must now either make_zero() or destruct, at which point the dealloc occurs.

In that way the buffer – or pool – is garbage-collected as a whole, with reserve() (and APIs like resize() and ctors that call it) initially allocating and setting internal ref-count to 1, share() incrementing it, and make_zero() and ~Basic_blob() decrementing it (and deallocating when ref-count=0).

Application of shared ownership: Simple pool-of-Basic_blobs functionality

The other aspect of this feature is its pool-of-Basic_blobs application. All of the sharing Basic_blobs A, B, ... retain all the aforementioned features including the ability to use resize(), start_past_prefix_inc(), etc., to control the location of the logical sub-range [begin(), end()) within the underlying buffer (pool). E.g., suppose A was 10 bytes, with start() = 0 and size() = capacity() = 10; then share() B is also that way. Now B.start_past_prefix_inc(5); A.resize(5); makes it so that A = the 1st 5 bytes of the pool, B the last 5 bytes (and they don't overlap – can even be concurrently modified safely). In that way A and B are now independent Basic_blobs – potentially passed, say, to independent TCP-receive calls, each of which reads up to 5 bytes – that share an over-arching pool.

The API share_after_split_left() is a convenience operation that splits a Basic_blob's [begin(), end()) area into 2 areas of specified length, then returns a new Basic_blob representing the first area in the split and modifies *this to represent the remainder (the 2nd area). This simply performs the op described in the preceding paragraph. share_after_split_right() is similar but acts symmetrically from the right. Lastly share_after_split_equally*() splits a Basic_blob into several equally-sized (except the last one potentially) sub-Basic_blobs of size N, where N is an arg. (It can be thought of as just calling share_after_split_left(N) repeatedly, then returning a sequence of the resulting post-split Basic_blobs.)

To summarize: The share_after_split*() APIs are useful to divide (potentially progressively) a pool into non-overlapping Basic_blobs within a pool while ensuring the pool continues to exist while Basic_blobs refer to any part of it (but no later). Meanwhile direct use of share() with resize() and start_past_prefix*() allows for overlapping such sharing Basic_blobs.

Note that deallocation occurs regardless of which areas of that pool the relevant Basic_blobs represent, and whether they overlap or not (and, for that matter, whether they even together comprise the entire pool or leave "gaps" in-between). The whole pool is deallocated the moment the last of the co-owning Basic_blobs performs either make_zero() or ~Basic_blob() – the values of start() and size() at the time are not relevant.

Custom allocator (and SHared Memory) support

Like STL containers this one optionally takes a custom allocator type (Allocator_raw) as a compile-time parameter instead of using the regular heap (std::allocator). Unlike many STL container implementations, including at least older std::vector, it supports SHM-storing allocators without a constant cross-process vaddr scheme. (Some do support this but with surprising perf flaws when storing raw integers/bytes. boost.container vector has solid support but lacks various other properties of Basic_blob.) While a detailed discussion is outside our scope here, the main point is internally *this stores no raw value_type* but rather Allocator_raw::pointer – which in many cases is value_type*; but for advanced applications like SHM it might be a fancy-pointer like boost::interprocess::offset_ptr<value_type>. For general education check out boost.interprocess docs covering storage of STL containers in SHM. (However note that the allocators provided by that library are only one option even for SHM storage alone; e.g., they are stateful, and often one would like a stateless – zero-size – allocator. Plus there are other limitations to boost.interprocess SHM support, robust though it is.)

Logging

When and if *this logs, it is with log::Sev::S_TRACE severity or more verbose.

Unlike many other Flow API classes this one does not derive from log::Log_context nor take a Logger* in ctor (and store it). Instead each API method/ctor/function capable of logging takes an optional (possibly null) log::Logger pointer. If supplied it's used by that API alone (with some minor async exceptions). If you would like more typical Flow-style logging API then use our non-polymorphic sub-class Blob_with_log_context (more likely aliases Blob, Sharing_blob). However consider the following first.

Why this design? Answer:

Blob/Sharing_blob provides this support while ensuring Allocator_raw (no longer a template parameter in its case) is the vanilla std::allocator. The trade-off is as noted just above.

Thread safety

Before share() (or any share_*()) is called, thread safety is essentially the same as for vector<uint8_t>.

Without share*() any two Basic_blob objects refer to separate areas in memory; hence it is safe to access Basic_blob A concurrently with accessing Basic_blob B in any fashion (read, write).

However: If 2 Basic_blobs A and B co-own a pool, via a share*() chain, then concurrent write and read/write to A and B respectively are thread-safe if and only if their [begin(), end()) ranges don't overlap. Otherwise, naturally, one would be writing to an area while it is being read simultaneously – not safe.

Tip: When working in share*() mode, exclusive use of share_after_split*() is a great way to guarantee no 2 Basic_blobs ever overlap. Meanwhile one must be careful when using share() directly and/or subsequently sliding the range around via resize(), start_past_prefix*(): A.share() and A not only (originally) overlap but simply represent the same area of memory; and resize() and co. can turn a non-overlapping range into an overlapping one (encroaching on someone else's "territory" within the pool).

Todo:
Write a class template, perhaps Tight_blob<Allocator, bool>, which would be identical to Basic_blob but forego the framing features, namely size() and start(), thus storing only the RAII array pointer data() and capacity(); rewrite Basic_blob in terms of this Tight_blob. This simple container type has had some demand in practice, and Basic_blob can and should be cleanly built on top of it (perhaps even as an IS-A subclass).
Template Parameters
Allocator: An allocator, with value_type equal to our value_type, per the standard C++1x Allocator concept. In most uses this shall be left at the default std::allocator<value_type> which allocates in standard heap (new[], delete[]). A custom allocator may be used instead. SHM-storing allocators, and generally allocators for which pointer is not simply value_type* but rather a fancy-pointer (see cppreference.com) are correctly supported. (Note this may not be the case for your compiler's std::vector.)
S_SHARING_ALLOWED: If true, share() and all derived methods, plus blobs_sharing(), can be instantiated (invoked in compiled code). If false they cannot (static_assert() will trip), but the resulting Basic_blob concrete class will be slightly more performant (internally, a shared_ptr becomes instead a unique_ptr which means smaller allocations and no ref-count logic invoked).

Constructor & Destructor Documentation

◆ Basic_blob() [1/4]

template<typename Allocator , bool S_SHARING_ALLOWED>
flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::Basic_blob ( const Allocator_raw &alloc_raw = Allocator_raw())

Constructs blob with zero() == true.

Note this means no buffer is allocated.

Parameters
alloc_raw: Allocator to copy and store in *this for all buffer allocations/deallocations. If Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime, and by definition it is to equal Allocator_raw().

◆ Basic_blob() [2/4]

template<typename Allocator , bool S_SHARING_ALLOWED>
flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::Basic_blob ( size_type size,
log::Logger *logger_ptr = 0,
const Allocator_raw &alloc_raw = Allocator_raw() 
)
explicit

Constructs blob with size() and capacity() equal to the given size, and start() == 0.

Performance note: elements are not initialized to zero or any other value. A new over-arching buffer (pool) is therefore allocated.

Corner case note: a post-condition is zero() == (size() == 0). Note, also, that the latter is not a universal invariant (see zero() doc header).

Formally: If size >= 1, then a buffer is allocated; and the internal ownership ref-count is set to 1.

Parameters
size: A non-negative desired size.
logger_ptr: The Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed.
alloc_raw: Allocator to copy and store in *this for all buffer allocations/deallocations. If Allocator_raw is stateless, then this has size zero, so nothing is copied at runtime, and by definition it is to equal Allocator_raw().

◆ Basic_blob() [3/4]

template<typename Allocator , bool S_SHARING_ALLOWED>
flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::Basic_blob ( Basic_blob< Allocator, S_SHARING_ALLOWED > &&  moved_src,
log::Logger *logger_ptr = 0 
)

Move constructor, constructing a blob exactly internally equal to pre-call moved_src, while the latter is made to be exactly as if it were just constructed as Basic_blob(nullptr) (allocator subtleties aside).

Performance: constant-time, at most copying a few scalars.

Parameters
moved_src: The object whose internals to move to *this and replace with a blank-constructed object's internals.
logger_ptr: The Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ Basic_blob() [4/4]

template<typename Allocator , bool S_SHARING_ALLOWED>
flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::Basic_blob ( const Basic_blob< Allocator, S_SHARING_ALLOWED > &  src,
log::Logger *logger_ptr = 0 
)
explicit

Copy constructor, constructing a blob logically equal to src.

More formally, guarantees post-condition wherein [this->begin(), this->end()) range is equal by value (including length) to src equivalent range but no memory overlap. A post-condition is capacity() == size(), and start() == 0. Performance: see copying assignment operator.

Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we simply guarantee post-condition src.empty() == this->empty().

Corner case note 2: post-condition: this->zero() == this->empty() (note src.zero() state is not necessarily preserved in *this).

Note: This is explicit, which is atypical for a copy constructor, to generate compile errors in hard-to-see (and often unintentional) instances of copying. Copies of Basic_blob should be quite intentional and explicit. (One example where one might forget about a copy would be when using a Basic_blob argument without cref or ref in a bind(); or when capturing by value, not by ref, in a lambda.)

Formally: If src.size() >= 1, then a buffer is allocated; and the internal ownership ref-count is set to 1.

Parameters
src: Object whose range of bytes of length src.size() starting at src.begin() is copied into *this.
logger_ptr: The Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed.

◆ ~Basic_blob()

template<typename Allocator , bool S_SHARING_ALLOWED>
flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::~Basic_blob ( )
default

Destructor that drops *this ownership of the allocated internal buffer if any, as by make_zero(); if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also.

Recall that other Basic_blobs can only gain co-ownership via share*(); hence if one does not use that feature, the destructor will in fact deallocate the buffer (if any).

Formally: If !zero(), then the internal ownership ref-count is decremented by 1, and if it reaches 0, then a buffer is deallocated.

Logging

This will not log, as it is not possible to pass a Logger* to a dtor without storing it (which we avoid for reasons outlined in class doc header). Use Blob/Sharing_blob if it is important to log in this situation (although there are some minor trade-offs).

Member Function Documentation

◆ assign() [1/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::assign ( Basic_blob< Allocator, S_SHARING_ALLOWED > &&  moved_src,
log::Logger *logger_ptr = 0 
)

Move assignment.

Allocator subtleties aside and assuming this != &moved_src it is equivalent to: make_zero(); this->swap(moved_src, logger_ptr). (If this == &moved_src, this is a no-op.)

Parameters
moved_src: See swap().
logger_ptr: The Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
*this.

◆ assign() [2/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::assign ( const Basic_blob< Allocator, S_SHARING_ALLOWED > &  src,
log::Logger *logger_ptr = 0 
)

Copy assignment: assuming (this != &src) && (!blobs_sharing(*this, src)), makes *this logically equal to src; but behavior undefined if a reallocation would be necessary to do this.

(If this == &src, this is a no-op. If not but blobs_sharing(*this, src) == true, see "Sharing blobs" below. This is assumed to not be the case in further discussion.)

More formally: no-op if this == &src; "Sharing blobs" behavior if not so, but src shares buffer with *this; otherwise: Guarantees post-condition wherein [this->begin(), this->end()) range is equal by value (including length) to src equivalent range but no memory overlap. Post-condition: start() == 0; capacity() either does not change or equals size(). capacity() growth is not allowed: behavior is undefined if src.size() exceeds pre-call this->capacity(), unless this->zero() == true pre-call. Performance: at most a memory area of size src.size() is copied and some scalars updated; a memory area of that size is allocated only if required; no ownership drop or deallocation occurs.

Corner case note: the range equality guarantee includes the degenerate case where that range is empty, meaning we simply guarantee post-condition src.empty() == this->empty().

Corner case note 2: post-condition: if this->empty() == true then this->zero() has the same value as at entry to this call. In other words, no deallocation occurs, even if this->empty() == true post-condition holds; at most internally a scalar storing size is assigned 0. (You may force deallocation in that case via make_zero() post-call, but this means you'll have to intentionally perform that relatively slow op.)

As with reserve(), IF pre-condition zero() == false, THEN pre-condition src.size() <= this->capacity() must hold, or behavior is undefined (i.e., as noted above, capacity() growth is not allowed except from 0). Therefore, NO REallocation occurs! However, also as with reserve(), if you want to intentionally allow such a REallocation, then simply first call make_zero(); then execute the assign() copy as planned. This is an intentional restriction forcing caller to explicitly allow a relatively slow reallocation op.

Formally: If src.size() >= 1, and zero() == true, then a buffer is allocated; and the internal ownership ref-count is set to 1.

Sharing blobs

If blobs_sharing(*this, src) == true, meaning the target and source are operating on the same buffer, then behavior is undefined (assertion may trip). Rationale for this design is as follows. The possibilities were:

  1. Undefined behavior/assertion.
  2. Just adjust this->start() and this->size() to match src; continue co-owning the underlying buffer; copy no data.
  3. this->make_zero() – losing *this ownership, while src keeps it – and then allocate a new buffer and copy src data into it.

Choosing between these is tough, as this is an odd corner case. 3 is not criminal, but generally no method ever forces make_zero() behavior, always leaving it to the user to consciously do, so it seems prudent to keep to that practice (even though this case is a bit different from, say, resize() – since make_zero() here has no chance to deallocate anything, only decrement ref-count). 2 is performant and slick but suggests a special behavior in a corner case; this feels slightly ill-advised in a standard copy assignment operator. Therefore it seems better to crash-and-burn (choice 1), in the same way an attempt to resize()-higher a non-zero() blob would crash and burn, forcing the user to explicitly execute what they want. After all, 3 is done by simply calling make_zero() first; and 2 is possible with a simple resize() call; and the blobs_sharing() check is both easy and performant.

Warning
A post-condition is start() == 0; meaning start() at entry is ignored and reset to 0; the entire (co-)owned buffer – if any – is potentially used to store the copied values. In particular, if one plans to work on a sub-blob of a shared pool (see class doc header), then using this assignment op is not advised. Use emplace_copy() instead; or perform your own copy onto mutable_buffer().
Parameters
srcObject whose range of bytes of length src.size() starting at src.begin() is copied into *this. Behavior is undefined if pre-condition is !zero(), and this memory area overlaps at any point with the memory area of same size in *this (unless that size is zero – a degenerate case). (This can occur only via the use of share*() – otherwise Basic_blobs always refer to separate areas.) Also behavior undefined if pre-condition is !zero(), and *this (co-)owned buffer is too short to accommodate all src.size() bytes (assertion may trip).
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
*this.

◆ assign_copy()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::assign_copy ( const boost::asio::const_buffer &  src,
log::Logger *  logger_ptr = 0 
)

Replaces logical contents with a copy of the given non-overlapping area anywhere in memory.

More formally: This is exactly equivalent to copy-assignment (*this = b), where const Basic_blob b owns exactly the memory area given by src. However, note the newly relevant restriction documented for src parameter below (no overlap allowed).

All characteristics are as written for the copy assignment operator, including "Formally" and the warning.

Parameters
srcSource memory area. Behavior is undefined if pre-condition is !zero(), and this memory area overlaps at any point with the memory area of same size at begin(). Otherwise it can be anywhere at all. Also behavior undefined if pre-condition is !zero(), and *this (co-)owned buffer is too short to accommodate all src.size() bytes (assertion may trip).
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
Number of elements copied, namely src.size(), or simply size().

◆ back() [1/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::back

Returns reference to mutable last element.

Behavior is undefined if empty().

Returns
See above.

◆ back() [2/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::back

Equivalent to const_back().

Returns
See above.

◆ begin() [1/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::begin

Returns pointer to mutable first element; or end() if empty().

Null is a possible value in the latter case.

Returns
Pointer, possibly null.

◆ begin() [2/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::begin

Equivalent to const_begin().

Returns
Pointer, possibly null.

◆ capacity()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::capacity

Returns the number of elements in the internally allocated buffer, which is 1 or more; or 0 if no buffer is internally allocated.

Some formal invariants: (capacity() == 0) == zero(); start() + size() <= capacity().

See important notes on capacity() policy in the class doc header.

Returns
See above.

◆ cbegin()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::cbegin

Synonym of const_begin().

Exists as standard container method (hence the odd formatting).

Returns
See const_begin().

◆ cend()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::cend

Synonym of const_end().

Exists as standard container method (hence the odd formatting).

Returns
See const_end().

◆ clear()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::clear

Equivalent to resize(0, start()).

Note that the value returned by start() will not change due to this call. Only size() (and the corresponding internally stored datum) may change. If one desires to reset start(), use resize() directly (but if one plans to work on a sub-Basic_blob of a shared pool – see class doc header – please think twice first).

◆ const_back()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_back

Returns reference to immutable last element.

Behavior is undefined if empty().

Returns
See above.

◆ const_begin()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_begin

Returns pointer to immutable first element; or end() if empty().

Null is a possible value in the latter case.

Returns
Pointer, possibly null.

◆ const_buffer()

template<typename Allocator , bool S_SHARING_ALLOWED>
boost::asio::const_buffer flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_buffer

Convenience accessor returning an immutable boost.asio buffer "view" into the entirety of the blob.

Equivalent to const_buffer(const_data(), size()).

Returns
See above.

◆ const_data()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const * flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_data

Equivalent to const_begin().

Returns
Pointer, possibly null.

◆ const_end()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_end

Returns pointer one past immutable last element; empty() is possible.

Null is a possible value in the latter case.

Returns
Pointer, possibly null.

◆ const_front()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::const_front

Returns reference to immutable first element.

Behavior is undefined if empty().

Returns
See above.

◆ data()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type * flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::data

Equivalent to begin().

Returns
Pointer, possibly null.

◆ derefable_iterator()

template<typename Allocator , bool S_SHARING_ALLOWED>
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::derefable_iterator ( Const_iterator  it) const

Returns true if and only if the given iterator points to an element within this blob's size() elements.

In particular, this is always false if empty(); and also when it == this->const_end().

Parameters
itIterator/pointer to check.
Returns
See above.

◆ emplace_copy()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::emplace_copy ( Const_iterator  dest,
const boost::asio::const_buffer &  src,
log::Logger *  logger_ptr = 0 
)

Copies src buffer directly onto equally sized area within *this at location dest; *this must have sufficient size() to accommodate all of the data copied.

Performance: The only operation performed is a copy from src to dest using the fastest reasonably available technique.

None of the following changes: zero(), empty(), size(), capacity(), begin(), end(); nor the location (or size) of internally stored buffer.

Parameters
destDestination location within this blob. This must be in [begin(), end()]; and, unless src.size() == 0, must not equal end() either.
srcSource memory area. Behavior is undefined if this memory area overlaps at any point with the memory area of same size at dest (unless that size is zero – a degenerate case). Otherwise it can be anywhere at all, even partially or fully within *this. Also behavior undefined if *this blob is too short to accommodate all src.size() bytes (assertion may trip).
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
Location in this blob just past the last element copied; dest if none copied; in particular end() is a possible value.

◆ empty()

template<typename Allocator , bool S_SHARING_ALLOWED>
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::empty

Returns size() == 0.

If zero(), this is true; but if this is true, then zero() may or may not be true.

Returns
See above.

◆ end() [1/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::end

Returns pointer one past mutable last element; empty() is possible.

Null is a possible value in the latter case.

Returns
Pointer, possibly null.

◆ end() [2/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::end

Equivalent to const_end().

Returns
Pointer, possibly null.

◆ erase()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::erase ( Const_iterator  first,
Const_iterator  past_last 
)

Performs the minimal number of operations to make range [begin(), end()) unchanged except for lacking sub-range [first, past_last).

Performance/behavior: At most, this copies the range [past_last, end()) to area starting at first; and then adjusts internally stored size member.

Parameters
firstPointer to first element to erase. It must be dereferenceable, or behavior is undefined (assertion may trip).
past_lastPointer to one past the last element to erase. If past_last <= first, call is a no-op.
Returns
Iterator equal to first. (This matches the standard expectation for a container erase() return value: iterator to the element past the last one erased. In this contiguous sequence that simply equals first, since everything starting with past_last slides left onto first. In particular: if past_last equaled end() at entry, then the new end() is returned: everything starting with first was erased, and thus first == end() now. If nothing is erased, first is still returned.)

◆ front() [1/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::front

Returns reference to mutable first element.

Behavior is undefined if empty().

Returns
See above.

◆ front() [2/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::value_type const & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::front

Equivalent to const_front().

Returns
See above.

◆ get_allocator()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Allocator_raw flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::get_allocator

Returns a copy of the internally cached Allocator_raw as set by a constructor or assign() or assignment-operator, whichever happened last.

Returns
See above.

◆ make_zero()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::make_zero ( log::Logger *  logger_ptr = 0)

Guarantees post-condition zero() == true by dropping *this ownership of the allocated internal buffer if any; if no other Basic_blob holds ownership of that buffer, then that buffer is deallocated also.

Recall that other Basic_blobs can only gain co-ownership via share*(); hence if one does not use that feature, make_zero() will in fact deallocate the buffer (if any).

That post-condition can also be thought of as *this becoming indistinguishable from a default-constructed Basic_blob.

Performance/behavior: Assuming zero() is not already true, this will deallocate capacity() sized buffer and save a null pointer.

The many operations that involve reserve() in their doc headers will explain importance of this method: As a rule, no method except make_zero() allows one to request an ownership-drop or deallocation of the existing buffer, even if this would be necessary for a larger buffer to be allocated. Therefore, if you intentionally want to allow such an operation, you CAN, but then you MUST explicitly call make_zero() first.

Formally: If !zero(), then the internal ownership ref-count is decremented by 1, and if it reaches 0, then a buffer is deallocated.

Parameters
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ mutable_buffer()

template<typename Allocator , bool S_SHARING_ALLOWED>
boost::asio::mutable_buffer flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::mutable_buffer

Same as const_buffer() but the returned view is mutable.

Equivalent to mutable_buffer(data(), size()).

Returns
See above.

◆ operator=() [1/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::operator= ( Basic_blob< Allocator, S_SHARING_ALLOWED > &&  moved_src)

Move assignment operator (no logging): equivalent to assign(std::move(moved_src), nullptr).

Parameters
moved_srcSee assign() (move overload).
Returns
*this.

◆ operator=() [2/2]

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > & flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::operator= ( const Basic_blob< Allocator, S_SHARING_ALLOWED > &  src)

Copy assignment operator (no logging): equivalent to assign(src, nullptr).

Parameters
srcSee assign() (copy overload).
Returns
*this.

◆ reserve()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::reserve ( size_type  capacity,
log::Logger *  logger_ptr = 0 
)

Ensures that an internal buffer of at least capacity elements is allocated and owned; disallows growing an existing buffer; never shrinks an existing buffer; if a buffer is allocated, it is no larger than capacity.

reserve() may be called directly but should be formally understood to be called by resize(), assign_copy(), copy assignment operator, copy constructor. In all cases, the value passed to reserve() is exactly the size needed to perform the particular task – no more (and no less). As such, reserve() policy is key to knowing how the class behaves elsewhere. See class doc header for discussion in larger context.

Performance/behavior: If zero() is true pre-call, capacity sized buffer is allocated. Otherwise, no-op if capacity <= capacity() pre-call. Behavior is undefined if capacity > capacity() pre-call (again, unless zero(), meaning capacity() == 0). In other words, no deallocation occurs, and an allocation occurs only if necessary. Growing an existing buffer is disallowed. However, if you want to intentionally REallocate, then simply first check for zero() == false and call make_zero() if that holds; then execute the reserve() as planned. This is an intentional restriction forcing caller to explicitly allow a relatively slow reallocation op. You'll note a similar suggestion for the other reserve()-using methods/operators.

Formally: If capacity >= 1, and zero() == true, then a buffer is allocated; and the internal ownership ref-count is set to 1.

Parameters
capacityNon-negative desired minimum capacity.
logger_ptrThe Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed.

◆ resize()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::resize ( size_type  size,
size_type  start_or_unchanged = S_UNCHANGED,
log::Logger *  logger_ptr = 0 
)

Guarantees post-condition size() == size and start() == start; no values in pre-call range [begin(), end()) are changed; any values added to that range by the call are not initialized to zero or otherwise.

From other invariants and behaviors described, you'll realize this essentially means reserve(size + start) followed by saving size and start into internal size members. The various implications of this can be deduced by reading the related methods' doc headers. The key is to understand how reserve() works, including what it disallows (growth in size of an existing buffer).

Formally: If size >= 1, and zero() == true, then a buffer is allocated; and the internal ownership ref-count is set to 1.

Leaving start() unmodified

start is taken to be the value of arg start_or_unchanged, unless the latter is set to the special value S_UNCHANGED, in which case start is taken to equal start(). Since the default is indeed S_UNCHANGED, the oft-encountered expression resize(N) will adjust only size() and leave start() unmodified – often the desired behavior.

Parameters
sizeNon-negative desired value for size().
start_or_unchangedNon-negative desired value for start(); or special value S_UNCHANGED. See above.
logger_ptrThe Logger implementation to use in this routine (synchronously) or asynchronously when TRACE-logging in the event of buffer dealloc. Null allowed.

◆ share()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share ( log::Logger *  logger_ptr = 0) const

Applicable to !zero() blobs, this returns an identical Basic_blob that shares (co-owns) *this allocated buffer along with *this and any other Basic_blobs also sharing it.

Behavior is undefined (assertion may trip) if zero() == true: it is nonsensical to co-own nothing; just use the default ctor then.

The returned Basic_blob is identical in that not only does it share the same memory area (hence same capacity()) but has identical start(), size() (and hence begin() and end()). If you'd like to work on a different part of the allocated buffer, please consider share_after_split*() instead; the pool-of-sub-Basic_blobs paradigm suggested in the class doc header is probably best accomplished using those methods and not share().

You can also adjust various sharing Basic_blobs via resize(), start_past_prefix_inc(), etc., directly – after share() returns.

Formally: Before this returns, the internal ownership ref-count (shared among *this and the returned Basic_blob) is incremented.

Parameters
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
An identical Basic_blob to *this that shares the underlying allocated buffer. See above.

◆ share_after_split_equally()

template<typename Allocator , bool S_SHARING_ALLOWED>
template<typename Emit_blob_func >
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally ( size_type  size,
bool  headless_pool,
Emit_blob_func &&  emit_blob_func,
log::Logger *  logger_ptr = 0 
)

Identical to successively performing share_after_split_left(size) until this->empty() == true; the resulting Basic_blobs are emitted via the emit_blob_func() callback in the order they're split off from the left.

In other words this partitions a non-zero() Basic_blob – perhaps typically used as a pool – into equally-sized (except possibly the last one) adjacent sub-Basic_blobs.

A post-condition is that empty() == true (size() == 0). In addition, if headless_pool == true, then zero() == true is also a post-condition; i.e., the pool is "headless": it disappears once all the resulting sub-Basic_blobs drop their ownership (as well as any other co-owning Basic_blobs). Otherwise, *this will continue to share the pool despite size() becoming 0. (Of course, even then, one is free to make_zero() or destroy *this – the former, before returning, is all that headless_pool == true really adds.)

Behavior is undefined (assertion may trip) if empty() == true (including if zero() == true, but even if not) or if size == 0.

See also
share_after_split_equally_emit_seq() for a convenience wrapper to emit to, say, vector<Basic_blob>.
share_after_split_equally_emit_ptr_seq() for a convenience wrapper to emit to, say, vector<unique_ptr<Basic_blob>>.
Template Parameters
Emit_blob_funcA callback compatible with signature void F(Basic_blob&& blob_moved).
Parameters
sizeDesired size() of each successive out-Basic_blob, except the last one. Behavior undefined (assertion may trip) if not positive.
headless_poolWhether to perform this->make_zero() just before returning. See above.
emit_blob_funcF such that F(std::move(blob)) shall be called with each successive sub-Basic_blob.
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ share_after_split_equally_emit_ptr_seq()

template<typename Allocator , bool S_SHARING_ALLOWED>
template<typename Blob_ptr_container >
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally_emit_ptr_seq ( size_type  size,
bool  headless_pool,
Blob_ptr_container *  out_blobs,
log::Logger *  logger_ptr = 0 
)

share_after_split_equally() wrapper that places Ptr<Basic_blob>s into the given container via push_back(), where the type Ptr<> is determined via Blob_ptr_container::value_type.

Template Parameters
Blob_ptr_containerSomething with method compatible with push_back(Ptr&& blob_ptr_moved), where Ptr is Blob_ptr_container::value_type, and Ptr(new Basic_blob) can be created. Ptr is to be a smart pointer type such as unique_ptr<Basic_blob> or shared_ptr<Basic_blob>.
Parameters
sizeSee share_after_split_equally().
headless_poolSee share_after_split_equally().
out_blobsout_blobs->push_back() shall be executed 1+ times.
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ share_after_split_equally_emit_seq()

template<typename Allocator , bool S_SHARING_ALLOWED>
template<typename Blob_container >
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally_emit_seq ( size_type  size,
bool  headless_pool,
Blob_container *  out_blobs,
log::Logger *  logger_ptr = 0 
)

share_after_split_equally() wrapper that places Basic_blobs into the given container via push_back().

Template Parameters
Blob_containerSomething with method compatible with push_back(Basic_blob&& blob_moved).
Parameters
sizeSee share_after_split_equally().
headless_poolSee share_after_split_equally().
out_blobsout_blobs->push_back() shall be executed 1+ times.
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ share_after_split_equally_impl()

template<typename Allocator , bool S_SHARING_ALLOWED>
template<typename Emit_blob_func , typename Share_after_split_left_func >
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_equally_impl ( size_type  size,
bool  headless_pool,
Emit_blob_func &&  emit_blob_func,
log::Logger *  logger_ptr,
Share_after_split_left_func &&  share_after_split_left_func 
)
protected

Impl of share_after_split_equally() but capable of emitting not just *this type (Basic_blob<...>) but any sub-class (such as Blob/Sharing_blob) provided a functor like share_after_split_left() but returning an object of that appropriate type.

Template Parameters
Emit_blob_funcSee share_after_split_equally(); however it is to take the type to emit which can be *this Basic_blob or a sub-class.
Share_after_split_left_funcA callback with signature identical to share_after_split_left() but returning the same type emitted by Emit_blob_func.
Parameters
sizeSee share_after_split_equally().
headless_poolSee share_after_split_equally().
emit_blob_funcSee Emit_blob_func.
logger_ptrSee share_after_split_equally().
share_after_split_left_funcSee Share_after_split_left_func.

◆ share_after_split_left()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_left ( size_type  size,
log::Logger *  logger_ptr = 0 
)

Applicable to !zero() blobs, this shifts this->begin() by size to the right without changing end(); and returns a Basic_blob containing the shifted-past values that shares (co-owns) *this allocated buffer along with *this and any other Basic_blobs also sharing it.

More formally, this is identical to simply auto b = share(); b.resize(size); start_past_prefix_inc(size);.

This is useful when working in the pool-of-sub-Basic_blobs paradigm. This and other share_after_split*() methods are usually better to use rather than share() directly (for that paradigm).

Behavior is undefined (assertion may trip) if zero() == true.

Corner case: If size > size(), then it is taken to equal size().

Degenerate case: If size (or size(), whichever is smaller) is 0, then this method is identical to share(). Probably you don't mean to call share_after_split_left() in that case, but it's your decision.

Degenerate case: If size == size() (and not 0), then this->empty() becomes true – though *this continues to share the underlying buffer despite [begin(), end()) becoming empty. Typically this would only be done as, perhaps, the last iteration of some progressively-splitting loop; but it's your decision.

Formally: Before this returns, the internal ownership ref-count (shared among *this and the returned Basic_blob) is incremented.

Parameters
sizeDesired size() of the returned Basic_blob; and the number of elements by which this->begin() is shifted right (hence start() is incremented). Any value exceeding size() is taken to equal it.
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
The split-off-on-the-left Basic_blob that shares the underlying allocated buffer with *this. See above.

◆ share_after_split_right()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED > flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::share_after_split_right ( size_type  size,
log::Logger *  logger_ptr = 0 
)

Identical to share_after_split_left(), except this->end() shifts by size to the left (instead of this->begin() to the right), and the split-off Basic_blob contains the right-most size elements (instead of the left-most).

More formally, this is identical to simply auto lt_size = size() - size; auto b = share(); resize(lt_size); b.start_past_prefix_inc(lt_size);. Cf. share_after_split_left() formal definition and note the symmetry.

All other characteristics are as written for share_after_split_left().

Parameters
sizeDesired size() of the returned Basic_blob; and the number of elements by which this->end() is shifted left (hence size() is decremented). Any value exceeding size() is taken to equal it.
logger_ptrThe Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
The split-off-on-the-right Basic_blob that shares the underlying allocated buffer with *this. See above.

◆ size()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::size

Returns number of elements stored, namely end() - begin().

If zero(), this is 0; but if this is 0, then zero() may or may not be true.

Returns
See above.

◆ start()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::size_type flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::start

Returns the offset between begin() and the start of the internally allocated buffer.

If zero(), this is 0; but if this is 0, then zero() may or may not be true.

Returns
See above.

◆ start_past_prefix()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::start_past_prefix ( size_type  prefix_size)

Restructures blob to consist of an internally allocated buffer and a [begin(), end()) range starting at offset prefix_size within that buffer.

More formally, it is a simple resize() wrapper that ensures the internally allocated buffer remains unchanged or, if none is currently large enough to store prefix_size elements, is allocated to be of size prefix_size; and that start() == prefix_size.

All of resize()'s behavior, particularly any restrictions about capacity() growth, applies, so in particular remember you may need to first make_zero() if the internal buffer would need to be REallocated to satisfy the above requirements.

In practice, with current reserve() (and thus resize()) restrictions – which are intentional – this method is most useful if you already have a Basic_blob with internally allocated buffer of size at least n == size() + start() (and start() == 0 for simplicity), and you'd like to treat this buffer as containing no-longer-relevant prefix of length S (which becomes new value for start()) and have size() be readjusted down accordingly, while start() + size() == n remains unchanged. If the buffer also contains irrelevant data past a certain offset N, you can first make it irrelevant via resize(N) (then call start_past_prefix(S) as just described):

// ...
// b now has start() == 0, size() == M.
// We want all elements outside [S, N] to be irrelevant, where S > 0, N < M.
// (E.g., first S are a frame prefix, while all bytes past N are a frame postfix, and we want just the frame
// without any reallocating or copying.)
b.resize(N);
b.start_past_prefix(S);
// Now, [b.begin(), b.end()) are the frame bytes, and no copying/allocation/deallocation has occurred.
Parameters
prefix_sizeDesired prefix length. prefix_size == 0 is allowed and is a degenerate case equivalent to: resize(start() + size(), 0).

◆ start_past_prefix_inc()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::start_past_prefix_inc ( difference_type  prefix_size_inc)

Like start_past_prefix() but shifts the current prefix position by the given incremental value (positive or negative).

Identical to start_past_prefix(start() + prefix_size_inc).

Behavior is undefined for negative prefix_size_inc whose magnitude exceeds start() (assertion may trip).

Behavior is undefined in case of positive prefix_size_inc that results in overflow.

Parameters
prefix_size_incPositive, negative (or zero) increment, so that start() is changed to start() + prefix_size_inc.

◆ sub_copy()

template<typename Allocator , bool S_SHARING_ALLOWED>
Basic_blob< Allocator, S_SHARING_ALLOWED >::Const_iterator flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::sub_copy ( Const_iterator  src,
const boost::asio::mutable_buffer &  dest,
log::Logger *  logger_ptr = 0 
) const

The opposite of emplace_copy() in every way, copying a sub-blob onto a target memory area.

Note that the size of that target buffer (dest.size()) determines how much of *this is copied.

Parameters
    src    Source location within this blob. This must be in [begin(), end()]; and, unless dest.size() == 0, must not equal end() either.
    dest    Destination memory area. Behavior is undefined if this memory area overlaps at any point with the memory area of the same size at src (unless that size is zero, a degenerate case). Otherwise it can be anywhere at all, even partially or fully within *this. Also behavior is undefined if src + dest.size() is past the end of *this blob (assertion may trip).
    logger_ptr    The Logger implementation to use in this routine (synchronously) only. Null allowed.
Returns
Location in this blob just past the last element copied; src if none copied; end() is a possible value.

◆ swap()

template<typename Allocator , bool S_SHARING_ALLOWED>
void flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::swap ( Basic_blob< Allocator, S_SHARING_ALLOWED > &  other,
log::Logger *  logger_ptr = 0 
)

Swaps the contents of this structure and other, or no-op if this == &other.

Performance: at most this involves swapping a few scalars which is constant-time.

Parameters
    other    The other structure.
    logger_ptr    The Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ valid_iterator()

template<typename Allocator , bool S_SHARING_ALLOWED>
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::valid_iterator ( Const_iterator  it) const

Returns true if and only if: this->derefable_iterator(it) || (it == this->const_end()).

Parameters
    it    Iterator/pointer to check.
Returns
See above.

◆ zero()

template<typename Allocator , bool S_SHARING_ALLOWED>
bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::zero

Returns false if a buffer is allocated and owned; true otherwise.

See important notes on how this relates to empty() and capacity() in those methods' doc headers. See also other important notes in class doc header.

Note that zero() is true for any default-constructed Basic_blob.

Returns
See above.

Friends And Related Function Documentation

◆ blobs_sharing()

template<typename Allocator , bool S_SHARING_ALLOWED>
bool blobs_sharing ( const Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob1,
const Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob2 
)
related

Returns true if and only if neither given object is zero(), and they either co-own a common underlying buffer or are the same object.

Note: by the nature of Basic_blob::share(), a true returned value is orthogonal to whether Basic_blob::start() and Basic_blob::size() values are respectively equal; true may be returned even if their [begin(), end()) ranges don't overlap at all – as long as the allocated buffer is co-owned by the 2 Basic_blobs.

If &blob1 != &blob2, true indicates blob1 was obtained from blob2 via a chain of Basic_blob::share() (or wrapper thereof) calls, or vice versa.

Parameters
    blob1    Object.
    blob2    Object.
Returns
Whether blob1 and blob2 both operate on the same underlying buffer.

◆ swap()

template<typename Allocator , bool S_SHARING_ALLOWED>
void swap ( Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob1,
Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob2,
log::Logger *  logger_ptr = 0 
)
related

Equivalent to blob1.swap(blob2).

Parameters
    blob1    Object.
    blob2    Object.
    logger_ptr    The Logger implementation to use in this routine (synchronously) only. Null allowed.

Member Data Documentation

◆ S_IS_VANILLA_ALLOC

template<typename Allocator , bool S_SHARING_ALLOWED>
constexpr bool flow::util::Basic_blob< Allocator, S_SHARING_ALLOWED >::S_IS_VANILLA_ALLOC = std::is_same_v<Allocator_raw, std::allocator<value_type>>
static constexpr

true if Allocator_raw, the underlying allocator type, is simply std::allocator; false otherwise.

Note that if this is true, it may be worth using Blob/Sharing_blob, instead of its Basic_blob<std::allocator> super-class; at the cost of a marginally larger RAM footprint (an added Logger*) you'll get a more convenient set of logging API knobs (namely Logger* stored permanently from construction; and there will be no need to supply it as arg to subsequent APIs when logging is desired).

Implications of S_IS_VANILLA_ALLOC being false

This is introduced in our class doc header. Briefly however:

  • The underlying buffer, if any, and possibly some small aux data shall be allocated via Allocator_raw, not simply the regular heap's new[] and/or new.
    • They shall be deallocated, if needed, via Allocator_raw, not simply the regular heap's delete[] and/or delete.
  • Because storing a pointer to log::Logger may be meaningless when storing in an area allocated by some custom allocators (particularly SHM-heap ones), we shall not auto-TRACE-log on dealloc.
    • This caveat applies only if S_SHARING is true.

The documentation for this class was generated from the following files: