Flow 1.0.0
Flow project: Public API.
flow::util Namespace Reference

Flow module containing miscellaneous general-use facilities that don't fit into any other Flow module. More...

Namespaces

namespace  this_thread
 Short-hand for standard this-thread namespace.
 

Classes

class  Basic_blob
 A hand-optimized and API-tweaked replacement for vector<uint8_t>, i.e., buffer of bytes inside an allocated area of equal or larger size; also optionally supports limited garbage-collected memory pool functionality and SHM-friendly custom-allocator support. More...
 
class  Basic_string_view
 Essentially alias for a C++17-conforming string-view class template, which is a very lightweight std::string-like representation of a character sequence already in memory. More...
 
class  Blob_with_log_context
 Basic_blob that works in regular heap (and is itself placed in heap or stack) and memorizes a log::Logger, enabling easier logging albeit with a small perf trade-off. More...
 
class  Container_traits
 Properties of various container types. More...
 
class  Container_traits< boost::unordered_set< T > >
 Traits of boost::unordered_set. More...
 
class  Container_traits< std::map< K, V > >
 Traits of std::map. More...
 
class  Container_traits< std::set< T > >
 Traits of std::set. More...
 
class  Container_traits< util::Linked_hash_map< K, V > >
 Traits of flow::util::Linked_hash_map. More...
 
class  Container_traits< util::Linked_hash_set< T, Hash, Pred > >
 Traits of flow::util::Linked_hash_set. More...
 
class  Linked_hash_map
 An object of this class is a map that combines the lookup speed of a boost::unordered_map<> and ordering and iterator stability capabilities of an std::list<>. More...
 
class  Linked_hash_set
 An object of this class is a set that combines the lookup speed of an unordered_set<> and ordering and iterator stability capabilities of an std::list<>. More...
 
class  Null_interface
 An empty interface, consisting of nothing but a default virtual destructor, intended as a boiler-plate-reducing base for any other (presumably virtual-method-having) class that would otherwise require a default virtual destructor. More...
 
class  Rnd_gen_uniform_range
 Simple, non-thread-safe uniform-range random number generator. More...
 
class  Rnd_gen_uniform_range_base
 Base class for Rnd_gen_uniform_range and Rnd_gen_uniform_range_mt for various aliases and similar, so template arguments need not be involved. More...
 
class  Rnd_gen_uniform_range_mt
 Identical to Rnd_gen_uniform_range but safe for concurrent RNG given a single object. More...
 
class  Scoped_setter
 A simple RAII-pattern class template that, at construction, sets the specified location in memory to a specified value, memorizing the previous contents; and at destruction restores the value. More...
 
class  Shared_ptr_alias_holder
 Convenience class template that endows the given subclass T with nested aliases Ptr and Const_ptr aliased to shared_ptr<T> and shared_ptr<const T> respectively. More...
 
class  String_ostream
 Similar to ostringstream but allows fast read-only access directly into the std::string being written; and some limited write access to that string. More...
 
class  Unique_id_holder
 Each object of this class stores (at construction) and returns (on demand) a numeric ID unique from all other objects of the same class ever constructed, across all time from program start to end. More...
 

Typedefs

using Blob_sans_log_context = Basic_blob<>
 Short-hand for a Basic_blob that allocates/deallocates in regular heap and is itself assumed to be stored in heap or on stack; sharing feature compile-time-disabled (with perf boost as a result). More...
 
using Sharing_blob_sans_log_context = Basic_blob< std::allocator< uint8_t >, true >
 Identical to Blob_sans_log_context but with sharing feature compile-time-enabled. More...
 
using Blob = Blob_with_log_context<>
 A concrete Blob_with_log_context that compile-time-disables Basic_blob::share() and the sharing API derived from it. More...
 
using Sharing_blob = Blob_with_log_context< true >
 A concrete Blob_with_log_context that compile-time-enables Basic_blob::share() and the sharing API derived from it. More...
 
using Scheduled_task_handle = boost::shared_ptr< Scheduled_task_handle_state >
 Black-box type that represents a handle to a scheduled task as scheduled by schedule_task_at() or schedule_task_from_now() or similar, which can be (optionally) used to control the scheduled task after it has been thus scheduled. More...
 
using Scheduled_task_const_handle = boost::shared_ptr< const Scheduled_task_handle_state >
 Equivalent to Scheduled_task_handle but refers to immutable version of a task. More...
 
using Scheduled_task = Function< void(bool short_fire)>
 Short-hand for tasks that can be scheduled/fired by schedule_task_from_now() and similar. More...
 
using String_view = Basic_string_view< char >
 Commonly used char-based Basic_string_view. See its doc header.
 
using Thread = boost::thread
 Short-hand for standard thread class. More...
 
using Thread_id = Thread::id
 Short-hand for an OS-provided ID of a util::Thread.
 
using Task_engine = boost::asio::io_service
 Short-hand for boost.asio event service, the central class of boost.asio. More...
 
using Strand = Task_engine::strand
 Short-hand for boost.asio strand, an ancillary class that works with Task_engine for advanced task scheduling.
 
using Timer = boost::asio::basic_waitable_timer< Fine_clock >
 boost.asio timer. More...
 
using Auto_cleanup = boost::shared_ptr< void >
 Helper type for setup_auto_cleanup().
 
using Udp_endpoint = boost::asio::ip::udp::endpoint
 Short-hand for the UDP endpoint (IP/port) type.
 
using Ip_address_v4 = boost::asio::ip::address_v4
 Short-hand for the IPv4 address type.
 
using Ip_address_v6 = boost::asio::ip::address_v6
 Short-hand for the IPv6 address type.
 
using Mutex_non_recursive = boost::mutex
 Short-hand for non-reentrant, exclusive mutex. ("Reentrant" = one can lock an already-locked-in-that-thread mutex.)
 
using Mutex_recursive = boost::recursive_mutex
 Short-hand for reentrant, exclusive mutex.
 
using Mutex_shared_non_recursive = boost::shared_mutex
 Short-hand for non-reentrant, shared-or-exclusive mutex. More...
 
using Mutex_noop_shared_non_recursive = boost::null_mutex
 Short-hand for a mutex type equivalent to util::Mutex_shared_non_recursive – except that the lock/unlock mutex ops all do nothing. More...
 
template<typename Mutex >
using Lock_guard = boost::unique_lock< Mutex >
 Short-hand for advanced-capability RAII lock guard for any mutex, ensuring exclusive ownership of that mutex. More...
 
template<typename Shared_mutex >
using Shared_lock_guard = boost::shared_lock< Shared_mutex >
 Short-hand for shared mode advanced-capability RAII lock guard, particularly for Mutex_shared_non_recursive mutexes. More...
 
using Lock_guard_non_recursive = boost::unique_lock< Mutex_non_recursive >
 (Deprecated given C++1x) Short-hand for advanced-capability RAII lock guard for Mutex_non_recursive mutexes. More...
 
using Lock_guard_recursive = boost::unique_lock< Mutex_recursive >
 (Deprecated given C++1x) Short-hand for advanced-capability RAII lock guard for Mutex_recursive mutexes. More...
 
using Lock_guard_shared_non_recursive_sh = boost::shared_lock< Mutex_shared_non_recursive >
 (Deprecated given C++1x) Short-hand for shared mode advanced-capability RAII lock guard for Mutex_shared_non_recursive mutexes. More...
 
using Lock_guard_shared_non_recursive_ex = boost::unique_lock< Mutex_shared_non_recursive >
 (Deprecated given C++1x) Short-hand for exclusive mode advanced-capability RAII lock guard for Mutex_shared_non_recursive mutexes. More...
 
using Lock_guard_noop_shared_non_recursive_sh = boost::shared_lock< Mutex_noop_shared_non_recursive >
 (Deprecated given C++1x) Equivalent to Lock_guard_shared_non_recursive_sh but applied to Mutex_noop_shared_non_recursive. More...
 
using Lock_guard_noop_shared_non_recursive_ex = boost::unique_lock< Mutex_noop_shared_non_recursive >
 (Deprecated given C++1x) Equivalent to Lock_guard_shared_non_recursive_ex but applied to Mutex_noop_shared_non_recursive. More...
 

Functions

template<typename Allocator , bool S_SHARING_ALLOWED>
void swap (Basic_blob< Allocator, S_SHARING_ALLOWED > &blob1, Basic_blob< Allocator, S_SHARING_ALLOWED > &blob2, log::Logger *logger_ptr=0)
 Equivalent to blob1.swap(blob2). More...
 
template<typename Allocator , bool S_SHARING_ALLOWED>
bool blobs_sharing (const Basic_blob< Allocator, S_SHARING_ALLOWED > &blob1, const Basic_blob< Allocator, S_SHARING_ALLOWED > &blob2)
 Returns true if and only if both given objects are not zero() == true, and they either co-own a common underlying buffer, or are the same object. More...
 
template<bool S_SHARING_ALLOWED>
void swap (Blob_with_log_context< S_SHARING_ALLOWED > &blob1, Blob_with_log_context< S_SHARING_ALLOWED > &blob2)
 On top of the similar Basic_blob related function, logs using the stored log context of blob1. More...
 
template<typename Key , typename Mapped , typename Hash , typename Pred >
void swap (Linked_hash_map< Key, Mapped, Hash, Pred > &val1, Linked_hash_map< Key, Mapped, Hash, Pred > &val2)
 Equivalent to val1.swap(val2). More...
 
template<typename Key , typename Hash , typename Pred >
void swap (Linked_hash_set< Key, Hash, Pred > &val1, Linked_hash_set< Key, Hash, Pred > &val2)
 Equivalent to val1.swap(val2). More...
 
bool scheduled_task_cancel (log::Logger *logger_ptr, Scheduled_task_handle task)
 Attempts to prevent the execution of a previously scheduled (by schedule_task_from_now() or similar) task. More...
 
bool scheduled_task_short_fire (log::Logger *logger_ptr, Scheduled_task_handle task)
 Attempts to reschedule a previously scheduled (by schedule_task_from_now() or similar) task to fire immediately. More...
 
Fine_duration scheduled_task_fires_from_now_or_canceled (log::Logger *logger_ptr, Scheduled_task_const_handle task)
 Returns how long remains until a previously scheduled (by schedule_task_from_now() or similar) task fires; or negative time if that point is in the past; or special value if the task has been canceled. More...
 
bool scheduled_task_fired (log::Logger *logger_ptr, Scheduled_task_const_handle task)
 Returns whether a previously scheduled (by schedule_task_from_now() or similar) task has already fired. More...
 
bool scheduled_task_canceled (log::Logger *logger_ptr, Scheduled_task_const_handle task)
 Returns whether a previously scheduled (by schedule_task_from_now() or similar) task has been canceled. More...
 
template<typename Scheduled_task_handler >
Scheduled_task_handle schedule_task_from_now (log::Logger *logger_ptr, const Fine_duration &from_now, bool single_threaded, Task_engine *task_engine, Scheduled_task_handler &&task_body_moved)
 Schedule the given function to execute in a certain amount of time: A handy wrapper around Timer (asio's timer facility). More...
 
template<typename Scheduled_task_handler >
Scheduled_task_handle schedule_task_at (log::Logger *logger_ptr, const Fine_time_pt &at, bool single_threaded, Task_engine *task_engine, Scheduled_task_handler &&task_body_moved)
 Identical to schedule_task_from_now() except the time is specified in absolute terms. More...
 
boost::chrono::microseconds time_since_posix_epoch ()
 Get the current POSIX (Unix) time as a duration from the Epoch time point. More...
 
void beautify_chrono_ostream (std::ostream *os)
 Sets certain chrono-related formatting on the given ostream that results in a consistent, desirable output of durations and certain types of time_points. More...
 
size_t deep_size (const std::string &val)
 Estimate of memory footprint of the given value, including memory allocated on its behalf – but excluding its shallow sizeof! – in bytes. More...
 
template<typename Time_unit , typename N_items >
double to_mbit_per_sec (N_items items_per_time, size_t bits_per_item=8)
 Utility that converts a bandwidth in arbitrary units in both numerator and denominator to the same bandwidth in megabits per second. More...
 
template<typename Integer >
Integer ceil_div (Integer dividend, Integer divisor)
 Returns the result of the given non-negative integer divided by a positive integer, rounded up to the nearest integer. More...
 
template<typename T >
bool in_closed_range (T const &min_val, T const &val, T const &max_val)
 Returns true if and only if the given value is within the given range, inclusive. More...
 
template<typename T >
bool in_open_closed_range (T const &min_val, T const &val, T const &max_val)
 Returns true if and only if the given value is within the given range, given as a (low, high] pair. More...
 
template<typename T >
bool in_closed_open_range (T const &min_val, T const &val, T const &max_val)
 Returns true if and only if the given value is within the given range, given as a [low, high) pair. More...
 
template<typename T >
bool in_open_open_range (T const &min_val, T const &val, T const &max_val)
 Returns true if and only if the given value is within the given range, given as a (low, high) pair. More...
 
template<typename Container >
bool key_exists (const Container &container, const typename Container::key_type &key)
 Returns true if and only if the given key is present at least once in the given associative container. More...
 
template<typename Cleanup_func >
Auto_cleanup setup_auto_cleanup (const Cleanup_func &func)
 Provides a way to execute arbitrary (cleanup) code at the exit of the current block. More...
 
template<typename Minuend , typename Subtrahend >
bool subtract_with_floor (Minuend *minuend, const Subtrahend &subtrahend, const Minuend &floor=0)
 Performs *minuend -= subtrahend, subject to a floor of floor. More...
 
template<typename From , typename To >
size_t size_unit_convert (From num_froms)
 Answers the question what's the smallest integer number of Tos sufficient to verbatim store the given number of Froms?, where From and To are POD types. More...
 
template<typename T1 , typename ... T_rest>
void feed_args_to_ostream (std::ostream *os, T1 const &ostream_arg1, T_rest const &... remaining_ostream_args)
 "Induction step" version of variadic function template that simply outputs arguments 2+ via << to the given ostream, in the order given. More...
 
template<typename T >
void feed_args_to_ostream (std::ostream *os, T const &only_ostream_arg)
 "Induction base" for a variadic function template, this simply outputs given item to given ostream via <<. More...
 
template<typename ... T>
void ostream_op_to_string (std::string *target_str, T const &... ostream_args)
 Writes to the specified string, as if the given arguments were each passed, via << in sequence, to an ostringstream, and then the result were appended to the aforementioned string variable. More...
 
template<typename ... T>
std::string ostream_op_string (T const &... ostream_args)
 Equivalent to ostream_op_to_string() but returns a new string by value instead of writing to the caller's string. More...
 
template<typename Map , typename Sequence >
void sequence_to_inverted_lookup_map (Sequence const &src_seq, Map *target_map, const Function< typename Map::mapped_type(size_t)> &idx_to_map_val_func)
 Similar to the 2-arg overload of sequence_to_inverted_lookup_map() but with the ability to store a value based on the index into the input sequence instead of that index itself. More...
 
template<typename Map , typename Sequence >
void sequence_to_inverted_lookup_map (Sequence const &src_seq, Map *target_map)
 Given a generic sequence (integer -> object) generates a generic map (object -> integer) providing inverse lookup. More...
 
template<typename Const_buffer_sequence >
std::ostream & buffers_to_ostream (std::ostream &os, const Const_buffer_sequence &data, const std::string &indentation, size_t bytes_per_line=0)
 Writes a multi- or single-line string representation of the provided binary data to an output stream, complete with printable and hex versions of each byte. More...
 
template<typename Const_buffer_sequence >
std::string buffers_dump_string (const Const_buffer_sequence &data, const std::string &indentation, size_t bytes_per_line=0)
 Identical to buffers_to_ostream() but returns an std::string instead of writing to a given ostream. More...
 
template<typename Enum >
Enum istream_to_enum (std::istream *is_ptr, Enum enum_default, Enum enum_sentinel, bool accept_num_encoding=true, bool case_sensitive=false, Enum enum_lowest=Enum(0))
 Deserializes an enum class value from a standard input stream. More...
 

Detailed Description

Flow module containing miscellaneous general-use facilities that don't fit into any other Flow module.

Each symbol therein is typically used by at least 1 other Flow module; but all public symbols (except ones under a detail/ subdirectory) are intended for use by the Flow user as well.

Todo:
Since Flow gained its first users beyond the original author, some Flow-adjacent code has been written from which Flow can benefit, including additions to flow::util module. (Flow itself continued to be developed, but some features were added elsewhere for expediency; this is a reminder to factor them out into Flow for the benefit of all.) Some features to migrate here might be: conversion between boost.chrono and std.chrono types; (add more here).

Typedef Documentation

◆ Blob

A concrete Blob_with_log_context that compile-time-disables Basic_blob::share() and the sharing API derived from it.

It is likely the user will refer to Blob (or Sharing_blob) rather than Blob_with_log_context.

See also
Also consider Blob_sans_log_context.

◆ Blob_sans_log_context

Short-hand for a Basic_blob that allocates/deallocates in regular heap and is itself assumed to be stored in heap or on stack; sharing feature compile-time-disabled (with perf boost as a result).

See also
Consider also Blob which takes a log::Logger at construction and stores it; so it is not necessary to provide one to each individual API one wants to log. It also adds logging where it is normally not possible (as of this writing at least the dtor). See Basic_blob doc header "Logging" section for brief discussion of trade-offs.

◆ Lock_guard

template<typename Mutex >
using flow::util::Lock_guard = boost::unique_lock<Mutex>

Short-hand for advanced-capability RAII lock guard for any mutex, ensuring exclusive ownership of that mutex.

Note the advanced API available on the underlying type: it is possible to relinquish ownership without unlocking, to adopt ownership of an already-locked mutex, and so on.

See also
To grab shared-level ownership of a shared-or-exclusive mutex: use Shared_lock_guard.
Template Parameters
Mutex: A non-recursive or recursive mutex type. Recommend one of: Mutex_non_recursive, Mutex_recursive, Mutex_shared_non_recursive, Mutex_noop_shared_non_recursive.
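
For illustration, a minimal usage sketch (hypothetical names, not from Flow itself; the relevant flow::util include is omitted):

flow::util::Mutex_non_recursive m_mutex;
int m_count = 0; // Shared state protected by m_mutex.

void increment()
{
  flow::util::Lock_guard<flow::util::Mutex_non_recursive> lock(m_mutex); // Locks here.
  ++m_count;
} // Unlocks here (RAII), even if an exception propagates out.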

◆ Lock_guard_non_recursive

using flow::util::Lock_guard_non_recursive = boost::unique_lock<Mutex_non_recursive>

(Deprecated given C++1x) Short-hand for advanced-capability RAII lock guard for Mutex_non_recursive mutexes.

Todo:
Lock_guard_non_recursive is deprecated, now that C++1x made the more flexible Lock_guard<Mutex_non_recursive> possible. Remove it and all (outside) uses eventually.

◆ Lock_guard_noop_shared_non_recursive_ex

(Deprecated given C++1x) Equivalent to Lock_guard_shared_non_recursive_ex but applied to Mutex_noop_shared_non_recursive.

Todo:
Lock_guard_noop_shared_non_recursive_ex is deprecated, now that C++1x made the more flexible Lock_guard<Mutex_noop_shared_non_recursive> possible. Remove it and all (outside) uses eventually.

◆ Lock_guard_noop_shared_non_recursive_sh

(Deprecated given C++1x) Equivalent to Lock_guard_shared_non_recursive_sh but applied to Mutex_noop_shared_non_recursive.

Todo:
Lock_guard_noop_shared_non_recursive_sh is deprecated, now that C++1x made the more flexible Shared_lock_guard<Mutex_noop_shared_non_recursive> possible. Remove it and all (outside) uses eventually.

◆ Lock_guard_recursive

using flow::util::Lock_guard_recursive = boost::unique_lock<Mutex_recursive>

(Deprecated given C++1x) Short-hand for advanced-capability RAII lock guard for Mutex_recursive mutexes.

Todo:
Lock_guard_recursive is deprecated, now that C++1x made the more flexible Lock_guard<Mutex_recursive> possible. Remove it and all (outside) uses eventually.

◆ Lock_guard_shared_non_recursive_ex

(Deprecated given C++1x) Short-hand for exclusive mode advanced-capability RAII lock guard for Mutex_shared_non_recursive mutexes.

Todo:
Lock_guard_shared_non_recursive_ex is deprecated, now that C++1x made the more flexible Lock_guard<Mutex_shared_non_recursive> possible. Remove it and all (outside) uses eventually.

◆ Lock_guard_shared_non_recursive_sh

(Deprecated given C++1x) Short-hand for shared mode advanced-capability RAII lock guard for Mutex_shared_non_recursive mutexes.

Todo:
Lock_guard_shared_non_recursive_sh is deprecated, now that C++1x made the more flexible Shared_lock_guard<Mutex_shared_non_recursive> possible. Remove it and all (outside) uses eventually.

◆ Mutex_noop_shared_non_recursive

using flow::util::Mutex_noop_shared_non_recursive = boost::null_mutex

Short-hand for a mutex type equivalent to util::Mutex_shared_non_recursive – except that the lock/unlock mutex ops all do nothing.

One can parameterize templates accordingly; so that an algorithm can be generically written to work in both single- and multi-threaded contexts without branching into 2 code paths, yet avoid the unnecessary actual locking in the former case.
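
For example, a minimal sketch of such a template (hypothetical class, not from Flow itself):

#include <map>

// Thread-safety policy is a compile-time choice; the locking code path is identical either way.
template<typename Mutex_t>
class Counter_table
{
public:
  void increment(int key)
  {
    flow::util::Lock_guard<Mutex_t> lock(m_mutex); // Compiles down to no-ops if Mutex_t is the null mutex.
    ++m_table[key];
  }
private:
  Mutex_t m_mutex;
  std::map<int, int> m_table;
};

Counter_table<flow::util::Mutex_non_recursive> mt_counters; // Shared among threads: real locking.
Counter_table<flow::util::Mutex_noop_shared_non_recursive> st_counters; // Single-threaded: zero locking cost.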

◆ Mutex_shared_non_recursive

using flow::util::Mutex_shared_non_recursive = boost::shared_mutex

Short-hand for non-reentrant, shared-or-exclusive mutex.

When locking one of these, choose one of: Lock_guard<Mutex_shared_non_recursive>, Shared_lock_guard<Mutex_shared_non_recursive>. The level of locking acquired (shared vs. exclusive) depends on which you choose and is thus highly significant.
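
For example, a sketch of the read-side versus write-side choice (hypothetical names; flow::util include omitted):

#include <map>
#include <string>

flow::util::Mutex_shared_non_recursive m_table_mutex;
std::map<std::string, int> m_table;

int lookup(const std::string& key) // Readers take a shared lock; many may run concurrently.
{
  flow::util::Shared_lock_guard<flow::util::Mutex_shared_non_recursive> lock(m_table_mutex);
  const auto it = m_table.find(key);
  return (it == m_table.end()) ? -1 : it->second;
}

void update(const std::string& key, int value) // Writers take an exclusive lock.
{
  flow::util::Lock_guard<flow::util::Mutex_shared_non_recursive> lock(m_table_mutex);
  m_table[key] = value;
}

(Per the discussion below, for a quick lookup like this an exclusive Mutex_non_recursive would likely perform at least as well; the sketch only illustrates the API choice.)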

Todo:
Consider changing util::Mutex_shared_non_recursive from boost::shared_mutex to std::shared_mutex, as the former has a long-standing unresolved ticket about its impl being slow and/or outdated (as of Boost-1.80). However see also the note on util::Mutex_shared_non_recursive that explains why it might be best to avoid this type of mutex altogether in the first place (in most cases).

When to use versus Mutex_non_recursive

Experts suggest fairly strongly that one should be very wary about using a shared mutex, over a simple non-recursive exclusive mutex, in almost any practical application. The reason there's a trade-off at all is that a Mutex_non_recursive is extremely fast to lock and unlock; the entire perf cost is in waiting for another thread to unlock first; so without contention it's almost free. In contrast, apparently any real impl of Mutex_shared_non_recursive is much slower to lock/unlock. The trade-off is against allowing reads to proceed in parallel; but experts say this is "worth it" only when these read critical sections are lengthy and very frequently invoked. So, for example, if the lock/unlock is around an unordered_map lookup with a quick hashing function, it would be quite difficult to induce enough lock contention to make a shared mutex better than an exclusive one. (Note: this assumes there is no perf issue with boost::shared_mutex specifically – see the to-do above.)

◆ Scheduled_task

using flow::util::Scheduled_task = Function<void (bool short_fire)>

Short-hand for tasks that can be scheduled/fired by schedule_task_from_now() and similar.

short_fire shall be set to false if the task executes at its scheduled time; true if it fires early due to scheduled_task_short_fire().

◆ Scheduled_task_const_handle

using flow::util::Scheduled_task_const_handle = boost::shared_ptr<const Scheduled_task_handle_state>

Equivalent to Scheduled_task_handle but refers to immutable version of a task.

One can safely construct a Scheduled_task_const_handle directly from Scheduled_task_handle (but not vice versa; nor would that compile without some kind of frowned-upon const_cast or equivalent).

◆ Scheduled_task_handle

using flow::util::Scheduled_task_handle = boost::shared_ptr<Scheduled_task_handle_state>

Black-box type that represents a handle to a scheduled task as scheduled by schedule_task_at() or schedule_task_from_now() or similar, which can be (optionally) used to control the scheduled task after it has been thus scheduled.

Special value Scheduled_task_handle() represents an invalid task and can be used as a sentinel, as with a null pointer.

Values of this type are to be passed around by value, not reference. They are light-weight. Officially, passing it by reference results in undefined behavior. Unofficially, it will probably work (just haven't thought super-hard about it to make sure) but will bring no performance benefit.

◆ Shared_lock_guard

template<typename Shared_mutex >
using flow::util::Shared_lock_guard = boost::shared_lock<Shared_mutex>

Short-hand for shared mode advanced-capability RAII lock guard, particularly for Mutex_shared_non_recursive mutexes.

Template Parameters
Shared_mutex: Typically Mutex_shared_non_recursive, but any mutex type with that both-exclusive-and-shared API will work.

◆ Sharing_blob

A concrete Blob_with_log_context that compile-time-enables Basic_blob::share() and the sharing API derived from it.

It is likely the user will refer to Sharing_blob (or Blob) rather than Blob_with_log_context.

See also
Also consider Sharing_blob_sans_log_context.

◆ Sharing_blob_sans_log_context

using flow::util::Sharing_blob_sans_log_context = Basic_blob<std::allocator<uint8_t>, true>

Identical to Blob_sans_log_context but with sharing feature compile-time-enabled.

The latter fact implies a small perf hit; see details in Basic_blob doc header.

See also
Consider also Sharing_blob; and see Blob_sans_log_context similar "See" note for more info as to why.

◆ Task_engine

using flow::util::Task_engine = boost::asio::io_service

Short-hand for boost.asio event service, the central class of boost.asio.

Naming

The reasons for the rename – as opposed to simply calling it Io_service (historically it was indeed so named) – are as follows. To start, a later version of Boost (to which we eventually moved) realigned boost.asio's API to match the feedback boost.asio received when submitted as a candidate for inclusion into the official STL (sometime after C++17); in this reorg they renamed it from io_service to io_context. That rename is not itself our reason but rather a symptom of reasoning similar to the following:

  • "Service" sounds like something that has its own thread(s) perhaps, which io_service actually doesn't; it can (and typically is) used to take over at least one thread, but that's not a part of its "essence," so to speak. I've noticed that when boost.asio neophytes see "service" they intuitively draw incorrect assumptions about what it does, and then one must repeatedly dash those assumptions.
  • Furthermore (though apparently boost.asio guys didn't agree or didn't agree strongly enough) "I/O" implies an io_service is all about working with various input/output (especially networking). While it's true that roughly 50% of the utility of boost.asio is its portable sockets/networking API, the other 50% is about its ability to flexibly execute user tasks in various threads; and, indeed, io_service itself is more about the latter 50% rather than the former. Its two most basic and arguably oft-used features are simply io_service::post() (which executes the given task ASAP); and the basic_waitable_timer set of classes (which execute task at some specified point in the future). Note these have nothing to do with networking or I/O of any kind. The I/O comes from the "I/O object" classes/etc., such as tcp::socket, but those are not io_service; they merely work together with it to execute the user-specified success/failure handlers.

So, neither "I/O" nor "service" is accurate. To fix both, then, we used this rename reasoning:

  • "I/O" can be replaced by the notion of it executing "tasks." boost.asio doesn't use or define the "task" term, sticking to "handler" or "functor" or "function," but we feel it's a reasonable term in this context. (Function, etc., doesn't cover all possibilities. Handler is OK, but when it's not in response to any event – as with vanilla post() – what is it handling?)
  • "Service" is trickier to replace. It's a battle between "too generic" and "too specific and therefore long to type." Plus there's the usual danger of accidentally overloading a term already used elsewhere nearby. Some ideas considered were: manager, engine, execution engine, queue, queues, context (which boost.asio chose), and various abbreviations thereof. Ultimately it came down to context vs. engine, and I chose engine because context is just a little vague, or it suggests an arbitrary grouping of things – whereas "engine" actually implies action: want to express this thing provides logical/algorithmic value as opposed to just bundling stuff together as "context" objects often do (e.g., "SSL context" in openssl). Indeed io_service does provide immense algorithmic value.
    • Manager: generic but not bad; clashes with "Windows task manager" and such.
    • Execution engine: good but long.
    • Queue: early candidate, but at the very least it's not a single queue; if there are multiple threads io_service::run()ning then to the extent there are any queues there might be multiple.
    • Queues, qs: better... but having a thing called Task_qs – of which there could be multiple (plural) – was clumsy. (An early proof-of-concept used Task_qs; people were not fans.)

◆ Thread

using flow::util::Thread = boost::thread

Short-hand for standard thread class.

We use/encourage use of boost.thread threads (and other boost.thread facilities) over std.thread counterparts – simply because it tends to be more full-featured. However, reminder: boost.thread IS (API-wise at least) std.thread, plus more (advanced?) stuff. Generally, boost.thread-using code can be converted to std.thread with little pain, unless one of the "advanced?" features is used.

◆ Timer

using flow::util::Timer = boost::asio::basic_waitable_timer<Fine_clock>

boost.asio timer.

Can schedule a function to get called within a set amount of time or at a certain specific time relative to some Epoch.

See also
schedule_task_from_now() (and friends) for a simpler task-scheduling API (which internally uses this Timer). The notes below still apply, however; please read them even if you won't use Timer directly.

Important: While one can schedule events using very precise units using Fine_duration, the timer may not actually have that resolution. That is, I may schedule something to happen in 1 millisecond and then measure the time passed, using a high-res clock like Fine_clock, and discover that the wait was actually 15 milliseconds. This is not a problem for most timers. (In Flow itself, the most advanced use of timers by far is in flow::net_flow. Timers used in flow::net_flow – delayed ACK timer, Drop Timer, simulated latency timer – typically are for n x 50 milliseconds time periods or coarser.) However that may not be the case for all timers. (In particular, flow::net_flow packet pacing may use finer time periods.)

Note
The flow::net_flow references in the present doc header speak of internal implementation details – not of interest to the net_flow user. I leave them in this public doc header for practical purposes, as examples needed to discuss some Inside-Baseball topics involved.

So I looked into what boost.asio provides. deadline_timer uses the system clock (universal_time()) as the time reference, while basic_waitable_timer<Fine_clock> uses the high-resolution clock (see Fine_clock). I tested both (using wait()s of various lengths and using Fine_clock to measure duration) and observed the following resolutions on certain OS (not listing the hardware):

  • Linux (2.6.x kernel): sub-2 millisecond resolution, with some variance. (Note: This Linux's glibc is too old to provide the timerfd API, therefore Boost falls back to using epoll_wait() directly; on a newer Linux this may get better results.)
  • Windows 7: 15 millisecond resolution, with little variance.
  • Mac OS X: untested.
    Todo:
    Test Mac OS X timer fidelity.

These results were observed for BOTH deadline_timer and basic_waitable_timer<Fine_clock> in Boost 1.50. Thus, there was no resolution advantage to using the latter – only an interface advantage. Conclusion: we'd be well-advised not to rely on anything much smaller than 20 milliseconds when scheduling timer events. One technique might be, given any time T < 20 ms, to assume T = 0 (i.e., execute immediately). This may or may not be appropriate depending on the situation.

However, using basic_waitable_timer<Fine_clock> may get an additional nice property: Fine_clock always goes forward and cannot be affected by NTP or user changes. deadline_timer may or may not be affected by such tomfoolery (untested, but the potential is clearly there). Therefore we do the best we can by using the Fine_clock-based timer.

Todo:
Upgrade to newer Boost and keep testing timer resolutions on the above and other OS versions. Update: As of this writing we are on Boost 1.74 now. Needless to say, it is time to reassess the above, as it has been years since 1.50 – and we had been on 1.63 and 1.66 for long period of time, let alone 1.74. Update: No changes with Boost 1.75, the newest version as of this writing.
Note
One annoying, though fully documented, "feature" of this and all boost.asio timers is that cancel()ing it does not always cause the handler to fire with operation_aborted error code. Sometimes it was already "just about" to fire at cancel() time, so the handler will run with no error code regardless of the cancel(). Personally I have never once found this behavior useful, and there is much code currently written to work around this "feature." Furthermore, even a "successful" cancel still runs the handler but with operation_aborted; 99.999% of the time one would want to just not run it at all. Finally, in many cases, even having to create a timer object (and then the multiple steps required to actually schedule a thing) feels like more boiler-plate than should be necessary. To avoid all of these usability issues, see schedule_task_from_now() (and similar), which is a facility that wraps Timer for a boiler-plate-free experience sufficient in the majority of practical situations. Timer is to be used directly when that simpler facility is insufficient.
basic_waitable_timer<Fine_clock> happens to equal boost::asio::high_resolution_timer. We chose to alias in terms of Fine_clock merely to ensure that Fine_duration is one-to-one compatible with it, as it is defined in terms of Fine_clock also.
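
For completeness, a minimal sketch of direct Timer use (assuming, as elsewhere in Flow, that Fine_clock is boost.chrono-based; error handling abbreviated):

flow::util::Task_engine task_engine;
flow::util::Timer timer(task_engine);

timer.expires_after(boost::chrono::milliseconds(50)); // Subject to the resolution caveats above.
timer.async_wait([](const boost::system::error_code& err_code)
{
  if (err_code == boost::asio::error::operation_aborted)
  {
    return; // cancel()ed -- usually; see the caveat above about near-simultaneous firing.
  }
  // Scheduled time arrived: do the thing.
});

task_engine.run(); // Handlers execute from within run().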

Function Documentation

◆ beautify_chrono_ostream()

void flow::util::beautify_chrono_ostream ( std::ostream *  os)

Sets certain chrono-related formatting on the given ostream that results in a consistent, desirable output of durations and certain types of time_points.

As of this writing this includes enabling short unit format (e.g., "ms" instead of "milliseconds") and avoiding Unicode characters (the Greek letter for micro becomes a similar-looking "u" instead).

See also
log::beautify_chrono_logger_this_thread() to affect a Logger directly.
flow::async in which new threads are set to use this formatting automatically. However you'll want to do this explicitly for the startup thread.
Parameters
os: The stream to affect.
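
For example (exact output is subject to the formatting described above; the flow::util include is omitted):

#include <boost/chrono/chrono_io.hpp> // For streaming boost.chrono durations.
#include <iostream>

int main()
{
  flow::util::beautify_chrono_ostream(&std::cout);
  std::cout << boost::chrono::milliseconds(250) << "\n"; // Prints along the lines of "250 ms".
  std::cout << boost::chrono::microseconds(7) << "\n";   // "7 us" -- no Unicode mu.
  return 0;
}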

◆ blobs_sharing()

template<typename Allocator , bool S_SHARING_ALLOWED>
bool blobs_sharing ( const Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob1,
const Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob2 
)

Returns true if and only if both given objects are not zero() == true, and they either co-own a common underlying buffer, or are the same object.

Note: by the nature of Basic_blob::share(), a true returned value is orthogonal to whether Basic_blob::start() and Basic_blob::size() values are respectively equal; true may be returned even if their [begin(), end()) ranges don't overlap at all – as long as the allocated buffer is co-owned by the 2 Basic_blobs.

If &blob1 != &blob2, true indicates blob1 was obtained from blob2 via a chain of Basic_blob::share() (or wrapper thereof) calls, or vice versa.
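
A quick sketch (the no-argument share() call is an assumption for brevity; see the Basic_blob doc header for its exact signature):

#include <cassert>

// blob1 is assumed to already own a buffer, i.e., !blob1.zero().
void sharing_demo(flow::util::Sharing_blob_sans_log_context& blob1)
{
  const auto blob2 = blob1.share(); // blob2 now co-owns blob1's underlying buffer.
  assert(flow::util::blobs_sharing(blob1, blob2)); // Common buffer => true.
  assert(flow::util::blobs_sharing(blob1, blob1)); // Same object => true.

  const flow::util::Sharing_blob_sans_log_context blob3; // Unrelated (and zero()) blob.
  assert(!flow::util::blobs_sharing(blob1, blob3)); // => false.
}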

Parameters
blob1: Object.
blob2: Object.
Returns
Whether blob1 and blob2 both operate on the same underlying buffer.

◆ buffers_dump_string()

template<typename Const_buffer_sequence >
std::string flow::util::buffers_dump_string ( const Const_buffer_sequence &  data,
const std::string &  indentation,
size_t  bytes_per_line = 0 
)

Identical to buffers_to_ostream() but returns an std::string instead of writing to a given ostream.

See also
buffers_to_ostream() doc header for notes on performance and usability.
Template Parameters
Const_buffer_sequence: See buffers_to_ostream().
Parameters
data: See buffers_to_ostream().
indentation: See buffers_to_ostream().
bytes_per_line: See buffers_to_ostream().
Returns
Result string.

◆ buffers_to_ostream()

template<typename Const_buffer_sequence >
std::ostream & flow::util::buffers_to_ostream ( std::ostream &  os,
const Const_buffer_sequence &  data,
const std::string &  indentation,
size_t  bytes_per_line = 0 
)

Writes a multi- or single-line string representation of the provided binary data to an output stream, complete with printable and hex versions of each byte.

  • Single-line mode is chosen by setting bytes_per_line to the special value -1. In this mode, the output consists of indentation, followed by a pretty-printed hex dump of every byte in data (no matter how many), followed by a pretty-printed ASCII-printable (or '.' if not so printable) character for each byte; that's it. No newline is added, and indentation is included only the one time. It is therefore recommended, when using this mode, to print "in-line" inside log messages (etc.) without surrounding newlines.
  • Multi-line mode is chosen by setting bytes_per_line to a positive value; or 0, which auto-selects a decent default positive value. Then, each line represents up to that many bytes, which are pretty-printed similarly to single-line mode, in order of appearance in data (with the last such line representing the last few bytes, formatted nicely, including accounting for potentially fewer than the desired # of bytes per line). Every line starts with indentation and ends with a newline – including the last line. Therefore, it is recommended to precede this call with an output of a newline and to avoid doing so after that call (unless a blank line is desired).

Example with a single contiguous memory area, multi-line mode:

array<uint8_t, 256 * 256> bytes(...);
// Output to cout.
cout
  << "Buffer contents: [\n";
buffers_to_ostream(cout,
                   boost::asio::buffer(bytes), // Turn a single memory array into a buffer sequence.
                   " "); // Indentation for each line.
cout
  << "]."; // This will be on its own line at the end.

See also buffers_dump_string() which returns a string and can thus be more easily used directly inside FLOW_LOG_DATA() and similar log macros.

Performance

This thing is slow... it's not trying to be fast and can't be all that fast anyway. As usual, though, if used in FLOW_LOG_DATA() (etc.) its slowness will only come into play if the log filter passes which (esp. for log::Severity::S_DATA) it usually won't.

That said, buffers_dump_string() is even slower, because it'll use an intermediate ostream independent of whatever ostream you may or may not be placing the resulting string into. However, it's easier to use with FLOW_LOG_...(), and since in that case perf is typically not an issue, it usually makes sense to use the easier thing.

However, if you do want to avoid that extra copy and need to also use buffers_to_ostream() directly inside FLOW_LOG_...() then the following technique isn't too wordy and works:

const array<uint8_t, 256 * 256> bytes(...);
// Log with a flow::log macro.
const flow::Function<ostream& (ostream&)> os_manip = [&](ostream& os) -> ostream&
{
  return buffers_to_ostream(os, boost::asio::buffer(bytes), " ");
};
FLOW_LOG_INFO("Buffer contents: [\n" << os_manip << "].");

// Above is probably more performant than:
FLOW_LOG_INFO("Buffer contents: [\n"
              << buffers_dump_string(boost::asio::buffer(bytes), " ") // Intermediate+copy, slow....
              << "].");
// flow::util::ostream_op_string() is a bit of an improvement but still. :-)

Rationale

The reason it returns os and takes a reference-to-mutable instead of the customary (for this project's style, to indicate modification potential at call sites) pointer-to-mutable is in order to be bind()able in such a way as to make an ostream manipulator. In the example above we use a lambda instead of bind() however.

Template Parameters
Const_buffer_sequence: Type that models the boost.asio ConstBufferSequence concept (see Boost docs) which represents 1 or more scattered buffers in memory (only 1 buffer in a sequence is common). In particular boost::asio::const_buffer works when dumping a single buffer.
Parameters
os: The output stream to which to write.
data: The data to write, given as an immutable sequence of buffers each of which is essentially a pointer and length. (Reminder: it is trivial to make such an object from a single buffer as well; for example given array<uint8_t, 256> data (an array of 256 bytes), you can just pass boost::asio::buffer(data), as the latter returns an object whose type satisfies requirements of Const_buffer_sequence despite the original memory area being nothing more than a single byte array.)
indentation: The indentation to use at the start of every line of output.
bytes_per_line: If 0, act as-if a certain default positive value were passed; and then: If -1, single-line mode is invoked. If a positive value, multi-line mode is invoked, with all lines but the last one consisting of a dump of that many contiguous bytes of data. (Yes, multi-line mode is still in force, even if there are only enough bytes in data for one line of output anyway.)
Returns
os.

◆ ceil_div()

template<typename Integer >
Integer flow::util::ceil_div ( Integer  dividend,
Integer  divisor 
)

Returns the result of the given non-negative integer divided by a positive integer, rounded up to the nearest integer.

Internally, it avoids floating-point math for performance.
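
E.g., ceil_div(7, 3) == 3, while ceil_div(6, 3) == 2. A sketch of the standard integer-only formulation (not necessarily Flow's exact implementation):

template<typename Integer>
Integer ceil_div_sketch(Integer dividend, Integer divisor)
{
  // 0 stays 0; otherwise round up without floating point (and without overflowing near the max value).
  return (dividend == 0) ? Integer(0) : Integer(1 + ((dividend - 1) / divisor));
}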

Template Parameters
Integer: A signed or unsigned integral type.
Parameters
dividend: Dividend; non-negative or assertion trips.
divisor: Divisor; positive or assertion trips.
Returns
Ceiling of (dividend / divisor).

◆ deep_size()

size_t flow::util::deep_size ( const std::string &  val)

Estimate of memory footprint of the given value, including memory allocated on its behalf – but excluding its shallow sizeof! – in bytes.

Parameters
val: Value.
Returns
See above.

◆ feed_args_to_ostream() [1/2]

template<typename T >
void flow::util::feed_args_to_ostream ( std::ostream *  os,
T const &  only_ostream_arg 
)

"Induction base" for a variadic function template, this simply outputs given item to given ostream via <<.

Template Parameters
T: See each of ...T in ostream_op_to_string().
Parameters
os: Pointer to stream to which to sequentially send arguments for output.
only_ostream_arg: See each of ostream_args in ostream_op_to_string().

◆ feed_args_to_ostream() [2/2]

template<typename T1 , typename ... T_rest>
void flow::util::feed_args_to_ostream ( std::ostream *  os,
T1 const &  ostream_arg1,
T_rest const &...  remaining_ostream_args 
)

"Induction step" version of variadic function template that simply outputs arguments 2+ via << to the given ostream, in the order given.

Template Parameters
T1: Same as each of ...T_rest.
...T_rest: See ...T in ostream_op_to_string().
Parameters
os: Pointer to stream to which to sequentially send arguments for output.
ostream_arg1: Same as each of remaining_ostream_args.
remaining_ostream_args: See ostream_args in ostream_op_to_string().
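
Together, the two overloads form the usual variadic "peel one argument off" recursion; a standalone sketch of the idea (not Flow's actual code):

#include <ostream>

// Induction base: exactly one argument left.
template<typename T>
void feed_sketch(std::ostream* os, T const & only_arg)
{
  *os << only_arg;
}

// Induction step: output the first argument, then recurse on the rest.
template<typename T1, typename... T_rest>
void feed_sketch(std::ostream* os, T1 const & arg1, T_rest const &... rest)
{
  *os << arg1;
  feed_sketch(os, rest...); // Resolves to the base overload once one argument remains.
}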

◆ in_closed_open_range()

template<typename T >
bool flow::util::in_closed_open_range ( T const &  min_val,
T const &  val,
T const &  max_val 
)

Returns true if and only if the given value is within the given range, given as a [low, high) pair.

Parameters
min_val: Lower part of the range.
val: Value to check.
max_val: Higher part of the range. Must be greater than min_val, or behavior is undefined.
Template Parameters
T: A type for which the operation x < y is defined and makes sense. Examples: double, char, unsigned int, Fine_duration.
Returns
true if and only if val is in [min_val, max_val), i.e., min_val <= val < max_val.

◆ in_closed_range()

template<typename T >
bool flow::util::in_closed_range ( T const &  min_val,
T const &  val,
T const &  max_val 
)

Returns true if and only if the given value is within the given range, inclusive.

Parameters
min_val: Lower part of the range.
val: Value to check.
max_val: Higher part of the range. Must be greater than or equal to min_val, or behavior is undefined.
Template Parameters
T: A type for which the operation x < y is defined and makes sense. Examples: double, char, unsigned int, Fine_duration.
Returns
true if and only if val is in [min_val, max_val].

◆ in_open_closed_range()

template<typename T >
bool flow::util::in_open_closed_range ( T const &  min_val,
T const &  val,
T const &  max_val 
)

Returns true if and only if the given value is within the given range, given as a (low, high] pair.

Parameters
min_val: Lower part of the range.
val: Value to check.
max_val: Higher part of the range. Must be greater than min_val, or behavior is undefined.
Template Parameters
T: A type for which the operation x < y is defined and makes sense. Examples: double, char, unsigned int, Fine_duration.
Returns
true if and only if val is in (min_val, max_val], i.e., min_val < val <= max_val.

◆ in_open_open_range()

template<typename T >
bool flow::util::in_open_open_range ( T const &  min_val,
T const &  val,
T const &  max_val 
)

Returns true if and only if the given value is within the given range, given as a (low, high) pair.

Parameters
min_val: Lower part of the range.
val: Value to check.
max_val: Higher part of the range. Must be at least 2 greater than min_val, or behavior is undefined.
Template Parameters
T: A type for which the operation x < y is defined and makes sense. Examples: double, char, unsigned int, Fine_duration.
Returns
true if and only if val is in (min_val, max_val), i.e., min_val < val < max_val.

◆ istream_to_enum()

template<typename Enum >
Enum flow::util::istream_to_enum ( std::istream *  is_ptr,
Enum  enum_default,
Enum  enum_sentinel,
bool  accept_num_encoding = true,
bool  case_sensitive = false,
Enum  enum_lowest = Enum(0) 
)

Deserializes an enum class value from a standard input stream.

Reads up to but not including the next non-alphanumeric-or-underscore character; the resulting string is then mapped to an Enum. If none is recognized, enum_default is the result. The recognized values are:

  • "0", "1", ...: Corresponds to the underlying-integer conversion to that Enum. (Can be disabled optionally.)
  • Case-[in]sensitive string encoding of the Enum, as determined by operator<<(ostream&) – which must exist (or this will not compile). Informally we recommend the encoding to be the non-S_-prefix part of the actual Enum member; e.g., "WARNING" for log::Sev::S_WARNING. If the scanned token does not map to any of these, or if end-of-input is encountered immediately (empty token), then enum_default is returned.

Error semantics: There are no invalid values or exceptions thrown; enum_default returned is the worst case. Do note *is_ptr may not be good() == true after return.

Tip: It is convenient to implement operator>>(istream&) in terms of istream_to_enum(). With both >> and << available, serialization/deserialization of the enum class will work; this enables a few key things to work, including parsing from config file/command line, and conversion from string via lexical_cast.
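
A sketch of that tip (Pet_type and its members are hypothetical, chosen to satisfy the Enum requirements listed below):

#include <istream>
#include <ostream>

enum class Pet_type { S_CAT, S_DOG, S_END_SENTINEL }; // Values 0, 1; sentinel last, per convention.

std::ostream& operator<<(std::ostream& os, Pet_type val)
{
  switch (val)
  {
    case Pet_type::S_CAT: return os << "CAT"; // Non-S_-prefix part of the member name, as recommended.
    case Pet_type::S_DOG: return os << "DOG";
    default: return os << "END_SENTINEL";
  }
}

std::istream& operator>>(std::istream& is, Pet_type& val)
{
  // enum_default == enum_sentinel here: unrecognized tokens map to the sentinel.
  val = flow::util::istream_to_enum(&is, Pet_type::S_END_SENTINEL, Pet_type::S_END_SENTINEL);
  return is;
}

With both operators in place, boost::lexical_cast<Pet_type>("dog") and the like should work as described.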

Informal convention suggestion: S_END_SENTINEL should be the sentinel member of Enum. E.g., see log::Sev.

Template Parameters
Enum: An enum class which must satisfy the following requirements or else risk undefined behavior (if it compiles): An element, enum_lowest, has a non-negative integer value. Subsequent elements are strictly monotonically increasing with increment 1, up to and including enum_sentinel. (Elements outside [enum_lowest, enum_sentinel] may exist, as long as their numeric values don't conflict with those in-range, but informally we recommend against this.) ostream << Enum exists and works without throwing for all values in range [enum_lowest, enum_sentinel). Each <<-serialized string must be distinct from the others. Each <<-serialized string must start with a non-digit and must consist only of alphanumerics and underscores. Exception: digit-leading is allowed if and only if !accept_num_encoding, though informally we recommend against it as a convention.
Parameters
is_ptr: Stream from which to deserialize.
enum_default: Value to return if the token does not match either the numeric encoding (if enabled) or the << encoding. enum_sentinel is a sensible (but not the only sensible) choice.
enum_sentinel: Enum value such that all valid deserializable values have numeric conversions strictly lower than it.
accept_num_encoding: If true, a numeric value is accepted as an encoding; otherwise it is not (and will yield enum_default like any other non-matching token).
case_sensitive: If true, then the token must exactly equal an ostream<< encoding of a non-sentinel Enum; otherwise it may equal it modulo different case.
enum_lowest: The lowest Enum value. Its integer value is very often 0, sometimes 1. Behavior undefined if it is negative.
Returns
See above.

◆ key_exists()

template<typename Container >
bool flow::util::key_exists ( const Container &  container,
const typename Container::key_type &  key 
)

Returns true if and only if the given key is present at least once in the given associative container.

Template Parameters
Container: Associative container type (boost::unordered_map, std::set, etc.). In particular must have the members find(), end(), and key_type.
Parameters
container: Container to search.
key: Key to find.
Returns
See above.

◆ ostream_op_string()

template<typename ... T>
std::string flow::util::ostream_op_string ( T const &...  ostream_args)

Equivalent to ostream_op_to_string() but returns a new string by value instead of writing to the caller's string.

This is useful at least in constructor initializers, where it is not possible to first declare a stack variable.
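
For example (hypothetical class; flow::util include omitted):

#include <string>

class Widget
{
public:
  explicit Widget(int id)
    // No room for a named ostringstream here; build the string in a single expression instead.
    : m_label(flow::util::ostream_op_string("widget [", id, ']'))
  {
  }
private:
  const std::string m_label;
};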

With the C++11-y use of move semantics in STL it should be no slower than using ostream_op_to_string() – meaning, it is no slower, period, as this library now requires C++11.

Template Parameters
...T: See ostream_op_to_string().
Parameters
ostream_args: See ostream_op_to_string().
Returns
Resulting std::string.

◆ ostream_op_to_string()

template<typename ... T>
void flow::util::ostream_op_to_string ( std::string *  target_str,
T const &...  ostream_args 
)

Writes to the specified string, as if the given arguments were each passed, via << in sequence, to an ostringstream, and then the result were appended to the aforementioned string variable.

Tip: It works nicely, 99% as nicely as simply <<ing an ostream; but certain language subtleties mean you may have to fully qualify some template instances among ostream_args. Do so if you receive a "deduced incomplete pack" (clang) or similar error, as I have seen when using, e.g., chrono::symbol_format formatter (which the compile error forced me to qualify as: symbol_format<char, ostream::traits_type>).

See also
log::Thread_local_string_appender for an even more efficient version of this for some applications that can also enable a continuous stream across multiple stream-writing statements over time.
Template Parameters
...T: Each type T is such that os << t, with types T const & t and ostream& os, builds and writes t to os, returning lvalue os. Usually in practice this means the existence of ostream& operator<<(ostream&, T const &) or ostream& operator<<(ostream&, T) overload, the latter usually for basic types T. See also tip above, if compiler is unable to deduce a given T (even when it would deduce it in os << t).
Parameters
target_str: Pointer to the string to which to append.
ostream_args: One or more arguments, such that each argument arg is suitable for os << arg, where os is an ostream.

◆ schedule_task_at()

template<typename Scheduled_task_handler >
Scheduled_task_handle flow::util::schedule_task_at ( log::Logger logger_ptr,
const Fine_time_pt at,
bool  single_threaded,
Task_engine task_engine,
Scheduled_task_handler &&  task_body_moved 
)

Identical to schedule_task_from_now() except the time is specified in absolute terms.

Performance note

The current implementation is such that there is no performance benefit to using schedule_task_from_now(at - Fine_clock::now(), ...) over schedule_task_at(at, ...). Therefore, if it is convenient for caller's code reuse to do the former, there is no perf downside to it, so feel free.

Parameters
logger_ptr: See schedule_task_from_now().
at: Fire at this absolute time. If this is in the past, it will fire ASAP.
single_threaded: See schedule_task_from_now().
task_engine: See schedule_task_from_now().
task_body_moved: See schedule_task_from_now().
Returns
See schedule_task_from_now().
Template Parameters
Scheduled_task_handler: See schedule_task_from_now().

◆ schedule_task_from_now()

template<typename Scheduled_task_handler >
Scheduled_task_handle flow::util::schedule_task_from_now ( log::Logger logger_ptr,
const Fine_duration from_now,
bool  single_threaded,
Task_engine task_engine,
Scheduled_task_handler &&  task_body_moved 
)

Schedule the given function to execute in a certain amount of time: A handy wrapper around Timer (asio's timer facility).

Compared to using Timer, this has far simplified semantics at the cost of certain less-used features of Timer. Recommend using this facility when sufficient; otherwise use Timer directly. The trade-offs are explained below, but first:

Semantics

Conceptually this is similar to JavaScript's ubiquitous (albeit single-threaded) setTimeout() feature. The given function shall execute as if post()ed onto the given Task_engine; unless successfully canceled by scheduled_task_cancel(X), where X is the (optionally used and entirely ignorable) returned handle. Barring unrelated crashes/etc. there are exactly three mutually exclusive outcomes of this function executing:

  • It runs at the scheduled time, with the bool short_fire arg to it set to false.
  • It runs before the scheduled time, with short_fire == true. To trigger this, use scheduled_task_short_fire() before the scheduled time.
    • If this loses the race with normal firing, the short-fire function will indicate that via its return value.
  • It never runs. To trigger this, use scheduled_task_cancel() before the scheduled time.
    • If this loses the race with normal firing, the cancel function will indicate that via its return value.

All related functions are thread-safe w/r/t a given returned Scheduled_task_handle, if single_threaded == false. In addition, for extra performance (internally, by omitting certain locking), set single_threaded = true only if you can guarantee the following:

  • *task_engine is run()ning in no more than one thread throughout all work with the returned Scheduled_task_handle including the present function.
  • Any calls w/r/t the returned Scheduled_task_handle are also – if at all – called from that one thread.

The minimum is step 1: Call schedule_task_at() or schedule_task_from_now(). Optionally, if one saves the return value X from either, one can do step 2: scheduled_task_short_fire(X) or scheduled_task_cancel(X), which will succeed (return true) if called sufficiently early. There is no step 3; any subsequent calls on X will fail (return false).
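
A minimal sketch of steps 1 and 2 (a null Logger is assumed acceptable here; Fine_duration is assumed, as elsewhere in Flow, to be boost.chrono-based):

flow::util::Task_engine task_engine; // run() from exactly one thread, matching single_threaded == true below.

// Step 1: schedule.
auto task = flow::util::schedule_task_from_now(nullptr, // No logging.
                                               boost::chrono::seconds(5),
                                               true, // single_threaded.
                                               &task_engine,
                                               [](bool short_fire)
{
  // short_fire == false: fired on schedule; true: fired early via scheduled_task_short_fire().
});

// Step 2 (optional): try to prevent it from running; returns false if it lost the race with normal firing.
const bool canceled = flow::util::scheduled_task_cancel(nullptr, task);

task_engine.run(); // If not canceled, the task body runs from within run() ~5 seconds in.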

Simplifications over Timer

  • There is no need to, separately, create a Timer object; set the firing time; and kick off the asynchronous wait. One call does everything, and there is no object to maintain. (Sole exception to the latter: if you want to be able to cancel or short-circuit the task later on, you can optionally save the return value and later call a scheduled_task_*() function on it.)
  • You need not worry about internal errors – no need to worry about any Error_code passed in to your task function.
  • Cancellation semantics are straightforward and intuitive, lacking corner cases of vanilla Timer. To wit:
    • A successful scheduled_task_cancel() call means the task will not execute. (Timer::cancel() means it will still execute but with operation_aborted code.)
    • Timer::cancel() can result in ~3 behaviors: it already executed, so it does nothing; it has not yet executed but was JUST about to execute, so it will still execute with non-operation_aborted code; it has not yet executed, so now it will execute but with operation_aborted. With this facility, it's simpler: it either succeeds (so acts per previous bullet – does not execute); or it fails, hence it will have executed (and there is no Error_code to decipher).
  • Short-firing semantics are arguably more intuitive. With Timer, cancel() actually means short-firing (firing ASAP) but with a special Error_code. One can also change the expiration time to the past to short-fire in a different way. With this facility, cancellation means it won't run; and scheduled_task_short_fire() means it will fire early; that's it.

Features lost vs. Timer

  • A Timer can be reused repeatedly. This might be more performant than using this facility repeatedly, since (internally) that approach creates a Timer each time. Note: We have no data about the cost of initializing a new Timer, other than the fact I peeked at the boost.asio source and saw that the construction of a Timer isn't obviously trivial/cheap – which does NOT mean it isn't just cheap in reality, only that that's not immediately clear.
    • A rule of thumb in answering the question, 'Is this facility fine to just call repeatedly, or should we reuse a Timer for performance?': It might not be fine if and only if timer firings and/or cancellations occur many times a second in a performance-sensitive environment. E.g., if it's something fired and/or canceled every second repeatedly, it's fine; if it's packet pacing that must sensitively fire near the resolution limit of the native Timer facility, it might not be fine, and it's safer to use Timer repeatedly.
  • A Timer can be re-scheduled to fire at an arbitrary different time than originally set. We provide no such facility. (We could provide such an API at the cost of API and implementation complexity; it's a judgment call, but I feel at that point just use a Timer.)
  • One can (incrementally) schedule 2+ tasks to fire at the scheduled time on one Timer; this facility only takes exactly 1 task, up-front. (We could provide such an API, but again this feels like it defeats the point.)
  • Timer has certain informational accessors (like one that returns the scheduled firing time) that we lack. (Again, we could provide this also – but why?)

    Todo:
    We could eliminate schedule_task_from_now()'s potential limitation versus Timer wherein each call internally constructs a new Timer. A pool of Timers could be maintained internally to implement this. This may or may not be worth the complexity, but if the API can remain identically simple while cleanly eliminating the one perf-related reason to choose Timer over this simpler facility, then that is a clean win from the API user's point of view. By comparison, the other possible improvements mentioned would complicate the API, which makes them less attractive.
See also
util::Timer doc header for native Timer resolution limitations (which apply to quite-low from_now values).
Note
Design note: This is a small, somewhat C-style set of functions – C-style in that it returns a handle on which to potentially call more functions as opposed to just being a class with methods. This is intentional, because Timer already provides the stateful class. The picture is slightly muddled because we DO provide some "methods" – so why not make it a class after all, just a simpler one than Timer? Answer: I believe this keeps the API simple: Step 1: Schedule it. Step 2 (optional): Short-fire or cancel it. There are no corner cases introduced as might have been via increased potential statefulness inherent with a class. But see the following to-do.
Todo:
schedule_task_from_now() and surrounding API provides an easy way to schedule a thing into the future, but it is built on top of boost.asio util::Timer directly; an intermediate wrapper class around this would be quite useful in its own right so that all boost.asio features including its perf-friendliness would be retained along with eliminating its annoyances (around canceling-but-not-really and similar). Then scheduled_task_from_now() would be internally even simpler, while a non-annoying util::Timer would become available for more advanced use cases. echan may have such a class (in a different project) ready to adapt (called Serial_task_timer). I believe it internally uses integer "task ID" to distinguish between scheduled tasks issued in some chronological order, so that boost.asio firing a task after it has been canceled/pre-fired can be easily detected.
Parameters
logger_ptr  Logging, if any – including in the background – will be done via this logger.
from_now  Fire ASAP once this time period passes (0 to fire ASAP). A negative value has the same effect as 0.
single_threaded  Set to a true value if and only if, basically, *task_engine is single-threaded, and you promise not to call anything on the returned Scheduled_task_handle except from that same thread. More formally, see above.
task_engine  The Task_engine onto which the given task may be post()ed (or equivalent).
task_body_moved  The task to execute within *task_engine unless successfully canceled. See template param doc below also regarding Strand and other executor binding.
Returns
Handle to the scheduled task which can be ignored in most cases. If you want to cancel, short-fire, etc. subsequently, save this (by value) and operate on it subsequently, e.g., with scheduled_task_cancel().
Template Parameters
Scheduled_task_handler  Completion handler with signature compatible with void (bool short_fired). This allows for standard boost.asio semantics, including associating with an executor such as a boost::asio::strand. In particular you may pass in: bind_executor(S, F), where F(bool short_fire) is the handler, and S is a Strand (or other executor). Binding to a Strand will ensure the fired or short-fired body will not execute concurrently with any other handler also bound to it.
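
For instance, a sketch of the bind_executor() technique just mentioned (assuming a Task_engine run() by multiple threads and a strand my_strand associated with it; get_logger(), task_engine, and my_strand are hypothetical placeholders):

auto task = schedule_task_from_now(get_logger(), boost::chrono::milliseconds(100),
                                   false, // Multiple threads run() *task_engine => cannot promise single_threaded.
                                   &task_engine,
                                   boost::asio::bind_executor(my_strand,
                                                              [](bool short_fire)
{
  // This body will not execute concurrently with any other handler bound to my_strand.
}));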

◆ scheduled_task_cancel()

bool flow::util::scheduled_task_cancel ( log::Logger *  logger_ptr,
Scheduled_task_handle  task 
)

Attempts to prevent the execution of a previously scheduled (by schedule_task_from_now() or similar) task.

For semantics, in the context of the entire facility, see schedule_task_from_now() doc header.

Parameters
logger_ptr  See scheduled_task_short_fire().
task  See scheduled_task_short_fire().
Returns
true if the task has not executed and will NEVER execute, AND no other scheduled_task_cancel() call with the same arg has succeeded before this one. false if the present call occurred too late to stop the task: it has already fired or short-fired, or will imminently do so.

◆ scheduled_task_canceled()

bool flow::util::scheduled_task_canceled ( log::Logger *  logger_ptr,
Scheduled_task_const_handle  task 
)

Returns whether a previously scheduled (by schedule_task_from_now() or similar) task has been canceled.

Note that this and scheduled_task_fired() cannot both be true (but see thread safety note below).

Thread safety notes in the scheduled_task_fired() doc header apply equally here.

Parameters
logger_ptr  See scheduled_task_fired().
task  See scheduled_task_fired().
Returns
See above.

◆ scheduled_task_fired()

bool flow::util::scheduled_task_fired ( log::Logger *  logger_ptr,
Scheduled_task_const_handle  task 
)

Returns whether a previously scheduled (by schedule_task_from_now() or similar) task has already fired.

Note that this and scheduled_task_canceled() cannot both be true (but see thread safety note below).

Also note that, while this call is thread-safe, if !single_threaded was used in the original scheduling call, then the returned value may already be out of date by the time this function returns, even when re-checked immediately in the same thread. However, if single_threaded (and one indeed properly uses task from one thread only), then it is guaranteed this value remains consistent/correct synchronously in the caller's thread, until code in that thread actively changes it (e.g., by canceling task).

Parameters
logger_ptr  See scheduled_task_fires_from_now_or_canceled().
task  See scheduled_task_fires_from_now_or_canceled().
Returns
See above.

◆ scheduled_task_fires_from_now_or_canceled()

Fine_duration flow::util::scheduled_task_fires_from_now_or_canceled ( log::Logger *  logger_ptr,
Scheduled_task_const_handle  task 
)

Returns how long remains until a previously scheduled (by schedule_task_from_now() or similar) task fires; or negative time if that point is in the past; or special value if the task has been canceled.

This is based solely on what was specified when scheduling it; it may be different from when it will actually fire or has fired. However, a special value (see below) is returned, if the task has been canceled (scheduled_task_cancel()).

Parameters
logger_ptr  Logging, if any, will be done synchronously via this logger.
task  (Copy of) the handle returned by a previous schedule_task_*() call.
Returns
Positive duration if it is set to fire in the future; negative or zero duration otherwise; or special value Fine_duration::max() to indicate the task has been canceled. Note a non-max() (probably negative) duration will be returned even if it has already fired.
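
Illustrative sketch of interpreting the return value (task being a handle from an earlier schedule_task_*() call; get_logger() is a hypothetical Logger source):

const Fine_duration left = scheduled_task_fires_from_now_or_canceled(get_logger(), task);
if (left == Fine_duration::max())
{
  // Task was canceled.
}
else if (left.count() <= 0)
{
  // The scheduled fire point is now or in the past (the task may or may not have actually fired yet).
}
else
{
  // The task is scheduled to fire `left` from now.
}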

◆ scheduled_task_short_fire()

bool flow::util::scheduled_task_short_fire ( log::Logger *  logger_ptr,
Scheduled_task_handle  task 
)

Attempts to reschedule a previously scheduled (by schedule_task_from_now() or similar) task to fire immediately.

For semantics, in the context of the entire facility, see schedule_task_from_now() doc header.

Parameters
logger_ptr  See schedule_task_from_now().
task  (Copy of) the handle returned by a previous schedule_task_*() call.
Returns
true if the task will indeed have executed soon with short_fire == true. false if it has or will soon have executed with short_fire == false; or if it has already been successfully canceled via scheduled_task_cancel().

◆ sequence_to_inverted_lookup_map() [1/2]

template<typename Map , typename Sequence >
void flow::util::sequence_to_inverted_lookup_map ( Sequence const &  src_seq,
Map *  target_map 
)

Given a generic sequence (integer -> object), generates a generic map (object -> integer) providing inverse lookup.

See the 3-arg overload if you want to provide a more complex lookup function to store something else based on each index.

A naive way of implementing such lookups otherwise would be a linear search for the object; using this instead spends RAM to avoid those slow searches.

Template Parameters
Sequence  Sequence such as std::vector<T> or std::array<T>. Informally, T should be something light-weight and hence usable as a key type for a map type (Map).
Map  Map that maps T from Sequence to size_t. Example: std::map<T, size_t>.
Parameters
src_seq  Input sequence.
target_map  Output map. Note it will not be pre-cleared; informally, this means one can shove 2+ lookup maps into one. If null, behavior is undefined (assertion may trip).
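
A minimal sketch, assuming a small vector of strings as the sequence:

const std::vector<std::string> names = { "alpha", "beta", "gamma" };
std::map<std::string, size_t> idx_by_name;
sequence_to_inverted_lookup_map(names, &idx_by_name);
// Now idx_by_name["beta"] == 1, i.e., the index of "beta" within names; no linear search needed.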

◆ sequence_to_inverted_lookup_map() [2/2]

template<typename Map , typename Sequence >
void flow::util::sequence_to_inverted_lookup_map ( Sequence const &  src_seq,
Map *  target_map,
const Function< typename Map::mapped_type(size_t)> &  idx_to_map_val_func 
)

Similar to the 2-arg overload of sequence_to_inverted_lookup_map() but with the ability to store a value based on the index into the input sequence instead of that index itself.

See the 2-arg overload.

Template Parameters
Sequence  Sequence such as std::vector<T> or std::array<T>. Informally, T should be something light-weight and hence usable as a key type for a map type (Map).
Map  Map that maps T from Sequence to another type X. Example: unordered_map<T, X>, where an X can be computed from a size_t index.
Parameters
src_seq  See 2-arg overload.
target_map  See 2-arg overload.
idx_to_map_val_func  Given an index idx into src_seq, (*target_map)[src_seq[idx]] shall contain idx_to_map_val_func(idx). Use this arg to instead perform a second lookup before storing a value in *target_map. Use the 2-arg overload if you'd like to store the index itself.
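
For instance, a sketch that stores iterators into the source sequence rather than raw indices (the choice of mapped type here is hypothetical):

const std::vector<std::string> names = { "alpha", "beta", "gamma" };
std::map<std::string, std::vector<std::string>::const_iterator> it_by_name;
sequence_to_inverted_lookup_map(names, &it_by_name,
                                [&](size_t idx) { return names.cbegin() + idx; });
// Now it_by_name["beta"] is an iterator pointing at names[1].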

◆ setup_auto_cleanup()

template<typename Cleanup_func >
Auto_cleanup flow::util::setup_auto_cleanup ( const Cleanup_func &  func)

Provides a way to execute arbitrary (cleanup) code at the exit of the current block.

Simply save the returned object into a local variable that will go out of scope when your code block exits. Example:

{
  X* x = create_x();
  auto cleanup = util::setup_auto_cleanup([&]() { delete_x(x); });
  // Now delete_x(x) will be called no matter how the current { block } exits.
  // ...
}
Todo:
setup_auto_cleanup() should take a function via move semantics.
Template Parameters
Cleanup_func  Any type such that given an instance Cleanup_func f, the expression f() is valid.
Parameters
func  func() will be called when cleanup is needed.
Returns
A light-weight object that, when it goes out of scope, will cause func() to be called.

◆ size_unit_convert()

template<typename From , typename To >
size_t flow::util::size_unit_convert ( From  num_froms)

Answers the question: what is the smallest integer number of Tos sufficient to verbatim store the given number of Froms? (From and To are POD types.)

For example, one needs 1 uint64_t to store 1, 2, 3, or 4 uint16_ts, hence size_unit_convert<uint16_t, uint64_t>(1 or 2 or 3 or 4) == 1. Similarly, 5 or 6 or 7 or 8 -> 2. It works in the opposite direction, too; if we are storing uint64_ts in multiples of uint16_t, then 1 -> 4, 2 -> 8, 3 -> 12, etc.

To be clear, when From bit width is smaller than To bit width, some of the bits will be padding and presumably unused. For example, raw data buffers of arbitrary bytes are often arranged in multi-byte "words."
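
Expressing the above numeric examples in code (assert used purely for illustration):

// Storing uint16_t values in uint64_t "words": 1-4 values fit in 1 word; 5-8 need 2.
assert((flow::util::size_unit_convert<uint16_t, uint64_t>(4) == 1));
assert((flow::util::size_unit_convert<uint16_t, uint64_t>(5) == 2));
// Opposite direction: each uint64_t takes 4 uint16_t units; so 3 of them take 12.
assert((flow::util::size_unit_convert<uint64_t, uint16_t>(3) == 12));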

Template Parameters
From  The POD type of the values to be stored (the Froms).
To  The POD type of the array that would store the Froms.
Parameters
num_froms  How many Froms does one want to encode in an array of Tos?
Returns
How many Tos are sufficient to encode num_froms Froms verbatim?

◆ subtract_with_floor()

template<typename Minuend , typename Subtrahend >
bool flow::util::subtract_with_floor ( Minuend *  minuend,
const Subtrahend &  subtrahend,
const Minuend &  floor = 0 
)

Performs *minuend -= subtrahend, subject to a floor of floor.

Avoids underflow/overflow to the extent it's reasonably possible, but no more. The return value indicates whether the floor was hit; this allows one to chain high-performance subtractions like this:

double t = 44;
int x = rnd(); // Suppose x == 123, for example.
// Avoids the 2nd, 3rd computation altogether, as the first detects that x >= 44, and thus t == 0 regardless.
subtract_with_floor(&t, long_computation()) &&
subtract_with_floor(&t, another_long_computation());
Template Parameters
Minuend  Numeric type.
Subtrahend  Numeric type, such that given Subtrahend s, Minuend(s) is something reasonable for all s involved.
Parameters
minuend  *minuend is set to either (*minuend - subtrahend) or floor, whichever is higher.
subtrahend  Ditto.
floor  Ditto. Negatives are OK. Typically it's best to keep the magnitude of this small.
Returns
true if *minuend == floor at function exit; false if *minuend > floor.

◆ swap() [1/4]

template<typename Allocator , bool S_SHARING_ALLOWED>
void swap ( Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob1,
Basic_blob< Allocator, S_SHARING_ALLOWED > &  blob2,
log::Logger *  logger_ptr = 0
)

Equivalent to blob1.swap(blob2).

Parameters
blob1  Object.
blob2  Object.
logger_ptr  The Logger implementation to use in this routine (synchronously) only. Null allowed.

◆ swap() [2/4]

template<bool S_SHARING_ALLOWED>
void swap ( Blob_with_log_context< S_SHARING_ALLOWED > &  blob1,
Blob_with_log_context< S_SHARING_ALLOWED > &  blob2 
)

On top of the similar Basic_blob-related swap(), this logs using the stored log context of blob1.

Parameters
blob1  See super-class related API.
blob2  See super-class related API.

◆ swap() [3/4]

template<typename Key , typename Mapped , typename Hash , typename Pred >
void swap ( Linked_hash_map< Key, Mapped, Hash, Pred > &  val1,
Linked_hash_map< Key, Mapped, Hash, Pred > &  val2 
)

Equivalent to val1.swap(val2).

Parameters
val1  Object.
val2  Object.

◆ swap() [4/4]

template<typename Key , typename Hash , typename Pred >
void swap ( Linked_hash_set< Key, Hash, Pred > &  val1,
Linked_hash_set< Key, Hash, Pred > &  val2 
)

Equivalent to val1.swap(val2).

Parameters
val1  Object.
val2  Object.

◆ time_since_posix_epoch()

boost::chrono::microseconds flow::util::time_since_posix_epoch ( )

Get the current POSIX (Unix) time as a duration from the Epoch time point.

This is the amount of time – according to the user-settable system clock – that has passed since the POSIX (Unix) Epoch, January 1st, 1970, 00:00:00 UTC, not counting any leap seconds inserted or deleted between then and now.

The boost::chrono duration type is chosen so as to support the entire supported resolution of the OS-exposed system clock (but probably no more than that).

Known use cases, alternatives

  • Logging of time stamps. Output the raw value; or use boost.locale to output ceil<seconds>() in the desired human-friendly form, splicing in the left-over microseconds where desired (boost.locale lacks formatters for sub-second-resolution time points). However see below for a typically-superior alternative.
  • By subtracting return values of this at various points in time from each other, as a crude timing mechanism. (Various considerations make it just that – crude – and best replaced by flow::Fine_clock and the like. Moreover see flow::perf::Checkpointing_timer.)
  • It's a decent quick-and-dirty random seed.

Update/subtleties re. time stamp output

Using this for time stamp output is no longer needed or convenient: a much nicer way presents itself via the boost.chrono I/O-v2 time_point-outputting ostream<< overload. Just grab the time_point boost::chrono::system_clock::now() (or another system_clock-originated value); its default-formatted ostream<< output will include date, time with microsecond+ precision, and a time-zone specifier (one can choose UTC or local time).

However, as of this writing, it is not possible to directly obtain just the microsecond+ part of this, in isolation, according to the boost.chrono docs. (One could hack it by taking a substring.) Quote: "Unfortunately there are no formatting/parsing sequences which indicate fractional seconds." From: https://www.boost.org/doc/libs/1_76_0/doc/html/chrono/users_guide.html#chrono.users_guide.tutorial.i_o.system_clock_time_point_io
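
A brief sketch of both approaches (the boost.chrono I/O header named in the comment is the usual one but is an assumption here; the rand() seeding is merely the quick-and-dirty use case mentioned above):

// Nicer time stamps: default boost.chrono I/O-v2 formatting of a system_clock time_point.
// (Typically requires #include <boost/chrono/chrono_io.hpp>.)
std::cout << boost::chrono::system_clock::now() << '\n';

// Quick-and-dirty random seed using the present function:
std::srand(unsigned(flow::util::time_since_posix_epoch().count()));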

Returns
A duration representing how much time has passed since the Epoch reference point (could be negative if before it).

◆ to_mbit_per_sec()

template<typename Time_unit , typename N_items >
double flow::util::to_mbit_per_sec ( N_items  items_per_time,
size_t  bits_per_item = 8 
)

Utility that converts a bandwidth in arbitrary units in both numerator and denominator to the same bandwidth in megabits per second.

The input bandwidth is given in "items" per Time_unit(1); where Time_unit is an arbitrary boost.chrono duration type that must be explicitly provided as input; and an "item" is defined as bits_per_item bits. Useful at least for logging. It's probably easiest to understand by example (see below) rather than by parsing the description I just wrote.

To be clear (as C++ syntax is not super-expressive in this case) – the template parameter Time_unit is an explicit input to the function template, essentially instructing it as to in what units items_per_time is. Thus all uses of this function should look similar to:

// These are all equal doubles, because the (value, unit represented by value) pair is logically the same in each case.
// We'll repeatedly convert from 2400 mebibytes (1024 * 1024 bytes) per second, represented one way or another.
const size_t MB_PER_SEC = 2400;
// First give it as _mebibytes_ (2nd arg: bits per mebibyte) per _second_ (template arg).
const double mbps_from_mb_per_sec
  = flow::util::to_mbit_per_sec<chrono::seconds>(MB_PER_SEC, 1024 * 1024 * 8);
// Now give it as _bytes_ (2nd arg omitted in favor of very common default = 8) per _second_ (template arg).
const double mbps_from_b_per_sec
  = flow::util::to_mbit_per_sec<chrono::seconds>(MB_PER_SEC * 1024 * 1024);
// Now in _bytes_ per _hour_.
const double mbps_from_b_per_hour
  = flow::util::to_mbit_per_sec<chrono::hours>(MB_PER_SEC * 1024 * 1024 * 60 * 60);
// Finally give it in _bytes_ per _1/30th-of-a-second_ (i.e., per frame, when the frame rate is 30fps).
const double mbps_from_b_per_30fps_frame
  = flow::util::to_mbit_per_sec<chrono::duration<int, ratio<1, 30>>>(MB_PER_SEC * 1024 * 1024 / 30);
Note
Megabit (1000 x 1000 = 10^6 bits) =/= mebibit (1024 x 1024 = 2^20 bits); but the latter is only about 5% more.
Todo:
boost.unit "feels" like it would do this for us in some amazingly pithy and just-as-fast way. Because Boost.
Template Parameters
Time_unit  boost::chrono::duration<Rep, Period> for some specific Rep and Period. See boost::chrono::duration documentation. Example types: boost::chrono::milliseconds; boost::chrono::seconds; see example use code above.
N_items  Some (not necessarily integral) numeric type. Strictly speaking, any type convertible to double works.
Parameters
items_per_time  The value, in items per Time_unit(1) (where there are bits_per_item bits in 1 item), to convert to megabits per second. Note this need not be an integer.
bits_per_item  Number of bits in an item, where items_per_time is given as a number of items.
Returns
See above.