Flow 2.0.0
Flow project: Full implementation reference.
flow::util::Polled_shared_state< Shared_state_t > Class Template Reference

Optional-use companion to Thread_local_state_registry that enables the Polled_shared_state pattern, wherein from some arbitrary thread the user causes the extant thread-locally-activated threads to opportunistically collaborate on/using locked shared state, with the no-op fast-path gated by a high-performance, low-strictness atomic flag being false. More...

#include <thread_lcl.hpp>

Inheritance diagram for flow::util::Polled_shared_state< Shared_state_t >:
Collaboration diagram for flow::util::Polled_shared_state< Shared_state_t >:

Public Types

using Shared_state = Shared_state_t
 Short-hand for template parameter type. More...
 

Public Member Functions

template<typename... Ctor_args>
 Polled_shared_state (Ctor_args &&... shared_state_ctor_args)
 Forwards to the stored object's Shared_state ctor. More...
 
template<typename Task >
void while_locked (const Task &task)
 Locks the non-recursive shared-state mutex, such that no access or modification of the contents of the Shared_state shall concurrently occur; executes given task; and unlocks said mutex. More...
 
void * this_thread_poll_state ()
 To be called from a thread-local context in which you'll be checking poll_armed(), returns opaque pointer to save in your Thread_local_state_registry::Thread_local_state and pass to poll_armed(). More...
 
void arm_next_poll (void *thread_poll_state)
 To be called from any context (typically not the targeted thread-local context in which you'll be checking poll_armed(), though that works too), this causes the next poll_armed() called in the thread in which thread_poll_state was returned to return true (once). More...
 
bool poll_armed (void *thread_poll_state)
 If the given thread's poll-flag is not armed, no-ops and returns false; otherwise returns true and resets poll-flag to false. More...
 

Private Attributes

Thread_local_state_registry< std::atomic< bool > > m_poll_flag_registry
 An atomic "do-something" flag per thread; usually/initially false; armed to true by arm_next_poll() and disarmed by poll_armed(). More...
 
Mutex_non_recursive m_shared_state_mutex
 Protects m_shared_state. More...
 
Shared_state m_shared_state
 The managed Shared_state. More...
 

Detailed Description

template<typename Shared_state_t>
class flow::util::Polled_shared_state< Shared_state_t >

Optional-use companion to Thread_local_state_registry that enables the Polled_shared_state pattern, wherein from some arbitrary thread the user causes the extant thread-locally-activated threads to opportunistically collaborate on/using locked shared state, with the no-op fast-path gated by a high-performance, low-strictness atomic flag being false.

This Polled_shared_state pattern (I, ygoldfel, made that up... don't know if it's a known thing) is maybe best explained by example. Suppose we're using Thread_local_state_registry with the Thread_local_state type being T. Suppose that sometimes some event occurs in an arbitrary thread (for simplicity, say, not in any thread activated by the Thread_local_state_registry<T>) that requires each state to execute thread_locally_launch_rocket(). Lastly, suppose that upon launching the last required rocket, we must report success via report_success() from whichever thread did it.

However there are at least a couple of problems: the arbitrary thread cannot execute thread_locally_launch_rocket() on behalf of the other threads, so each thread must notice the request itself, ideally via a check so cheap that it does not slow down the common case where there is nothing to do; and knowing when the last required rocket has been launched (so as to report_success()) requires state shared across the threads and therefore locking.

To handle these challenges the pattern is as follows.

registry.while_locked([&](const auto& lock) // Any access across per-thread state is done while_locked().
{
  const auto& state_per_thread = registry.state_per_thread(lock);
  if (state_per_thread.empty()) { return; } // No missiles to launch for sure; forget it.
  // Load the shared state (while_locked()):
  missiles_to_launch_polled_shared_state.while_locked([&](set<T*>* threads_that_shall_launch_missiles)
  {
    // *threads_that_shall_launch_missiles is protected against concurrent change.
    for (const auto& state_and_mdt : state_per_thread)
    {
      T* const active_per_thread_t = state_and_mdt.first;
      threads_that_shall_launch_missiles->insert(active_per_thread_t);
    }
  });
  // *AFTER!!!* loading the shared state, arm the poll-flag:
  for (const auto& state_and_mdt : state_per_thread)
  {
    T* const active_per_thread_t = state_and_mdt.first;
    missiles_to_launch_polled_shared_state.arm_next_poll(active_per_thread_t->m_missile_launch_needed_poll_state);
    // (We arm every per-thread T; but it is possible and fine to do it only for some.)
    // Also note it might already be armed; this would keep it armed; no problem. Before the for()
    // the set<> might already have entries (launches planned, now we're adding possibly more to it).
  }
});

So that's the setup/arming; and now to consume it:

void opportunistically_launch_when_triggered() // Assumes: bool(registry.this_thread_state_or_null()) == true.
{
  T* const this_thread_state = registry.this_thread_state();
  if (!missiles_to_launch_polled_shared_state.poll_armed(this_thread_state->m_missile_launch_needed_poll_state))
  { // Fast-path! Nothing to do re. missile-launching.
    return;
  }
  // else: Slow-path. Examine the shared-state; do what's needed. Note: poll_armed() would now return false.
  missiles_to_launch_polled_shared_state.while_locked([&](set<T*>* threads_that_shall_launch_missiles)
  {
    if (threads_that_shall_launch_missiles->erase(this_thread_state) == 0)
    {
      // Already-launched? Bug? It depends on your algorithm. But the least brittle thing to do is likely:
      return; // Nothing to do (for us) after all.
    }
    // else: Okay: we need to launch, and we will, and we've marked our progress about it.
    thread_locally_launch_rocket();
    if (threads_that_shall_launch_missiles->empty())
    {
      report_success(); // We launched the last required missile... report success.
    }
  });
}

Hopefully that explains it. The pattern is somewhat rigid in shape yet flexible in content: the nature of Shared_state is arbitrary, and the above is probably its simplest form (though typically we suspect it will involve some container(s) tracking some subset of the extant T*s).

Though, perhaps an even simpler scenario is Shared_state being an empty struct Dummy {};, so that the arming of the atomic flags is the only information actually being transmitted. In the above example that would have been enough, were it not for the requirement to report_success() when the last missile is launched.

Performance

The fast-path reasoning is that (1) the arming event occurs rarely and therefore is not part of any fast-path; and (2) thread-local logic can detect poll_armed() == false first-thing and do nothing further. Internally we further facilitate speed: poll_armed() uses an atomic<bool> with an optimized memory-ordering setting that is nevertheless safe (impl details omitted here). Point is, if (!....poll_armed()) { return; } is a quite speedy check.

Last but not least: if Shared_state is empty (formally: is_empty_v<Shared_state> == true; informally: use, e.g., struct Dummy {};), then while_locked() will not be generated, and attempting to write code that calls it will cause a compile-time static_assert() failure. As noted earlier, using Polled_shared_state, despite the name, not for any actual shared state but only for the distributed thread-local flag arming/polling is a perfectly valid approach.
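
For instance, a flag-only setup might look like the following minimal sketch, assuming only the API documented on this page; wake_up_signal, poll_state, and where they are stored are hypothetical names:

struct Dummy {}; // Empty Shared_state_t: while_locked() is not generated; calling it would static_assert().
flow::util::Polled_shared_state<Dummy> wake_up_signal; // No ctor args needed for Dummy.

// In each thread-locally-activated thread, once (e.g., while setting up its Thread_local_state):
void* const poll_state = wake_up_signal.this_thread_poll_state(); // Save this in your Thread_local_state.

// From any thread, to nudge that particular thread:
wake_up_signal.arm_next_poll(poll_state);

// Back in the target thread, on its fast path:
if (wake_up_signal.poll_armed(poll_state))
{
  // Rare slow-path work here; poll_armed() will now return false until armed again.
}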

Template Parameters
Shared_state_t    A single object of this type shall be constructed and can be accessed, whether for reading or writing, via Polled_shared_state::while_locked(). It must be constructible via the ctor signature you choose when invoking the Polled_shared_state ctor (a template); the ctor args shall be forwarded to the Shared_state_t ctor. Note that it is not required to actually use a Shared_state and Polled_shared_state::while_locked(); in that case please let Shared_state_t be an empty struct type.
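
For example (a sketch with a hypothetical Shared_state_t whose ctor takes an arg; the arg is forwarded as described):

// Hypothetical Shared_state_t with a one-arg ctor.
struct Launch_plan
{
  explicit Launch_plan(std::size_t max_missiles) : m_max_missiles(max_missiles) {}
  std::size_t m_max_missiles;
};
flow::util::Polled_shared_state<Launch_plan> plan(64); // 64 is forwarded to the Launch_plan ctor.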

Definition at line 816 of file thread_lcl.hpp.

Member Typedef Documentation

◆ Shared_state

template<typename Shared_state_t >
using flow::util::Polled_shared_state< Shared_state_t >::Shared_state = Shared_state_t

Short-hand for template parameter type.

Definition at line 823 of file thread_lcl.hpp.

Constructor & Destructor Documentation

◆ Polled_shared_state()

template<typename Shared_state_t >
template<typename... Ctor_args>
flow::util::Polled_shared_state< Shared_state_t >::Polled_shared_state ( Ctor_args &&...  shared_state_ctor_args)

Forwards to the stored object's Shared_state ctor.

You should also, in thread-local context, memorize the pointer returned by this_thread_poll_state().

Next: outside thread-local context, use while_locked() to check/modify the Shared_state contents safely; then, for each relevant per-thread context, call this->arm_next_poll(x), where x is the saved this_thread_poll_state() result; this shall cause this->poll_armed() in that thread-local context to return true (once, until you arm_next_poll() it again).
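
In compact form (a sketch assuming the API described on this page; shared is a Polled_shared_state<...>, and my_state->m_poll_state is a hypothetical member of your Thread_local_state):

// 1. In thread-local context, once: memorize the opaque pointer.
my_state->m_poll_state = shared.this_thread_poll_state();

// 2. From some arbitrary thread: modify the Shared_state, *then* arm.
shared.while_locked([&](auto* shared_state) { /* Record what the target thread should do. */ });
shared.arm_next_poll(my_state->m_poll_state);

// 3. Back in the thread-local context, periodically:
if (shared.poll_armed(my_state->m_poll_state)) // Returns true once, until armed again.
{
  shared.while_locked([&](auto* shared_state) { /* Consume the recorded work. */ });
}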

Template Parameters
Ctor_args    See above.
Parameters
shared_state_ctor_args    See above.

Definition at line 1191 of file thread_lcl.hpp.

References flow::util::Polled_shared_state< Shared_state_t >::m_poll_flag_registry.

Member Function Documentation

◆ arm_next_poll()

template<typename Shared_state_t >
void flow::util::Polled_shared_state< Shared_state_t >::arm_next_poll ( void *  thread_poll_state)

To be called from any context (typically not the targeted thread-local context in which you'll be checking poll_armed(), though that works too), this causes the next poll_armed() called in the thread in which thread_poll_state was returned to return true (once).

Tip: Typically one would use arm_next_poll() inside a Thread_local_state_registry::while_locked() statement, perhaps cycling through all of Thread_local_state_registry::state_per_thread() and arming the poll-flags of all or some subset of those Thread_local_states.
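
For example (a sketch reusing the names from the class-level example above; psst is an assumed Polled_shared_state instance, and m_poll_state is a hypothetical member of T holding the this_thread_poll_state() result):

registry.while_locked([&](const auto& lock)
{
  for (const auto& state_and_mdt : registry.state_per_thread(lock))
  {
    // Arm every per-thread T; arming only a subset is also fine.
    psst.arm_next_poll(state_and_mdt.first->m_poll_state);
  }
});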

Parameters
thread_poll_state    Value from this_thread_poll_state() called from within the thread whose next poll_armed() you are targeting.

Definition at line 1221 of file thread_lcl.hpp.

◆ poll_armed()

template<typename Shared_state_t >
bool flow::util::Polled_shared_state< Shared_state_t >::poll_armed ( void *  thread_poll_state)

If the given thread's poll-flag is not armed, no-ops and returns false; otherwise returns true and resets poll-flag to false.

Use arm_next_poll(), typically from a different thread, to affect when this method returns true. Usually a true return means there has been some meaningful change to the stored Shared_state, and therefore you should examine it (and/or modify it) via while_locked() immediately.

Parameters
thread_poll_state    See arm_next_poll().
Returns
See above.

Definition at line 1251 of file thread_lcl.hpp.

◆ this_thread_poll_state()

template<typename Shared_state_t >
void * flow::util::Polled_shared_state< Shared_state_t >::this_thread_poll_state ( )

To be called from a thread-local context in which you'll be checking poll_armed(), returns opaque pointer to save in your Thread_local_state_registry::Thread_local_state and pass to poll_armed().

Returns
See above.

Definition at line 1215 of file thread_lcl.hpp.

◆ while_locked()

template<typename Shared_state_t >
template<typename Task >
void flow::util::Polled_shared_state< Shared_state_t >::while_locked ( const Task &  task)

Locks the non-recursive shared-state mutex, such that no access or modification of the contents of the Shared_state shall concurrently occur; executes given task; and unlocks said mutex.

Behavior is undefined (actually: deadlock) if task() calls this->while_locked() (the mutex is non-recursive).

Template Parameters
Task    Function object matching signature void F(Shared_state*).
Parameters
task    This will be invoked as follows: task(shared_state); shared_state shall point to the object stored in *this and constructed in our ctor.
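
For example (a minimal sketch; my_psst is an assumed Polled_shared_state<std::set<int>> instance):

my_psst.while_locked([&](std::set<int>* shared_state)
{
  shared_state->insert(42); // Safe: no concurrent access to the Shared_state while the mutex is held.
  // Do NOT call my_psst.while_locked() from inside here: the mutex is non-recursive (deadlock).
});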

Definition at line 1203 of file thread_lcl.hpp.

Member Data Documentation

◆ m_poll_flag_registry

template<typename Shared_state_t >
Thread_local_state_registry<std::atomic<bool> > flow::util::Polled_shared_state< Shared_state_t >::m_poll_flag_registry
private

An atomic "do-something" flag per thread; usually/initially false; armed to true by arm_next_poll() and disarmed by poll_armed().

Definition at line 903 of file thread_lcl.hpp.

Referenced by flow::util::Polled_shared_state< Shared_state_t >::Polled_shared_state().

◆ m_shared_state

template<typename Shared_state_t >
Shared_state flow::util::Polled_shared_state< Shared_state_t >::m_shared_state
private

The managed Shared_state.

Definition at line 909 of file thread_lcl.hpp.

◆ m_shared_state_mutex

template<typename Shared_state_t >
Mutex_non_recursive flow::util::Polled_shared_state< Shared_state_t >::m_shared_state_mutex
mutableprivate

Protects m_shared_state.

Definition at line 906 of file thread_lcl.hpp.


The documentation for this class was generated from the following file:
thread_lcl.hpp