|
Flow 2.0.0
Flow project: Full implementation reference.
|
Optional-use companion to Thread_local_state_registry that enables the Polled_shared_state pattern, wherein from some arbitrary thread the user causes the extant thread-locally-activated threads to opportunistically collaborate on/using locked shared state, with the no-op fast path gated by a high-performance/low-strictness atomic flag being false.
More...
#include <thread_lcl.hpp>
Public Types | |
| using | Shared_state = Shared_state_t |
| Short-hand for template parameter type. More... | |
Public Member Functions | |
| template<typename... Ctor_args> | |
| Polled_shared_state (Ctor_args &&... shared_state_ctor_args) | |
| Forwards to the stored object's Shared_state ctor. More... | |
| template<typename Task > | |
| void | while_locked (const Task &task) |
| Locks the non-recursive shared-state mutex, such that no access or modification of the contents of the Shared_state shall concurrently occur; executes given task; and unlocks said mutex. More... | |
| void * | this_thread_poll_state () |
| To be called from a thread-local context in which you'll be checking poll_armed(), returns opaque pointer to save in your Thread_local_state_registry::Thread_local_state and pass to poll_armed(). More... | |
| void | arm_next_poll (void *thread_poll_state) |
| To be called from any context (typically not the targeted thread-local context in which you'll be checking poll_armed(), though that works too), this causes the next poll_armed() called in the thread in which thread_poll_state was returned to return true (once). More... | |
| bool | poll_armed (void *thread_poll_state) |
| If the given thread's poll-flag is not armed, no-ops and returns false; otherwise returns true and resets poll-flag to false. More... | |
Private Attributes | |
| Thread_local_state_registry< std::atomic< bool > > | m_poll_flag_registry |
| An atomic "do-something" flag per thread; usually/initially false; armed to true by arm_next_poll() and disarmed by poll_armed(). More... | |
| Mutex_non_recursive | m_shared_state_mutex |
| Protects m_shared_state. More... | |
| Shared_state | m_shared_state |
| The managed Shared_state. More... | |
Optional-use companion to Thread_local_state_registry that enables the Polled_shared_state pattern, wherein from some arbitrary thread the user causes the extant thread-locally-activated threads to opportunistically collaborate on/using locked shared state, with the no-op fast path gated by a high-performance/low-strictness atomic flag being false.
This Polled_shared_state pattern (I, ygoldfel, made that up... don't know if it's a known thing) is maybe best explained by example. Suppose we're using Thread_local_state_registry with Thread_local_state type being T. Suppose that sometimes some event occurs, in an arbitrary thread (for simplicity let's say that is not in any thread activated by the Thread_local_state_registry<T>) that requires each state to execute thread_locally_launch_rocket(). Lastly, suppose that upon launching the last rocket required, we must report success via report_success() from whichever thread did it.
However there are 2-ish problems at least:

- Each relevant thread must execute thread_locally_launch_rocket(). There's no way to signal them to necessarily do it immediately; but we can do it opportunistically in any thread that has already called this_thread_state() (been activated).
- The threads must somehow collaborate on tracking which of them have yet to launch, so that whichever one launches the last rocket can report_success().

To handle these challenges the pattern is as follows.

- Let Shared_state be set<T*>: the set of Ts (each belonging to an activated thread that has called Thread_local_state_registry<T>::this_thread_state()) that should execute, and have not yet executed, thread_locally_launch_rocket().
- Wherever Thread_local_state_registry<T> is declared/instantiated – e.g., statically – also declare Polled_shared_state<set<T*>>, immediately before the registry.
- In the T ctor – which by definition executes only in an activated thread and only once – prepare an opaque atomic-flag-state by executing this_thread_poll_state() and saving the returned void* into a non-static data member of T (say, void* const m_missile_launch_needed_poll_state).

So that's the setup/arming; and now to consume it:

- In each thread in which this_thread_state() has been called (and therefore a T exists), whenever the opportunity arises, check the poll-flag; and in the rare case where it is armed, do thread_locally_launch_rocket().

Hopefully that explains it. It is a little rigid and a little flexible; the nature of Shared_state is arbitrary, and the above is probably the simplest form of it (but typically we suspect it will usually involve some container(s) tracking some subset of extant T*s).
Though, perhaps an even simpler scenario might be Shared_state being an empty struct Dummy {};, so that the atomic-flags being armed are the only info actually being transmitted. In the above example that would have been enough – if not for the requirement to report_success(), when the last missile is launched.
The fast-path reasoning is that (1) the arming event occurs rarely and therefore is not part of any fast path; and (2) thread-local logic can detect poll_armed() == false first thing and do nothing further. Internally we facilitate speed further by poll_armed() using an atomic<bool> with an optimized memory-ordering setting that is nevertheless safe (impl details omitted here). Point is, if (!...poll_armed(...)) { return; } shall be a quite speedy check.
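To make the fast-path reasoning concrete, here is a minimal stand-in sketch of an arm/poll flag pair using std::atomic<bool>. The specific memory orderings shown (relaxed load for the common disarmed case, release store paired with an acquire exchange) are an assumption for illustration; Flow's actual internal choices are omitted from these docs.

```cpp
#include <atomic>
#include <cassert>

// Hypothetical stand-in for the internal per-thread poll-flag; not Flow's code.
std::atomic<bool> s_poll_flag{false};

// Fast path: a relaxed load cheaply detects the common not-armed case.
bool poll_armed_sketch()
{
  if (!s_poll_flag.load(std::memory_order_relaxed))
  {
    return false; // Not armed: speedy no-op.
  }
  // Armed (rare): disarm; acquire pairs with the release store in arming.
  return s_poll_flag.exchange(false, std::memory_order_acquire);
}

void arm_next_poll_sketch()
{
  // Release: writes made before arming become visible to the armed poller.
  s_poll_flag.store(true, std::memory_order_release);
}

int run_fast_path_demo()
{
  int armed_polls = 0;
  if (poll_armed_sketch()) { ++armed_polls; } // Initially disarmed: false.
  arm_next_poll_sketch();
  if (poll_armed_sketch()) { ++armed_polls; } // Armed: true once...
  if (poll_armed_sketch()) { ++armed_polls; } // ...then disarmed again: false.
  return armed_polls; // 1
}
```

Each arming yields exactly one true poll, which is the "(once)" semantic described for arm_next_poll().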
Last but not least: If Shared_state is empty (formally: is_empty_v<Shared_state> == true; informally: use, e.g., struct Dummy {};), then while_locked() will not be generated, and trying to write code that calls it will cause a compile-time static_assert() failure. As noted earlier, using Polled_shared_state, despite the name, for no shared state at all but just the thread-local distributed flag arming/polling is a perfectly valid approach.
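The rocket-launching example above can be sketched end-to-end with self-contained stand-ins. Every type and function below (Thread_state, worker(), the globals) is a hypothetical toy modeling the documented pattern with std::atomic, std::mutex, and std::set; it is not Flow's actual API.

```cpp
#include <atomic>
#include <mutex>
#include <set>
#include <thread>
#include <vector>

struct Thread_state // Plays the role of T in the example above.
{
  std::atomic<bool> m_poll_flag{false}; // Stand-in for the opaque poll-state.
  bool m_launched{false};
};

std::mutex s_mutex;                  // Stand-in for the shared-state mutex.
std::set<Thread_state*> s_to_launch; // Shared_state: who still must launch.
std::atomic<int> s_success_reports{0};

void report_success() { ++s_success_reports; }

void worker(Thread_state* state)
{
  while (true)
  {
    // Poll; if disarmed we'd normally just continue with other work.
    if (!state->m_poll_flag.exchange(false))
    {
      std::this_thread::yield();
      continue;
    }
    state->m_launched = true; // thread_locally_launch_rocket() stand-in.
    std::lock_guard<std::mutex> lock(s_mutex); // while_locked() stand-in.
    s_to_launch.erase(state);
    if (s_to_launch.empty()) { report_success(); } // Last launcher reports.
    return;
  }
}

int run_pattern_demo()
{
  constexpr int N = 3;
  std::vector<Thread_state> states(N);
  {
    std::lock_guard<std::mutex> lock(s_mutex);
    for (auto& s : states) { s_to_launch.insert(&s); }
  }
  // The "event" occurs: arm every thread's poll-flag (arm_next_poll() x N).
  for (auto& s : states) { s.m_poll_flag.store(true); }
  std::vector<std::thread> threads;
  for (auto& s : states) { threads.emplace_back(worker, &s); }
  for (auto& t : threads) { t.join(); }
  return s_success_reports.load(); // Exactly one thread reported success.
}
```

Because the erase-and-check happens under the same mutex that protects the set, exactly one thread observes the set become empty and calls report_success(), regardless of launch order.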
| Shared_state_t | A single object of this type shall be constructed and can be accessed, whether for reading or writing, using Polled_shared_state::while_locked(). It must be constructible via the ctor signature you choose to use when invoking the Polled_shared_state ctor (template). The ctor args shall be forwarded to the Shared_state_t ctor. Note that it is not required to actually use a Shared_state and Polled_shared_state::while_locked(). In that case please let Shared_state_t be an empty struct type. |
Definition at line 816 of file thread_lcl.hpp.
| using flow::util::Polled_shared_state< Shared_state_t >::Shared_state = Shared_state_t |
Short-hand for template parameter type.
Definition at line 823 of file thread_lcl.hpp.
| flow::util::Polled_shared_state< Shared_state_t >::Polled_shared_state | ( | Ctor_args &&... | shared_state_ctor_args | ) |
Forwards to the stored object's Shared_state ctor.
You should also, in thread-local context, memorize the pointer returned by this_thread_poll_state().
Next: outside the thread-local context, use while_locked() to check/modify the Shared_state contents safely; then, for each relevant per-thread context, call this->arm_next_poll(x), where x is the saved this_thread_poll_state() result; this shall cause this->poll_armed() in that thread-local context to return true (once, until you again arm_next_poll() it).
| Ctor_args | See above. |
| shared_state_ctor_args | See above. |
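The forwarding behavior of this ctor can be illustrated with a minimal hypothetical holder type (Forwarding_holder below is an assumption for illustration, not Flow's class): whatever arguments you pass are perfect-forwarded straight to the stored object's ctor.

```cpp
#include <string>
#include <utility>

// Hypothetical sketch of the forwarding idiom the Polled_shared_state ctor uses.
template<typename Shared_state_t>
class Forwarding_holder
{
public:
  template<typename... Ctor_args>
  explicit Forwarding_holder(Ctor_args&&... shared_state_ctor_args)
    // Forward all args, preserving value category, to Shared_state_t's ctor.
    : m_shared_state(std::forward<Ctor_args>(shared_state_ctor_args)...) {}

  const Shared_state_t& state() const { return m_shared_state; }

private:
  Shared_state_t m_shared_state;
};

std::string demo_forwarding()
{
  // The two args are forwarded to std::string's (count, char) ctor.
  Forwarding_holder<std::string> holder(3u, 'x');
  return holder.state(); // "xxx"
}
```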
Definition at line 1191 of file thread_lcl.hpp.
References flow::util::Polled_shared_state< Shared_state_t >::m_poll_flag_registry.
| void flow::util::Polled_shared_state< Shared_state_t >::arm_next_poll | ( | void * | thread_poll_state | ) |
To be called from any context (typically not the targeted thread-local context in which you'll be checking poll_armed, though that works too), this causes the next poll_armed() called in the thread in which thread_poll_state was returned to return true (once).
Tip: Typically one would use arm_next_poll() inside a Thread_local_state_registry::while_locked() statement, perhaps cycling through all of Thread_local_state_registry::state_per_thread() and arming the poll-flags of all or some subset of those Thread_local_states.
| thread_poll_state | Value from this_thread_poll_state() called from within the thread whose next poll_armed() you are targeting. |
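The tip above (arming all per-thread flags while holding the registry lock) can be sketched with self-contained stand-ins. The globals and arm_all_polls() below are hypothetical toys modeling the loop shape, not Flow's registry API.

```cpp
#include <atomic>
#include <mutex>
#include <vector>

// Toy stand-ins: one atomic poll-flag per activated thread, guarded registry.
std::mutex s_registry_mutex;
std::vector<std::atomic<bool>*> s_thread_poll_states;

void arm_all_polls()
{
  // Analogous to cycling through Thread_local_state_registry::state_per_thread()
  // inside its while_locked() and calling arm_next_poll() on each entry.
  std::lock_guard<std::mutex> lock(s_registry_mutex);
  for (auto* flag : s_thread_poll_states)
  {
    flag->store(true); // arm_next_poll() stand-in: next poll returns true once.
  }
}

int demo_arm_all()
{
  std::atomic<bool> f1{false}, f2{false};
  s_thread_poll_states = {&f1, &f2};
  arm_all_polls();
  return int(f1.load()) + int(f2.load()); // Both flags armed: 2.
}
```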
Definition at line 1221 of file thread_lcl.hpp.
| bool flow::util::Polled_shared_state< Shared_state_t >::poll_armed | ( | void * | thread_poll_state | ) |
If the given thread's poll-flag is not armed, no-ops and returns false; otherwise returns true and resets poll-flag to false.
Use arm_next_poll(), typically from a different thread, to affect when this method returns true. Usually that means there has been some meaningful change to the stored Shared_state, and therefore you should look there (and/or modify it) via while_locked() immediately.
| thread_poll_state | See arm_next_poll(). |
Definition at line 1251 of file thread_lcl.hpp.
| void * flow::util::Polled_shared_state< Shared_state_t >::this_thread_poll_state |
To be called from a thread-local context in which you'll be checking poll_armed(): returns an opaque pointer to save in your Thread_local_state_registry::Thread_local_state and pass to poll_armed() (and, from any thread, to arm_next_poll()).
Definition at line 1215 of file thread_lcl.hpp.
| void flow::util::Polled_shared_state< Shared_state_t >::while_locked | ( | const Task & | task | ) |
Locks the non-recursive shared-state mutex, such that no access or modification of the contents of the Shared_state shall concurrently occur; executes given task; and unlocks said mutex.
Behavior is undefined (actually: deadlock) if task() calls this->while_locked() (the mutex is non-recursive).
| Task | Function object matching signature void F(Shared_state*). |
| task | This will be invoked as follows: task(shared_state). shared_state shall point to the object stored in *this and constructed in our ctor. |
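The lock/run/unlock semantics (and the non-recursive-mutex caveat) can be sketched with a minimal hypothetical mini-class; Locked_state below is an illustration of the contract, not Flow's implementation.

```cpp
#include <mutex>

// Sketch of while_locked() semantics: lock a non-recursive mutex, invoke the
// task with a pointer to the stored state, unlock on scope exit.
template<typename Shared_state_t>
class Locked_state
{
public:
  template<typename Task>
  void while_locked(const Task& task)
  {
    // std::mutex is non-recursive: calling while_locked() from inside
    // task() would deadlock, matching the documented caveat.
    std::lock_guard<std::mutex> lock(m_mutex);
    task(&m_shared_state);
  }

private:
  std::mutex m_mutex;
  Shared_state_t m_shared_state{};
};

int demo_while_locked()
{
  Locked_state<int> state;
  state.while_locked([](int* n) { *n = 42; });    // Modify under lock.
  int result = 0;
  state.while_locked([&](int* n) { result = *n; }); // Read under lock.
  return result; // 42
}
```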
Definition at line 1203 of file thread_lcl.hpp.
|
private |
An atomic "do-something" flag per thread; usually/initially false; armed to true by arm_next_poll() and disarmed by poll_armed().
Definition at line 903 of file thread_lcl.hpp.
Referenced by flow::util::Polled_shared_state< Shared_state_t >::Polled_shared_state().
|
private |
The managed Shared_state.
Definition at line 909 of file thread_lcl.hpp.
|
mutable private |
Protects m_shared_state.
Definition at line 906 of file thread_lcl.hpp.