Flow-IPC 1.0.1: Full implementation reference.
shm / shm_fwd.hpp
/* Flow-IPC: Shared Memory
 * Copyright 2023 Akamai Technologies, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in
 * compliance with the License.  You may obtain a copy
 * of the License at
 *
 *   https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in
 * writing, software distributed under the License is
 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing
 * permissions and limitations under the License. */

/// @file
#pragma once

/**
 * Modules for SHared Memory (SHM) support.  At a high level, ipc::shm is a collection of sub-modules, each
 * known as a *SHM-provider* -- as of this writing most prominently ipc::shm::classic and
 * ipc::shm::arena_lend::jemalloc -- plus provider-agnostic support for SHM-stored native-C++ STL-compliant
 * data structures.  See the doc headers for each sub-namespace for further information.
 *
 * That said, here's an overview.
 *
 * ### SHM-providers (ipc::shm::classic, ipc::shm::arena_lend) ###
 * Generally speaking there are two approaches to making use of ipc::shm:
 *   -# directly; or
 *   -# via ipc::session.
 *
 * While there are applications for approach 1, approach 2 is best by default.  It makes the setup
 * of a SHM environment far, far easier -- it is essentially done for you, with many painful details, such as
 * naming and cleanup (whether after graceful exit or otherwise), taken care of without your input.  Furthermore
 * it standardizes APIs in such a way as to make it possible to swap between the available SHM-providers without
 * changing code.  (There are some exceptions to this; they are well explained in docs.)
 *
 * Regardless of approach, you will need to choose which SHM-provider to use for your application.  I (ygoldfel)
 * would informally recommend looking at it from the angle of approach 2, as the ipc::session paradigm might
 * clarify the high-level differences between the SHM-providers.  I will now *briefly* describe and contrast the
 * SHM-providers.
 *
 * As of this writing there are two known types of SHM-providers: *arena-sharing* and *arena-lending*, of which
 * only the latter is formalized in terms of its general properties.
 * - Arena-sharing: We have just one such SHM-provider available as of this writing: shm::classic (SHM-classic;
 *   boost.ipc-SHM; boost.interprocess-SHM).  In brief: it's built around shm::classic::Pool_arena, a single-pool
 *   thin wrapper around an OS SHM-object (pool) handle, with a boost.ipc-supplied simple memory-allocation
 *   algorithm.  Both processes in an IPC conversation (session) share one SHM arena (in this case consisting of
 *   1 pool); both can allocate in that shared arena and cross-deallocate.  Internally, auto-deallocation is
 *   handled via an atomic ref-count stored directly in the same SHM arena as the allocated object.  Due to
 *   SHM-classic's intentional simplicity, the lend/borrow aspects of it (where side/process 1 of a session
 *   *lends* a SHM-allocated object to side/process 2, which *borrows* it, thus incrementing the conceptual
 *   cross-process ref-count to 2) are properties of the arena object `Pool_arena` itself.
 * - Arena-lending: We have one SHM-provider as of this writing: shm::arena_lend::jemalloc; it is an application
 *   of the formalized arena-lending-SHM-provider paradigm specifically to the commercial-grade
 *   3rd-party open-source `malloc()` provider (memory manager): [jemalloc](https://jemalloc.net).  It could be
 *   applied to other memory managers; e.g., tcmalloc.  (At the moment we feel jemalloc is easily the most
 *   performant and customizable open-source `malloc()`er around.)  Generally the memory-manager-agnostic aspects
 *   live in shm::arena_lend, while the SHM-jemalloc-specific ones go into shm::arena_lend::jemalloc.
 *   A major aspect of arena-lending SHM-providers is the separation of the arena from the lend/borrow
 *   engine (SHM-session).  (Those aspects live in session::shm::arena_lend and session::shm::arena_lend::jemalloc;
 *   again, the memory-manager-agnostic and -non-agnostic aspects respectively.)  With an arena-lending
 *   SHM-provider, *each* of the two processes in a session creates/maintains its own arena, in which the other
 *   side cannot allocate; then, via the session object, the other side *borrows* an allocated object, which it
 *   can at least read (but not deallocate; and by default not write-to).  Thus process 1 maintains a
 *   jemalloc-managed arena; process 2 borrows objects from it and reads them; and conversely process 2 maintains
 *   a jemalloc-managed arena; process 1 borrows objects from it and reads them.  Hence there are 2 process-local
 *   *SHM-arenas* and 1 *SHM-session* for bidirectional lending/borrowing.
 *
 * shm::classic is deliberately minimalistic.  As a result it is very fast at setup (which involves, simply,
 * an OS SHM-open operation on each side) and at lend/borrow time (when a process wants to share a SHM-stored
 * datum with another process).  The negatives are:
 * - It is not (and would be -- at best -- extremely difficult to become)
 *   integrated with a commercial-grade memory manager, with features such as anti-fragmentation and
 *   thread-caching; hence the allocation/deallocation of objects may, over time, be slower compared to
 *   heap-based allocation.  We rely on boost.ipc's algorithm, which lacks the maturity of a jemalloc; and while
 *   a custom one could surely replace it, it would be challenging to improve upon without bringing in a
 *   3rd-party product; such products are not usually designed around being placed *entirely* into shared memory.
 * - Two processes (at least) intensively write to the same memory area; in the presence of
 *   crashing/zombifying/bugs -- where one process goes bad, while others are fine -- this increases entropy and
 *   complicates recovery.  If process X of multiple co-sharing processes goes down or is ill, the entirety
 *   of the SHM-stored data in this system is suspect and should probably be freed and all algorithms restarted.
 *
 * There are no particular plans to make shm::classic more sophisticated or to formalize its type
 * ("arena-sharing") to be extensible to more variations.  It fulfills its purpose; and in fact it may be
 * suitable for many applications.
 *
 * In contrast shm::arena_lend is sophisticated.  A process creates an *arena* (or arenas);
 * one can allocate objects in arenas.  A real memory manager is in charge of the mechanics of allocation; except
 * where it would normally just `mmap()` a vaddr space for local heap use, it instead executes our internal hooks
 * that `mmap()` to a SHM-pool; SHM-pools are created and destroyed as needed.
 *
 * The other process might do the same.  It, thus, maintains its own memory manager, for allocations in SHM
 * invoked from that process.  Without further action, the two managers, and the arenas they maintain, are
 * independent and only touched by their respective processes.
 *
 * To be useful for IPC, one must be able to share the objects between the processes.  To do
 * so, during setup each process establishes a *SHM-session* to the other process (multiple
 * sessions if talking to multiple processes).  Then one registers each local arena with a
 * SHM-session; this means objects from that arena can be sent to the process on the opposing side of the
 * SHM-session.  From that point on, any object constructed in a given arena can be *lent* (sent) to any process
 * on the opposing side of a session with which that given arena has been registered.  This negates the negatives
 * of SHM-classic:
 * - A commercial-grade memory manager efficiently manages the in-SHM heap.  If you trust the performance of your
 *   regular `malloc()`/`new`/etc., then you can trust this equally.
 * - Process 1 does not write to (or at least does not allocate in) a pool managed by process 2; and vice versa.
 *   Hence if process X goes down or is ill, the arenas created by the other processes in the system can continue
 *   safely.
 *
 * The negatives are a large increase in complexity and novelty; and possible risks of sporadically increased
 * latency when SHM-allocating (as, internally, SHM-pool collections must be synchronized across
 * session-connected processes) and during setup (as, during the initial arena-lend, one may need to communicate
 * a large built-up SHM-pool collection).  Just to set up a session, one must provide an
 * ipc::transport::struc::Channel for the SHM-session's internal use, to synchronize pool collections and more.
 *
 * Lastly, as it stands, the arena-lending paradigm does lack one capability of SHM-classic; it is a fairly
 * advanced scenario and may or may not come up as an actual problem:
 *
 * Imagine long-lived application A and relatively short-lived application B, with (say) serially
 * appearing/ending processes B1, B2, B3 in chronological order.  A can allocate and fill an object X1 while B1
 * is alive; it will persist even after B1 dies and through B2 and B3; B1 through B3 can all read it.  But can B1
 * *itself* allocate such a persistent object?
 * - With shm::classic: Yes.  It's all one arena shared by everyone, readable and writable and allocatable by
 *   all.
 * - With shm::arena_lend: No.  Anything B1 allocates, by definition, must disappear once B1 exits.  The entire
 *   arena disappears by the time B2 appears.  B2 can read anything that *A* allocated, including before B2 was
 *   born, because A is alive, as is the arena it maintains; but B1 -- no.
 *
 * In the ipc::session paradigm this type of data is known as *app-scope*, in contrast to most data, which are
 * *session-scope*.  For data relevant only to each conversation A-B1, A-B2, A-B3, there is no asymmetry:
 * Internally there are 2 arenas in each of the 3 sessions, but conceptually it might as well be one common
 * arena, since both sides have symmetrical capabilities (allocate, read/write, lend; borrow, read).  So for
 * session-scope data shm::classic and shm::arena_lend are identical.
 *
 * ### STL support ###
 * The other major sub-module, as mentioned, is agnostic to the specific SHM-provider.  It allows one to store
 * complex native C++ data directly in SHM.  Namely, arbitrary combinations of STL-compliant containers,
 * `struct`s, fixed-length arrays, scalars, and even pointers are supported.  Both SHM-providers above
 * (shm::classic and shm::arena_lend::jemalloc) provide the semantics required to correctly plug in to this
 * system.  See the doc header for namespace shm::stl to continue exploring this topic.
*/

namespace ipc::shm
{

// Types.

// Find doc headers near the bodies of these compound types.

template<typename Arena>
struct Arena_to_borrower_allocator_arena;

} // namespace ipc::shm
Generated on Tue Mar 26 2024 02:49:35 for Flow-IPC by Doxygen 1.9.4