Flow 1.0.2
Flow project: Full implementation reference.
async_file_logger.cpp
/* Flow
 * Copyright 2023 Akamai Technologies, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in
 * compliance with the License. You may obtain a copy
 * of the License at
 *
 * https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in
 * writing, software distributed under the License is
 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing
 * permissions and limitations under the License. */

/// @file
#include "flow/log/async_file_logger.hpp"
#include "flow/log/detail/serial_file_logger.hpp"
#include "flow/error/error.hpp"
#include <boost/move/make_unique.hpp>
#include <algorithm>
#include <memory>
#include <utility>

namespace flow::log
{

// Implementations.

Async_file_logger::Async_file_logger(Logger* backup_logger_ptr,
                                     Config* config, const fs::path& log_path,
                                     bool capture_rotate_signals_internally) :
  /* Set up the logging *about our real attempts at file logging*, probably to a Simple_ostream_logger/cout;
   * or to null (meaning no such logging). Either way we will still attempt to actually log user-requested messages
   * to log_path! */
  Log_context(backup_logger_ptr, Flow_log_component::S_LOG),
  m_config(config), // Save pointer, not copy, of the config given to us. Hence thread safety is a thing.

  /* Set up throttling algorithm, which is always-on (see class doc header discussion).
   * Reminder that its output will be ignored, until/unless user calls throttling_cfg(true, ...). They can then
   * set their own config values as well (or reuse this default which they can access via throttling_cfg() accessor). */
  m_throttling_cfg({ Throttling_cfg::S_HI_LIMIT_DEFAULT }),
  m_pending_logs_sz(0), // No memory used yet by pending messages.
  m_throttling_now(false), // No throttling yet.
  m_throttling_active(false),

  // Any I/O operations done here are the only ones not done from m_async_worker thread.
  m_serial_logger(boost::movelib::make_unique<Serial_file_logger>(get_logger(), m_config, log_path)),
  /* ...Speaking of which: start the worker thread right now; synchronously; return from this ctor once it has
   * certified it is up. It will log (to *backup_logger_ptr if not null) about starting up right here, too, which
   * in practice would be the bulk of INFO-or-less-verbose log lines from us in my experience. */
  m_async_worker(backup_logger_ptr, util::ostream_op_string("Async_file_logger[", this, ']')),
  /* Set up a signal set object; this is a no-op until we .add() signals to it (which we may or may not do).
   * Whether we do or not is more significant than merely whether whatever handler we'd later register
   * via m_signal_set.async_wait() will be called: if we .add() zero signals, then any non-boost.asio
   * signal handler that is currently registered (such as some default OS handler; or the user's non-boost.asio
   * handler), or will be registered in the future, will continue to work undisturbed. If, however, we .add() one or
   * more signals (or, equivalently, list 1 or more signal numbers right here in the constructor call),
   * then we will REPLACE such non-boost.asio handlers with boost.asio's mechanism. (This behavior is
   * explicitly documented in boost.asio docs.) Therefore we must be careful not to mindlessly .add() handler(s),
   * and/or (equivalently) list 1 or more signal numbers in this constructor call here. The choice will be left to
   * later code which will .add() or not .add() deliberately. (Copy/paste warning: This cmnt was copy-pasted from some
   * vaguely similar code in flow::net_flow; but the 2 modules are barely related if at all, so....) */
  m_signal_set(*(m_async_worker.task_engine()))
{
  m_async_worker.start();

  FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                "Log-writing worker thread(s) have started around now; ready to work.");

  // Immediately open file in worker thread. Could be lazy about it but might as well log about any issue ASAP.
  log_flush_and_reopen(false); // Do it synchronously for good measure; doesn't matter but the reduced entropy is nice.

  if (capture_rotate_signals_internally)
  {
    FLOW_LOG_INFO("Setting up internal log-rotate-reopening signal handler. "
                  "CAUTION! User program MUST avoid using non-boost::asio::signal_set signal handling! "
                  "If it does use non-boost.asio, behavior is undefined.");

    // Add the classic log-rotation-inducing signal numbers.
    m_signal_set.add(SIGHUP);

    /* At this point, receiving those signals will NOT do whatever default behavior would happen -- but I believe
     * that unlike for SIGTERM/etc. the default behavior is no-op. Before actually reporting successful initialization
     * by exiting constructor, set up the handler that'll be called upon receiving the signals. */

    /* this->on_rotate_signal() (which performs the flush/reopen via m_serial_logger) will be called on signal
     * (or error).
     * Note that that function's contract (from its doc comment) is that it must execute in m_async_worker (see
     * m_signal_set() init above in the init section of ctor).
     * Indeed boost::asio::io_service semantics guarantee it'll run in m_async_worker (not some
     * no-man's-land signal handler thread of execution, as one might fear could be the case) for the same reason
     * the various post()ed tasks will run in m_async_worker. */
    m_signal_set.async_wait([this](const Error_code& sys_err_code, int sig_number)
    {
      on_rotate_signal(sys_err_code, sig_number);
    });
  } // if (capture_rotate_signals_internally)
  /* else if (!capture_rotate_signals_internally)
   * {
   *   Do NOT .add() anything; and don't async_wait() anything. As noted in comment at m_signal_set construction time,
   *   .add() does more than make it possible to .async_wait(). It also replaces any default OS or user's own
   *   non-boost.asio signal handling machinery with boost.asio's signal_set machinery. That can be quite draconian,
   *   so user must specifically set that option to true. If it's false (in all siblings of *this in entire app),
   *   then whatever signal handling machinery the user wants to set up for themselves (or leave at OS's
   *   discretion) will remain undisturbed. By the way, of course, they can use boost.asio machinery themselves too;
   *   it's just that doing so would still work even if capture_rotate_signals_internally were true, so that's
   *   not the dangerous scenario. (Copy/paste warning: This cmnt was copy-pasted from some
   *   vaguely similar code in flow::net_flow; but the 2 modules are barely related if at all, so....)
   * } */
} // Async_file_logger::Async_file_logger()
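
/* Illustrative only -- not part of the original source file: a minimal construction sketch based on the ctor above.
 * Per the ctor comments, backup_logger_ptr may be null (then there is simply no logging-about-logging); the Config
 * pointer and the log path below are placeholders supplied by the caller. */
#if 0
void async_file_logger_usage_sketch(Config* cfg)
{
  // Null backup logger; log to the given path; do not trap SIGHUP internally (see capture_rotate_signals_internally
  // discussion in the ctor above -- rotation can instead be driven via log_flush_and_reopen(); sketch further below).
  Async_file_logger file_logger(nullptr, cfg, "/var/log/my_app.log", false);

  FLOW_LOG_SET_CONTEXT(&file_logger, Flow_log_component::S_LOG);
  FLOW_LOG_INFO("Hello from the Async_file_logger usage sketch.");
} // Dtor flushes any pending log-requests and joins the worker thread (see ~Async_file_logger() below).
#endif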

Async_file_logger::Throttling_cfg Async_file_logger::throttling_cfg() const
{
  Lock_guard lock(m_throttling_mutex);
  return m_throttling_cfg;
}

bool Async_file_logger::throttling_active() const
{
  return m_throttling_active.load(std::memory_order_relaxed);
}

void Async_file_logger::throttling_cfg(bool active, const Throttling_cfg& cfg)
{
  assert((cfg.m_hi_limit > 0) && "Per contract, hi_limit must be positive.");

  /* Please see Impl section of class doc header for detailed discussion; also Throttling and m_throttling_states doc
   * headers. We just reiterate here that m_throttling_active and m_throttling_cfg are orthogonal to each other;
   * the latter is thus protected by mutex, while the former is atomic. */

  // Deal with `active`.

  const auto prev_active = m_throttling_active.exchange(active, std::memory_order_relaxed);
  if (prev_active != active)
  {
    FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                  "Config set: throttling feature active? [" << prev_active << "] => [" << active << "].");
  }
  /* Subtlety: The un-mutex-protected, relaxed-order-accessed (as opposed to with memory_order_seq_cst)
   * read/modify/write operation above is the reason we formally forbid this mutator being called concurrently
   * with itself. Odd behavior could possibly occur if the above statement were invoked ~concurrently with itself.
   * We could instead formally allow it and change the above to seq_cst. However: (1) in actual reality it's
   * very unlikely a reasonable user would want to call this concurrently to self anyway in a realistic dynamic-config
   * scenario -- usually people use a config-update thread -- so it's not very important; (2) as written, the entire
   * set of config is updated in one synchronized step, rather than splitting the m_throttling_cfg update from the
   * m_throttling_active one. It's just less entropy, and it hardly matters anyway. */

  // Deal with `cfg`.

  { // All this->m_ touched in { here } can concurrently change, unless we lock.
    Lock_guard lock(m_throttling_mutex);

    if (cfg.m_hi_limit != m_throttling_cfg.m_hi_limit)
    {
      const auto prev_throttling_now
        = m_throttling_now.exchange(m_pending_logs_sz >= static_cast<decltype(m_pending_logs_sz)>(cfg.m_hi_limit),
                                    std::memory_order_relaxed);
      // (m_throttling_now cannot change during lock; using .exchange() purely for convenience to grab prev value.)

      FLOW_LOG_INFO("Async_file_logger [" << this << "]: Config set: "
                    "hi_limit [" << m_throttling_cfg.m_hi_limit << "] => [" << cfg.m_hi_limit << "]. "
                    "Mem-use = [" << m_pending_logs_sz << "]; "
                    "throttling? = [" << prev_throttling_now << "] => [" << m_throttling_now << "]; "
                    "throttling feature active? = [" << active << "]. "
                    "Reminder: `throttling?` shall only be used if `throttling feature active?` is 1.");

      m_throttling_cfg.m_hi_limit = cfg.m_hi_limit;
    }
    /* else: As discussed in class doc header: no-op, unless they actually changed something; no state reset.
     * E.g., perhaps they changed `active` while passing-in `cfg = throttling_cfg()` unchanged. */
  } // Lock_guard lock(m_throttling_mutex);
} // Async_file_logger::throttling_cfg()
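
/* Illustrative only -- not part of the original source file: a sketch of driving the two throttling_cfg() overloads
 * above from a single config-update thread, per the contract in the comments (the accessor returns a copy; the
 * mutator must not be called concurrently with itself). The function name and the 1 GiB figure are arbitrary. */
#if 0
void apply_throttling_sketch(Async_file_logger* logger)
{
  auto cfg = logger->throttling_cfg(); // Copy of current knobs (S_HI_LIMIT_DEFAULT unless changed earlier).
  cfg.m_hi_limit = uint64_t(1) * 1024 * 1024 * 1024; // Throttle once ~1 GiB of RAM is used by queued log-requests.
  logger->throttling_cfg(true, cfg); // Install the new limit and activate the feature in one call.
}
#endif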

Async_file_logger::~Async_file_logger() // Virtual.
{
  using async::Synchronicity;

  {
    Lock_guard lock(m_throttling_mutex); // Careful: really_log()s may well be happening right now via m_async_worker.

    FLOW_LOG_INFO("Async_file_logger [" << this << "]: Deleting. Worker thread will flush "
                  "output if possible; then we will proceed to shut down. Current mem-use of queued "
                  "log-requests is [" << m_pending_logs_sz << "]; if it is large, this might take some time.");
  }

  /* Could do this from current thread (in fact if we simply deleted the following statement, that is what would
   * happen); that's still non-concurrent with other calls, as Serial_file_logger requires.
   * However there is a subtle and important reason why the following is currently actively desired:
   * By post()ing it -- meaning enqueuing it -- after any log-requests (function objects with a Log_request
   * capture in each), and then awaiting the completion, we ensure those log-requests are first serviced,
   * or more in English: any pending log messages are actually written to file.
   *
   * Were we to not do so, some aspects of Log_request would be leaked here; as of this writing the message and
   * the metadata. (They are not RAII for reasons explained in Log_request doc header.)
   *
   * Note: If we wanted to avoid this behavior -- e.g., if huge amounts of log-requests have been queued, this
   * statement can very much take quite a while, and maybe we want that, or maybe we don't -- and wanted to
   * avoid a leak, we'd need to come up with some new trick. For the time being though by doing this flush
   * we both get desired functional behavior and avoid the leak. */
  m_async_worker.post([this]() { m_serial_logger.reset(); },
                      Synchronicity::S_ASYNC_AND_AWAIT_CONCURRENT_COMPLETION);

  // Now thread will exit and be joined, etc.
}

void Async_file_logger::on_rotate_signal(const Error_code& sys_err_code, int sig_number)
{
  // We are in m_async_worker thread.

  if (sys_err_code == boost::asio::error::operation_aborted)
  {
    return; // Stuff is shutting down; just get out.
  }
  // else

  FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                "Internal log-rotate handler executed with signal number [" << sig_number << "].");

  if (sys_err_code)
  {
    // This is odd, but there's no need to freak out about anything else. Just log and get out.
    FLOW_ERROR_SYS_ERROR_LOG_WARNING();
    FLOW_LOG_WARNING("Async_file_logger [" << this << "]: "
                     "Internal signal handler executed with an error indicator. Strange! "
                     "Ignoring and continuing other operation.");
  }
  else
  {
    m_serial_logger->log_flush_and_reopen();
  }

  // Wait for it again, or else a rotation would only occur on the first signal but not thereafter.
  m_signal_set.async_wait([this](const Error_code& sys_err_code, int sig_number)
  {
    on_rotate_signal(sys_err_code, sig_number);
  });
} // Async_file_logger::on_rotate_signal()

void Async_file_logger::do_log(Msg_metadata* metadata, util::String_view msg) // Virtual.
{
  using util::String_view;
  using std::memcpy;

  assert(metadata);

  /* Our essential task is to push the log-request onto the m_async_worker queue; the code for processing
   * that log-request, once the queue gets to it in that worker thread, is called really_log() and is in here too.
   *
   * Key points about how the asynchronicity/logging/queueing works: We aim to return synchronously ASAP and leave the
   * potentially blocking (like if hard drive has to turn on from sleep) I/O ops to the worker thread asynchronously.
   * There are 2 pieces of data, *metadata and `msg`, to deal with, both of which essentially must be available when
   * we in fact write to file via m_serial_logger. By contract of do_log() and logs_asynchronously()==true:
   *   - We are to NOT copy *metadata; and therefore we are to `delete metadata` when done with it; i.e., after
   *     m_serial_logger->do_log() returns having written to file-system.
   *   - We MUST copy `msg` so we can use it asynchronously; we are to therefore free the copy ourselves once
   *     m_serial_logger->do_log() returns. (In reality this is all because `msg` is a shared thread-local thing that
   *     is reused in every subsequent and preceding FLOW_LOG_WARNING/etc. invocation in this thread. We technically
   *     don't know or care about that reality in here; only the *result* wherein we cannot use `msg` after do_log()
   *     synchronously returns.)
   *
   * Here's the Log_request encapsulating that stuff. See Log_request doc header for some nitty-gritty details. */
  const auto msg_sz = msg.size();
  Log_request log_request{ new char[msg_sz], msg_sz, metadata,
                           false }; // We can't know m_throttling_begins yet.
  memcpy(log_request.m_msg_copy, msg.data(), msg_sz);

  /* Before enqueuing that stuff, though, let's tally up the stats and otherwise proceed with the throttling algorithm.
   * Please see Impl section of class doc header for detailed discussion. Then come back here.
   * We rely on the background in that discussion frequently.
   *
   * Reminder: We should be careful to minimize computation in this section (but nowhere near as important to do so
   * compared to should_log()). Similarly keep the locked section as small as possible. */

  using logs_sz_t = decltype(m_pending_logs_sz);
  const auto logs_sz = mem_cost(log_request);
  auto& throttling_begins = log_request.m_throttling_begins;
  logs_sz_t limit;
  logs_sz_t pending_logs_sz; // For logging.
  logs_sz_t prev_pending_logs_sz;
  {
    Lock_guard lock(m_throttling_mutex);
    limit = static_cast<logs_sz_t>(m_throttling_cfg.m_hi_limit);
    prev_pending_logs_sz = m_pending_logs_sz;
    pending_logs_sz = (m_pending_logs_sz += logs_sz);
    /* Minor/subtlety: We could also remove the m_throttling_now check here and instead use
     * `throttling_begins = (m_throttling_now.exchange(true) == false)` in { body }. However we already had to lock
     * mutex for ~unrelated reasons, and therefore m_throttling_now cannot change at the moment; so we
     * might as well gate the integer comparisons on its value -- which rarely changes -- and only assign
     * it if its value would in fact change. */
    if ((!m_throttling_now.load(std::memory_order_relaxed))
        && (pending_logs_sz >= limit) && (prev_pending_logs_sz < limit))
    {
      m_throttling_now.store(true, std::memory_order_relaxed);
      throttling_begins = true;
    }
  }
  if (throttling_begins)
  {
    /* Log about it in the backup Logger (Logger-about-logging).
     * When throttling feature is disabled: arguably it is still worthy of being a WARNING: it is in fact warning
     * about a probably-bad situation (normal mem-use is roughly 0); and it is definitely useful information that
     * might cause end user to enable throttling. However, subjectively, if they haven't enabled throttling then
     * being faced with a WARNING (error message) is arguably excessive. Hence log it as INFO in that case.
     * (If they don't want these messages at all, they can always set m_hi_limit to a terabyte or something.)
     * Note: There's a similar statement inside really_log(). */
    const bool active = m_throttling_active.load(std::memory_order_relaxed);
    FLOW_LOG_WITH_CHECKING
      (active ? Sev::S_WARNING : Sev::S_INFO,
       "Async_file_logger [" << this << "]: "
       "do_log() throttling algorithm: a message reached hi_limit; next message-to-be => likely dropped, "
       "if feature active. Config: hi_limit [" << limit << "]. "
       "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
       "throttling? = 1 (see above); throttling feature active? = [" << active << "]. "
       "Reminder: `throttling?` shall only be used if `throttling feature active?` is 1. "
       "Limit-triggering message's contents follow: [" << msg << "].");
  }
#if 0 // Obv change to `if 1` if debugging + want to see it. Could just use TRACE but avoiding should_log() cost.
  else
  {
    FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                  "do_log() throttling algorithm: a message was processed; situation (reminder: beware concurrency): "
                  "Config: hi_limit [" << limit << "]. "
                  "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
                  "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                  "Message's contents follow: [" << msg << "].");
  }
#endif

  /* Done! State updated, and throttling_begins determined for really_log().
   *
   * Now for the enqueueing of the log-request. This is what will execute in m_async_worker, when it's its turn. */

  auto really_log = [this, log_request = std::move(log_request)]() mutable
  {
    const auto metadata = log_request.m_metadata;
    const String_view msg{log_request.m_msg_copy, log_request.m_msg_size};

    /* Throttling: do, essentially, the opposite of what do_log() did when issuing the log-request.
     * Again please refer to Impl section of class doc header for reasoning about this algorithm. */

    const auto logs_sz = mem_cost(log_request);
    // @todo ^-- Maybe instead save this in Log_request? Trade-off is RAM vs cycles (currently favoring RAM).
    bool throttling_ends = false;
    logs_sz_t limit; // For logging.
    logs_sz_t pending_logs_sz; // For logging.
    logs_sz_t prev_pending_logs_sz;
    {
      Lock_guard lock(m_throttling_mutex);
      limit = m_throttling_cfg.m_hi_limit; // Just for logging in this case.
      prev_pending_logs_sz = m_pending_logs_sz;
      assert((prev_pending_logs_sz >= logs_sz) && "Bug? really_log() has no matching do_log()?");
      pending_logs_sz = (m_pending_logs_sz -= logs_sz);
      /* Minor/subtlety: Versus the do_log() equivalent above, it looks more economical to check pending_logs_sz first,
       * since it just compares one value to 0 in this case (as opposed to 2 values and < and >=), and then both set
       * m_throttling_now and check its preceding value in one op. */
      if (pending_logs_sz == 0)
      {
        // m_throttling_now should be false; but detect whether this is the change-over from true as opposed to no-op.
        throttling_ends = (m_throttling_now.exchange(false, std::memory_order_relaxed) == true);
      }
    }

    if (throttling_ends)
    {
      // Log about it in the backup Logger (Logger-about-logging).
      FLOW_LOG_INFO("Async_file_logger [" << this << "]: last pending message was logged; "
                    "next message-to-be => likely first one to *not* be dropped, if throttling feature active. "
                    "Config: hi_limit [" << limit << "]. "
                    "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
                    "throttling? = 0 (see above); "
                    "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                    "Reminder: `throttling?` shall only be used if `throttling feature active?` is 1. "
                    "Queue-clearing message's contents follow: [" << msg << "].");

      // Log about it in file itself. (Performance in this block is not of huge import; this is a fairly rare event.)
      FLOW_LOG_SET_CONTEXT(m_serial_logger.get(), this->get_log_component());

      FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                    "really_log() throttling algorithm: last pending message was logged; "
                    "next message-to-be => likely first one to *not* be dropped, if throttling feature active. "
                    "Config: hi_limit [" << limit << "]. "
                    "Mem-use = [" << prev_pending_logs_sz << "] => 0; "
                    "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                    "Queue-clearing message is the one immediately following me in file. "
                    "Compare its time stamp to mine to see time lag due to queueing.");
    }
#if 0 // Obv change to `if 1` if debugging + want to see it. Could just use TRACE but avoiding should_log() cost.
    else
    {
      FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                    "really_log() throttling algorithm: a message is about to be written to file; "
                    "situation (reminder: beware concurrency): Config: hi_limit [" << limit << "]. "
                    "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
                    "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                    "Message's contents follow: [" << msg << "].");
    }
#endif

    // We are in m_async_worker thread, as m_serial_logger requires. Go!
    m_serial_logger->do_log(metadata, msg);

    // Oh and obey throttling_begins for this log-request, if it was computed to be true in do_log().
    if (log_request.m_throttling_begins)
    {
      // Performance in this block is not of huge import; this is a fairly rare event.
      FLOW_LOG_SET_CONTEXT(m_serial_logger.get(), this->get_log_component());

      // Rationale w/r/t the Sev choice here: see similar `if (throttling_begins)` block above.
      const bool active = m_throttling_active.load(std::memory_order_relaxed);
      FLOW_LOG_WITH_CHECKING
        (active ? Sev::S_WARNING : Sev::S_INFO,
         "Async_file_logger [" << this << "]: "
         "really_log() throttling algorithm: The preceding message, when its log-request was "
         "earlier enqueued, caused pending-logs RAM usage to exceed then-configured hi_limit. "
         "If throttling feature was active, subsequent messages-to-be (log-requests) were dropped. "
         "We only just got around to being able to log it (satisfy log-request) after all the "
         "preceding ones in FIFO order. Nowadays: Config: hi_limit [" << limit << "]. "
         "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
         "throttling feature active? = [" << active << "]. "
         "Limit-crossing (in the past) message is the one immediately preceding the current one "
         "you're reading in file. "
         "Compare its time stamp to mine to see time lag due to queueing.");
    } // if (log_request.m_throttling_begins)

    /* Last but not least, as discussed in Log_request doc header, we must do this explicitly.
     * To reiterate: as of this writing, if this lambda body is not actually executed, then these will leak.
     * As of this writing we ensure it *is* executed, in the Async_file_logger dtor at the latest. */
    delete[] log_request.m_msg_copy;
    delete metadata;
  }; // really_log =

  // Enqueue it, after whatever others are already pending (hopefully not too many; ideally none).
  m_async_worker.post(std::move(really_log));
} // Async_file_logger::do_log()

size_t Async_file_logger::mem_cost(const Log_request& log_request) // Static.
{
  /* We should strive to be quick here (also almost certainly we will be inlined, with full optimization anyway).
   * This is called in every do_log(), which can be not-infrequent; and really_log() in the background thread --
   * though extreme efficiency there is less important.
   *
   * This is an estimate; it need not be exact, as we use it as merely a heuristic for when to throttle. For example
   * personally I (ygoldfel) only vaguely understand how function<> stores a lambda and its captures.
   * That said it should be roughly proportional to the memory used. */

  return sizeof(async::Task) // Don't forget the function object itself. Then the captures:
         + sizeof(Async_file_logger*) // `this`.
         + sizeof(Log_request) // The main capture is this. Firstly its shallow size.
         + deep_size(log_request); // And its size beyond sizeof (its members combined).

  // See end of deep_size() for style/pattern notes.
} // Async_file_logger::mem_cost()

size_t Async_file_logger::deep_size(const Log_request& val) // Static.
{
  using log::deep_size;

  return val.m_msg_size // m_msg_copy = char*; we saved its pointee raw array's mem-use here.
         // m_metadata is Msg_metadata*:
         + sizeof(Msg_metadata) // Msg_metadata shallow (to it) parts are non-shallow to *us* (it's in heap).
         + deep_size(*val.m_metadata); // Msg_metadata's non-shallow mem-use (recursively implement deep_size() ptrn).

  /* Style/maintainability notes: At the risk of being overly formal: There's a pattern in play here + in mem_cost():
   * The total-size of aggregate object `X x` is sizeof(X) + D, where D is its mem-use beyond the shallow object.
   * To compute D, take each member `Y m_y` (or ptr or ref variant) of X; obtain its mem-use beyond sizeof(Y);
   * sum these values to get D. Namely for each m_y:
   *   - It might have no non-shallow mem-use (e.g., a `float`). 0.
   *   - Its mem-use might be stored in a nearby data member; e.g., `char* m_str; size_t m_str_len`.
   *     Use that result.
   *   - It may itself be an aggregate object or pointer to one in which case:
   *     - If there is no deep_size(const Y&), it has no non-shallow mem-use. 0.
   *     - If there is deep_size(const Y&), its returned value = the result. (If m_y is a ptr, pass *m_y as arg.)
   *     - deep_size() should itself follow this pattern, recursively.
   *   - *Either way*: If m_y is a ptr or ref (to heap), also add sizeof(Y).
   *
   * So in our case you see, when we got to Log_request, we took its sizeof() and then added its deep_size()
   * (this function).
   *
   * Rationale for pattern: Keep things organized; provide reusable deep_size() APIs where applicable. E.g.,
   * deep_size(const string&) may be useful elsewhere.
   *
   * Rationale: Why not include sizeof() in deep_size()? Answer: Consider
   *   struct X { struct { int m_a; float m_b; } m_c; bool m_d; }; X x{ ... };
   * Its mem-use is simply sizeof(X). So we get the sizeof() part, which is mandatory, for "free," without having
   * to enumerate members at various levels in an error-prone way, plus the boiler-plate of needing deep_size() at
   * each level; and so on. So it makes sense to separate the sizeof() part -- executed once at the top level -- from
   * the deep_size() part which is potentially recursive but only needs to explicitly define deep_size() for those
   * types with non-zero non-shallow mem-use and/or ptrs/refs; and only needs to mention such members (and no others)
   * in mem-use formulas.
   *
   * @todo We could probably get fancier with it, like having a deep_size<T>(const T* t) specialization that would
   * forward to `sizeof T + deep_size(*t)` if it exists... or something like that. For now seems like overkill. */
} // Async_file_logger::deep_size()
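
/* Illustrative only -- not part of the original source file: the sizeof() + deep_size() pattern described above,
 * applied to a hypothetical aggregate. All names here (Hypothetical_req, deep_size(const std::string&)) are made up
 * for illustration (assumes <string> is available), and capacity() is just one reasonable string estimate. */
#if 0
// Reusable building block, as suggested by the "deep_size(const string&) may be useful elsewhere" note above.
size_t deep_size(const std::string& val)
{
  return val.capacity(); // Heap buffer owned by the string (ignores SSO; good enough for a throttling-style heuristic).
}

struct Hypothetical_req
{
  std::string m_name;  // Aggregate member with its own deep_size() overload.
  char* m_payload;     // Heap array...
  size_t m_payload_sz; // ...whose length lives in a nearby member, per the `char* m_str; size_t m_str_len` bullet.
  int m_flags;         // No non-shallow mem-use: contributes 0.
};

size_t deep_size(const Hypothetical_req& val)
{
  // Only the beyond-sizeof() parts; per the pattern, sizeof(Hypothetical_req) is added once by the caller.
  return deep_size(val.m_name) + val.m_payload_sz;
}

size_t total_mem_cost(const Hypothetical_req& val)
{
  return sizeof(Hypothetical_req) + deep_size(val); // Shallow part + non-shallow part, as in mem_cost() above.
}
#endif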

void Async_file_logger::log_flush_and_reopen(bool async)
{
  using async::Synchronicity;

  // We are in some unspecified user thread that isn't m_async_worker.

  FLOW_LOG_INFO("Async_file_logger [" << this << "]: Non-worker (user) thread "
                "requested [" << (async ? "asynchronous" : "synchronous") << "] file flush/close (if needed) and "
                "re-open, such as for rotation or at initialization.");

  m_async_worker.post([this]() { m_serial_logger->log_flush_and_reopen(); },
                      async ? Synchronicity::S_ASYNC : Synchronicity::S_ASYNC_AND_AWAIT_CONCURRENT_COMPLETION);
}
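
/* Illustrative only -- not part of the original source file: a sketch of driving rotation externally, when the ctor's
 * capture_rotate_signals_internally is false, via the public log_flush_and_reopen() just above. The boost.asio wiring
 * below is merely one way a user *might* trap SIGHUP themselves (assumes <boost/asio.hpp> is available). */
#if 0
void external_rotation_sketch(Async_file_logger* logger)
{
  boost::asio::io_context ioc;
  boost::asio::signal_set sigs(ioc, SIGHUP); // User-owned signal handling; the logger itself .add()ed nothing.
  sigs.async_wait([logger](const boost::system::error_code& err, int /*sig_number*/)
  {
    if (!err)
    {
      logger->log_flush_and_reopen(); // async == true by default: flush/reopen runs on the logger's worker thread.
    }
    // A real handler would re-arm async_wait() here for subsequent rotations, as on_rotate_signal() above does.
  });
  ioc.run();
}
#endif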

bool Async_file_logger::should_log(Sev sev, const Component& component) const // Virtual.
{
  if (!m_serial_logger->should_log(sev, component)) // In normal conditions this is likeliest to return false.
  {
    return false;
  }
  // else

  /* As explained in doc header (please see there for discussion), throttling -- if on -- can prevent logging.
   * It is important that the following code is as fast as possible, though by placing it below the above
   * forwarded should_log() check we've performed a key optimization already, as in a properly configured
   * system verbosity knobs should throw out most messages-to-be. */

  if (!m_throttling_active.load(std::memory_order_relaxed))
  {
    return true;
  }
  // else

  const auto throttled = m_throttling_now.load(std::memory_order_relaxed);

#if 0 // Obv change to `if 1` if debugging + want to see it. Could just use TRACE but avoiding should_log() cost.
  FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                "should_log(sev=[" << sev << "]; component=[" << component.payload_enum_raw_value() << "]) "
                "throttling algorithm situation (reminder: beware concurrency): "
                "Throttling feature active? = 1; throttling? = [" << throttled << "].");
#endif

  return !throttled;
} // Async_file_logger::should_log()

bool Async_file_logger::logs_asynchronously() const // Virtual.
{
  return true;
}

} // namespace flow::log