// Flow 2.0.0: async_file_logger.cpp (full implementation reference)
/* Flow
 * Copyright 2023 Akamai Technologies, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in
 * compliance with the License. You may obtain a copy
 * of the License at
 *
 *   https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in
 * writing, software distributed under the License is
 * distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing
 * permissions and limitations under the License. */

/// @file
#include "flow/log/async_file_logger.hpp"
#include "flow/log/detail/serial_file_logger.hpp"
#include "flow/error/error.hpp"
#include <boost/move/make_unique.hpp>
#include <algorithm>
#include <memory>
#include <utility>

namespace flow::log
{

// Implementations.

Async_file_logger::Async_file_logger(Logger* backup_logger_ptr,
                                     Config* config, const fs::path& log_path,
                                     bool capture_rotate_signals_internally) :
  /* Set up the logging *about our real attempts at file logging*, probably to a Simple_ostream_logger/cout;
   * or to null (meaning no such logging). Either way we will still attempt to actually log user-requested messages
   * to log_path! */
  Log_context(backup_logger_ptr, Flow_log_component::S_LOG),
  m_config(config), // Save pointer, not copy, of the config given to us. Hence thread safety is a thing.

  /* Set up throttling algorithm, which is always-on (see class doc header discussion).
   * Reminder that its output will be ignored, until/unless user calls throttling_cfg(true, ...). They can then
   * set their own config values as well (or reuse this default which they can access via throttling_cfg() accessor). */
  m_throttling_cfg({ Throttling_cfg::S_HI_LIMIT_DEFAULT }),
  m_pending_logs_sz(0), // No memory used yet by pending messages.
  m_throttling_now(false), // No throttling yet.
  m_throttling_active(false),

  // Any I/O operations done here are the only ones not done from m_async_worker thread.
  m_serial_logger(boost::movelib::make_unique<Serial_file_logger>(get_logger(), m_config, log_path)),
  /* ...Speaking of which: start the worker thread right now; synchronously; return from this ctor once it has
   * certified it is up. It will log (to *backup_logger_ptr if not null) about starting up right here, too, which
   * in practice would be the bulk of INFO-or-less-verbose log lines from us in my experience. */
  m_async_worker(backup_logger_ptr, util::ostream_op_string("ASFL-", this)),
  /* Set up a signal set object; this is a no-op until we .add() signals to it (which we may or may not do).
   * Whether we do or not is more significant than merely whether whatever handler we'd later register
   * via m_signal_set.async_wait() will be called: if we .add() zero signals, then any non-boost.asio
   * signal handler currently registered (such as some default OS handler; or the user's non-boost.asio handler)
   * (or registered in the future) will continue to work undisturbed. If, however, we .add() one or more
   * signals (or, equivalently, list 1 or more signal numbers right here in the constructor call),
   * then we will REPLACE such non-boost.asio handlers with boost.asio's mechanism. (This behavior is
   * explicitly documented in boost.asio docs.) Therefore we must be careful not to mindlessly .add() handler(s),
   * and/or (equivalently) list 1 or more signal numbers in this constructor call here. The choice will be left to
   * later code which will .add() or not .add() deliberately. (Copy/paste warning: This cmnt was copy-pasted from some
   * vaguely similar code in flow::net_flow; but the 2 modules are barely related if at all, so....) */
  m_signal_set(*(m_async_worker.task_engine()))
{
  m_async_worker.start(flow::async::reset_this_thread_pinning);
  // Don't inherit any strange core-affinity! ^-- Worker must float free.

  FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                "Log-writing worker thread(s) have started around now; ready to work.");

  // Immediately open file in worker thread. Could be lazy about it but might as well log about any issue ASAP.
  log_flush_and_reopen(false); // Do it synchronously for good measure; doesn't matter but the reduced entropy is nice.

  if (capture_rotate_signals_internally)
  {
    FLOW_LOG_INFO("Setting up internal log-rotate-reopening signal handler. "
                  "CAUTION! User program MUST avoid using non-boost::asio::signal_set signal handling! "
                  "If it does use non-boost.asio, behavior is undefined.");

    // Add the classic log-rotation-inducing signal numbers.
    m_signal_set.add(SIGHUP);

    /* At this point, receiving those signals will NOT do whatever default behavior would happen -- but I believe
     * that unlike for SIGTERM/etc. the default behavior is no-op. Before actually reporting successful initialization
     * by exiting constructor, set up the handler that'll be called upon receiving the signals. */

    /* this->on_rotate_signal() will be called on signal (or error).
     * Note that that function's contract (from its doc comment) is it must execute in m_async_worker (see
     * m_signal_set() init above in the init section of ctor).
     * Indeed boost::asio::io_context semantics guarantee it'll run in m_async_worker (not some
     * no-man's-land signal handler thread of execution, as one might fear could be the case) for the same reason
     * the various post()ed tasks will run in m_async_worker. */
    m_signal_set.async_wait([this](const Error_code& sys_err_code, int sig_number)
    {
      on_rotate_signal(sys_err_code, sig_number);
    });
  } // if (capture_rotate_signals_internally)
  /* else if (!capture_rotate_signals_internally)
   * {
   *   Do NOT .add() anything; and don't async_wait() anything. As noted in comment at m_signal_set construction time,
   *   .add() does more than make it possible to .async_wait(). It also replaces any default OS or user's own
   *   non-boost.asio signal handling machinery with boost.asio's signal_set machinery. That can be quite draconian,
   *   so user must specifically set that option to true. If it's false (in all siblings of *this in entire app),
   *   then whatever signal handling machinery the user wants to set up for themselves (or leave at OS's
   *   discretion) will remain undisturbed. By the way, of course, they can use boost.asio machinery themselves too;
   *   it's just that doing so would still work even if capture_rotate_signals_internally were true, so that's
   *   not the dangerous scenario. (Copy/paste warning: This cmnt was copy-pasted from some
   *   vaguely similar code in flow::net_flow; but the 2 modules are barely related if at all, so....)
   * } */
} // Async_file_logger::Async_file_logger()

Async_file_logger::Throttling_cfg Async_file_logger::throttling_cfg() const
{
  Lock_guard lock(m_throttling_mutex);
  return m_throttling_cfg;
}

bool Async_file_logger::throttling_active() const
{
  return m_throttling_active.load(std::memory_order_relaxed);
}

void Async_file_logger::throttling_cfg(bool active, const Throttling_cfg& cfg)
{
  assert((cfg.m_hi_limit > 0) && "Per contract, hi_limit must be positive.");

  /* Please see Impl section of class doc header for detailed discussion; also Throttling and m_throttling_states doc
   * headers. We just reiterate here that m_throttling_active and m_throttling_cfg are orthogonal to each other;
   * the latter is thus protected by mutex, while the former is atomic. */

  // Deal with `active`.

  const auto prev_active = m_throttling_active.exchange(active, std::memory_order_relaxed);
  if (prev_active != active)
  {
    FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                  "Config set: throttling feature active? [" << prev_active << "] => [" << active << "].");
  }
  /* Subtlety: The un-mutex-protected, relaxed-order-accessed (as opposed to with memory_order_seq_cst)
   * read/modify/write operation above is the reason we formally forbid this mutator being called concurrently
   * with itself: oddness could occur if the above statement were invoked ~concurrently with itself.
   * We could instead formally allow it and change the above to seq_cst. However: (1) in actual reality it's
   * very unlikely a reasonable user would want to call this concurrently with itself in a realistic dynamic-config
   * scenario -- usually people use a config-update thread -- so it's not very important; (2) the existing way,
   * the entire set of config is updated in synchronized fashion, as opposed to separating m_throttling_cfg from
   * m_throttling_active. It's just less entropy, and it hardly matters anyway. */

  // Deal with `cfg`.

  { // All this->m_ touched in { here } can concurrently change, unless we lock.
    Lock_guard lock(m_throttling_mutex);

    if (cfg.m_hi_limit != m_throttling_cfg.m_hi_limit)
    {
      const auto prev_throttling_now
        = m_throttling_now.exchange(m_pending_logs_sz >= static_cast<decltype(m_pending_logs_sz)>(cfg.m_hi_limit),
                                    std::memory_order_relaxed);
      // (m_throttling_now cannot change during lock; using .exchange() purely for convenience to grab prev value.)

      FLOW_LOG_INFO("Async_file_logger [" << this << "]: Config set: "
                    "hi_limit [" << m_throttling_cfg.m_hi_limit << "] => [" << cfg.m_hi_limit << "]. "
                    "Mem-use = [" << m_pending_logs_sz << "]; "
                    "throttling? = [" << prev_throttling_now << "] => [" << m_throttling_now << "]; "
                    "throttling feature active? = [" << active << "]. "
                    "Reminder: `throttling?` shall only be used if `throttling feature active?` is 1.");

      m_throttling_cfg.m_hi_limit = cfg.m_hi_limit;
    }
    /* else: As discussed in class doc header: no-op, unless they actually changed something; no state reset.
     * E.g., perhaps they changed `active` while passing-in `cfg = throttling_cfg()` unchanged. */
  } // Lock_guard lock(m_throttling_mutex);
} // Async_file_logger::throttling_cfg()
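The mutator above re-derives the `throttling?` flag against the newly configured `hi_limit` in one `.exchange()`, grabbing the previous value as a side effect. A minimal sketch of just that re-evaluation step follows; the `sketch` namespace and its globals are illustrative stand-ins for the class members, not Flow APIs, and the real code holds a mutex around the pending-size read.

```cpp
#include <atomic>
#include <cstddef>

namespace sketch
{
  // Hypothetical stand-ins for m_throttling_now and m_pending_logs_sz.
  std::atomic<bool> g_throttling_now{false};
  std::size_t g_pending_logs_sz = 0; // Mutex-protected in the real code.

  // Re-evaluate "throttling?" against a newly configured hi_limit; return the
  // previous value (mirrors the .exchange() in the mutator above).
  bool apply_new_hi_limit(std::size_t hi_limit)
  {
    return g_throttling_now.exchange(g_pending_logs_sz >= hi_limit,
                                     std::memory_order_relaxed);
  }
}
```

The single read/modify/write means the flag flips consistently even if `hi_limit` moved across the current pending-size in either direction.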

Async_file_logger::~Async_file_logger() // Virtual.
{
  using flow::async::Synchronicity;

  {
    Lock_guard lock(m_throttling_mutex); // Careful: really_log()s may well be happening right now via m_async_worker.

    FLOW_LOG_INFO("Async_file_logger [" << this << "]: Deleting. Worker thread will flush "
                  "output if possible; then we will proceed to shut down. Current mem-use of queued "
                  "log-requests is [" << m_pending_logs_sz << "]; if it is large, this might take some time.");
  }

  /* Could do this from current thread (in fact if we simply deleted the following statement, that is what would
   * happen); that's still non-concurrent with other calls, as Serial_file_logger requires.
   * However there is a subtle and important reason why the following is currently actively desired:
   * By post()ing it -- meaning enqueuing it -- after any log-requests (function objects with a Log_request
   * capture in each), and then awaiting the completion, we ensure those log-requests are first serviced;
   * or more in English: any pending log messages are actually written to file.
   *
   * Were we to not do so, some aspects of Log_request would be leaked here; as of this writing the message and
   * the metadata. (They are not RAII for reasons explained in Log_request doc header.)
   *
   * Note: If we wanted to avoid this behavior -- e.g., if huge amounts of log-requests have been queued, this
   * statement can very much take quite a while, and maybe we want that, or maybe we don't -- and wanted to
   * avoid a leak, we'd need to come up with some new trick. For the time being though, by doing this flush
   * we both get desired functional behavior and avoid the leak. */
  m_async_worker.post([this]() { m_serial_logger.reset(); },
                      Synchronicity::S_ASYNC_AND_AWAIT_CONCURRENT_COMPLETION);

  // Now thread will exit and be joined, etc.
}
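The dtor's trick relies on the FIFO property of a single-thread task loop: post a final task after all pending log-requests, await it, and every earlier request has necessarily been serviced first. A toy sketch of that property follows; `Mini_loop`, `post`, and `post_and_await` are invented names standing in for Flow's `Single_thread_task_loop` API, not the real thing.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Toy single-thread task loop: posting a task and awaiting its completion
// guarantees all previously posted tasks ran first (FIFO order).
class Mini_loop
{
public:
  Mini_loop() : m_thread([this] { run(); }) {}

  ~Mini_loop()
  {
    post_and_await([] {}); // Drain: everything posted earlier runs first.
    { std::lock_guard<std::mutex> lock(m_mutex); m_stop = true; }
    m_cv.notify_all();
    m_thread.join();
  }

  void post(std::function<void()> task)
  {
    { std::lock_guard<std::mutex> lock(m_mutex); m_tasks.push(std::move(task)); }
    m_cv.notify_all();
  }

  // Post `task`; block until the worker thread has executed it.
  void post_and_await(std::function<void()> task)
  {
    std::mutex done_mutex;
    std::condition_variable done_cv;
    bool done = false;
    post([&]
    {
      task();
      { std::lock_guard<std::mutex> lock(done_mutex); done = true; }
      done_cv.notify_all();
    });
    std::unique_lock<std::mutex> lock(done_mutex);
    done_cv.wait(lock, [&] { return done; });
  }

private:
  void run() // Worker thread: pop and execute tasks in FIFO order.
  {
    for (;;)
    {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_stop || !m_tasks.empty(); });
        if (m_tasks.empty()) { return; } // m_stop and nothing pending.
        task = std::move(m_tasks.front());
        m_tasks.pop();
      }
      task();
    }
  }

  std::mutex m_mutex;
  std::condition_variable m_cv;
  std::queue<std::function<void()>> m_tasks;
  bool m_stop = false;
  std::thread m_thread; // Last member: started after state above is ready.
};
```

In the dtor above, the awaited task is `m_serial_logger.reset()`; by the time it runs, every queued `really_log()` has already executed, so nothing is leaked.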

void Async_file_logger::on_rotate_signal(const Error_code& sys_err_code, int sig_number)
{
  // We are in m_async_worker thread.

  if (sys_err_code == boost::asio::error::operation_aborted)
  {
    return; // Stuff is shutting down; just get out.
  }
  // else

  FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                "Internal log-rotate handler executed with signal number [" << sig_number << "].");

  if (sys_err_code)
  {
    // This is odd, but there's no need to freak out about anything else. Just log and get out.
    FLOW_ERROR_SYS_ERROR_LOG_WARNING();
    FLOW_LOG_WARNING("Async_file_logger [" << this << "]: "
                     "Internal signal handler executed with an error indicator. Strange! "
                     "Ignoring and continuing other operation.");
  }
  else
  {
    m_serial_logger->log_flush_and_reopen();
  }

  // Wait for it again, or else a rotation would only occur on the first signal but not thereafter.
  m_signal_set.async_wait([this](const Error_code& sys_err_code, int sig_number)
  {
    on_rotate_signal(sys_err_code, sig_number);
  });
} // Async_file_logger::on_rotate_signal()

void Async_file_logger::do_log(Msg_metadata* metadata, util::String_view msg) // Virtual.
{
  using util::String_view;
  using std::memcpy;

  assert(metadata);

  /* Our essential task is to push the log-request onto the m_async_worker queue; the code for processing
   * that log-request, once the queue gets to it in that worker thread, is called really_log() and is in here too.
   *
   * Key points about how the asynchronicity/logging/queueing works: We aim to return synchronously ASAP and leave the
   * potentially blocking (like if hard drive has to turn on from sleep) I/O ops to the worker thread asynchronously.
   * There are 2 pieces of data, *metadata and `msg`, to deal with, both of which essentially must be available when
   * we in fact write to file via m_serial_logger. By contract of do_log() and logs_asynchronously()==true:
   *   - We are NOT to copy *metadata; and therefore we are to `delete metadata` when done with it; i.e., after
   *     m_serial_logger->do_log() returns having written to file-system.
   *   - We MUST copy `msg` so we can use it asynchronously; we are therefore to free the copy ourselves once
   *     m_serial_logger->do_log() returns. (In reality this is all because `msg` is a shared thread-local thing that
   *     is reused in every subsequent and preceding FLOW_LOG_WARNING/etc. invocation in this thread. We technically
   *     don't know or care about that reality in here; only the *result* wherein we cannot use `msg` after do_log()
   *     synchronously returns.)
   *
   * Here's the Log_request encapsulating that stuff. See Log_request doc header for some nitty-gritty details. */
  const auto msg_sz = msg.size();
  Log_request log_request{ new char[msg_sz], msg_sz, metadata,
                           false }; // We can't know m_throttling_begins yet.
  memcpy(log_request.m_msg_copy, msg.data(), msg_sz);

  /* Before enqueuing that stuff, though, let's tally up the stats and otherwise proceed with the throttling algorithm.
   * Please see Impl section of class doc header for detailed discussion; then come back here.
   * We rely on the background in that discussion frequently.
   *
   * Reminder: We should be careful to minimize computation in this section (though it is nowhere near as important
   * here as in should_log()). Similarly keep the locked section as small as possible. */

  using logs_sz_t = decltype(m_pending_logs_sz);
  const auto logs_sz = mem_cost(log_request);
  auto& throttling_begins = log_request.m_throttling_begins;
  logs_sz_t limit;
  logs_sz_t pending_logs_sz; // For logging.
  logs_sz_t prev_pending_logs_sz;
  {
    Lock_guard lock(m_throttling_mutex);
    limit = static_cast<logs_sz_t>(m_throttling_cfg.m_hi_limit);
    prev_pending_logs_sz = m_pending_logs_sz;
    pending_logs_sz = (m_pending_logs_sz += logs_sz);
    /* Minor/subtlety: We could also remove the m_throttling_now check here and instead use
     * `throttling_begins = (m_throttling_now.exchange() == false)` in { body }. However we already had to lock
     * mutex for ~unrelated reasons, and therefore m_throttling_now cannot change at the moment; so we
     * might as well gate the integer comparisons on its value -- which rarely changes -- and only assign
     * it, if its value would in fact change. */
    if ((!m_throttling_now.load(std::memory_order_relaxed))
        && (pending_logs_sz >= limit) && (prev_pending_logs_sz < limit))
    {
      m_throttling_now.store(true, std::memory_order_relaxed);
      throttling_begins = true;
    }
  }
  if (throttling_begins)
  {
    /* Log about it in the backup Logger (Logger-about-logging).
     * When throttling feature is disabled: arguably it is still worthy of being a WARNING: it is in fact warning
     * about a probably-bad situation (normal mem-use is roughly 0); and it is definitely useful information that
     * might cause end user to enable throttling. However, subjectively, if they haven't enabled throttling then
     * being faced with a WARNING (error message) is arguably excessive. Hence log it as INFO in that case.
     * (If they don't want these messages at all, they can always set m_hi_limit to a terabyte or something.)
     * Note: There's a similar statement inside really_log(). */
    const bool active = m_throttling_active.load(std::memory_order_relaxed);
    FLOW_LOG_WITH_CHECKING
      (active ? Sev::S_WARNING : Sev::S_INFO,
       "Async_file_logger [" << this << "]: "
       "do_log() throttling algorithm: a message reached hi_limit; next message-to-be => likely dropped, "
       "if feature active. Config: hi_limit [" << limit << "]. "
       "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
       "throttling? = 1 (see above); throttling feature active? = [" << active << "]. "
       "Reminder: `throttling?` shall only be used if `throttling feature active?` is 1. "
       "Limit-triggering message's contents follow: [" << msg << "].");
  }
#if 0 // Obv change to `if 1` if debugging + want to see it. Could just use TRACE but avoiding should_log() cost.
  else
  {
    FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                  "do_log() throttling algorithm: a message was processed; situation (reminder: beware concurrency): "
                  "Config: hi_limit [" << limit << "]. "
                  "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
                  "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                  "Message's contents follow: [" << msg << "].");
  }
#endif

  /* Done! State updated, and throttling_begins determined for really_log().
   *
   * Now for the enqueueing of the log-request. This is what will execute in m_async_worker, when it's its turn. */

  auto really_log = [this, log_request = std::move(log_request)]() mutable
  {
    const auto metadata = log_request.m_metadata;
    const String_view msg{log_request.m_msg_copy, log_request.m_msg_size};

    /* Throttling: do, essentially, the opposite of what do_log() did when issuing the log-request.
     * Again please refer to Impl section of class doc header for reasoning about this algorithm. */

    const auto logs_sz = mem_cost(log_request);
    // @todo ^-- Maybe instead save this in Log_request? Trade-off is RAM vs cycles (currently favoring RAM).
    bool throttling_ends = false;
    logs_sz_t limit; // For logging.
    logs_sz_t pending_logs_sz; // For logging.
    logs_sz_t prev_pending_logs_sz;
    {
      Lock_guard lock(m_throttling_mutex);
      limit = m_throttling_cfg.m_hi_limit; // Just for logging in this case.
      prev_pending_logs_sz = m_pending_logs_sz;
      assert((prev_pending_logs_sz >= logs_sz) && "Bug? really_log() has no matching do_log()?");
      pending_logs_sz = (m_pending_logs_sz -= logs_sz);
      /* Minor/subtlety: Versus the do_log() equivalent above, it looks more economical to check pending_logs_sz first,
       * since it just compares one value to 0 in this case (as opposed to 2 values with < and >=), and then both set
       * m_throttling_now and check its preceding value in one op. */
      if (pending_logs_sz == 0)
      {
        // m_throttling_now should be false; but detect whether this is the change-over from true as opposed to no-op.
        throttling_ends = (m_throttling_now.exchange(false, std::memory_order_relaxed) == true);
      }
    }

    if (throttling_ends)
    {
      // Log about it in the backup Logger (Logger-about-logging).
      FLOW_LOG_INFO("Async_file_logger [" << this << "]: last pending message was logged; "
                    "next message-to-be => likely first one to *not* be dropped, if throttling feature active. "
                    "Config: hi_limit [" << limit << "]. "
                    "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
                    "throttling? = 0 (see above); "
                    "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                    "Reminder: `throttling?` shall only be used if `throttling feature active?` is 1. "
                    "Queue-clearing message's contents follow: [" << msg << "].");

      // Log about it in file itself. (Performance in this block is not of huge import; this is a fairly rare event.)
      FLOW_LOG_SET_CONTEXT(m_serial_logger.get(), this->get_log_component());

      FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                    "really_log() throttling algorithm: last pending message was logged; "
                    "next message-to-be => likely first one to *not* be dropped, if throttling feature active. "
                    "Config: hi_limit [" << limit << "]. "
                    "Mem-use = [" << prev_pending_logs_sz << "] => 0; "
                    "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                    "Queue-clearing message is the one immediately following me in file. "
                    "Compare its time stamp to mine to see time lag due to queueing.");
    }
#if 0 // Obv change to `if 1` if debugging + want to see it. Could just use TRACE but avoiding should_log() cost.
    else
    {
      FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                    "really_log() throttling algorithm: a message is about to be written to file; "
                    "situation (reminder: beware concurrency): Config: hi_limit [" << limit << "]. "
                    "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
                    "throttling feature active? = [" << m_throttling_active.load(std::memory_order_relaxed) << "]. "
                    "Message's contents follow: [" << msg << "].");
    }
#endif

    // We are in m_async_worker thread, as m_serial_logger requires. Go!
    m_serial_logger->do_log(metadata, msg);

    // Oh and obey throttling_begins for this log-request, if it was computed to be true in do_log().
    if (log_request.m_throttling_begins)
    {
      // Performance in this block is not of huge import; this is a fairly rare event.
      FLOW_LOG_SET_CONTEXT(m_serial_logger.get(), this->get_log_component());

      // Rationale w/r/t the Sev choice here: see similar `if (throttling_begins)` block above.
      const bool active = m_throttling_active.load(std::memory_order_relaxed);
      FLOW_LOG_WITH_CHECKING
        (active ? Sev::S_WARNING : Sev::S_INFO,
         "Async_file_logger [" << this << "]: "
         "really_log() throttling algorithm: The preceding message, when its log-request was "
         "earlier enqueued, caused pending-logs RAM usage to exceed then-configured hi_limit. "
         "If throttling feature was active, subsequent messages-to-be (log-requests) were dropped. "
         "We only just got around to being able to log it (satisfy log-request) after all the "
         "preceding ones in FIFO order. Nowadays: Config: hi_limit [" << limit << "]. "
         "Mem-use = [" << prev_pending_logs_sz << "] => [" << pending_logs_sz << "]; "
         "throttling feature active? = [" << active << "]. "
         "Limit-crossing (in the past) message is the one immediately preceding the current one "
         "you're reading in file. "
         "Compare its time stamp to mine to see time lag due to queueing.");
    } // if (log_request.m_throttling_begins)

    /* Last but not least, as discussed in Log_request doc header, we must do this explicitly.
     * To reiterate: as of this writing, if this lambda body is not actually executed, then these will leak.
     * As of this writing we ensure it *is* executed, in the Async_file_logger dtor at the latest. */
    delete[] log_request.m_msg_copy;
    delete metadata;
  }; // really_log =

  // Enqueue it, after whatever others are already pending (hopefully not too many; ideally none).
  m_async_worker.post(std::move(really_log));
} // Async_file_logger::do_log()
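The two-sided accounting above (increment plus upward-crossing detection in `do_log()`, decrement plus drain-to-zero detection in `really_log()`) can be condensed into a small model. This is a sketch under simplified assumptions; `Throttle_model`, `on_enqueue`, and `on_dequeue` are invented names, and the real code additionally logs, copies the message, and runs the dequeue side on the worker thread.

```cpp
#include <atomic>
#include <cstddef>
#include <mutex>

// Minimal model of the do_log()/really_log() throttling bookkeeping.
class Throttle_model
{
public:
  explicit Throttle_model(std::size_t hi_limit) : m_hi_limit(hi_limit) {}

  // do_log() side: add a pending log-request's cost.
  // Returns true exactly when this request crosses hi_limit upward (throttling begins).
  bool on_enqueue(std::size_t cost)
  {
    std::lock_guard<std::mutex> lock(m_mutex);
    const auto prev = m_pending_sz;
    m_pending_sz += cost;
    if (!m_now.load(std::memory_order_relaxed)
        && (m_pending_sz >= m_hi_limit) && (prev < m_hi_limit))
    {
      m_now.store(true, std::memory_order_relaxed);
      return true;
    }
    return false;
  }

  // really_log() side: subtract the cost after the message is written.
  // Returns true exactly when the queue drains to 0 while throttled (throttling ends).
  bool on_dequeue(std::size_t cost)
  {
    std::lock_guard<std::mutex> lock(m_mutex);
    m_pending_sz -= cost;
    return (m_pending_sz == 0)
           && m_now.exchange(false, std::memory_order_relaxed);
  }

  // should_log() side reads this flag alone, lock-free.
  bool throttling_now() const { return m_now.load(std::memory_order_relaxed); }

private:
  std::mutex m_mutex;          // Guards m_pending_sz (and transitions of m_now).
  std::atomic<bool> m_now{false};
  std::size_t m_pending_sz = 0;
  std::size_t m_hi_limit;
};
```

Note the hysteresis: throttling starts at `hi_limit` but only stops once pending memory drains all the way to zero, matching the algorithm described in the comments above.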

size_t Async_file_logger::mem_cost(const Log_request& log_request) // Static.
{
  /* We should strive to be quick here (also almost certainly we will be inlined, with full optimization anyway).
   * This is called in every do_log(), which can be not-infrequent; and in really_log() in the background thread --
   * though extreme efficiency there is less important.
   *
   * This is an estimate; it need not be exact, as we use it merely as a heuristic for when to throttle. For example
   * personally I (ygoldfel) only vaguely understand how function<> stores a lambda and its captures.
   * That said it should be roughly proportional to the memory used. */

  return sizeof(async::Task) // Don't forget the function object itself. Then the captures:
         + sizeof(Async_file_logger*) // `this`.
         + sizeof(Log_request) // The main capture is this. Firstly its shallow size.
         + deep_size(log_request); // And its size beyond sizeof (its members combined).

  // See end of deep_size() for style/pattern notes.
} // Async_file_logger::mem_cost()

size_t Async_file_logger::deep_size(const Log_request& val) // Static.
{
  using log::deep_size;

  return val.m_msg_size // m_msg_copy = char*; we saved its pointee raw array's mem-use here.
         // m_metadata is Msg_metadata*:
         + sizeof(Msg_metadata) // Msg_metadata's shallow (to it) parts are non-shallow to *us* (it's in heap).
         + deep_size(*val.m_metadata); // Msg_metadata's non-shallow mem-use (recursively implement deep_size() ptrn).

  /* Style/maintainability notes: At the risk of being overly formal: There's a pattern in play here + in mem_cost():
   * The total-size of aggregate object `X x` is sizeof(X) + D, where D is its mem-use beyond the shallow object.
   * To compute D, take each member `Y m_y` (or ptr or ref variant) of X; obtain its mem-use beyond sizeof(Y);
   * sum these values to get D. Namely for each m_y:
   *   - It might have no non-shallow mem-use (e.g., a `float`). 0.
   *   - Its mem-use might be stored in a nearby data member; e.g., `char* m_str; size_t m_str_len;`.
   *     Use that result.
   *   - It may itself be an aggregate object, or a pointer to one, in which case:
   *     - If there is no deep_size(const Y&), it has no non-shallow mem-use. 0.
   *     - If there is deep_size(const Y&), its returned value = the result. (If m_y is a ptr, pass *m_y as arg.)
   *     - deep_size() should itself follow this pattern, recursively.
   *   - *Either way*: If m_y is a ptr or ref (to heap), also add sizeof(Y).
   *
   * So in our case you see, when we got to Log_request, we took its sizeof() (in mem_cost() above) and then added
   * its deep_size().
   *
   * Rationale for pattern: Keep things organized; provide reusable deep_size() APIs where applicable. E.g.,
   * deep_size(const string&) may be useful elsewhere.
   *
   * Rationale: Why not include sizeof() in deep_size()? Answer: Consider
   *   struct X { struct { int m_a; float m_b; } m_c; bool m_d; }; X x{ ... };
   * Its mem-use is simply sizeof(X). So we get the sizeof() part, which is mandatory, for "free," without having
   * to error-pronely enumerate members at various levels, plus the boiler-plate of needing deep_size() at each level;
   * and so on. So it makes sense to separate the sizeof() part -- executed once at the top level -- from the
   * deep_size() part, which is potentially recursive but only needs an explicit deep_size() for those types with
   * non-zero non-shallow mem-use and/or ptrs/refs; and only needs to mention such members (and no others) in
   * mem-use formulas.
   *
   * @todo We could probably get fancier with it, like having a deep_size<T>(const T* t) specialization that would
   * forward to `sizeof T + deep_size(*t)` if it exists... or something like that. For now seems like overkill. */
} // Async_file_logger::deep_size()
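The `sizeof(X) + deep_size(x)` pattern described in the notes above can be applied to any aggregate. Here is a self-contained sketch on an invented type; `Payload`, and a `deep_size(const std::string&)` that charges the string's heap buffer via `capacity()` (ignoring small-string optimization for simplicity), are illustrative assumptions, not Flow's actual overloads.

```cpp
#include <cstddef>
#include <string>

// Illustrative aggregate with one member that owns heap memory.
struct Payload
{
  std::string m_text;
  int m_id;
};

// Non-shallow mem-use of a std::string: its heap buffer.
// (Simplification: charges capacity() even when SSO means no heap allocation.)
inline std::size_t deep_size(const std::string& val)
{
  return val.capacity();
}

// Non-shallow mem-use of Payload: sum each member's beyond-sizeof contribution.
// m_id is a plain int: contributes 0, so it is simply not mentioned.
inline std::size_t deep_size(const Payload& val)
{
  return deep_size(val.m_text);
}

// Top level of the pattern: total cost = shallow size + deep size.
inline std::size_t mem_cost(const Payload& val)
{
  return sizeof(Payload) + deep_size(val);
}
```

As in `mem_cost()`/`deep_size()` above, the `sizeof()` term appears exactly once at the top level, while `deep_size()` recurses only through members that actually own extra memory.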

void Async_file_logger::log_flush_and_reopen(bool async)
{
  using flow::async::Synchronicity;

  // We are in some unspecified user thread that isn't m_async_worker.

  FLOW_LOG_INFO("Async_file_logger [" << this << "]: Non-worker (user) thread "
                "requested [" << (async ? "asynchronous" : "synchronous") << "] file flush/close (if needed) and "
                "re-open, such as for rotation or at initialization.");

  m_async_worker.post([this]() { m_serial_logger->log_flush_and_reopen(); },
                      async ? Synchronicity::S_ASYNC : Synchronicity::S_ASYNC_AND_AWAIT_CONCURRENT_COMPLETION);
}

bool Async_file_logger::should_log(Sev sev, const Component& component) const // Virtual.
{
  if (!m_serial_logger->should_log(sev, component)) // In normal conditions this is likeliest to return false.
  {
    return false;
  }
  // else

  /* As explained in doc header (please see there for discussion), throttling -- if on -- can prevent logging.
   * It is important that the following code is as fast as possible, though by placing it below the above
   * forwarded should_log() check we've performed a key optimization already, as in a properly configured
   * system verbosity knobs should throw out most messages-to-be. */

  if (!m_throttling_active.load(std::memory_order_relaxed))
  {
    return true;
  }
  // else

  const auto throttled = m_throttling_now.load(std::memory_order_relaxed);

#if 0 // Obv change to `if 1` if debugging + want to see it. Could just use TRACE but avoiding should_log() cost.
  FLOW_LOG_INFO("Async_file_logger [" << this << "]: "
                "should_log(sev=[" << sev << "]; component=[" << component.payload_enum_raw_value() << "]) "
                "throttling algorithm situation (reminder: beware concurrency): "
                "Throttling feature active? = 1; throttling? = [" << throttled << "].");
#endif

  return !throttled;
} // Async_file_logger::should_log()
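`should_log()` above is a sequence of gates ordered cheapest-rejection-first: verbosity (rejects most messages), then the rarely-flipped feature switch, then the throttle flag, with all shared state read via relaxed atomics. A minimal sketch of that ordering follows; `Gates`, `verbosity_pass`, `feature_active`, and `throttling_now` are invented stand-ins for the real members and the forwarded `Serial_file_logger` check.

```cpp
#include <atomic>

// Minimal model of the layered should_log() gate ordering (illustrative only).
struct Gates
{
  bool verbosity_pass;                      // Stand-in for m_serial_logger->should_log().
  std::atomic<bool> feature_active{false};  // Stand-in for m_throttling_active.
  std::atomic<bool> throttling_now{false};  // Stand-in for m_throttling_now.

  bool should_log() const
  {
    if (!verbosity_pass) { return false; } // Likeliest rejection: check it first.
    if (!feature_active.load(std::memory_order_relaxed)) { return true; } // Throttle off: done.
    return !throttling_now.load(std::memory_order_relaxed); // Only now consult the throttle flag.
  }
};
```

The design choice mirrored here: `throttling_now` may be stale by a message or two under relaxed ordering, which is acceptable because throttling is a heuristic, not a hard guarantee.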

bool Async_file_logger::logs_asynchronously() const // Virtual.
{
  return true;
}

} // namespace flow::log
virtual void post(Task &&task, Synchronicity synchronicity=Synchronicity::S_ASYNC)
Cause the given Task (function) to execute within the worker thread as soon as the thread is free of ...
An implementation of Logger that logs messages to a given file-system path but never blocks any loggi...
flow::util::Lock_guard< Mutex > Lock_guard
Short-hand for Mutex lock.
async::Single_thread_task_loop m_async_worker
The thread (1-thread pool, technically) in charge of all m_serial_logger I/O operations including wri...
void on_rotate_signal(const Error_code &sys_err_code, int sig_number)
SIGHUP/equivalent handler for the optional feature capture_rotate_signals_internally in constructor.
std::atomic< bool > m_throttling_active
Whether the throttling-based-on-pending-logs-memory-used feature is currently active or not.
boost::movelib::unique_ptr< Serial_file_logger > m_serial_logger
This is the Logger doing all the real log-writing work (the one stored in Log_context is the logger-a...
static size_t mem_cost(const Log_request &log_request)
How much do_log() issuing the supplied Log_request shall contribute to m_pending_logs_sz.
Signal_set m_signal_set
Signal set which we may or may not be using to trap SIGHUP in order to auto-fire m_serial_logger->log...
size_t m_pending_logs_sz
Estimate of how much RAM is being used by storing do_log() requests' data (message itself,...
bool throttling_active() const
Whether the throttling feature is currently in effect.
Mutex m_throttling_mutex
Protects throttling algorithm data that require coherence among themselves: m_throttling_cfg,...
static size_t deep_size(const Log_request &val)
Estimate of memory footprint of the given value, including memory allocated on its behalf – but exclu...
std::atomic< bool > m_throttling_now
Contains the output of the always-on throttling algorithm; namely true if currently should_log() shal...
Throttling_cfg m_throttling_cfg
See Throttling_cfg. Protected by m_throttling_mutex.
void do_log(Msg_metadata *metadata, util::String_view msg) override
Implements interface method by asynchronously logging the message and some subset of the metadata in ...
void log_flush_and_reopen(bool async=true)
Causes the log at the file-system path to be flushed/closed (if needed) and re-opened; this will happ...
Throttling_cfg throttling_cfg() const
Accessor returning a copy of the current set of throttling knobs.
bool should_log(Sev sev, const Component &component) const override
Implements interface method by returning true if the severity and component (which is allowed to be n...
~Async_file_logger() override
Flushes out anything buffered, releases resources/closes output file(s); then returns.
Async_file_logger(Logger *backup_logger_ptr, Config *config, const fs::path &log_path, bool capture_rotate_signals_internally)
Constructs logger to subsequently log to the given file-system path.
bool logs_asynchronously() const override
Implements interface method by returning true, indicating that this Logger may need the contents of *...
A light-weight class, each object storing a component payload encoding an enum value from enum type o...
Definition: log.hpp:840
Class used to configure the filtering and logging behavior of Loggers; its use in your custom Loggers...
Definition: config.hpp:317
Convenience class that simply stores a Logger and/or Component passed into a constructor; and returns...
Definition: log.hpp:1610
Interface that the user should implement, passing the implementing Logger into logging classes (Flow'...
Definition: log.hpp:1284
#define FLOW_ERROR_SYS_ERROR_LOG_WARNING()
Logs a warning about the (often errno-based or from a library) error code in sys_err_code.
Definition: error.hpp:269
#define FLOW_LOG_INFO(ARG_stream_fragment)
Logs an INFO message into flow::log::Logger *get_logger() with flow::log::Component get_log_component...
Definition: log.hpp:197
#define FLOW_LOG_WARNING(ARG_stream_fragment)
Logs a WARNING message into flow::log::Logger *get_logger() with flow::log::Component get_log_compone...
Definition: log.hpp:152
#define FLOW_LOG_WITH_CHECKING(ARG_sev, ARG_stream_fragment)
Logs a message of the specified severity into flow::log::Logger *get_logger() with flow::log::Compone...
Definition: log.hpp:489
#define FLOW_LOG_SET_CONTEXT(ARG_logger_ptr, ARG_component_payload)
For the rest of the block within which this macro is instantiated, causes all FLOW_LOG_....
Definition: log.hpp:405
Synchronicity
Enumeration indicating the manner in which asio_exec_ctx_post(), and various boost....
Definition: async_fwd.hpp:223
void reset_this_thread_pinning()
Resets processor-affinity of the calling thread; does not log; and throws on extremely unlikely syste...
Function< void()> Task
Short-hand for a task that can be posted for execution by a Concurrent_task_loop or flow::util::Task_...
Definition: async_fwd.hpp:96
Flow module providing logging functionality.
size_t deep_size(const Msg_metadata &val)
Estimate of memory footprint of the given value, including memory allocated on its behalf – but exclu...
Definition: log.cpp:281
Sev
Enumeration containing one of several message severity levels, ordered from highest to lowest.
Definition: log_fwd.hpp:224
@ S_WARNING
Message indicates a "bad" condition that is not frequent enough to be of severity Sev::S_TRACE.
@ S_INFO
Message indicates a not-"bad" condition that is not frequent enough to be of severity Sev::S_TRACE.
std::string ostream_op_string(T const &... ostream_args)
Equivalent to ostream_op_to_string() but returns a new string by value instead of writing to the call...
Definition: util.hpp:381
Basic_string_view< char > String_view
Commonly used char-based Basic_string_view. See its doc header.
boost::system::error_code Error_code
Short-hand for a boost.system error code (which basically encapsulates an integer/enum error code and...
Definition: common.hpp:508
Flow_log_component
The flow::log::Component payload enumeration comprising various log components used by Flow's own int...
Definition: common.hpp:638
In addition to the task object (function) itself, these are the data placed onto the queue of m_async...
size_t m_msg_size
Number of characters in m_msg_copy pointee string.
Msg_metadata * m_metadata
Pointer to a copy of the Msg_metadata passed to do_log(). We must delete it.
Controls behavior of the throttling algorithm as described in Async_file_logger doc header Throttling...
static constexpr uint64_t S_HI_LIMIT_DEFAULT
Value of Async_file_logger(...).throttling_cfg().m_hi_limit: default/initial value of m_hi_limit.
uint64_t m_hi_limit
The throttling algorithm will go from Not-Throttling to Throttling state if and only if the current m...
Simple data store containing all of the information generated at every logging call site by flow::log...
Definition: log.hpp:1041