Original link: https://blog.henix.info/blog/cpp-multithread-logging/
Logging is a core component of almost every program. Anyone who has used C++'s std::cout knows that it is not safe for concurrent logging: when multiple threads write to std::cout at the same time, their output can come out interleaved and garbled. Is there a simple solution?
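To make the problem concrete, here is a small sketch (my illustration, not code from the original post) that drives std::cout from two threads; run it a few times and you will likely see fragments of the two lines mixed together:

```cpp
#include <iostream>
#include <thread>

int main() {
	// Two threads each build a line from several << calls.
	// The individual << calls do not race, but calls from the two threads
	// can interleave, so the printed lines come out garbled.
	std::thread a([] {
		for (int i = 0; i < 1000; ++i)
			std::cout << "[thread A] " << "iteration " << i << '\n';
	});
	std::thread b([] {
		for (int i = 0; i < 1000; ++i)
			std::cout << "[thread B] " << "iteration " << i << '\n';
	});
	a.join();
	b.join();
	return 0;
}
```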
If you search the Internet, or look at how dedicated logging libraries (such as glog) solve this problem, the solutions boil down to two approaches:
- Add a global lock.
- Use a multi-producer, single-consumer queue: threads that want to log push their messages into the queue, and a dedicated thread pops them and writes them out one by one.
The first, locking scheme introduces a piece of global state, which I don't find very elegant (a minimal sketch of it is shown just below). As for the second option... I just want to write a log line; do I really need to introduce a queue and an extra thread? Is there an easier way?
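For reference, the first approach typically looks like the following sketch (my illustration of the idea, not code from glog or any particular library): a global mutex serializes every log call.

```cpp
#include <iostream>
#include <mutex>
#include <sstream>
#include <utility>

// The global state the locking approach introduces: one mutex shared by
// every thread that wants to log.
std::mutex g_logMutex;

template<class... Args>
void lockedLog(Args&&... args) {
	using _expander = int[];
	std::ostringstream buf;
	(void)_expander{ (void(buf << std::forward<Args>(args)), 0)... };
	// Hold the lock only while actually writing, so formatting stays parallel.
	std::lock_guard<std::mutex> lock(g_logMutex);
	std::cout << buf.str() << '\n';
}
```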
When I write C++ programs, I usually don't format output through std::cout; I use the lower-level system calls instead, i.e. write(2) or WriteFile / WriteConsole. That lets us skip the synchronization of the internal state of stdio / iostream and consider the problem directly at the operating-system level:
If multiple threads/processes call write(2) or WriteFile on the same file descriptor or kernel handle at the same time, will their writes overwrite each other?
Thinking about it with common sense: shouldn't the operating system provide some mechanism that guarantees the atomicity of write operations?
So we naturally ask the following questions:
- Is file append atomic in UNIX? – Stack Overflow
- Is appending to a file atomic with Windows/NTFS? – Stack Overflow
- Are Files Appends Really Atomic? | Not The Wizard
The conclusion of those discussions is:
- On Linux, POSIX guarantees that writes to a file opened in O_APPEND mode are atomic[1], as long as the amount written in one call does not exceed PIPE_BUF (usually 4096) bytes.
- For Win32 WriteFile, if the file was opened with the FILE_APPEND_DATA access right, append writes are likewise guaranteed to be atomic (see the sketch below).
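In code, "opened for append" means something like the following sketch (the file name app.log is just a placeholder, and error handling is omitted): on POSIX it is the O_APPEND flag to open(2); on Win32 it is requesting the FILE_APPEND_DATA access right from CreateFile.

```cpp
#ifdef _WIN32
#include <windows.h>

// Ask only for append access, so every WriteFile appends atomically.
HANDLE openLog() {
	return CreateFileA("app.log", FILE_APPEND_DATA, FILE_SHARE_READ, NULL,
	                   OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
}
#else
#include <fcntl.h>

// With O_APPEND, the seek-to-end and the write happen as one atomic step.
int openLog() {
	return open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
}
#endif
```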
So my solution for multi-threaded log writing is: format each log line into its own buffer, then emit it with a single write(2) call. In most cases a program's log lines will not exceed PIPE_BUF anyway.
If you are still using the I/O functions that come with C/C++, this method may not work: they have internal buffers (stdio buffering) that we cannot control. So, to be safe, use the system calls directly. If you still want C/C++'s formatting facilities, a simple approach is to format into a buffer with snprintf / stringstream first, and then output that buffer with a single system call.
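For example, the snprintf variant of that idea could look roughly like this (a sketch with a hypothetical helper, not part of the library below):

```cpp
#include <stdio.h>
#include <unistd.h>

// Format one complete log line into a local buffer, then emit it with a
// single write(2), so the line cannot be interleaved with other threads.
void logInfo(const char* label, int value) {
	char buf[256];
	int n = snprintf(buf, sizeof(buf), "[INFO] %s: %d\n", label, value);
	if (n < 0)
		return;
	// snprintf truncates if the line is too long; clamp to what is in buf.
	size_t len = (size_t)n < sizeof(buf) ? (size_t)n : sizeof(buf) - 1;
	write(STDOUT_FILENO, buf, len);
}
```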
A minimalist C++11 thread-safe logging library (POSIX only):
```cpp
#include <sstream>
#include <iomanip>
#include <string>
#include <utility>
#include <unistd.h>
#include <time.h>

/**
 * Append a timestamp to out, in the format YYYY-MM-DD HH:MM:SS.mmm
 */
void appendTimestampMs(std::ostream& out) {
	timespec t {};
	clock_gettime(CLOCK_REALTIME, &t); // ignore errors
	{
		struct tm tm;
		localtime_r(&t.tv_sec, &tm);
		out << std::put_time(&tm, "%F %T");
	}
	char fill = out.fill();
	out << '.' << std::setfill('0') << std::setw(3) << t.tv_nsec / 1000000
	    << std::setfill(fill);
}

template<class... Args>
void plog(Args&&... args) {
	using _expander = int[];
	std::stringstream buf;
	// Print the timestamp first, with millisecond precision
	appendTimestampMs(buf);
	buf << ' ';
	(void)_expander{ (void(buf << std::forward<Args>(args)), 0)... };
	buf << '\n';
	std::string str = buf.str();
	write(STDOUT_FILENO, str.data(), str.size());
}
```
Usage:

```cpp
plog("[INFO] test: ", 10);
```
Output:

```
2022-05-06 20:47:43.725 [INFO] test: 10
```
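And, mirroring the std::cout example at the beginning, an easy way to convince yourself that lines no longer interleave is to call plog from several threads (a test sketch, assuming the plog above is in scope):

```cpp
#include <thread>

int main() {
	// Each plog call ends in exactly one write(2) of a complete line,
	// so the two threads' lines come out whole rather than interleaved.
	std::thread a([] { for (int i = 0; i < 1000; ++i) plog("[INFO] thread A: ", i); });
	std::thread b([] { for (int i = 0; i < 1000; ++i) plog("[INFO] thread B: ", i); });
	a.join();
	b.join();
	return 0;
}
```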
Footnote:

1. Quoting from the write(2) man page: "If the file was open(2)ed with O_APPEND, the file offset is first set to the end of the file before writing. The adjustment of the file offset and the write operation are performed as an atomic step."