From ASAN Stuck to Open Files Limit

Original link: https://zu1k.com/posts/linux/large-nofile-cause-asan-stuck/


Sanitizers are great tools that help programmers detect errors and produce detailed error reports. But two days ago I ran into a problem: inside a Docker container on my lab's host machine, AddressSanitizer would print a few lines of the error summary and then hang, never producing the call stack or the rest of the report, while a child process pinned a full CPU core. It took me two days to track down, and the root cause turned out to be an open files limit that was set absurdly high. Here is the story.

Discovering the problem

I prepared a minimal PoC that reproduces the whole incident. The following is a simple C program; compiled and run directly, it produces a segmentation fault because of an out-of-bounds write.

    void main() {
        char *str = "abc";
        str[10] = 'z';
    }

Compile with clang and enable AddressSanitizer:

clang -g -fsanitize=address -fno-omit-frame-pointer -o target_asan poc.c

Under normal circumstances, the call stack is printed almost immediately, as shown in the figure:

Normal AddressSanitizer output

But in my Docker container it gets stuck, and the top command shows a child process occupying a full CPU core:

The stuck process

At first I thought the program had simply entered an infinite loop, but to my surprise, after waiting a few minutes the result was eventually printed.

So I started researching. The LLVM documentation mentions that symbolization can be turned off by setting the environment variable ASAN_OPTIONS=symbolize=0, i.e. running ASAN_OPTIONS=symbolize=0 ./target_asan. Sure enough, with symbolization disabled, the rest of the report was printed without any delay.

With symbolize disabled, the output proceeds smoothly

At first I suspected a bug in the symbolizer, so I tried swapping the default llvm-symbolizer for GNU addr2line:

ASAN_SYMBOLIZER_PATH=/usr/bin/addr2line ./target_asan

addr2line still gets stuck

It still got stuck, so the problem probably wasn't llvm-symbolizer itself. Maybe it was the kernel, or some conflict between the latest Docker and the kernel? I had no idea and no real lead.

When I copied the program to the host machine, the problem inexplicably disappeared. I packaged the container and copied it to a classmate's Ubuntu machine, but the problem could not be reproduced there either; the output was smooth. I also tried downgrading the host kernel to 5.15 and downgrading Docker, containerd, and runc to the same versions as my classmate's Ubuntu, but none of that solved it.

Later, with strace, I found that AddressSanitizer was blocked on a read system call, and from the surrounding calls I could infer that it was in the middle of interacting with llvm-symbolizer.

strace shows the process stuck on a read system call

Here you can see that AddressSanitizer forks a child process and communicates with it through a pipe, writing CODE "binary_path" offset\n to request the symbol information at a given offset of the binary. On success, the symbolizer returns the source file and line number, function name, and other symbol information.
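To make that interaction concrete, here is a minimal standalone sketch of the same pattern in plain C (my own illustration, not the compiler-rt code): fork a child, wire its stdin/stdout to pipes, exec llvm-symbolizer, write one query in the CODE "binary" offset format, and read back the reply. The binary path and the 0x1234 offset are made-up placeholders.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int to_child[2], from_child[2];      /* parent->child, child->parent */
        if (pipe(to_child) < 0 || pipe(from_child) < 0) {
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {                      /* child: become llvm-symbolizer */
            dup2(to_child[0], 0);            /* read queries on stdin   */
            dup2(from_child[1], 1);          /* write answers to stdout */
            close(to_child[0]);
            close(to_child[1]);
            close(from_child[0]);
            close(from_child[1]);
            execlp("llvm-symbolizer", "llvm-symbolizer", (char *)NULL);
            _exit(1);                        /* exec failed */
        }

        close(to_child[0]);
        close(from_child[1]);

        /* Hypothetical query: symbolize offset 0x1234 in ./target_asan. */
        const char *query = "CODE \"./target_asan\" 0x1234\n";
        write(to_child[1], query, strlen(query));
        close(to_child[1]);                  /* EOF tells the symbolizer we are done */

        char buf[4096];
        ssize_t n;
        while ((n = read(from_child[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* function, file:line, ... */
        return 0;
    }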

I tried running llvm-symbolizer manually, and it produced normal output without any problem.

At that point I was out of ideas, so before going to bed I asked for help on Twitter, hoping someone else had run into this.

Digging deeper

Following a reply from whsloef on Twitter, I printed the call stack of the blocked process, and it matched the conclusion from strace: the process was stuck in the read system call.

Printing the call stack

Then, following an old issue that Ningcong Chen pointed me to, I tried attaching gdb to the blocked process. (I considered profiling the process that was using 100% CPU to see where the time was going, but since the AddressSanitizer runtime is injected by clang I wasn't sure how meaningful that would be, so I skipped it.)

Attaching to the main process

Attaching to the main process showed it stuck in internal_read, presumably waiting for a reply that the child process never sent.

Attaching to the child process

Attaching to the child process showed it spinning in a for loop. Using the call stack as a guide, I downloaded the source code from GitHub and started analyzing the cause.

In the LLVM compiler-rt source, the relevant code is at compiler-rt/lib/sanitizer_common/sanitizer_posix_libcdep.cpp#L465. I simplified StartSubprocess to the following:

    pid_t StartSubprocess(const char *program, const char *const argv[],
                          const char *const envp[], fd_t stdin_fd,
                          fd_t stdout_fd, fd_t stderr_fd) {
        int pid = internal_fork();
        if (pid == 0) {
            for (int fd = sysconf(_SC_OPEN_MAX); fd > 2; fd--)
                internal_close(fd);
            internal_execve(program, const_cast<char **>(&argv[0]),
                            const_cast<char *const *>(envp));
            internal__exit(1);
        }
        return pid;
    }

This is the classic way to start a child process: fork first, close the unneeded file descriptors in the child, and finally launch the target program with execve.

But here LLVM takes the maximum number of open files from sysconf(_SC_OPEN_MAX) and loops over every descriptor up to that bound. When the limit is enormous, that means an enormous number of pointless close system calls, burning time and CPU: even at something like a hundred nanoseconds per no-op syscall, a billion close calls adds up to minutes of CPU time. That is exactly the apparent hang I was seeing; the child process was simply busy closing file descriptors that did not exist.

Running ulimit -n in the container showed a file descriptor limit of 1073741816, versus 1024 on the host. This difference is the key reason the problem could not be reproduced once I copied the program to the host.
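You can check what a process actually sees with a few lines of C. This trivial sketch (my own, not part of the original investigation) prints sysconf(_SC_OPEN_MAX), the value StartSubprocess uses as its loop bound, along with the RLIMIT_NOFILE soft and hard limits; inside the misconfigured container they all come back around a billion.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        /* sysconf(_SC_OPEN_MAX) reports the soft limit on Linux,
         * and it is the bound StartSubprocess loops up to. */
        printf("sysconf(_SC_OPEN_MAX) = %ld\n", sysconf(_SC_OPEN_MAX));
        printf("RLIMIT_NOFILE soft = %llu, hard = %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
    }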

Adding an open files limit when running the container, --ulimit nofile=1024:1024, solved the problem immediately.

It then dawned on me that lightning1141's reply on Twitter had been asking me to check whether the open files limit was too large; I had read it as asking whether it was large enough. I had always assumed that bigger was better. Too naive.

Reflection

But if the host's limit is 1024, why is the limit inside the Docker container 1073741816?

From experience I checked the following files and found that the open files setting was left at its default, with no explicit value:

  • /etc/security/limits.conf
  • /etc/systemd/system.conf
  • /etc/systemd/user.conf

Then I checked the Docker-related limits. Since Docker is managed by systemd, that means these files:

  • /usr/lib/systemd/system/docker.service
  • /usr/lib/systemd/system/containerd.service

Both service files specify LimitNOFILE=infinity, which lifts the open files limit: infinity resolves to the kernel's per-process ceiling, readable from /proc/sys/fs/nr_open. Here, cat /proc/sys/fs/nr_open prints 1073741816, whereas nr_open on my classmate's Ubuntu machine is 1048576.

Problems caused by subtle differences between distributions like this are painful to troubleshoot!

Solutions

Modify the containerd file descriptor limit

Modify /usr/lib/systemd/system/containerd.service:

    [Service]
    LimitNOFILE=1048576

There is no need to modify /usr/lib/systemd/system/docker.service.

Or add the limit --ulimit nofile=1048576:1048576 when starting the container:

docker run -it --ulimit nofile=1048576:1048576 ubuntu:18.04 /bin/bash

Modify the logic in LLVM

Alternatively, the LLVM source can be modified to replace the close loop with the close_range or closefrom system call:

  • close_range : added in Linux kernel 5.9, also available on BSD
  • closefrom : introduced in FreeBSD 8.0; on Linux it requires linking against libbsd

It's a pity that neither of these is defined by the POSIX specification, but I expect them to become mainstream over time.

I only changed the Linux code path, and it requires kernel 5.9 or later, roughly along the lines of the sketch below.
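Here is a rough sketch of what such a patch could look like (my illustration, not the actual LLVM change; it assumes a glibc new enough, 2.34 or later, to expose the close_range wrapper): try close_range first and fall back to the old per-descriptor loop if it fails.

    #define _GNU_SOURCE              /* for close_range() on glibc >= 2.34 */
    #include <unistd.h>

    /* Close every descriptor above stderr before exec'ing the child.
     * close_range() needs Linux 5.9+; if it fails, fall back to the
     * old one-close-per-possible-fd loop. */
    static void close_fds_above_stderr(void) {
        if (close_range(3, ~0U, 0) == 0)
            return;                  /* a single syscall, regardless of the limit */
        for (long fd = sysconf(_SC_OPEN_MAX); fd > 2; fd--)
            close((int)fd);          /* slow path: one syscall per possible fd */
    }

    int main(void) {
        close_fds_above_stderr();
        /* ...execve of the target program would follow here,
         * as in StartSubprocess... */
        return 0;
    }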

Follow up

I opened an issue in the corresponding repository on GitHub and am waiting for it to be addressed. Although my patched build works for me, LLVM has to maintain broad compatibility, so I didn't dare to open a PR: requiring Linux 5.9 or later is not an acceptable compatibility story. (It doesn't even compile in an Ubuntu 18.04 Docker image, where #define __NR_close_range 436 is missing from unistd.h.)

This suddenly reminded me of a question someone asked me before: when I run more than 500 threads, the proxy starts to fail.

Since the default per-process open files limit on many distributions is 1024, the answer is easy to guess: the proxy opens two file descriptors per connection, one inbound and one outbound, so with a limit of 1024 it cannot even sustain 500 concurrent connections. Raising the limit fixes it.
