Prometheus can’t collect data! The Prometheus client package is actually to blame


Until the new generation of eBPF-based observability tooling matures, we have adopted the industry-proven Prometheus + Grafana stack to collect node and application metrics. As is well known, this is an intrusive solution for the application: the client package that gathers the metrics and communicates with Prometheus must be embedded in the application itself.

Prometheus officially provides and maintains client packages for mainstream languages, including Go, Java, Python, Ruby, and Rust.

The Go client for Prometheus is not complicated to use; there are just two steps:

  • Register the metrics you want to collect with the Prometheus Registry;
  • Start an HTTP server and expose the metrics collection endpoint.
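The two steps above can be sketched as follows. This is a minimal example against the official `github.com/prometheus/client_golang` package; the metric name `myapp_online_connections` and port `:2112` are illustrative choices, not from the original article.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// An illustrative gauge tracking currently open client connections.
var onlineConns = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "myapp_online_connections",
	Help: "Number of currently open client connections.",
})

func main() {
	// Step 1: register the metric with the (default) registry.
	prometheus.MustRegister(onlineConns)

	// Step 2: expose the /metrics endpoint for the Prometheus server to pull.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":2112", nil)
}
```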

Prometheus uses a pull model to collect time-series metric data. The pull behavior is driven by the Prometheus server; for example, you can configure the interval at which Prometheus scrapes each collection point. Generally speaking, this stack is very mature: the effects are visible immediately after configuration and startup, and it is very stable. It had been running well for us ever since we adopted it, until we hit a problem during this week’s stress test: Prometheus couldn’t collect data!
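For reference, the pull interval mentioned above is set on the Prometheus server side in a scrape configuration roughly like the following; the job name, interval, and target address here are assumptions for illustration, not the article’s actual config.

```yaml
# prometheus.yml (illustrative fragment)
scrape_configs:
  - job_name: "myapp"
    scrape_interval: 15s   # how often Prometheus pulls from each target
    scrape_timeout: 10s    # a scrape that exceeds this is abandoned
    static_configs:
      - targets: ["10.0.0.1:2112"]
```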

At first the graphs degraded from continuous lines to “intermittent” points, as shown in the figure below:

Eventually, no data could be collected at all:

Prometheus had been running fine before, so why couldn’t it collect data now? The difference this time is that in our stress-test scenario each service node needs to establish more than one million connections, whereas before it was only on the order of 100,000.

Fortunately, we had deployed an online Continuous Profiling tool, which let us inspect resource usage during the stress test, as shown in the following figure:

The above is an alloc_objects flame graph. We can see that the Registry.Gather method of the Prometheus client accounts for 50% of the memory-allocation overhead, which is very abnormal. Following the Gather portion of the flame graph downward, we find that at the very bottom is actually readdir. None of the metrics our application registers should need readdir when being collected!

To get to the bottom of this, the only option was to dig into the Prometheus client source code!

We were using the Prometheus Go client’s default registry (defaultRegistry). As the source code shows, when defaultRegistry is initialized, two collectors are registered by default:

```go
// registry.go
func init() {
	MustRegister(NewProcessCollector(ProcessCollectorOpts{}))
	MustRegister(NewGoCollector())
}
```

We found that the first of these, processCollector, collects the following metrics:

```go
// process_collector.go
func (c *processCollector) Describe(ch chan<- *Desc) {
	ch <- c.cpuTotal
	ch <- c.openFDs
	ch <- c.maxFDs
	ch <- c.vsize
	ch <- c.maxVsize
	ch <- c.rss
	ch <- c.startTime
}
```

When collecting openFDs, processCollector traverses the fd directory under /proc/{pid}:

```go
// process_collector_other.go
func (c *processCollector) processCollect(ch chan<- Metric) {
	pid, err := c.pidFn()
	if err != nil {
		c.reportError(ch, nil, err)
		return
	}

	p, err := procfs.NewProc(pid)
	if err != nil {
		c.reportError(ch, nil, err)
		return
	}

	if stat, err := p.Stat(); err == nil {
		ch <- MustNewConstMetric(c.cpuTotal, CounterValue, stat.CPUTime())
		ch <- MustNewConstMetric(c.vsize, GaugeValue, float64(stat.VirtualMemory()))
		ch <- MustNewConstMetric(c.rss, GaugeValue, float64(stat.ResidentMemory()))
		if startTime, err := stat.StartTime(); err == nil {
			ch <- MustNewConstMetric(c.startTime, GaugeValue, startTime)
		} else {
			c.reportError(ch, c.startTime, err)
		}
	} else {
		c.reportError(ch, nil, err)
	}

	if fds, err := p.FileDescriptorsLen(); err == nil { // openFDs is obtained here
		ch <- MustNewConstMetric(c.openFDs, GaugeValue, float64(fds))
	} else {
		c.reportError(ch, c.openFDs, err)
	}

	if limits, err := p.Limits(); err == nil {
		ch <- MustNewConstMetric(c.maxFDs, GaugeValue, float64(limits.OpenFiles))
		ch <- MustNewConstMetric(c.maxVsize, GaugeValue, float64(limits.AddressSpace))
	} else {
		c.reportError(ch, nil, err)
	}
}
```

When collecting openFDs, processCollector calls the FileDescriptorsLen method, and in fileDescriptors, which FileDescriptorsLen calls in turn, we find the call to Readdirnames; see the following source snippet:

```go
// FileDescriptorsLen returns the number of currently open file descriptors of
// a process.
func (p Proc) FileDescriptorsLen() (int, error) {
	fds, err := p.fileDescriptors()
	if err != nil {
		return 0, err
	}
	return len(fds), nil
}

func (p Proc) fileDescriptors() ([]string, error) {
	d, err := os.Open(p.path("fd"))
	if err != nil {
		return nil, err
	}
	defer d.Close()

	names, err := d.Readdirnames(-1) // all entries in the fd directory are read here
	if err != nil {
		return nil, fmt.Errorf("could not read %q: %w", d.Name(), err)
	}
	return names, nil
}
```

Under normal circumstances, reading the /proc/{pid}/fd directory is no problem. But when our program holds 1,000,000+ connections, the fd directory contains 1,000,000+ entries, and traversing them one by one incurs a lot of overhead. This is why Prometheus could not finish collecting the data within the scrape timeout (usually 10 seconds).
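The cost growth is easy to reproduce without Prometheus at all. The stdlib-only sketch below creates n files in a temporary directory and times a single Readdirnames(-1) call over it, the same call the procfs package makes; the directory sizes chosen are illustrative, not measurements from the article.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// timeReaddir creates n empty files in a temp directory and measures how
// long one Readdirnames(-1) pass over that directory takes.
func timeReaddir(n int) (int, time.Duration, error) {
	dir, err := os.MkdirTemp("", "fds")
	if err != nil {
		return 0, 0, err
	}
	defer os.RemoveAll(dir)

	for i := 0; i < n; i++ {
		f, err := os.Create(filepath.Join(dir, fmt.Sprintf("%d", i)))
		if err != nil {
			return 0, 0, err
		}
		f.Close()
	}

	d, err := os.Open(dir)
	if err != nil {
		return 0, 0, err
	}
	defer d.Close()

	start := time.Now()
	names, err := d.Readdirnames(-1) // same call procfs uses on /proc/{pid}/fd
	if err != nil {
		return 0, 0, err
	}
	return len(names), time.Since(start), nil
}

func main() {
	for _, n := range []int{1000, 100000} {
		count, elapsed, err := timeReaddir(n)
		if err != nil {
			panic(err)
		}
		fmt.Printf("entries=%d elapsed=%s\n", count, elapsed)
	}
}
```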

So how to solve this problem?

The temporary workaround is to comment out the MustRegister(NewProcessCollector(ProcessCollectorOpts{})) line in the init function of registry.go! These process metrics are of little use to us anyway. The drawback, however, is that we then have to maintain our own fork of the Prometheus Go client package and wire it in with a go mod replace directive, which is inconvenient and also complicates upgrading the client package later.
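For context, the replace wiring mentioned above would look roughly like this; the module paths and versions are purely illustrative, since the article does not name the fork.

```go
// go.mod (illustrative module paths and versions)
module example.com/myapp

go 1.17

require github.com/prometheus/client_golang v1.11.0

// point the build at a private fork with the processCollector
// registration commented out of registry.go's init()
replace github.com/prometheus/client_golang => github.com/ourorg/client_golang v1.11.0-patched
```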

The permanent solution is to stop using the default Registry and instead create a fresh one with the NewRegistry function. This leaves the default collectors behind and lets us define exactly which metrics to register ourselves. When needed, we can still add a ProcessCollector explicitly; that depends on the needs of each Go program.
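A minimal sketch of this approach, using the official client_golang API; the counter name and port are assumptions for illustration.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A fresh registry: unlike the default one, it registers
	// no processCollector (and no goCollector) on its own.
	reg := prometheus.NewRegistry()

	reqs := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "myapp_requests_total",
		Help: "Total number of requests handled.",
	})
	reg.MustRegister(reqs)

	// If a program does want runtime metrics, collectors can be
	// added back explicitly here, leaving out only the process
	// collector that scans /proc/{pid}/fd.

	// Serve metrics from our registry instead of the default one.
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	http.ListenAndServe(":2112", nil)
}
```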

After applying this fix, those familiar continuous curves reappeared before our eyes!


© 2022, bigwhite . All rights reserved.
