Several changes worth noting in Go 1.19


In 2015 the Go team adopted a fixed release cadence for major Go releases: two per year, with release windows in February and August. Go 1.5, the first self-hosting (bootstrapped) release, was also the first under this cadence. The team usually ships in the middle of the window, but there have been exceptions in recent years; Go 1.18, for example, which carried the responsibility of landing generics, slipped by about a month.

Just when we wondered whether Go 1.19 might slip too, the Go core team officially released it on August 2, 2022 (US time) — not only within the release window but earlier than usual. Why? Quite simply, Go 1.19 is a "small" release, small relative to the "big" Go 1.18. Its development cycle was only about two months (March to early May), so the team deliberately limited the number of features added.

Even so, a few changes in Go 1.19 deserve our attention, and I'll walk through them in this article.

1. Overview

In June (when Go 1.19 was already in feature freeze), I wrote "Go 1.19 New Features Preview", which briefly introduced the features that were essentially settled at that time. Looking back now, the official release differs little from that preview.

  • Generics

Since generics only just landed in Go 1.18, the Go 1.18 implementation is not the full version. Go 1.19 does not rush to implement the features from the generics design document that are still missing; instead it focuses on fixing the implementation problems found in Go 1.18, consolidating the foundation of Go generics and preparing for a fuller implementation in Go 1.20 and later (for details, see "Go 1.19 New Features Preview").

  • Other syntax changes

None, none, none! (Important things are worth saying three times.) Go 1.19 makes no changes to the language syntax.

Go 1.19 thus maintains the Go 1 compatibility promise.

  • Official support for the Loongson architecture on Linux (GOOS=linux, GOARCH=loong64)

This one deserves a mention because the change was contributed by the Loongson team in China. Note, however, that the minimum Linux kernel version supported is 5.19, which means Go cannot be used on Loongson hardware running older kernels.

  • go env now supports CGO_CFLAGS, CGO_CPPFLAGS, CGO_CXXFLAGS, CGO_FFLAGS, CGO_LDFLAGS and GOGCCFLAGS

When you want to set global rather than per-package cgo build options, you can now do so through these CGO-related environment variables, avoiding the need to set them with cgo directives in every Go source file that uses cgo.
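For example, assuming the Go toolchain is installed, a global cgo flag can be set once with `go env -w` instead of repeating a `#cgo` directive in every file (the flag value below is purely illustrative):

```shell
# Set a global cgo C-compiler flag for all future builds on this machine
go env -w CGO_CFLAGS="-g -O2 -DMY_DEBUG=1"

# Check the effective value
go env CGO_CFLAGS

# Remove the override and fall back to the default
go env -u CGO_CFLAGS
```

`go env -w` persists the setting in Go's per-user environment file, so it applies to all modules built on the machine.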

The current default values of these CGO-related go environment variables are as follows (taking the defaults on my macOS machine as an example):

 CGO_CFLAGS="-g -O2"
 CGO_CPPFLAGS=""
 CGO_CXXFLAGS="-g -O2"
 CGO_FFLAGS="-g -O2"
 CGO_LDFLAGS="-g -O2"
 GOGCCFLAGS="-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/cz/sbj5kg2d3m3c6j650z0qfm800000gn/T/go-build1672298076=/tmp/go-build -gno-record-gcc-switches -fno-common"

I won't go into the other, more specific changes here; see "Go 1.19 New Features Preview" for those.

Below we focus on two important changes in Go 1.19: the revised Go memory model documentation and the introduction of a soft memory limit in the Go runtime.

2. Revise the Go memory model documentation

I remember that when I first learned Go, the most difficult of all the official Go documents was the Go memory model document (shown in the figure below). I believe many gophers felt a similar headache the first time they read it ^_^.

Figure: Old version of the Go memory model documentation

Note: to view the old version of the Go memory model documentation: godoc -http=:6060 -goroot /Users/tonybai/.bin/go1.18.3. godoc is no longer distributed with the Go installation package and must be installed separately: go install golang.org/x/tools/cmd/godoc@latest.

So what did the old memory model document actually say, and why revise it? Once we answer these two questions, we'll roughly understand what the new version means. First, let's look at what a programming language's memory model is.

1. What is the memory model?

When it comes to memory models, we should start with the 1979 paper "How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs" by Leslie Lamport, the famous computer scientist and winner of the 2013 Turing Award.

In that paper, Lamport gives the condition for concurrent programs to run correctly on a multiprocessor computer with shared memory: the multiprocessor must be sequentially consistent.

The paper notes that a high-speed processor does not necessarily execute instructions in the order specified by the program (program order). A processor is said to be sequential if the result of its (possibly out-of-order) execution is the same as the result of executing in program order.

For a shared-memory multiprocessor, sequential consistency — the condition for guaranteeing correct operation of concurrent programs — holds only if:

  • the result of any execution is the same as if the operations of all processors were executed in some sequential order; and
  • within that sequential order, the operations of each individual processor appear in the order specified by its program (program order).

Sequential consistency is the canonical shared-memory multiprocessor memory model: it guarantees that all memory accesses happen atomically and in program order. The following schematic of an abstract sequentially consistent shared-memory machine is from "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models":

According to sequential consistency, the abstract machine in the above diagram has the following characteristics:

  • There is no local reordering: each hardware thread executes instructions in the order specified by the program, completing each instruction (including any reads or writes to shared memory) before starting the next.
  • Each write instruction is simultaneously visible to all threads, including the thread doing the write.

From a programmer's perspective, a sequentially consistent memory model could not be more ideal: all reads and writes go straight to memory, there is no cache, and a value written to memory by one processor (hardware thread) can be observed by all the others. With sequential consistency (SC) provided by the hardware, "what you write is what you get".

But does such a machine really exist? No — at least not among mass-produced machines. Why? Because sequential consistency hinders hardware and software performance optimization. A common machine model for real-world shared-memory multiprocessors is the Total Store Ordering (TSO) model (picture from "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models"):

In this machine, all processors are still connected to a single shared memory, but each processor's memory writes go first into that processor's write buffer queue rather than directly to shared memory, so the processor does not block waiting for the write to complete. A read on a processor consults that processor's own write buffer first (but not the write buffers of other processors). The write buffer greatly speeds up a processor's writes.

But precisely because of the write buffer, the TSO model cannot satisfy sequential consistency. In particular, the property that "each write is simultaneously visible to all threads (including the writing thread)" no longer holds: data sitting in a local write buffer is visible only to its own processor until it is actually flushed to shared memory, and invisible to other processors (hardware threads).

By Lamport's theory, programmers cannot write concurrent programs that run correctly on multiprocessors that do not satisfy SC. So what do we do? Processors provide synchronization instructions: to a developer who uses them correctly, a non-SC machine behaves like an SC machine. But none of this is automatic or transparent. Developers must know the synchronization instructions and apply them correctly wherever data races are possible, which greatly increases their mental burden.

Developers usually do not program the hardware directly, so high-level programming languages wrap the hardware's synchronization instructions and expose them to developers as the language's synchronization primitives. Which hardware instructions a language uses, which primitives it provides, how to apply those primitives, examples of misuse — all of this needs to be explained to the language's users, and all of it belongs in the language's memory model documentation.

The memory model of today's mainstream programming languages is essentially the sequential consistency (SC) model: the language presents developers with an ideal SC machine (even though the real machine is not SC), and programs are built against that model. But as noted above, to write correct concurrent programs developers must still understand the synchronization primitives the language provides and their semantics. As long as programmers follow the synchronization requirements of concurrent programs and use these primitives correctly, their concurrent programs will behave as if running on a sequentially consistent machine, even on non-SC hardware.

Now that we know what a programming language memory model is for, let's look at what the old Go memory model documentation actually says.

2. Go Memory Model Documentation

Following the discussion above, the Go memory model documentation should describe the conditions for writing a correct concurrent program in Go.

More specifically, as the old document says at the beginning: the Go memory model specifies the conditions under which a read of a variable in one goroutine is guaranteed to observe a value produced by a write to the same variable in a different goroutine.

Next, based on the conventional happens-before definition, the document lists the synchronization operations Go provides and their semantics, including:

  • If package p imports package q, the completion of q's init functions happens before the start of any of p's functions.
  • The start of the function main.main happens after all init functions have finished.
  • The go statement that starts a new goroutine happens before the goroutine's execution begins.
  • A send on a channel happens before the corresponding receive from that channel completes.
  • The closing of a channel happens before a receive that returns a zero value because the channel is closed.
  • A receive from an unbuffered channel happens before the send on that channel completes.
  • The kth receive on a channel with capacity C happens before the k+Cth send on that channel completes.
  • For any sync.Mutex or sync.RWMutex variable l and n < m, call n of l.Unlock() happens before call m of l.Lock() returns.
  • The completion of a single call of f() from once.Do(f) happens before any call of once.Do(f) returns.
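
To make these rules concrete, here is a minimal sketch (my own example, not taken from the memory model document) that relies on "a send on a channel happens before the corresponding receive completes" to safely publish a write from one goroutine to another:

```go
package main

import "fmt"

// publish writes a value in a new goroutine and uses an unbuffered
// channel to guarantee the caller's goroutine observes the write.
func publish() string {
	var msg string
	done := make(chan struct{})
	go func() {
		msg = "hello" // this write happens before the send below
		done <- struct{}{}
	}()
	<-done     // the send happens before this receive completes
	return msg // so this read is guaranteed to observe "hello"
}

func main() {
	fmt.Println(publish())
}
```

Without the channel (or another synchronization primitive), the read of msg in the main goroutine would be a data race with undefined visibility.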

The memory model documentation then gives some examples of incorrect synchronization.

So what exactly is updated in the new memory model documentation? Let’s continue reading.

3. What are the changes in the revised memory model documentation

Figure: Revised Go memory model documentation

Russ Cox, who led the revision of the memory model documentation, first added a description of Go's overall approach to its memory model.

Go's overall approach sits between C/C++ and Java/JavaScript. It neither defines a program with a data race as illegal, as C/C++ does — leaving the compiler to treat it as undefined behavior, meaning anything can happen at runtime — nor does it try, as Java/JavaScript do, to fully specify the semantics of a data race so as to limit its impact and make programs more reliable.

For some data races, such as concurrent reads and writes of a map by multiple goroutines without synchronization, Go prints a race report and terminates the program. For other data race scenarios, Go gives explicit semantics, which makes programs more reliable and easier to debug.

Second, the new document adds descriptions of APIs added to the sync package over the years, such as Mutex.TryLock and RWMutex.TryRLock. For sync.Cond, Map, Pool, WaitGroup and others, the document does not describe them one by one but recommends reading their API documentation.

The old memory model document said nothing about the sync/atomic package; the new version adds a description of the atomic APIs and of runtime.SetFinalizer.

Finally, in addition to the examples of incorrect synchronization, the documentation adds a description of examples of incorrect compilation.

As a side note: Go 1.19 adds several new atomic types to the sync/atomic package: Bool, Int32, Int64, Uint32, Uint64, Uintptr and Pointer. These types make the package easier for developers to use. For example, here is a comparison of Go 1.18 and Go 1.19 code operating on a uint64 atomic variable:

Compare the two approaches:

 // Go 1.18
 var i uint64
 atomic.AddUint64(&i, 1)
 _ = atomic.LoadUint64(&i)

 // Go 1.19
 var i atomic.Uint64 // zero value is 0
 i.Store(17)         // an initial value can also be set via Store
 i.Add(1)
 _ = i.Load()

The new atomic.Pointer spares developers the unsafe.Pointer conversions previously needed when using atomic pointers. atomic.Pointer is also a generic type — if I remember correctly, the first generic standard-library type introduced since Go 1.18 added the predefined constraint comparable:

 // $GOROOT/src/sync/atomic/type.go

 // A Pointer is an atomic pointer of type *T. The zero value is a nil *T.
 type Pointer[T any] struct {
 	_ noCopy
 	v unsafe.Pointer
 }

 // Load atomically loads and returns the value stored in x.
 func (x *Pointer[T]) Load() *T { return (*T)(LoadPointer(&x.v)) }

 // Store atomically stores val into x.
 func (x *Pointer[T]) Store(val *T) { StorePointer(&x.v, unsafe.Pointer(val)) }

 // Swap atomically stores new into x and returns the previous value.
 func (x *Pointer[T]) Swap(new *T) (old *T) { return (*T)(SwapPointer(&x.v, unsafe.Pointer(new))) }

 // CompareAndSwap executes the compare-and-swap operation for x.
 func (x *Pointer[T]) CompareAndSwap(old, new *T) (swapped bool) {
 	return CompareAndSwapPointer(&x.v, unsafe.Pointer(old), unsafe.Pointer(new))
 }

In addition, the new Int64 and Uint64 types in the atomic package have another property: Go guarantees that their address is aligned to 8 bytes (i.e., divisible by 8), even on 32-bit platforms — something plain int64 and uint64 variables still cannot guarantee there.

go101 shared a tip on Twitter based on this: using the atomic.Int64/Uint64 types added in Go 1.19, we can guarantee that a field in a struct is 8-byte aligned, i.e., that the field's address is divisible by 8:

 import "sync/atomic"

 type T struct {
 	_ [0]atomic.Int64
 	x uint64 // guaranteed to be 8-byte aligned
 }

Why use a zero-length array here instead of `_ atomic.Int64`? Because a zero-length array occupies no space in Go: try printing the size of the struct T above and you'll see it is still 8.

3. Introduce Soft memory limit

1. The only GC tuning option: GOGC

Go's GC has seen no major changes or optimizations in recent major releases. Compared with other garbage-collected languages, Go's GC is an odd one: before Go 1.19, developers had exactly one tuning parameter, GOGC (also adjustable via runtime/debug.SetGCPercent).

The default value of GOGC is 100. By adjusting it we change when the next GC is triggered. The heap size that triggers the next GC is computed as follows:

 // Before Go 1.18:
 target heap size = (1 + GOGC/100) * live heap
 // live heap is the total size of live objects on the heap after the last GC mark phase

 // Go 1.18 and later:
 target heap size = live heap + (live heap + GC roots) * GOGC / 100

Note: since Go 1.18, GC roots (including goroutine stack space and pointer-containing global variables) are included in the target heap size calculation.

Taking a pre-1.18 version as an example: with GOGC=100 (the default), if the live heap after a GC is 10M, the target heap size for the next GC is 20M; that is, between two GCs the application can allocate 10M of new heap objects.

In effect, GOGC controls how often the GC runs. With a small GOGC value, GC runs more frequently and a larger share of CPU goes to GC work; with a large GOGC value, GC runs less often and less CPU goes to GC, but the application bears the risk of memory allocation approaching the resource limit.

The problem for developers is that the right GOGC value is hard to choose, so the one and only tuning knob ends up as a decoration.

Meanwhile, the Go runtime pays no attention to resource limits: it keeps allocating memory as the application demands, asking the OS for more when its own memory pools run low, until memory is exhausted (or the limit the platform assigned to the application is reached) and the process is OOM-killed!

Why can a Go application, which has a GC, still exhaust system memory and get OOM-killed? Read on.

2. Pacer’s problem

The formula above for the target heap size is implemented inside the Go runtime by what is called the pacer. Whatever we translate the name to, the pacer's job is to control the rhythm at which GC is triggered.

However, the pacer's current algorithm cannot guarantee that your application will never be OOM-killed. Consider the following example (see the figure below):

In this example:

  • At first, the live heap is stable: newly allocated heap objects and swept heap objects cancel out, so net heap growth is zero.
  • At (1) the target heap jumps from h/2 to h. The reason is clearly that there are now more live heap objects, all in use, which GC could not reclaim even if it ran. The target heap (h) is still below the hard memory limit.
  • The program continues. At (2) the target heap jumps again, from h to 2h, as the live heap grows and stabilizes at h. The target heap (2h) now exceeds the hard memory limit.
  • Execution continues. At (3) the actual Go heap memory (including not-yet-swept memory) exceeds the hard memory limit, but because the target heap (2h) has not yet been reached, no GC is triggered, and the application is OOM-killed.

Note that in this example the application did not actually need that much memory — had a GC run in time, the live heap would still be at the level shown at (3) — but the pacer algorithm failed to trigger the GC in time.

So how do we avoid being OOM-killed as much as possible? Let's look at two "folk remedies" from the Go community.

3. GC tuning scheme of the Go community

The two "remedies" are the memory ballast introduced by the live-streaming company Twitch, and the automatic GC dynamic tuning scheme adopted by large companies such as Uber. Neither is only about avoiding OOM; both also optimize GC behavior and improve program efficiency.

Let's briefly introduce each, starting with Twitch's memory ballast. Twitch's Go service ran on VMs with 64G of physical memory. Operations staff observed that the service consumed only about 400M of physical memory, yet the Go GC ran very frequently, lengthening the service's response times. Twitch's engineers wanted to make fuller use of memory and reduce GC frequency, thereby reducing service latency.

Their trick: in the service's main function, allocate a huge slice with a capacity of 10G at startup, and make sure it is not released by the GC before the program exits:

 func main() {
 	// Create a large heap allocation of 10 GiB
 	ballast := make([]byte, 10<<30)

 	// Application execution continues
 	// ...

 	runtime.KeepAlive(ballast)
 }

Because this slice is so large, it is allocated on the heap and tracked by the runtime, yet it costs the application almost no physical memory — thanks to the OS's lazy accounting of process memory: physical pages are only assigned when memory is first read or written (via a page fault). In a tool like top, the 10G shows up only in VIRT/VSZ (virtual memory), not in RES/RSS (resident memory).

By the pacer formula above, the target heap size for the next GC is now at least 20G, so no GC is triggered until the service's heap reaches 20G, and the CPU stays free for business logic. This matches Twitch's measurements (GC runs dropped by 99%).

When the heap does reach 20G, most of it is garbage (the service needs only ~400M), so a large number of heap objects are reclaimed and the live heap falls back to ~400M. But when the target heap is recomputed, the ballast keeps it at the 20G level at least. GC runs far less often, worker goroutines spend far less time conscripted into GC work, CPU utilization goes to business logic, and service latency drops.
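The arithmetic behind the ballast trick can be sketched as follows, using the pre-Go 1.18 pacer formula (my own illustration; nextTarget is a hypothetical helper, not a runtime API):

```go
package main

import "fmt"

// nextTarget applies the pre-Go 1.18 pacer formula:
// target = live heap * (1 + GOGC/100).
func nextTarget(liveHeap, gogc uint64) uint64 {
	return liveHeap * (100 + gogc) / 100
}

func main() {
	const (
		appLive = uint64(400) << 20 // ~400 MiB of real live objects
		ballast = uint64(10) << 30  // the 10 GiB ballast slice
	)
	// Without the ballast: GC triggers after only ~400 MiB of new allocation.
	fmt.Println(nextTarget(appLive, 100)>>20, "MiB")
	// With the ballast counted as live heap: no GC until ~20.8 GiB.
	fmt.Println(nextTarget(appLive+ballast, 100)>>30, "GiB")
}
```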

Note: "conscripted into GC work" refers to GC assists: a worker goroutine allocating memory via mallocgc can be drafted by the runtime to pause its own work and help the GC mark live heap objects.

However, this solution presupposes a precise understanding of your Go service's memory consumption (both busy and idle), so that you can pick a reasonable ballast value for the available hardware resources.

According to the Soft memory limit proposal , the disadvantages of this scheme are as follows:

  • It is not portable; it reportedly does not work on Windows, for example, where the ballast shows up directly as physical memory usage of the application;
  • There is no guarantee that it will continue to work as the Go runtime evolves (eg: once the pacer algorithm changes dramatically);
  • Developers need to perform complex calculations and estimate runtime memory overhead to choose a suitable ballast size.

Next, let’s take a look at the automatic GC dynamic tuning scheme.

Last December, Uber described on its engineering blog the semi-automatic Go GC tuning scheme it uses internally, which Uber says has saved about 70K CPU cores of computing power. The principle is still the pacer formula: instead of keeping GOGC static for the lifetime of the service, the scheme recomputes and sets GOGC dynamically at each GC, based on the container's memory limit and the current live heap size. This protects against OOM kills while maximizing memory utilization and reducing the share of CPU spent on GC.

Obviously, this scheme is more complex, and an expert team is needed to maintain its parameter tuning and implementation.

4. Introduce Soft memory limit

In fact, the Go GC pacer still has many problems. Michael Knyszek of the Go core team opened a pacer problem meta-issue summarizing them, but problems can only be solved one at a time. In Go 1.19, Michael Knyszek delivered his soft memory limit solution.

This solution adds a SetMemoryLimit function to the runtime/debug package, along with the GOMEMLIMIT environment variable; either one can set the memory limit of a Go application.

Once a memory limit is set, a round of GC is triggered when the Go heap size reaches the limit minus non-heap memory. Even if you have turned GC off (GOGC=off), GC will still be triggered.

From this principle we can see that the most direct problem this feature solves is OOM kills! In the pacer example above, if we set a soft memory limit below the hard memory limit, the OOM kill at point (3) does not happen: before memory usage reaches the soft memory limit, a GC is triggered and useless heap memory is reclaimed.

But note: the soft memory limit does not guarantee that OOM kills never happen, which is easy to understand. If the live heap itself reaches the limit, your application genuinely lacks memory and it is time to add resources — a problem no GC can solve.

If a Go application's live heap exceeds the soft memory limit without being killed, GC will be triggered continuously. To keep the business running in this situation, the soft memory limit design caps GC at 50% of CPU time, ensuring that business processing can still get CPU resources.

For applications suffering from overly frequent GC, the soft-memory-limit recipe is to turn GC off (GOGC=off), so that GC triggers only when the heap reaches the soft memory limit, improving CPU utilization. There is one case, however, which Go's official GC guide advises against: when your Go program shares limited memory with other programs. In that case, keep GC on and just set the memory limit to a small, reasonable value, as it may help suppress undesirable transient behavior.

So what is a reasonable soft memory limit value? When a Go service has a container's resources to itself, a good rule of thumb is to leave an extra 5-10% of headroom to account for sources of memory the Go runtime doesn't know about. The 70% of the resource limit that Uber uses, per its blog, is also a good empirical value.

4. Summary

Given its compressed development cycle, Go 1.19 may not bring many surprises, but the features it does have are very practical. The soft memory limit above, for one, can solve real problems if used well.

Go 1.20, back on a normal development cycle, is already under active development. Judging from the features and improvements planned in the current milestones, Go generics will be further completed and move toward the full version — worth looking forward to!

5. References

  • Russ Cox Memory Model Series – https://ift.tt/UWbunet
  • Discussion on the Go memory model – https://ift.tt/dHZIO71
  • How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs – https://ift.tt/kxiOjyq
  • A Tutorial Introduction to the ARM and POWER Relaxed Memory Models – https://ift.tt/Te2FDdx
  • Weak Ordering – A New Definition – https://ift.tt/fMFOYlG
  • Foundations of the C++ Concurrency Memory Model – https://ift.tt/kfhqIUp
  • Go GC pacer principle – https://ift.tt/pebdT8z



© 2022, bigwhite . All rights reserved.

This article is reprinted from https://tonybai.com/2022/08/22/some-changes-in-go-1-19/
This site is for inclusion only, and the copyright belongs to the original author.
