Memory in the Go Language

Original link: https://blog.gotocoding.com/archives/1775?utm_source=rss&utm_medium=rss&utm_campaign=go%25e8%25af%25ad%25e8%25a8%2580%25e4%25b9%258b%25e5%2586%2585%25e5%25ad%2598%25e7%25af%2587

TL;DR: This article does not discuss tri-color garbage collection, read/write barriers, or memory allocation strategies. It merely abstracts a simple boundary from a memory perspective, so that when writing Go I know where the language's limits are and can reuse my previous C/C++ experience.

In the last article, I raised a question: when two Slices refer to different parts of the same Array, how does the GC ensure during Mark that the referenced Array will not be released?
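As a refresher, the situation in question looks roughly like this (a minimal illustration of my own, not code from that article):

    package main

    import "fmt"

    func main() {
        arr := [8]int{0, 1, 2, 3, 4, 5, 6, 7}
        s1 := arr[0:2] // refers to the front of the backing array
        s2 := arr[5:8] // refers to a different part of the same array
        fmt.Println(s1, s2)
        // As long as either slice is reachable, the whole array must survive GC.
    }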

Here, I fell into a big misunderstanding.

According to my experience with Lua and C#, when the GC marks an object it is actually marking a piece of memory; once that memory is marked, it will not be released. From the malloc/free functions it is also easy to see that releasing a memory block requires the first address of that block.

This is why many languages with GC do not allow pointer arithmetic.

The Go language books I read at the time said that although the Go language has pointers, it does not allow pointer arithmetic.

Empiricism led me to assume that mainstream GC systems share similar design ideas and differ only in their algorithms.

So I was under the illusion that a Go pointer and a C# reference are actually the same thing.

However, this illusion cannot explain the Slice GC problem from the previous article.

In fact, due to subconscious limitations, I had even overlooked a more general situation.

Let's look at a piece of code (it exists only to demonstrate the problem; it does nothing meaningful):

    func foo() *int {
        a := make([]int, 3)
        return &a[1]
    }

Yes, I had even gotten this wrong: Go's pointers really are pointers.

Go can’t do pointer arithmetic, which means we can’t add or subtract any offset from a pointer.

But a Go pointer can point to any legal memory address.

Take the code above as an example: when a function bar calls foo and holds on to that int pointer, the Array that a pointed to will not be recycled, even after the Slice variable a is destroyed.
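Such a caller might look like this (bar and the surrounding program are my own sketch, not code from the article):

    package main

    // foo is the function from the snippet above.
    func foo() *int {
        a := make([]int, 3)
        return &a[1]
    }

    // bar holds on to an interior pointer returned by foo.
    func bar() {
        p := foo() // p points into the middle of the backing array
        // The slice header inside foo is gone, but the array it referred to
        // must stay alive for as long as p is reachable.
        println(*p)
    }

    func main() {
        bar()
    }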

Then my previous understanding of Go’s GC must be wrong.

After quite a bit of back and forth, I finally found a clue in Section 7.1, "The Implementation Principle of the Memory Allocator", of "Go Language Design and Implementation".

The implementation of Go’s memory allocator is different before and after version 1.11. “Go Language Design and Implementation” spends a lot of time describing the implementation details after version 1.11.

The upper-layer abstraction is the same in both versions, but the post-1.11 version is a bit more complicated. The "linear allocator" version used before 1.11 helped me build an intuitive picture more easily.

So I found another article, which introduces the design of the "linear allocator" in detail.

From this article we can draw several important conclusions:

  1. The smallest unit of memory allocation is the Page.
  2. Allocated memory blocks are managed by a structure called mspan, and each mspan must hold an integral number of Pages.
  3. Every Page has a pointer to its corresponding mspan structure; when one mspan holds multiple Pages, all of those Pages point to the same mspan structure.

The above conclusion is enough to explain all the previous problems.

Since every Page is the same size, the Page index can be computed from a memory address in O(1) time.

Then, from the Page index, the corresponding mspan pointer can also be obtained in O(1) time.

Within an mspan memory block, all objects occupy the same amount of memory, and spanClass indicates that object size (except when spanClass == 0).

In this way, using the object-size information obtained from the mspan, the first address of the object that a pointer points into can be calculated.
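To make the two O(1) steps concrete, here is a toy sketch of the lookup (my own code; the constants, names, and data structures are illustrative only, not the real runtime's):

    package main

    import "fmt"

    const pageSize = 8192 // assumption: 8 KiB pages, as in the Go runtime

    type mspan struct {
        startAddr uintptr // first address covered by this span
        elemSize  uintptr // every object in this span has this size
    }

    // pageToSpan plays the role of the page-index -> mspan table.
    var pageToSpan = map[uintptr]*mspan{}

    // objectBase recovers the first address of the object containing addr.
    func objectBase(addr uintptr) uintptr {
        pageIndex := addr / pageSize    // O(1): which page the address falls in
        span := pageToSpan[pageIndex]   // O(1): page index -> mspan
        offset := addr - span.startAddr // distance into the span
        index := offset / span.elemSize // which object slot the address hits
        return span.startAddr + index*span.elemSize
    }

    func main() {
        // One span holding two pages, filled with 24-byte objects.
        span := &mspan{startAddr: 5 * pageSize, elemSize: 24}
        pageToSpan[5] = span
        pageToSpan[6] = span

        // An interior pointer (like &a[1] earlier) still maps back to the
        // first address of the object it points into.
        interior := span.startAddr + 2*24 + 8
        fmt.Println(objectBase(interior) == span.startAddr+2*24) // true
    }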

I was stunned when I figured this out.

By integrating the memory allocator with the GC system, Go manages to provide almost 90% of a pointer's capabilities. At this point, I kind of understand the phrase "the C language of the cloud era".


In the last article, I left behind a showy piece of interface-related code, as follows:

    package main

    import "fmt"

    type FooBar interface {
        foo()
        bar()
    }

    type st1 struct {
        FooBar
        n int
    }

    type st2 struct {
        FooBar
        m int
    }

    func (s *st1) foo() {
        fmt.Println("st1.foo", s.n)
    }

    func (s *st1) bar() {
        fmt.Println("st1.bar", s.n)
    }

    func (s *st2) foo() {
        fmt.Println("st2.foo", s.m)
    }

    func test(fb FooBar) {
        fb.foo()
        fb.bar()
    }

    func main() {
        v1 := &st1{n: 1}
        v3 := &st2{
            m:      3,
            FooBar: v1,
        }
        test(v1)
        test(v3)
    }

At the time, blocked by Plan 9 assembly, I didn't really understand the underlying implementation and mechanism, let alone the boundaries of this usage.

Recently I finally arrived at a self-consistent speculation. Yes, everything below is speculation, only partially corroborated.

First, I tried to write C code equivalent to the Go code above.

    // a.c
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>

    typedef void (*foo_t)(void *);
    typedef void (*bar_t)(void *);

    struct FooBarFn {
        foo_t foo;
        bar_t bar;
    };

    struct FooBar {
        void *data;
        struct FooBarFn *itab;
    };

    struct st1 {
        struct FooBar _foobar;
        int n;
    };

    struct st2 {
        struct FooBar _foobar;
        int m;
    };

    void st1_foo(struct st1 *s) { printf("st1.foo:%d\n", s->n); }
    void st1_bar(struct st1 *s) { printf("st1.bar:%d\n", s->n); }
    void st2_foo(struct st2 *s) { printf("st2.foo:%d\n", s->m); }
    void st2_bar(struct st2 *s) { s->_foobar.itab->bar(s->_foobar.data); }

    struct FooBar st1_interface(struct st1 *s) {
        struct FooBar i;
        i.data = (void *)s;
        i.itab = malloc(sizeof(struct FooBarFn));
        i.itab->foo = (foo_t)st1_foo;
        i.itab->bar = (bar_t)st1_bar;
        return i;
    }

    struct FooBar st2_interface(struct st2 *s) {
        struct FooBar i;
        i.data = (void *)s;
        i.itab = malloc(sizeof(struct FooBarFn));
        i.itab->foo = (foo_t)st2_foo;
        i.itab->bar = (bar_t)st2_bar;
        return i;
    }

    void test(struct FooBar bar) {
        bar.itab->foo(bar.data);
        bar.itab->bar(bar.data);
    }

    int main() {
        struct FooBar i1, i2;
        struct st1 *v1 = malloc(sizeof(*v1));
        struct st2 *v3 = malloc(sizeof(*v3));
        memset(v1, 0, sizeof(*v1));
        memset(v3, 0, sizeof(*v3));
        v1->n = 1;
        v3->m = 3;
        v3->_foobar = st1_interface(v1);
        i1 = st1_interface(v1);
        i2 = st2_interface(v3);
        test(i1);
        test(i2);
        return 0;
    }
    // gcc -o a a.c

The above code compiles, and it is very close to the interface implementations described in various Go books. I can almost believe that Go is implemented this way.

What this code is mainly meant to explain is "struct/interface embedding": what the compiler does and what its rules are, so that I can make better use of those rules.

Go's whole embedding mechanism is actually pretty cool, but it is also hard to understand.

But if you analyze it along the lines of the C code above, the whole rule is actually very simple: just two pieces of syntactic sugar.

Let’s just look at the memory layout of struct first.

All of us in the C era wrote code like this:

    struct A {
        int f1;
        int f2;
    };

    struct B {
        struct A a;
        int f3;
    };

    void foo() {
        struct B b;
        b.a.f1 = 3;
        b.a.f2 = 4;
        b.f3 = 5;
    }

The corresponding Go language is as follows:

    type A struct {
        f1 int
        f2 int
    }

    type B struct {
        A
        f3 int
    }

    type D struct {
        a  A
        f3 int
    }

    func foo() {
        b := new(B)
        b.f1 = 3
        b.f2 = 4
        b.f3 = 5
        d := new(D)
        d.a.f1 = 3
        d.a.f2 = 4
        d.f3 = 5
    }

It can be seen that the field access of the embedded structure is actually a syntactic sugar.

During compilation, the Go compiler converts structure B into something like structure D and then compiles it (Note: this happens at the source-code level; since this is value embedding, the address offsets can be computed directly at compile time, so at the assembly level there is no difference between the two forms whether optimized or not. With pointer embedding, the effect is different).

Let us prove this conclusion:

    package main

    import (
        "fmt"
        "unsafe"
    )

    type A struct {
        f1 int8
        f2 int8
    }

    type B struct {
        A
        f3 int8
    }

    func (*A) foo() {}

    func main() {
        var a A
        var b B
        fmt.Println(unsafe.Sizeof(a))
        fmt.Println(unsafe.Sizeof(b))
    }

The above code shows that there is no magic in the struct layout: the size of structure B is the size of structure A plus the size of an int8 (with the standard gc toolchain this prints 2 and 3).

Similarly, type B struct {*A} and type B struct {a *A} are no different.
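A quick way to convince yourself (my own snippet; BPtrEmbed and BPtrField are made-up names for illustration):

    package main

    import (
        "fmt"
        "unsafe"
    )

    type A struct {
        f1 int8
        f2 int8
    }

    type BPtrEmbed struct {
        *A
        f3 int8
    }

    type BPtrField struct {
        a  *A
        f3 int8
    }

    func main() {
        // Both layouts are a pointer followed by an int8 (plus padding),
        // so the two sizes should match (typically 16 on a 64-bit platform).
        fmt.Println(unsafe.Sizeof(BPtrEmbed{}))
        fmt.Println(unsafe.Sizeof(BPtrField{}))
    }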

Looking at methods next: when A is embedded in B, B has all of A's methods, such as the foo method.

In fact, this too is a very sweet piece of syntactic sugar, so sweet that it feels like magic.

When B embeds A, the compiler generates for B a set of wrappers for all of A's methods, so that B gets its own foo method.

The body of B's foo actually does only one thing: it calls A's foo again.

The reason for doing it this way is that when calling A.foo, you need to pass in the memory address of the A object (the embedded A, not the whole B).
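If this speculation is right, the generated wrapper is roughly equivalent to the following source form (my own sketch; wrapperFoo is a made-up name, and the real wrapper is produced internally by the compiler):

    package main

    import "fmt"

    type A struct {
        f1 int8
        f2 int8
    }

    func (*A) foo() { fmt.Println("A.foo") }

    type B struct {
        A
        f3 int8
    }

    // wrapperFoo does what the compiler-generated wrapper for the promoted
    // B.foo does: forward the call, passing the embedded A's address.
    func wrapperFoo(b *B) {
        (&b.A).foo()
    }

    func main() {
        var b B
        b.foo()        // the promoted method
        wrapperFoo(&b) // essentially the same call
    }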

All of this describes the picture before optimization.

If you go straight to the disassembly, you might get a different conclusion.

To save a call instruction, when B.foo is called the compiler usually generates the code for b.A.foo() directly.

But we can find clues through println.

    func main() {
        fA := (*A).foo
        fB := (*B).foo
        println(fA)
        println(fB)
    }
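If the analysis above holds, the two printed values should differ: (*B).foo has type func(*B), so it has to be a separate wrapper function rather than the same code pointer as (*A).foo.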

P.S. Some people say it is useless to study these things. But if you don't know a language's boundaries, how can you bring out its greatest power? ^_^

