Added a mutex to silly

Original link: https://blog.gotocoding.com/archives/1803?utm_source=rss&utm_medium=rss&utm_campaign=%25e4%25b8%25basilly%25e5%25a2%259e%25e5%258a%25a0%25e4%25ba%2586%25e4%25ba%2592%25e6%2596%25a5%25e9%2594%2581

Silly is a high-concurrency network framework built on the Lua language's coroutine mechanism.

Its coroutine scheduling mechanism is very simple: every function is wrapped in a coroutine and executed there. When the code calls socket.read, core.sleep, or another designated API, the current coroutine is suspended until the result comes back. While it is suspended, the CPU is dispatched to other coroutines.
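To make that concrete, here is a minimal sketch in silly's style (the require path and core.fork are assumptions extrapolated from the APIs this article uses, so treat it as illustration rather than the framework's exact API):

local core = require "sys.core"   -- assumed module path

-- Each forked function runs in its own coroutine.
core.fork(function()
    print("A: start")
    core.sleep(1000)              -- suspends only this coroutine for ~1000 ms
    print("A: resumed")
end)

core.fork(function()
    -- Runs while coroutine A is suspended inside core.sleep.
    print("B: gets the CPU")
end)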

In most cases, the ideas of coroutine-based programming and multi-threaded programming are the same.

The exception is that the current coroutine is suspended only when it calls one of the designated APIs. Compared with multi-threading, this scheduling mechanism introduces one disadvantage and one advantage.

The disadvantage is that if you accidentally write an infinite loop, the current coroutine will never be suspended, and other coroutines will never get a chance to run.
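For instance (a hypothetical sketch, where job_done stands in for whatever condition the loop polls):

-- BUG: nothing in this loop suspends the coroutine, so the scheduler
-- never regains control and no other coroutine can run until it exits.
while not job_done() do
    -- a core.sleep(10) here would hand the CPU back periodically
end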

The advantage is the flip side of the disadvantage. Since the current coroutine is suspended only when a designated API is called, we can safely modify any data inside a coroutine without worrying about data races caused by scheduling uncertainty. And because we suspend only when we actually need to, all unnecessary context switches are avoided, which greatly reduces switching overhead.
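Concretely, the code between two suspension points runs atomically with respect to other coroutines, so a plain read-modify-write needs no lock; interleaving becomes possible only across a suspending call. A sketch (rpc:call is the same kind of suspending call used in the examples below; SomeService is made up):

local counter = 0

-- Safe without a lock: nothing here suspends, so no other coroutine
-- can run between the read and the write.
local function bump()
    counter = counter + 1
end

-- NOT atomic as a whole: rpc:call() suspends this coroutine, so other
-- coroutines may read or modify counter before the second line runs.
local function bump_after_rpc()
    rpc:call("SomeService")   -- suspension point
    counter = counter + 1
end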


I had always thought that the mutex was invented to deal with the uncertainty of preemptive scheduling. Since silly has no preemptive scheduling, there seemed to be no need to implement a mutex.

But based on my experience over the past few years, concurrency problems exist whether scheduling is preemptive or not.

It’s just that with non-preemptive scheduling the scope of the damage is smaller, so I used to think of these not as concurrency problems but as asynchrony problems (looking back now, the two concepts are not so clearly separated).

For this kind of asynchrony problem, my usual approach was either a lazy queue or failure compensation. Both are case-by-case solutions tied to specific business logic; there was no mature, general-purpose scheme.

Let’s look at two examples (one based on failure compensation and one based on a lazy queue):

-- Failure compensation
local user = { money = 100 }

function Foo()
    if user.money < 50 then
        return false
    end
    user.money = user.money - 50
    local ok = rpc:call("FooOtherService")
    if not ok then
        user.money = user.money + 50   -- compensate: restore on failure
    end
    return ok
end

function Bar()
    if user.money < 60 then
        return false
    end
    user.money = user.money - 60
    local ok = rpc:call("BarOtherService")
    if not ok then
        user.money = user.money + 60   -- compensate: restore on failure
    end
    return ok
end

-- Lazy queue
local user = {
    money = 100,
    q = nil,
}

local function enter()
    if user.q then
        table.insert(user.q, core.running())
        core.wait()
    else
        user.q = {}
    end
end

local function leave()
    if user.q then
        local co = table.remove(user.q, 1)
        if not co then
            user.q = nil
        else
            core.wakeup(co)
        end
    end
end

function Foo()
    enter()
    if user.money < 50 then
        leave()   -- release the queue before the early return
        return false
    end
    local ok = rpc:call("FooOtherService")
    if ok then
        user.money = user.money - 50
    end
    leave()
    return ok
end

function Bar()
    enter()
    if user.money < 60 then
        leave()   -- release the queue before the early return
        return false
    end
    local ok = rpc:call("BarOtherService")
    if ok then
        user.money = user.money - 60
    end
    leave()
    return ok
end

Comparing the two pieces of code above, the failure-compensation logic has essentially no hope of being abstracted into something reusable.

Although the lazy queue can be abstracted into a library thanks to Lua's dynamism, it only solves single-layer asynchrony problems; for nested asynchronous problems the lazy queue cannot be abstracted cleanly.

Take the code above again: if the Bar function's requirements change as follows, a second lazy queue has to be added to handle it; otherwise a deadlock occurs (if Bar reused the same queue, its coroutine would call Foo while already holding the queue and end up waiting behind itself):

local user = {
    money = 100,
    money2 = 100,
    q = nil,
    q2 = nil,
}

local function enter(obj, name)
    if obj[name] then
        table.insert(obj[name], core.running())
        core.wait()
    else
        obj[name] = {}
    end
end

local function leave(obj, name)
    if obj[name] then
        local co = table.remove(obj[name], 1)
        if not co then
            obj[name] = nil
        else
            core.wakeup(co)
        end
    end
end

function Foo()
    enter(user, "q")
    if user.money < 50 then
        leave(user, "q")
        return false
    end
    local ok = rpc:call("FooOtherService")
    if ok then
        user.money = user.money - 50
    end
    leave(user, "q")
    return ok
end

function Bar()
    enter(user, "q2")
    if user.money2 < 60 then
        leave(user, "q2")
        return false
    end
    --do something
    local ok = Foo()
    if ok then
        user.money2 = user.money2 - 60
    end
    leave(user, "q2")
    return ok
end

Recently it suddenly occurred to me that if this kind of problem is classified as a concurrency problem, a reentrant lock solves every case above perfectly.

For example, the following code:

local user = {
    money = 100,
    money2 = 100,
    lock = lock:new(),
}

function Foo()
    -- the <close> guard releases the lock when it goes out of scope,
    -- including on the early return below (Lua 5.4 to-be-closed variable)
    local guard <close> = user.lock()
    if user.money < 50 then
        return false
    end
    local ok = rpc:call("FooOtherService")
    if ok then
        user.money = user.money - 50
    end
    return ok
end

function Bar()
    local guard <close> = user.lock()
    if user.money2 < 60 then
        return false
    end
    --do something
    local ok = Foo()
    if ok then
        user.money2 = user.money2 - 60
    end
    return ok
end

Since the lock is reentrant, every function in the same coroutine can acquire it successfully, so however the module is modified later, it cannot deadlock against itself.
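How might such a reentrant lock look? Here is a minimal sketch built only on the core.running/core.wait/core.wakeup calls already used in this article (the require path and every detail below are my assumptions, not silly's actual implementation):

local core = require "sys.core"   -- assumed module path for silly's core API

local lock = {}
lock.__index = lock

-- Calling a lock instance (as in `user.lock()` above) acquires it and
-- returns a to-be-closed guard, so Lua 5.4's <close> releases the lock
-- when the guard goes out of scope, early returns included.
lock.__call = function(self)
    self:acquire()
    return setmetatable({}, { __close = function() self:release() end })
end

function lock:new()
    return setmetatable({ owner = nil, depth = 0, waitq = {} }, self)
end

function lock:acquire()
    local co = core.running()
    if self.owner == co then
        self.depth = self.depth + 1      -- reentrant: the same coroutine just nests
    elseif self.owner == nil then
        self.owner, self.depth = co, 1   -- uncontended fast path, no suspension
    else
        table.insert(self.waitq, co)     -- contended: queue up and suspend
        core.wait()                      -- resumes with ownership already transferred
    end
end

function lock:release()
    self.depth = self.depth - 1
    if self.depth > 0 then
        return                           -- an outer frame still holds the lock
    end
    local co = table.remove(self.waitq, 1)
    if co then
        self.owner, self.depth = co, 1   -- hand the lock directly to the next waiter
        core.wakeup(co)
    else
        self.owner = nil
    end
end

return lock

Handing ownership directly to the next waiter in release avoids the window where a third coroutine could grab the lock between the wakeup and the waiter actually resuming.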

In the traditional approach, a lock has to be created and initialized before it can be used. In some extreme scenarios (for example, in a past project of mine there were 4,000,000 map grids), a huge number of locks may have to be allocated.

With our non-preemptive scheduling, lock conflicts are rare in most cases, so allocating a lock up front for every place that might need one is wasteful.

In other words, although we need to implement a mutex, it should be an optimistic one.

Based on the ideas above, I changed the lock's implementation; it is used roughly like this:

local mutex = require "sys.sync.mutex"

local user = {
    money = 100,
    money2 = 100,
}

function Foo()
    local guard <close> = mutex.lock(user)
    if user.money < 50 then
        return false
    end
    local ok = rpc:call("FooOtherService")
    if ok then
        user.money = user.money - 50
    end
    return ok
end

function Bar()
    local guard <close> = mutex.lock(user)
    if user.money2 < 60 then
        return false
    end
    --do something
    local ok = Foo()
    if ok then
        user.money2 = user.money2 - 60
    end
    return ok
end

On this optimistic premise, once a lock is actually allocated it will encounter almost no contention and will be released soon after.

This lets us do some memory optimization inside the mutex module: for example, keep a cache of mutexes, put a lock back into the cache when it is released, and try to allocate from the cache first whenever a lock is needed.
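Here is a hypothetical sketch of such a module (explicitly not the real sys.sync.mutex implementation): a lock is created only when an object is actually locked, looked up by the object itself, and recycled through a free list once the last holder lets go. It reuses the reentrant lock sketched above; the require name is made up:

local lock = require "lock"   -- the reentrant lock sketched earlier (assumed name)

local mutex = {}
local active = {}             -- obj -> lock, present only while the lock is in use
local cache = {}              -- free list of idle, reusable lock objects

function mutex.lock(obj)
    local lk = active[obj]
    if not lk then
        lk = table.remove(cache) or lock:new()  -- reuse an idle lock before allocating
        active[obj] = lk
    end
    lk:acquire()
    return setmetatable({}, { __close = function()
        lk:release()
        -- peeking at the lock's fields for brevity: if nobody holds or
        -- waits on it any more, detach it from the object and recycle it
        if lk.owner == nil and #lk.waitq == 0 then
            active[obj] = nil
            cache[#cache + 1] = lk
        end
    end })
end

return mutex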

With that, we finally have a general-purpose abstraction.

P.S. While writing this article I specifically checked the wiki: the mutex was originally invented to solve concurrency problems, and that concurrency is not limited to either preemptive or non-preemptive scheduling.
