A scheme for running a website on a k8s cluster combined with public cloud Function Compute

Foreword

This article mainly introduces the ideas and overall approach, without going into operational details such as code. If anything is unclear, feel free to discuss further.

A little background first. The main body of my website currently runs on a k8s cluster, and in front of the cluster sits the public entry point: a gateway built with self-hosted nginx on a cloud server. The path of a network request looks like this:

Self-built nginx gateway -> k8s cluster

Combining Function Compute

To make the site friendly to search engines (SEO), I later introduced server-side rendering (SSR), i.e. the server generates the HTML page and returns it to the user's browser. This rendering process consumes a lot of server resources, and how much it consumes fluctuates with the amount of user traffic. Large fluctuations can affect the stability of the entire cluster: if traffic suddenly spikes and cluster resources become tight, other services may run abnormally because they are starved of resources. On the other hand, reserving too much computing capacity just to keep those services safe leads to obvious idle waste during quiet periods. So I decided to host the server-side rendering part of the service on the public cloud's Function Compute service.

The advantage of Function Compute is that it scales dynamically and is billed on demand, which makes it a good fit for workloads with fluctuating resource usage and obvious peaks and valleys in traffic. One caveat, though: I do not recommend putting long-running, stable services on Function Compute, because the cost will be higher than an ECS server on an annual or monthly subscription. A better split is to keep the main body of the website in the k8s cluster on ECS servers, and move the resource-intensive tasks onto Function Compute.

First, create and run a container in the Function Compute console and configure an HTTP trigger for it. This gives you an HTTP address reachable from the intranet. Then start a container in k8s, install nginx in it, and reverse-proxy requests to the intranet HTTP URL of Function Compute. The request path then looks like this:

Self-built nginx gateway -> reverse-proxy container in the k8s cluster -> Function Compute

In this way, Function Compute is effectively abstracted into just another workload in k8s. Other operations, such as binding domain names, stay consistent with how the cluster was already used and can be done directly in k8s, without adding extra operation and maintenance cost.
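Here is a minimal sketch of what that reverse-proxy workload could look like as k8s manifests. The trigger URL `http://fc-ssr.example-intranet.com`, the resource names, and the image tag are placeholders, not the actual values from my setup; depending on the cloud provider, the `Host` header may need to be set to the trigger's own domain instead of being passed through.

```yaml
# ConfigMap holding the nginx reverse-proxy configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fc-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # Intranet HTTP trigger URL of the Function Compute service (placeholder).
        proxy_pass http://fc-ssr.example-intranet.com;
        # Some providers require the trigger's own hostname here instead of $host.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }
---
# Deployment running plain nginx with the config mounted in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fc-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fc-proxy
  template:
    metadata:
      labels:
        app: fc-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/conf.d
      volumes:
        - name: conf
          configMap:
            name: fc-proxy-conf
---
# Service that makes the Function Compute backend look like any other in-cluster load.
apiVersion: v1
kind: Service
metadata:
  name: fc-proxy
spec:
  selector:
    app: fc-proxy
  ports:
    - port: 80
      targetPort: 80
```

After applying these manifests with `kubectl apply -f`, the self-built nginx gateway (or an Ingress) can point at the `fc-proxy` Service just like any other in-cluster load, which is exactly the "Function Compute as a k8s workload" abstraction described above.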

Disaster recovery

Although I use the Function Compute service, I do not depend on it completely. If it goes down, or I simply no longer want to use Function Compute, it is easy to reconfigure the k8s load-balancing layer so that the "reverse proxy" load forwards traffic to another load instead (for example, the original SSR load that existed before Function Compute was introduced). If you want automatic disaster-recovery switching, you can also do load balancing at the "reverse proxy" layer itself and distribute traffic between Function Compute and the original SSR load, so that if Function Compute goes down, the original SSR load in k8s automatically takes over (one way to express this in nginx configuration is sketched after the diagram below).

The request path then looks like this:

Self-built nginx gateway -> reverse-proxy container in the k8s cluster -> (Function Compute or the original SSR service)
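One way to get that automatic failover is nginx's `upstream` block with a `backup` server, so the reverse-proxy container sends traffic to Function Compute by default and only falls back to the original SSR load when it fails. The sketch below assumes the original SSR deployment is exposed as a Service named `ssr-original` in the `default` namespace; all hostnames are placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fc-proxy-conf
data:
  default.conf: |
    # Primary: the Function Compute HTTP trigger; backup: the original in-cluster SSR Service.
    upstream ssr_backend {
      server fc-ssr.example-intranet.com;                       # placeholder trigger host
      server ssr-original.default.svc.cluster.local:80 backup;  # placeholder SSR Service
    }

    server {
      listen 80;
      location / {
        proxy_pass http://ssr_backend;
        proxy_set_header Host $host;
        # On connection errors, timeouts or 5xx responses, retry the next (backup) server.
        proxy_next_upstream error timeout http_502 http_503 http_504;
      }
    }
```

With this configuration the `backup` server only receives requests when the primary is considered unavailable, which matches the automatic switching described above.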

