Deploy ClickHouse cluster on k3s cluster

Original link: https://jasonkayzk.github.io/2022/10/25/%E5%9C%A8k3s%E9%9B%86%E7%BE%A4%E4%B8%8A%E9%83%A8%E7%BD%B2ClickHouse%E9%9B%86%E7%BE%A4/

ClickHouse is a columnar database management system (DBMS) for online analytical processing (OLAP), open-sourced by Yandex, the Russian search engine company.

This article builds on the previous post, “Deploying autok3s on a single machine”, and deploys a ClickHouse cluster on that k3s cluster.

clickhouse-operator repo: https://github.com/Altinity/clickhouse-operator

clickhouse-operator documentation: https://github.com/Altinity/clickhouse-operator/tree/master/docs

ClickHouse Documentation: https://clickhouse.com/docs/

Deploy ClickHouse cluster on k3s cluster

Install clickhouse-operator

Deploying a ClickHouse cluster by hand means writing and maintaining a lot of configuration manually, which is tedious.

Instead, we can deploy and manage ClickHouse clusters with the help of clickhouse-operator.

Usually, an xxx-operator provides a set of components, for example:

  • Cluster definition: custom resources such as Cluster are defined through a CRD (CustomResourceDefinition), so that Kubernetes knows about the Cluster and treats it as a first-class citizen alongside Deployment and StatefulSet;
  • Controller (xxx-controller-manager): a set of custom controllers. Each controller continuously compares the desired state and the actual state of the objects it manages in a reconciliation loop, and drives them toward the desired state with custom logic (similar to the built-in Kubernetes controllers);
  • Scheduler (scheduler): usually a Kubernetes scheduler extension that injects cluster-specific scheduling logic into the scheduler, for example: to ensure high availability, no single Node may run more than half of the instances of a cluster.

TiDB provides a corresponding tidb-operator in the same spirit!

Therefore, before deploying ClickHouse, we first install clickhouse-operator.

It can be installed directly using a configuration file:

 kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/0.18.3/deploy/operator/clickhouse-operator-install-bundle.yaml

The command above applies the install bundle pinned to version 0.18.3; installing from a version-pinned bundle like this is also the officially recommended approach.

Also note:

If a ClickHouse cluster deployed through clickhouse-operator still exists, do not delete the operator with kubectl delete at this point!


After the above command is executed successfully, it will output:

 customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
 customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com created
 customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com created
 serviceaccount/clickhouse-operator created
 clusterrole.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
 clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
 configmap/etc-clickhouse-operator-files created
 configmap/etc-clickhouse-operator-confd-files created
 configmap/etc-clickhouse-operator-configd-files created
 configmap/etc-clickhouse-operator-templatesd-files created
 configmap/etc-clickhouse-operator-usersd-files created
 deployment.apps/clickhouse-operator created
 service/clickhouse-operator-metrics created

This indicates that a series of resources was created successfully.

We can check the operator pod with:

 $ kubectl -n kube-system get po | grep click
 clickhouse-operator-857c69ffc6-njw97   2/2     Running   0          33h
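
As an extra sanity check, you can also confirm that the CRDs listed in the output above were registered:

 # Should list the three clickhouse.altinity.com CustomResourceDefinitions
 $ kubectl get crd | grep clickhouse.altinity.com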

Next, we install ClickHouse through clickhouse-operator.

Install ClickHouse

First create a namespace:

 $ kubectl create ns my-ch
 namespace/my-ch created

Then declare the resource:

sample01.yaml

 apiVersion: "clickhouse.altinity.com/v1"kind: "ClickHouseInstallation"metadata: name: "demo-01"spec: configuration: clusters: - name: "demo-01" layout: shardsCount: 1 replicasCount: 1

This manifest uses the ClickHouseInstallation custom resource type that was registered when we installed clickhouse-operator earlier.

Then apply the configuration directly:

 $ kubectl apply -n my-ch -f sample01.yaml
 clickhouseinstallation.clickhouse.altinity.com/demo-01 created
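
It can take a little while for the operator to create the StatefulSet and for the pod to become Ready; one way to follow the progress (press Ctrl-C to stop watching):

 # Watch the ClickHouse pod(s) being created in the my-ch namespace
 $ kubectl -n my-ch get pods -w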

Check that it was deployed successfully:

 $ kubectl -n my-ch get chi -o wide
 NAME      VERSION   CLUSTERS   SHARDS   HOSTS   TASKID                                 STATUS      UPDATED   ADDED   DELETED   DELETE   ENDPOINT
 demo-01   0.18.1    1          1        1       6d1d2c3d-90e5-4110-81ab-8863b0d1ac47   Completed             1                          clickhouse-demo-01.test.svc.cluster.local

We can also check the services:

 NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
 chi-demo-01-demo-01-0-0   ClusterIP      None           <none>        8123/TCP,9000/TCP,9009/TCP      2s
 clickhouse-demo-01        LoadBalancer   10.111.27.86   <pending>     8123:31126/TCP,9000:32460/TCP   19s

At this point you can connect by entering the container:

 $ kubectl -n my-ch exec -it chi-demo-01-demo-01-0-0-0 -- clickhouse-client
 ClickHouse client version 22.1.3.7 (official build).
 Connecting to localhost:9000 as user default.
 Connected to ClickHouse server version 22.1.3 revision 54455.

 chi-demo-01-demo-01-0-0-0.chi-demo-01-demo-01-0-0.my-ch.svc.cluster.local :)
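
Once connected, any trivial query confirms that the server is answering, for example:

 -- Quick smoke test from inside clickhouse-client
 SELECT version();
 SELECT count() FROM system.tables;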

You can also connect remotely from outside the pods; the default credentials are listed below (see the example after the list):

  • Default Username: clickhouse_operator
  • Default Password: clickhouse_operator_password
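
For example, assuming the clickhouse-demo-01 LoadBalancer service is reachable from your machine, a remote query over the HTTP interface (port 8123) might look like the sketch below; <EXTERNAL-IP> is a placeholder for the address reported by kubectl get svc:

 # Look up the external address of the LoadBalancer service
 $ kubectl -n my-ch get svc clickhouse-demo-01

 # Run a query over ClickHouse's HTTP interface with the default credentials
 $ echo 'SELECT 1' | curl 'http://clickhouse_operator:clickhouse_operator_password@<EXTERNAL-IP>:8123/' --data-binary @-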

Upgrade the cluster to 2 shards

Copy sample01.yaml to sample02.yaml:

sample02.yaml

 apiVersion: "clickhouse.altinity.com/v1"kind: "ClickHouseInstallation"metadata: name: "demo-01"spec: configuration: clusters: - name: "demo-01" layout: shardsCount: 2 replicasCount: 1

Note: since metadata.name is unchanged, Kubernetes treats this as an update to the existing ClickHouseInstallation rather than a new one.
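
If you want to preview exactly what will change before applying it, kubectl diff compares the new manifest against the live object:

 # Shows the server-side diff; exits non-zero when differences are found
 $ kubectl diff -n my-ch -f sample02.yaml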

Apply the latest configuration:

 $ kubectl apply -n my-ch -f sample02.yaml
 clickhouseinstallation.clickhouse.altinity.com/demo-01 configured
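
The operator then reconciles the change (it adds a StatefulSet and services for the new shard); re-running the earlier status command should report two shards once this completes:

 # SHARDS should now show 2 and STATUS should return to Completed
 $ kubectl -n my-ch get chi -o wide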

At this point we have two shards:

 $ kubectl get service -n my-ch
 NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP                                   PORT(S)                         AGE
 clickhouse-demo-01        LoadBalancer   10.43.93.132   172.19.0.2,172.19.0.3,172.19.0.4,172.19.0.5   8123:30842/TCP,9000:31655/TCP   33h
 chi-demo-01-demo-01-0-0   ClusterIP      None           <none>                                        8123/TCP,9000/TCP,9009/TCP      33h
 chi-demo-01-demo-01-1-0   ClusterIP      None           <none>                                        8123/TCP,9000/TCP,9009/TCP      33h

View cluster information:

 $ kubectl -n my-ch exec -it chi-demo-01-demo-01-0-0-0 -- clickhouse-client
 ClickHouse client version 22.1.3.7 (official build).
 Connecting to localhost:9000 as user default.
 Connected to ClickHouse server version 22.1.3 revision 54455.

 chi-demo-01-demo-01-0-0-0.chi-demo-01-demo-01-0-0.my-ch.svc.cluster.local :) SELECT * FROM system.clusters

 SELECT *
 FROM system.clusters

 Query id: 587358e9-aeed-4df0-abe7-ee32543c418c

 ┌─cluster─────────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name───────────────┬─host_address─┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
 │ all-replicated │ 1 │ 1 │ 1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ all-replicated │ 1 │ 1 │ 2 │ chi-demo-01-demo-01-1-0 │ 10.42.1.15 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ all-sharded │ 1 │ 1 │ 1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ all-sharded │ 2 │ 1 │ 1 │ chi-demo-01-demo-01-1-0 │ 10.42.1.15 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ demo-01 │ 1 │ 1 │ 1 │ chi-demo-01-demo-01-0-0 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ demo-01 │ 2 │ 1 │ 1 │ chi-demo-01-demo-01-1-0 │ 10.42.1.15 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_one_shard_three_replicas_localhost │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_one_shard_three_replicas_localhost │ 1 │ 1 │ 2 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_one_shard_three_replicas_localhost │ 1 │ 1 │ 3 │ 127.0.0.3 │ 127.0.0.3 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
 │ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
 │ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ ::1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
 └─────────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴─────────────────────────┴──────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

 19 rows in set. Elapsed: 0.001 sec.
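
To verify that the two shards really serve one logical table, the sketch below is one possibility; the table names are made up for illustration. Because this minimal ClickHouseInstallation configures no ZooKeeper/Keeper, ON CLUSTER distributed DDL is not available, so the local table has to be created on each shard's pod separately (e.g. by exec'ing into chi-demo-01-demo-01-0-0-0 and chi-demo-01-demo-01-1-0-0 in turn):

 -- Run on EACH shard (no ZooKeeper in this demo, so no ON CLUSTER DDL):
 CREATE TABLE events_local
 (
     ts  DateTime,
     msg String
 )
 ENGINE = MergeTree
 ORDER BY ts;

 -- Run on one shard: a Distributed table spanning the demo-01 cluster
 CREATE TABLE events AS events_local
 ENGINE = Distributed('demo-01', 'default', 'events_local', rand());

 -- Insert through the Distributed table, then check how rows spread across shards
 -- (distributed inserts are asynchronous by default, so give them a moment)
 INSERT INTO events SELECT now(), toString(number) FROM numbers(1000);
 SELECT hostName() AS host, count() FROM events GROUP BY host;

If sharding works, each shard should report roughly half of the 1000 rows.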

As you can see, installing ClickHouse via clickhouse-operator is very simple!


Appendix

clickhouse-operator repo: https://github.com/Altinity/clickhouse-operator

clickhouse-operator documentation: https://github.com/Altinity/clickhouse-operator/tree/master/docs

ClickHouse Documentation: https://clickhouse.com/docs/

This article is reprinted from: https://jasonkayzk.github.io/2022/10/25/%E5%9C%A8k3s%E9%9B%86%E7%BE%A4%E4%B8%8A%E9%83%A8%E7%BD%B2ClickHouse%E9%9B%86%E7%BE%A4/
This site only republishes the article; the copyright belongs to the original author.