This repository has been archived by the owner on Mar 24, 2023. It is now read-only.

Commit

Merge pull request #8 from molliezhang/main
docs *: Add describle about cluster para #7
dbkernel authored Oct 28, 2021
2 parents 5e5318c + bfa4b28 commit 449a85f
Showing 4 changed files with 96 additions and 30 deletions.
72 changes: 52 additions & 20 deletions document/deploy_radondb-clickhouse_operator_on_kubernetes.md
@@ -1,6 +1,6 @@
Contents
=================

- [Contents](#contents)
- [Deploy Radondb ClickHouse On Kubernetes](#deploy-radondb-clickhouse-on-kubernetes)
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
@@ -12,21 +12,22 @@ Contents
- [Check the Status of Pod](#check-the-status-of-pod)
- [Check the Status of SVC](#check-the-status-of-svc)
- [Access RadonDB ClickHouse](#access-radondb-clickhouse)
- [Use pod](#use-pod)
- [Use Service](#use-service)
- [Pod](#pod)
- [Service](#service)
- [Persistence](#persistence)
- [Configuration](#configuration)

# Deploy Radondb ClickHouse On Kubernetes

## Introduction

RadonDB ClickHouse is an open-source, cloud-native, highly available cluster solution based on [ClickHouse](https://clickhouse.tech/).
RadonDB ClickHouse is an open-source, cloud-native, highly available cluster solution based on [ClickHouse](https://clickhouse.tech/). It provides high availability, PB-level storage, real-time analytics, architectural stability, and scalability.

This tutorial demonstrates how to deploy RadonDB ClickHouse on Kubernetes.

## Prerequisites

- You have created a Kubernetes Cluster.
- You have created a Kubernetes cluster.

## Procedure

@@ -39,7 +40,7 @@ $ helm repo add <repoName> https://radondb.github.io/radondb-clickhouse-kubernet
$ helm repo update
```

**Expected output:**
**Expected output**

```shell
$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
@@ -55,10 +56,10 @@ Update Complete. ⎈Happy Helming!⎈
### Step 2 : Install RadonDB ClickHouse Operator
```bash
$ helm install --generate-name -n <Namespace> <repoName>/<appName>
$ helm install --generate-name -n <nameSpace> <repoName>/<appName>
```
**Expected output:**
**Expected output**
```shell
$ helm install clickhouse-operator ck/clickhouse-operator -n kube-system
@@ -77,10 +78,13 @@ TEST SUITE: None
### Step 3 : Install RadonDB ClickHouse Cluster
```bash
$ helm install --generate-name <repoName>/clickhouse-cluster -n <Namespace>
$ helm install --generate-name <repoName>/clickhouse-cluster -n <nameSpace>
--set <para_name>=<para_value>
```
**Expected output:**
For more information about cluster parameters, see [Configuration](#configuration).
**Expected output**
```shell
$ helm install clickhouse ck/clickhouse-cluster -n test
@@ -97,10 +101,10 @@ TEST SUITE: None
#### Check the Status of Pod
```bash
$ kubectl get pods -n <Namespace>
$ kubectl get pods -n <nameSpace>
```
**Expected output:**
**Expected output**
```shell
$ kubectl get pods -n test
@@ -115,10 +119,10 @@ pod/zk-clickhouse-cluster-2 1/1 Running 0 3m13s
#### Check the Status of SVC
```bash
$ kubectl get service -n <Namespace>
$ kubectl get service -n <nameSpace>
```
**Expected output:**
**Expected output**
```shell
$ kubectl get service -n test
@@ -132,15 +136,15 @@ service/zk-server-clickhouse-cluster ClusterIP None <none>
## Access RadonDB ClickHouse
### Use pod
### Pod
You can connect directly to the ClickHouse Pod with `kubectl`.
```bash
$ kubectl exec -it <podName> -n <Namespace> -- clickhouse-client --user=<userName> --password=<userPassword>
$ kubectl exec -it <podName> -n <nameSpace> -- clickhouse-client --user=<userName> --password=<userPassword>
```
**Expected output:**
**Expected output**
```shell
$ kubectl get pods |grep clickhouse
@@ -151,11 +155,11 @@ $ kubectl exec -it chi-ClickHouse-replicas-0-0-0 -- clickhouse-client -u clickho
chi-ClickHouse-replicas-0-0-0
```
### Use Service
### Service
The Service `spec.type` is `ClusterIP`, so you need to create a client inside the cluster to connect to the Service.
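One way to reach a `ClusterIP` Service is a short-lived client Pod. A minimal sketch, assuming the Service is named `clickhouse-cluster` and using the default user from the Configuration table — both the Service name and the credentials are illustrative, so substitute your own:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: clickhouse-client          # throwaway client Pod; illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: client
      # Image from the Configuration table; server images also ship clickhouse-client.
      image: radondb/clickhouse-server:v21.1.3.32-stable
      command: ["clickhouse-client"]
      args:
        - "--host"
        - "clickhouse-cluster"     # assumption: your ClickHouse Service name
        - "--user"
        - "clickhouse"
        - "--password"
        - "c1ickh0use0perator"
      stdin: true
      tty: true
```

Apply it with `kubectl apply -f client.yaml -n <nameSpace>`, then attach with `kubectl attach -it clickhouse-client -n <nameSpace>`.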
**Expected output:**
**Expected output**
```
$ kubectl get service |grep clickhouse
@@ -178,6 +182,34 @@ By default, the PVC is mounted on the `/var/lib/clickhouse` directory.
2. Create a PVC that is automatically bound to a suitable PersistentVolume (PV).
> **Notice**
>
> A PVC can be bound to different PVs, and different PVs provide different performance characteristics.
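For the second approach, the PVC can be declared explicitly and bound by the cluster. A minimal sketch, assuming a StorageClass named `standard` exists in your cluster — the PVC name, the StorageClass, and the size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clickhouse-data            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard       # assumption: this StorageClass exists
  resources:
    requests:
      storage: 10Gi                # matches the default clickhouse.resources.storage
```

A Pod referencing this claim in `spec.volumes` would then mount it at `/var/lib/clickhouse`.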
## Configuration
| Parameter | Description | Default Value |
|:----|:----|:----|
| **ClickHouse** | | |
| `clickhouse.clusterName` | ClickHouse cluster name. | all-nodes |
| `clickhouse.shardscount` | Shard count. Once set, it cannot be reduced. | 1 |
| `clickhouse.replicascount` | Replica count. Once set, it cannot be modified. | 2 |
| `clickhouse.image` | ClickHouse image name. Modifying it is not recommended. | radondb/clickhouse-server:v21.1.3.32-stable |
| `clickhouse.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | IfNotPresent |
| `clickhouse.resources.memory` | K8s memory resources requested by a single Pod. | 1Gi |
| `clickhouse.resources.cpu` | K8s CPU resources requested by a single Pod. | 0.5 |
| `clickhouse.resources.storage` | K8s storage resources requested by a single Pod. | 10Gi |
| `clickhouse.user` | ClickHouse user array. Each user must contain a username, a password, and a networks array. | [{"username": "clickhouse", "password": "c1ickh0use0perator", "networks": ["127.0.0.1", "::/0"]}] |
| `clickhouse.port.tcp` | Port for the native interface. | 9000 |
| `clickhouse.port.http` | Port for the HTTP/REST interface. | 8123 |
| `clickhouse.svc.type` | K8s Service type. The value can be ClusterIP/NodePort/LoadBalancer. | ClusterIP |
| `clickhouse.svc.qceip` | If the type is LoadBalancer, you need to configure a load balancer provided by a third-party platform. | nil |
| **ZooKeeper** | | |
| `zookeeper.install` | Whether the operator creates ZooKeeper. | true |
| `zookeeper.port` | ZooKeeper service port. | 2181 |
| `zookeeper.replicas` | ZooKeeper cluster replica count. | 3 |
| `zookeeper.image` | ZooKeeper image name. Modifying it is not recommended. | Deprecated, if install = true |
| `zookeeper.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | Deprecated, if install = true |
| `zookeeper.resources.memory` | K8s memory resources requested by a single Pod. | Deprecated, if install = true |
| `zookeeper.resources.cpu` | K8s CPU resources requested by a single Pod. | Deprecated, if install = true |
| `zookeeper.resources.storage` | K8s storage resources requested by a single Pod. | Deprecated, if install = true |
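The parameters above can be set one by one with `--set`, or collected in a values file. A minimal sketch — the file name and the chosen override values are illustrative, not chart defaults beyond those listed in the table:

```yaml
# values.yaml — illustrative overrides for <repoName>/clickhouse-cluster
clickhouse:
  clusterName: all-nodes      # ClickHouse cluster name
  shardscount: 2              # cannot be reduced once set
  replicascount: 2            # cannot be modified once set
  resources:
    memory: 2Gi               # memory requested per Pod
    cpu: "1"                  # CPU requested per Pod
    storage: 20Gi             # storage requested per Pod
  svc:
    type: ClusterIP
zookeeper:
  install: true               # let the operator create ZooKeeper
  replicas: 3
```

Pass it at install time with `helm install --generate-name <repoName>/clickhouse-cluster -n <nameSpace> -f values.yaml`; individual values can still be overridden on the command line, e.g. `--set clickhouse.shardscount=2`.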
@@ -1,6 +1,7 @@
Contents
=================

- [Contents](#contents)
- [Deploy Radondb ClickHouse On KubeSphere](#deploy-radondb-clickhouse-on-kubesphere)
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
@@ -15,7 +16,7 @@ Contents

## Introduction

RadonDB ClickHouse is an open-source, cloud-native, highly available cluster solution based on [ClickHouse](https://clickhouse.tech/).
RadonDB ClickHouse is an open-source, cloud-native, highly available cluster solution based on [ClickHouse](https://clickhouse.tech/). It provides high availability, PB-level storage, real-time analytics, architectural stability, and scalability.

This tutorial demonstrates how to deploy ClickHouse Operator and a ClickHouse Cluster on KubeSphere.

48 changes: 40 additions & 8 deletions document/zh/deploy_radondb-clickhouse_operator_on_kubernetes.md
@@ -1,6 +1,6 @@
Contents
=================

- [Contents](#contents)
- [Deploy RadonDB ClickHouse on Kubernetes](#在-kubernetes-上部署-radondb-clickhouse)
  - [Introduction](#简介)
  - [Prerequisites](#部署准备)
@@ -15,12 +15,13 @@ Contents
    - [Via Pod](#通过-pod)
    - [Via Service](#通过-service)
  - [Persistence](#持久化)
  - [Configuration](#配置)

# Deploy RadonDB ClickHouse on Kubernetes

## Introduction

RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/).
RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/). It provides high availability, PB-level data storage, real-time data analytics, architectural stability, and scalability.

This tutorial demonstrates how to deploy RadonDB ClickHouse on Kubernetes using the command line.

@@ -56,7 +57,7 @@ Update Complete. ⎈Happy Helming!⎈
### Step 2 : Deploy the RadonDB ClickHouse Operator
```bash
$ helm install --generate-name -n <Namespace> <repoName>/clickhouse-operator
$ helm install --generate-name -n <nameSpace> <repoName>/clickhouse-operator
```
**Expected output**
@@ -78,9 +79,12 @@ TEST SUITE: None
### Step 3 : Deploy the RadonDB ClickHouse Cluster
```bash
$ helm install --generate-name <repoName>/clickhouse-cluster -n <Namespace>
$ helm install --generate-name <repoName>/clickhouse-cluster -n <nameSpace>\
--set <para_name>=<para_value>
```
For more information about the parameters, see [Configuration](#配置).
**Expected output**
```shell
@@ -100,7 +104,7 @@ TEST SUITE: None
Run the following command to check the running status of the cluster Pods.
```bash
$ kubectl get pods -n <Namespace>
$ kubectl get pods -n <nameSpace>
```
**Expected result**
@@ -120,7 +124,7 @@ pod/zk-clickhouse-cluster-2 1/1 Running 0 3m13s
Run the following command to check the running status of the cluster Services.
```bash
$ kubectl get service -n <Namespace>
$ kubectl get service -n <nameSpace>
```
**Expected result**
@@ -142,7 +146,7 @@ service/zk-server-clickhouse-cluster ClusterIP None <none>
Connect to the ClickHouse Pod directly with the `kubectl` tool.
```bash
$ kubectl exec -it <podName> -n <Namespace> -- clickhouse-client --user=<userName> --password=<userPassword>
$ kubectl exec -it <podName> -n <nameSpace> -- clickhouse-client --user=<userName> --password=<userPassword>
```
**Expected output**
@@ -183,6 +187,34 @@ chi-ClickHouse-replicas-0-0-0
1. Create a Pod that uses a PVC as storage.
2. Create a PVC that is automatically bound to a suitable PersistentVolume.
> **Notice**
>
> In a PersistentVolumeClaim, you can configure PersistentVolumes with different characteristics.
## Configuration
| Parameter | Description | Default Value |
|:----|:----|:----|
| **ClickHouse** | | |
| `clickhouse.clusterName` | ClickHouse cluster name. | all-nodes |
| `clickhouse.shardscount` | Shard count. Once set, it cannot be reduced. | 1 |
| `clickhouse.replicascount` | Replica count. Once set, it cannot be modified. | 2 |
| `clickhouse.image` | ClickHouse image name. Modifying it is not recommended. | radondb/clickhouse-server:v21.1.3.32-stable |
| `clickhouse.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | IfNotPresent |
| `clickhouse.resources.memory` | K8s memory resources requested by a single Pod. | 1Gi |
| `clickhouse.resources.cpu` | K8s CPU resources requested by a single Pod. | 0.5 |
| `clickhouse.resources.storage` | K8s storage resources requested by a single Pod. | 10Gi |
| `clickhouse.user` | ClickHouse user array. Each user must contain a username, a password, and a networks array. | [{"username": "clickhouse", "password": "c1ickh0use0perator", "networks": ["127.0.0.1", "::/0"]}] |
| `clickhouse.port.tcp` | Port for the native interface. | 9000 |
| `clickhouse.port.http` | Port for the HTTP/REST interface. | 8123 |
| `clickhouse.svc.type` | K8s Service type. The value can be ClusterIP/NodePort/LoadBalancer. | ClusterIP |
| `clickhouse.svc.qceip` | If the type is LoadBalancer, you need to configure a load balancer provided by a third-party platform. | nil |
| **ZooKeeper** | | |
| `zookeeper.install` | Whether the operator creates ZooKeeper. | true |
| `zookeeper.port` | ZooKeeper service port. | 2181 |
| `zookeeper.replicas` | ZooKeeper cluster replica count. | 3 |
| `zookeeper.image` | ZooKeeper image name. Modifying it is not recommended. | Deprecated, if install = true |
| `zookeeper.imagePullPolicy` | Image pull policy. The value can be Always/IfNotPresent/Never. | Deprecated, if install = true |
| `zookeeper.resources.memory` | K8s memory resources requested by a single Pod. | Deprecated, if install = true |
| `zookeeper.resources.cpu` | K8s CPU resources requested by a single Pod. | Deprecated, if install = true |
| `zookeeper.resources.storage` | K8s storage resources requested by a single Pod. | Deprecated, if install = true |
@@ -1,6 +1,7 @@
Contents
=================

- [Contents](#contents)
- [Deploy RadonDB ClickHouse on KubeSphere](#在-kubesphere-上部署-radondb-clickhouse)
  - [Introduction](#简介)
  - [Prerequisites](#部署准备)
@@ -15,7 +16,7 @@ Contents

## Introduction

RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/).
RadonDB ClickHouse is an open-source, highly available, cloud-native cluster solution based on [ClickHouse](https://clickhouse.tech/). It provides high availability, PB-level data storage, real-time data analytics, architectural stability, and scalability.

This tutorial demonstrates how to deploy the ClickHouse Operator and a ClickHouse cluster on KubeSphere.

