Commit 35191ca: update kubernetes deployment (1 parent: 483a256)

File tree: 16 files changed, +148 -1268 lines


src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_apache.md

Lines changed: 18 additions & 151 deletions
@@ -123,12 +123,11 @@ For installation steps, please refer to the [Helm Official Website.](https://helm

 ### 5.1 Clone IoTDB Kubernetes Deployment Code

-Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:
-
+Clone Helm: [Source Code](https://github.com/apache/iotdb-extras/tree/master/helm)

 If you encounter proxy issues, disable the proxy settings:

-> A git clone error like the following indicates that a proxy has been configured and needs to be turned off: fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
+> A git clone error like the following indicates that a proxy has been configured and needs to be turned off: fatal: unable to access 'https://xxx': gnutls_handshake() failed: The TLS connection was non-properly terminated.

 ```Bash
 unset HTTPS_PROXY
@@ -145,9 +144,9 @@ nameOverride: "iotdb"
 fullnameOverride: "iotdb" # Name after installation

 image:
-  repository: nexus.infra.timecho.com:8143/timecho/iotdb-enterprise
+  repository: apache/iotdb
   pullPolicy: IfNotPresent
-  tag: 1.3.3.2-standalone # Repository and version used
+  tag: latest # Repository and version used

 storage:
   # Storage class name: if using local static storage, do not configure it; if using dynamic storage, this must be set
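Taken together, the changes in this hunk leave the image section of values.yaml looking roughly like the sketch below (the two-space indentation is assumed from standard Helm chart layout; the comments are illustrative):

```yaml
fullnameOverride: "iotdb"      # name after installation
image:
  repository: apache/iotdb     # public Apache image instead of the private registry
  pullPolicy: IfNotPresent
  tag: latest                  # consider pinning a supported release (>= 1.3.3.2)
```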
@@ -184,85 +183,9 @@ confignode:
 dataRegionConsensusProtocolClass: org.apache.iotdb.consensus.iot.IoTConsensus
 ```

-## 6. Configure Private Repository Information or Pre-Pull Images
-
-Configure private repository information on k8s as a prerequisite for the next helm install step.
-
-Option one is to pull the available iotdb images during helm install, while option two is to import the available iotdb images into containerd in advance.
-
-### 6.1 [Option 1] Pull Image from Private Repository
-
-#### 6.1.1 Create a Secret to Allow k8s to Access the IoTDB Helm Private Repository
-
-Replace xxxxxx with the IoTDB private repository account, password, and email.
-
-```Bash
-# Note the single quotes
-kubectl create secret docker-registry timecho-nexus \
-  --docker-server='nexus.infra.timecho.com:8143' \
-  --docker-username='xxxxxx' \
-  --docker-password='xxxxxx' \
-  --docker-email='xxxxxx' \
-  -n iotdb-ns
-
-# View the secret
-kubectl get secret timecho-nexus -n iotdb-ns
-# View and output as YAML
-kubectl get secret timecho-nexus --output=yaml -n iotdb-ns
-# View and decode
-kubectl get secret timecho-nexus --output="jsonpath={.data.\.dockerconfigjson}" -n iotdb-ns | base64 --decode
-```
-
-#### 6.1.2 Load the Secret as a Patch to the Namespace iotdb-ns
-
-```Bash
-# Add a patch so that this namespace carries the login information for nexus
-kubectl patch serviceaccount default -n iotdb-ns -p '{"imagePullSecrets": [{"name": "timecho-nexus"}]}'
-
-# View the information in this namespace
-kubectl get serviceaccounts -n iotdb-ns -o yaml
-```
-
-### 6.2 [Option 2] Import Image
+## 6. Install IoTDB

-This step is for scenarios where the customer cannot connect to the private repository and requires assistance from company implementation staff.
-
-#### 6.2.1 Pull and Export the Image
-
-```Bash
-ctr images pull --user xxxxxxxx nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.2 View and Export the Image
-
-```Bash
-# View
-ctr images ls
-
-# Export
-ctr images export iotdb-enterprise:1.3.3.2-standalone.tar nexus.infra.timecho.com:8143/timecho/iotdb-enterprise:1.3.3.2-standalone
-```
-
-#### 6.2.3 Import into the k8s Namespace
-
-> Note that k8s.io is the namespace used by ctr in the example environment; importing into other namespaces will not work.
-
-```Bash
-# Import into the k8s namespace
-ctr -n k8s.io images import iotdb-enterprise:1.3.3.2-standalone.tar
-```
-
-#### 6.2.4 View the Image
-
-```Bash
-ctr --namespace k8s.io images list | grep 1.3.3.2
-```
-
-## 7. Install IoTDB
-
-### 7.1 Install IoTDB
+### 6.1 Install IoTDB

 ```Bash
 # Enter the directory
@@ -272,14 +195,14 @@ cd iotdb-cluster-k8s/helm
 helm install iotdb ./ -n iotdb-ns
 ```

-### 7.2 View Helm Installation List
+### 6.2 View Helm Installation List

 ```Bash
 # helm list
 helm list -n iotdb-ns
 ```

-### 7.3 View Pods
+### 6.3 View Pods

 ```Bash
 # View IoTDB pods
@@ -288,7 +211,7 @@ kubectl get pods -n iotdb-ns -o wide

 After executing the command, the installation succeeded if the output shows 6 Pods labeled confignode and datanode (3 of each). Note that not all Pods may be in the Running state at first; unactivated datanode Pods may keep restarting, but they will return to normal after activation.

-### 7.4 Troubleshooting
+### 6.4 Troubleshooting

 ```Bash
 # View k8s creation logs
@@ -303,65 +226,9 @@ kubectl describe pod datanode-0 -n iotdb-ns
 kubectl logs -n iotdb-ns confignode-0 -f
 ```

-## 8. Activate IoTDB
-
-### 8.1 Option 1: Activate Directly in the Pod (Quickest)
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-1 -- /iotdb/sbin/start-activate.sh
-kubectl exec -it -n iotdb-ns confignode-2 -- /iotdb/sbin/start-activate.sh
-# Obtain the machine code and proceed with activation
-```
-
-### 8.2 Option 2: Activate Inside the ConfigNode Container
-
-```Bash
-kubectl exec -it -n iotdb-ns confignode-0 -- /bin/bash
-cd /iotdb/sbin
-/bin/bash start-activate.sh
-# Obtain the machine code and proceed with activation
-# Exit the container
-```
-
-### 8.3 Option 3: Manual Activation
-
-1. View ConfigNode details to determine the node:
-
-```Bash
-kubectl describe pod confignode-0 -n iotdb-ns | grep -e "Node:" -e "Path:"
-
-# Example output:
-# Node: a87/172.20.31.87
-# Path: /data/k8s-data/env/confignode/.env
-```
-
-2. View the PVC and find the Volume corresponding to the ConfigNode to determine the path:
-
-```Bash
-kubectl get pvc -n iotdb-ns | grep "confignode-0"
-# Example output:
-# map-confignode-confignode-0 Bound iotdb-pv-04 10Gi RWO local-storage <unset> 8h
-
-# To view multiple ConfigNodes, use the following:
-for i in {0..2}; do echo confignode-$i; kubectl describe pod confignode-${i} -n iotdb-ns | grep -e "Node:" -e "Path:"; done
-```
-
-3. View the details of the corresponding Volume to determine the physical directory location:
-
-```Bash
-kubectl describe pv iotdb-pv-04 | grep "Path:"
-
-# Example output:
-# Path: /data/k8s-data/iotdb-pv-04
-```
-
-4. Locate the system-info file in that directory on that node, use it as the machine code to generate an activation code, then create a file named license in the same directory and write the activation code into it.
-
-## 9. Verify IoTDB
+## 7. Verify IoTDB

-### 9.1 Check the Status of Pods within the Namespace
+### 7.1 Check the Status of Pods within the Namespace

 View the IP, status, and other information of the pods in the iotdb-ns namespace to ensure they are all running normally.

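The 3-ConfigNode/3-DataNode health check above can also be scripted against saved `kubectl get pods` output; the sketch below runs on an illustrative sample rather than a live cluster (pod names, states, and counts are made up for the demonstration):

```Bash
# Illustrative sample of `kubectl get pods -n iotdb-ns` output (not from a live cluster)
cat > pods.txt <<'EOF'
confignode-0   1/1   Running            0   5m
confignode-1   1/1   Running            0   5m
confignode-2   1/1   Running            0   5m
datanode-0     1/1   Running            2   5m
datanode-1     1/1   Running            2   5m
datanode-2     0/1   CrashLoopBackOff   4   5m
EOF

# Count Running pods vs total; a healthy 3+3 cluster reports 6/6
running=$(grep -c ' Running ' pods.txt)
total=$(grep -c '' pods.txt)
echo "$running/$total pods Running"   # prints "5/6 pods Running" for this sample
```

In a real session, capture the input with `kubectl get pods -n iotdb-ns --no-headers > pods.txt` instead of the sample heredoc.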
@@ -378,7 +245,7 @@ kubectl get pods -n iotdb-ns -o wide
 # datanode-2 1/1 Running 10 (5m55s ago) 75m 10.20.191.76 a88 <none> <none>
 ```

-### 9.2 Check the Port Mapping within the Namespace
+### 7.2 Check the Port Mapping within the Namespace

 ```Bash
 kubectl get svc -n iotdb-ns
@@ -390,7 +257,7 @@ kubectl get svc -n iotdb-ns
 # jdbc-balancer LoadBalancer 10.10.191.209 <pending> 6667:31895/TCP 7d8h
 ```

-### 9.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status
+### 7.3 Start the CLI Script on Any Server to Verify the IoTDB Cluster Status

 Use the port of jdbc-balancer and the IP of any k8s node.

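Since the CLI needs the external NodePort, it can be cut out of the jdbc-balancer line shown above; a sketch using shell parameter expansion on that sample line:

```Bash
# Sample jdbc-balancer line, copied from the example `kubectl get svc` output above
svc_line='jdbc-balancer LoadBalancer 10.10.191.209 <pending> 6667:31895/TCP 7d8h'

nodeport=${svc_line#*6667:}   # drop everything up to and including "6667:"
nodeport=${nodeport%%/*}      # drop "/TCP" and the rest
echo "$nodeport"              # prints "31895"

# Then, from any server that can reach a k8s node:
# start-cli.sh -h <any-node-ip> -p "$nodeport"
```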
@@ -402,9 +269,9 @@ start-cli.sh -h 172.20.31.88 -p 31895

 <img src="/img/Kubernetes02.png" alt="" style="width: 70%;"/>

-## 10. Scaling
+## 8. Scaling

-### 10.1 Add New PV
+### 8.1 Add New PV

 Add a new PV; scaling is only possible with available PVs.

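A newly added static PV might look like the following sketch. The name, capacity, storage class, and hostPath are illustrative, patterned on the `iotdb-pv-04` / `local-storage` examples earlier in this guide; adjust them to the target environment.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iotdb-pv-05                   # illustrative: next free index
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /data/k8s-data/iotdb-pv-05  # directory must already exist on the node
```

Apply it with `kubectl apply -f <file>` and confirm the new PV shows as Available in `kubectl get pv`.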
@@ -415,7 +282,7 @@ Add a new PV; scaling is only possible with available PVs.
 **Reason**: The static storage hostPath mode is configured, and the script modifies the `iotdb-system.properties` file to set `dn_data_dirs` to `/iotdb6/iotdb_data,/iotdb7/iotdb_data`. However, the default storage path `/iotdb/data` is not mounted, leading to data loss upon restart.
 **Solution**: Mount the `/iotdb/data` directory as well, and ensure this setting is applied to both ConfigNode and DataNode to maintain data integrity and cluster stability.

-### 10.2 Scale ConfigNode
+### 8.2 Scale ConfigNode

 Example: Scale from 3 ConfigNodes to 4 ConfigNodes

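The values.yaml edit for this scale-out can be done by hand or scripted; the sketch below bumps a replica count in a sample file (the `confignode.replicas` field name is an assumption about the chart layout, not confirmed by this diff):

```Bash
# Sample fragment of values.yaml (field names assumed for illustration)
cat > values-sample.yaml <<'EOF'
confignode:
  replicas: 3
datanode:
  replicas: 3
EOF

# Bump only the ConfigNode replica count from 3 to 4, leaving the DataNode count alone
awk '/^confignode:/ {c=1} c && /replicas:/ {sub(/3/, "4"); c=0} {print}' values-sample.yaml
```

After editing the real values.yaml, apply the change with `helm upgrade iotdb . -n iotdb-ns`.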
@@ -428,7 +295,7 @@ helm upgrade iotdb . -n iotdb-ns
 <img src="/img/Kubernetes04.png" alt="" style="width: 70%;"/>


-### 10.3 Scale DataNode
+### 8.3 Scale DataNode

 Example: Scale from 3 DataNodes to 4 DataNodes

@@ -438,7 +305,7 @@ Modify the values.yaml file in iotdb-cluster-k8s/helm to change the number of Da
 helm upgrade iotdb . -n iotdb-ns
 ```

-### 10.4 Verify IoTDB Status
+### 8.4 Verify IoTDB Status

 ```Shell
 kubectl get pods -n iotdb-ns -o wide

src/UserGuide/Master/Tree/Deployment-and-Maintenance/Kubernetes_timecho.md

Lines changed: 0 additions & 9 deletions
@@ -125,15 +125,6 @@ For installation steps, please refer to the [Helm Official Website.](https://helm

 Please contact timechodb staff to obtain the IoTDB Helm Chart. If you encounter proxy issues, disable the proxy settings:

-
-If encountering proxy issues, cancel proxy settings:
-
-> The git clone error is as follows, indicating that the proxy has been configured and needs to be turned off: fatal: unable to access 'https://gitlab.timecho.com/r-d/db/iotdb-cluster-k8s.git/': gnutls_handshake() failed: The TLS connection was non-properly terminated.
-
-```Bash
-unset HTTPS_PROXY
-```
-
 ### 5.2 Modify YAML Files

 > Ensure that the version used is supported (>=1.3.3.2):