diff --git a/src/UserGuide/Master/Table/Tools-System/Benchmark.md b/src/UserGuide/Master/Table/Tools-System/Benchmark.md new file mode 100644 index 000000000..b46184f7d --- /dev/null +++ b/src/UserGuide/Master/Table/Tools-System/Benchmark.md @@ -0,0 +1,421 @@ + + +## **Basic Overview** + +IoT-benchmark is a time-series database benchmarking tool developed in Java for big data environments. It was developed and open-sourced by the School of Software, Tsinghua University. The tool is user-friendly, supports various write and query methods, allows storing test information and results for further queries or analysis, and integrates with Tableau for visualizing test results. + +Figure 1-1 illustrates the test benchmark process and its extended functionalities, all of which can be streamlined by IoT-benchmark. It supports a variety of workloads, including write-only, read-only, and mixed write-and-read operations. Additionally, it offers software and hardware system monitoring, performance metric measurement, automated database initialization, test data analysis, and system parameter optimization. + +![](/img/benchmark-English1.png) + +Figure 1-1 *IoT-benchmark Test Benchmark Process* + +IoT-benchmark adopts the modular design concept of the YCSB test tool, which separates workload generation, performance measurement, and database interface components. Its modular structure is illustrated in Figure 1-2. Unlike YCSB-based testing tools, IoT-benchmark introduces a system monitoring module that supports the persistence of both test data and system metrics. It also includes load-testing functionalities specifically designed for time-series data scenarios, such as batch writes and multiple out-of-order data insertion modes for IoT environments. 
![](/img/benchmark-%20English2.png)

Figure 1-2 *IoT-benchmark Modular Design*

**Supported Databases**

Currently, IoT-benchmark supports the following time-series databases, versions, and connection methods:

| Database | Version | Connection Method |
| :-------------- | :-------------- | :------------------------------------------------------- |
| InfluxDB | v1.x, v2.0 | SDK |
| TimescaleDB | -- | JDBC |
| OpenTSDB | -- | HTTP Request |
| QuestDB | v6.0.7 | JDBC |
| TDengine | v2.2.0.2 | JDBC |
| VictoriaMetrics | v1.64.0 | HTTP Request |
| KairosDB | -- | HTTP Request |
| IoTDB | v2.0, v1.x, v0.13 | JDBC, SessionByTablet, SessionByRecord, SessionByRecords |

## **Installation and Operation**

#### **Prerequisites**

1. Java 8
2. Maven 3.6+
3. An appropriate version of the target database, such as Apache IoTDB 2.0

#### **How to Obtain**

- **Binary package:** Visit https://github.com/thulab/iot-benchmark/releases to download the installation package. Extract the compressed file into a desired folder for use.

- **Source code compilation (for Apache IoTDB 2.0 testing):**

  - **Compile the latest IoTDB Session package:** Download the IoTDB source code from https://github.com/apache/iotdb/tree/rc/2.0.1 and run the following command in the root directory:

    ```Bash
    mvn clean package install -pl session -am -DskipTests
    ```

  - **Compile the IoT-benchmark test package:** Download the source code from https://github.com/thulab/iot-benchmark and run the following command in the root directory to compile the Apache IoTDB 2.0 test package:

    ```Bash
    mvn clean package install -pl iotdb-2.0 -am -DskipTests
    ```

  - The compiled test package will be located at:

    ```Bash
    ./iotdb-2.0/target/iotdb-2.0-0.0.1/iotdb-2.0-0.0.1
    ```

#### **Test Package Structure**

The directory structure of the test package is shown below.
The test configuration file is `conf/config.properties`, and the test startup scripts are `benchmark.sh` (Linux & MacOS) and `benchmark.bat` (Windows). The detailed usage of the files is shown in the table below.

```Shell
-rw-r--r--. 1 root root 2881 Jan 10 01:36 benchmark.bat
-rwxr-xr-x. 1 root root 314 Jan 10 01:36 benchmark.sh
drwxr-xr-x. 2 root root 24 Jan 10 01:36 bin
-rwxr-xr-x. 1 root root 1140 Jan 10 01:36 cli-benchmark.sh
drwxr-xr-x. 2 root root 107 Jan 10 01:36 conf
drwxr-xr-x. 2 root root 4096 Jan 10 01:38 lib
-rw-r--r--. 1 root root 11357 Jan 10 01:36 LICENSE
-rwxr-xr-x. 1 root root 939 Jan 10 01:36 rep-benchmark.sh
-rw-r--r--. 1 root root 14 Jan 10 01:36 routine
```

| Name | File | Usage |
| :--------------- | :---------------- | :-------------------------------------------------- |
| benchmark.bat | - | Startup script on Windows |
| benchmark.sh | - | Startup script on Linux/Mac |
| bin | startup.sh | Initialization script folder |
| conf | config.properties | Test scenario configuration file |
| lib | - | Dependency library |
| LICENSE | - | License file |
| cli-benchmark.sh | - | One-click startup script |
| routine | - | Automatic execution of multiple test configurations |
| rep-benchmark.sh | - | Automatic execution of multiple test scripts |

#### **Execution of Tests**

1. Modify the configuration file (`conf/config.properties`) according to test requirements. For example, to test Apache IoTDB 2.0, set the following parameter:

   ```Bash
   DB_SWITCH=IoTDB-200-SESSION_BY_TABLET
   ```

2. Ensure the target time-series database is running.

3. Start IoT-benchmark to execute the test. Monitor the status of both the target database and IoT-benchmark during execution.

4. Upon completion, review the results and analyze the test process.

#### **Results Interpretation**

All test log files are stored in the `logs` folder, while test results are saved in the `data/csvOutput` folder.
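The latency matrices in these results report quantile columns (P10, P25, ..., P999). As a quick illustration of how a quantile is derived from raw per-operation latencies, here is a hedged Python sketch using the nearest-rank method (IoT-benchmark's exact interpolation may differ):

```python
def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of latency samples (ms)."""
    xs = sorted(samples)
    # smallest value such that at least p% of the samples are <= it
    k = max(0, -(-len(xs) * p // 100) - 1)  # ceil(n * p / 100) - 1
    return xs[int(k)]

latencies = [1.2, 2.2, 2.8, 3.4, 4.4, 5.1, 6.3, 7.0, 8.9, 9.5]
print(percentile(latencies, 25), percentile(latencies, 95))  # → 2.8 9.5
```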
For example, the following result matrix illustrates the test outcome:

![](/img/bm4.png)

- **Result Matrix:**
  - OkOperation: Number of successful operations.
  - OkPoint: Number of successfully written points (for write operations) or successfully queried points (for query operations).
  - FailOperation: Number of failed operations.
  - FailPoint: Number of failed write points.
- **Latency (ms) Matrix:**
  - AVG: Average operation latency.
  - MIN: Minimum operation latency.
  - Pn: Quantile values of the overall operation latency distribution (e.g., P25 represents the 25th percentile, or lower quartile).

## **Main Parameters**

#### IoTDB Service Model

The `IoTDB_DIALECT_MODE` parameter supports two modes: `tree` and `table`. The default value is `tree`.

- **For IoTDB 2.0 and later versions**, the `IoTDB_DIALECT_MODE` parameter must be specified, and only one mode can be set for each IoTDB instance.
- **IoTDB_DIALECT_MODE = table:**
  - The number of devices must be an integer multiple of the number of tables.
  - The number of tables must be an integer multiple of the number of databases.
- **IoTDB_DIALECT_MODE = tree:**
  - The number of devices must be greater than or equal to the number of databases.

Key parameters for the IoTDB service model:

| **Parameter name** | **Type** | **Example** | **Description** |
| :---------------------- | :------- | :---------- | :----------------------------------------------------------- |
| IoTDB_TABLE_NAME_PREFIX | String | `table_` | Prefix for table names when `IoTDB_DIALECT_MODE` is set to `table`. |
| DATA_CLIENT_NUMBER | Integer | `10` | Number of clients; must be an integer multiple of the table count. |
| SENSOR_NUMBER | Integer | `10` | Controls the number of attribute columns in the table model. |
| IoTDB_TABLE_NUMBER | Integer | `1` | Specifies the number of tables when using the table model. |
#### **Working Mode**

The `BENCHMARK_WORK_MODE` parameter supports four operational modes:

1. **General Test Mode (`testWithDefaultPath`):** Configured via the `OPERATION_PROPORTION` parameter to support write-only, read-only, and mixed read-write operations.
2. **Data Generation Mode (`generateDataMode`):** Generates a reusable dataset, which is saved to `FILE_PATH` for subsequent use in the correctness write and correctness query modes.
3. **Single Database Correctness Write Mode (`verificationWriteMode`):** Verifies the correctness of dataset writing by writing the dataset generated in data generation mode. This mode supports only IoTDB v1.0+ and InfluxDB v1.x.
4. **Single Database Correctness Query Mode (`verificationQueryMode`):** Verifies the correctness of dataset queries after using the correctness write mode. This mode supports only IoTDB v1.0+ and InfluxDB v1.x.

Mode configurations are shown in the table below:

| **Mode name** | **BENCHMARK_WORK_MODE** | **Description** | **Required Configuration** |
| :------------------------------------- | :---------------------- | :------------------------------------------------------ | :------------------------- |
| General test mode | testWithDefaultPath | Supports multiple read and write mixed load operations. | `OPERATION_PROPORTION` |
| Generate data mode | generateDataMode | Generates datasets recognizable by IoT-benchmark. | `FILE_PATH` and `DATA_SET` |
| Single database correctness write mode | verificationWriteMode | Writes datasets for correctness verification. | `FILE_PATH` and `DATA_SET` |
| Single database correctness query mode | verificationQueryMode | Queries datasets to verify correctness. | `FILE_PATH` and `DATA_SET` |
#### **Server Connection Information**

Once the working mode is specified, the following parameters must be configured to tell IoT-benchmark which time-series database to target:

| **Parameter** | **Type** | **Example** | **Description** |
| :------------ | :------- | :---------------------------- | :----------------------------------------------------- |
| DB_SWITCH | String | `IoTDB-200-SESSION_BY_TABLET` | Specifies the type of time-series database under test. |
| HOST | String | `127.0.0.1` | Network address of the target time-series database. |
| PORT | Integer | `6667` | Network port of the target time-series database. |
| USERNAME | String | `root` | Login username for the time-series database. |
| PASSWORD | String | `root` | Password for the database login user. |
| DB_NAME | String | `test` | Name of the target time-series database. |
| TOKEN | String | - | Authentication token (used for InfluxDB 2.0). |

#### **Write Scenario Parameters**

| **Parameter** | **Type** | **Example** | **Description** |
| :------------------------- | :-------------------- | :-------------------------- | :----------------------------------------------------------- |
| CLIENT_NUMBER | Integer | `100` | Total number of clients used for writing. |
| GROUP_NUMBER | Integer | `20` | Number of databases (only applicable to IoTDB). |
| DEVICE_NUMBER | Integer | `100` | Total number of devices. |
| SENSOR_NUMBER | Integer | `300` | Total number of sensors per device (controls the number of attribute columns when using the IoTDB table model). |
| INSERT_DATATYPE_PROPORTION | String | `1:1:1:1:1:1:0:0:0:0` | Ratio of data types: `BOOLEAN:INT32:INT64:FLOAT:DOUBLE:TEXT:STRING:BLOB:TIMESTAMP:DATE`. |
| POINT_STEP | Integer | `1000` | Time interval (in ms) between generated data points. |
| OP_MIN_INTERVAL | Integer | `0` | Minimum execution interval for operations (ms). If an operation takes longer than this value, the next one starts immediately; otherwise the system waits (OP_MIN_INTERVAL - actual execution time) ms. A value of 0 disables the parameter; -1 makes it equal to POINT_STEP. |
| IS_OUT_OF_ORDER | Boolean | `false` | Specifies whether to write data out of order. |
| OUT_OF_ORDER_RATIO | Floating point number | `0.3` | Proportion of out-of-order data. |
| BATCH_SIZE_PER_WRITE | Integer | `1` | Number of data rows written per batch. |
| START_TIME | Time | `2022-10-30T00:00:00+08:00` | Start timestamp for data generation. |
| LOOP | Integer | `86400` | Total number of write operations. Operations are divided among types according to `OPERATION_PROPORTION`. |
| OPERATION_PROPORTION | Character | `1:0:0:0:0:0:0:0:0:0:0` | Ratio of operation types (`write:Q1:Q2:...:Q10`). |

#### **Query Scenario Parameters**

| Parameter | Type | Example | Description |
| :------------------- | :-------- | :---------------------- | :----------------------------------------------------------- |
| QUERY_DEVICE_NUM | Integer | `2` | Number of devices involved in each query statement. |
| QUERY_SENSOR_NUM | Integer | `2` | Number of sensors involved in each query statement. |
| QUERY_AGGREGATE_FUN | Character | `count` | Aggregate functions used in queries (`COUNT`, `AVG`, `SUM`, etc.). |
| STEP_SIZE | Integer | `1` | Time interval step for time filter conditions. |
| QUERY_INTERVAL | Integer | `250000` | Time interval between query start and end times. |
| QUERY_LOWER_VALUE | Integer | `-5` | Threshold for conditional queries (`WHERE value > QUERY_LOWER_VALUE`). |
| GROUP_BY_TIME_UNIT | Integer | `20000` | Size of each group in the `GROUP BY` statement. |
| LOOP | Integer | `10` | Total number of query operations. Operations are divided among types according to `OPERATION_PROPORTION`. |
| OPERATION_PROPORTION | Character | `0:0:0:0:0:0:0:0:0:0:1` | Ratio of operation types (`write:Q1:Q2:...:Q10`). |

#### **Query Types and Example SQL**

| Number | Query Type | IoTDB Sample SQL |
| :----- | :----------------------------- | :----------------------------------------------------------- |
| Q1 | Precise Point Query | `select v1 from root.db.d1 where time = ?` |
| Q2 | Time Range Query | `select v1 from root.db.d1 where time > ? and time < ?` |
| Q3 | Time Range with Value Filter | `select v1 from root.db.d1 where time > ? and time < ? and v1 > ?` |
| Q4 | Time Range Aggregation Query | `select count(v1) from root.db.d1 where time > ? and time < ?` |
| Q5 | Full-Time Range with Filtering | `select count(v1) from root.db.d1 where v1 > ?` |
| Q6 | Range Aggregation with Filter | `select count(v1) from root.db.d1 where v1 > ? and time > ? and time < ?` |
| Q7 | Time Grouping Aggregation | `select count(v1) from root.db.d1 group by ([?, ?), ?, ?)` |
| Q8 | Latest Point Query | `select last v1 from root.db.d1` |
| Q9 | Descending Range Query | `select v1 from root.sg.d1 where time > ? and time < ? order by time desc` |
| Q10 | Descending Range with Filter | `select v1 from root.sg.d1 where time > ? and time < ? and v1 > ? order by time desc` |

#### **Test Process and Result Persistence**

IoT-benchmark can persist the test process and test results, controlled by the following configuration parameters.

| **Parameter** | **Type** | **Example** | **Description** |
| :-------------------- | :------- | :---------- | :----------------------------------------------------------- |
| TEST_DATA_PERSISTENCE | String | `None` | Specifies the result persistence method.
Options: `None`, `IoTDB`, `MySQL`, `CSV`. |
| RECORD_SPLIT | Boolean | `true` | Whether to split results into multiple records. (Not currently supported by IoTDB.) |
| RECORD_SPLIT_MAX_LINE | Integer | `10000000` | Maximum number of rows per record (10 million rows per database table or CSV file). |
| TEST_DATA_STORE_IP | String | `127.0.0.1` | IP address of the database for result storage. |
| TEST_DATA_STORE_PORT | Integer | `6667` | Port number of the output database. |
| TEST_DATA_STORE_DB | String | `result` | Name of the output database. |
| TEST_DATA_STORE_USER | String | `root` | Username for accessing the output database. |
| TEST_DATA_STORE_PW | String | `root` | Password for accessing the output database. |

**Result Persistence Details**

- **CSV Mode:** If `TEST_DATA_PERSISTENCE` is set to `CSV`, a `data` folder is generated in the IoT-benchmark root directory during and after test execution. This folder contains:
  - `csv` folder: Records the test process.
  - `csvOutput` folder: Stores the test results.
- **MySQL Mode:** If `TEST_DATA_PERSISTENCE` is set to `MySQL`, IoT-benchmark creates the following tables in the specified MySQL database:
  - **Test Process Table:**
    1. Created before the test starts.
    2. Named as: `testWithDefaultPath___`.
  - **Configuration Table:**
    1. Named `CONFIG`.
    2. Stores the test configuration.
    3. Created if it does not exist.
  - **Final Result Table:**
    1. Named `FINAL_RESULT`.
    2. Stores the test results after test completion.
    3. Created if it does not exist.

#### Automation Script

##### One-Click Script Startup

The `cli-benchmark.sh` script allows one-click startup of IoTDB, IoTDB Benchmark monitoring, and IoTDB Benchmark testing. Note that this script clears all existing data in IoTDB during startup, so use it with caution.

**Steps to Run:**

1. Edit the `IOTDB_HOME` parameter in `cli-benchmark.sh` to point to the local IoTDB directory.
2.
Start the test by running the following command:

```Bash
> ./cli-benchmark.sh
```

3. After the test completes:
   1. Check test-related logs in the `logs` folder.
   2. Check monitoring-related logs in the `server-logs` folder.

##### Automatic Execution of Multiple Tests

A single test is often insufficient without comparative results, so IoT-benchmark provides an interface for executing multiple tests in sequence.

1. **Routine Configuration:** Each line in the `routine` file specifies the parameters that change for each test. For example:

   ```Plain
   LOOP=10 DEVICE_NUMBER=100 TEST
   LOOP=20 DEVICE_NUMBER=50 TEST
   LOOP=50 DEVICE_NUMBER=20 TEST
   ```

   In this example, three tests will run sequentially with `LOOP` values of 10, 20, and 50.

**Important Notes:**

- Multiple parameters can be changed in each test using the format:

  ```Bash
  LOOP=20 DEVICE_NUMBER=10 TEST
  ```

- Avoid unnecessary spaces.

- The `TEST` keyword marks the start of a new test.

- Changed parameters persist across subsequent tests unless explicitly reset.

2. **Start the Test:** After configuring the `routine` file, start multi-test execution with:

   ```Bash
   > ./rep-benchmark.sh
   ```

3. Test results will be displayed in the terminal.

**Important Notes:**

- Closing the terminal or losing the client connection will terminate the test process.

- To run the test as a background daemon, execute:

  ```Bash
  > ./rep-benchmark.sh > /dev/null 2>&1 &
  ```

- To monitor progress, check the logs:

  ```Bash
  > cd ./logs
  > tail -f log_info.log
  ```

## Test Example

This example demonstrates how to configure and run an IoT-benchmark test with IoTDB 2.0, using the table model for writing and querying.
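As noted in the parameter section, the table model imposes divisibility constraints (devices must be a multiple of tables, tables a multiple of databases). A hedged Python sketch of such a pre-run sanity check follows; the helper is illustrative and not part of IoT-benchmark itself:

```python
# Illustrative pre-run check for the table-model divisibility constraints
# described in the "IoTDB Service Model" section. Argument names mirror
# config.properties keys; this is not IoT-benchmark's own validation code.
def check_table_mode(device_number, table_number, group_number):
    errors = []
    if device_number % table_number != 0:
        errors.append("DEVICE_NUMBER must be a multiple of IoTDB_TABLE_NUMBER")
    if table_number % group_number != 0:
        errors.append("IoTDB_TABLE_NUMBER must be a multiple of GROUP_NUMBER")
    return errors

# 60 devices, 1 table, 1 database satisfies both constraints
print(check_table_mode(60, 1, 1))  # → []
```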
+ +```Properties +----------------------Main Configurations---------------------- +BENCHMARK_WORK_MODE=testWithDefaultPath +IoTDB_DIALECT_MODE=TABLE +DB_SWITCH=IoTDB-200-SESSION_BY_TABLET +GROUP_NUMBER=1 +IoTDB_TABLE_NUMBER=1 +DEVICE_NUMBER=60 +REAL_INSERT_RATE=1.0 +SENSOR_NUMBER=10 +OPERATION_PROPORTION=1:0:0:0:0:0:0:0:0:0:0:0 +SCHEMA_CLIENT_NUMBER=10 +DATA_CLIENT_NUMBER=10 +LOOP=10 +BATCH_SIZE_PER_WRITE=10 +DEVICE_NUM_PER_WRITE=1 +START_TIME=2025-01-01T00:00:00+08:00 +POINT_STEP=1000 +INSERT_DATATYPE_PROPORTION=1:1:1:1:1:1:0:0:0:0 +VECTOR=true +``` + +**Execution Steps:** + +1. Ensure the target database (IoTDB 2.0) is running. +2. Start IoT-benchmark using the configured parameters. +3. Upon completion, view the test results. + +```Shell +Create schema cost 0.88 second +Test elapsed time (not include schema creation): 4.60 second +----------------------------------------------------------Result Matrix---------------------------------------------------------- +Operation okOperation okPoint failOperation failPoint throughput(point/s) +INGESTION 600 60000 0 0 13054.42 +PRECISE_POINT 0 0 0 0 0.00 +TIME_RANGE 0 0 0 0 0.00 +VALUE_RANGE 0 0 0 0 0.00 +AGG_RANGE 0 0 0 0 0.00 +AGG_VALUE 0 0 0 0 0.00 +AGG_RANGE_VALUE 0 0 0 0 0.00 +GROUP_BY 0 0 0 0 0.00 +LATEST_POINT 0 0 0 0 0.00 +RANGE_QUERY_DESC 0 0 0 0 0.00 +VALUE_RANGE_QUERY_DESC 0 0 0 0 0.00 +GROUP_BY_DESC 0 0 0 0 0.00 +--------------------------------------------------------------------------------------------------------------------------------- + +--------------------------------------------------------------------------Latency (ms) Matrix-------------------------------------------------------------------------- +Operation AVG MIN P10 P25 MEDIAN P75 P90 P95 P99 P999 MAX SLOWEST_THREAD +INGESTION 41.77 0.95 1.41 2.27 6.76 24.14 63.42 127.18 1260.92 1265.72 1265.49 2581.91 +PRECISE_POINT 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +TIME_RANGE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
+VALUE_RANGE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +AGG_RANGE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +AGG_VALUE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +AGG_RANGE_VALUE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +GROUP_BY 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +LATEST_POINT 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +RANGE_QUERY_DESC 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +VALUE_RANGE_QUERY_DESC 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +GROUP_BY_DESC 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------- +``` \ No newline at end of file diff --git a/src/UserGuide/Master/Table/Tools-System/CLI.md b/src/UserGuide/Master/Table/Tools-System/CLI.md index ab2208619..9b5db7a7a 100644 --- a/src/UserGuide/Master/Table/Tools-System/CLI.md +++ b/src/UserGuide/Master/Table/Tools-System/CLI.md @@ -19,43 +19,35 @@ --> -# CLI Tool - The IoTDB Command Line Interface (CLI) tool allows users to interact with the IoTDB server. Before using the CLI tool to connect to IoTDB, ensure that the IoTDB service is running correctly. This document explains how to launch the CLI and its related parameters. -> In this manual, `$IOTDB_HOME` represents the installation directory of IoTDB. +In this manual, `$IOTDB_HOME` represents the installation directory of IoTDB. -## 1 CLI Launch +### CLI Launch The CLI client script is located in the `$IOTDB_HOME/sbin` directory. 
The common commands to start the CLI tool are as follows: -- Linux/MacOS: +#### **Linux** **MacOS** -```Shell +```Bash Shell> bash sbin/start-cli.sh -sql_dialect table #or Shell> bash sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root -sql_dialect table ``` -- Windows: +#### **Windows** -```Shell +```Bash Shell> sbin\start-cli.bat -sql_dialect table #or Shell> sbin\start-cli.bat -h 127.0.0.1 -p 6667 -u root -pw root -sql_dialect table ``` -Among them: - --The -h and -p items are the IP and RPC port numbers where IoTDB is located (default IP and RPC port numbers are 127.0.0.1 and 6667 if not modified locally) -- -u and -pw are the username and password for logging into IoTDB (after installation, IoTDB has a default user, and both username and password are 'root') -- -sql-dialect is the logged in data model (table model or tree model), where table is specified to represent entering table model mode - **Parameter Explanation** | **Parameter** | **Type** | **Required** | **Description** | **Example** | | -------------------------- | -------- | ------------ | ------------------------------------------------------------ | ------------------- | -| -h `` | string | No | The IP address of the IoTDB server. (Default: 127.0.0.1) | -h 127.0.0.1 | +| -h `` | string | No | The IP address of the IoTDB server. (Default: 127.0.0.1) | -h 127.0.0.1 | | -p `` | int | No | The RPC port of the IoTDB server. (Default: 6667) | -p 6667 | | -u `` | string | No | The username to connect to the IoTDB server. (Default: root) | -u root | | -pw `` | string | No | The password to connect to the IoTDB server. (Default: root) | -pw root | @@ -70,11 +62,9 @@ The figure below indicates a successful startup: ![](/img/Cli-01.png) -## 2 Execute statements in CLI +### Example Commands -After entering the CLI, users can directly interact by entering SQL statements in the conversation. 
For example: - -- Create Database +#### **Create a Database** ```Java create database test @@ -83,8 +73,7 @@ create database test ![](/img/Cli-02.png) -- Show Databases - +#### **Show Databases** ```Java show databases ``` @@ -92,13 +81,11 @@ show databases ![](/img/Cli-03.png) -## 3 CLI Exit +### CLI Exit To exit the CLI and terminate the session, type`quit`or`exit`. -## 4 Additional Notes - -CLI Command Usage Tips: +### Additional Notes and Shortcuts 1. **Navigate Command History:** Use the up and down arrow keys. 2. **Auto-Complete Commands:** Use the right arrow key. diff --git a/src/UserGuide/Master/Table/Tools-System/Maintenance-Tool_timecho.md b/src/UserGuide/Master/Table/Tools-System/Maintenance-Tool_timecho.md new file mode 100644 index 000000000..0c60ab937 --- /dev/null +++ b/src/UserGuide/Master/Table/Tools-System/Maintenance-Tool_timecho.md @@ -0,0 +1,1150 @@ + + +## IoTDB-OpsKit + +The IoTDB OpsKit is an easy-to-use operation and maintenance tool designed for TimechoDB (Enterprise-grade product based on Apache IoTDB). It helps address the operational and maintenance challenges of multi-node distributed IoTDB deployments by providing functionalities such as cluster deployment, start/stop management, elastic scaling, configuration updates, and data export. With one-click command execution, it simplifies the management of complex database clusters and significantly reduces operational complexity. + +This document provides guidance on remotely deploying, configuring, starting, and stopping IoTDB cluster instances using the cluster management tool. + +### Prerequisites + +The IoTDB OpsKit requires GLIBC 2.17 or later, which means the minimum supported operating system version is CentOS 7. The target machines for IoTDB deployment must have the following dependencies installed: + +- JDK 8 or later +- lsof +- netstat +- unzip + +If any of these dependencies are missing, please install them manually. 
The last section of this document provides installation commands for reference.

> **Note:** The IoTDB cluster management tool requires **root privileges** to execute.

### Deployment

#### Download and Installation

The IoTDB OpsKit is an auxiliary tool for TimechoDB. Please contact the Timecho team to obtain download instructions.

To install:

1. Navigate to the `iotdb-opskit` directory and execute:

   ```Bash
   bash install-iotdbctl.sh
   ```

   This activates the `iotdbctl` command in the current shell session. You can verify the installation by checking the deployment prerequisites:

   ```Bash
   iotdbctl cluster check example
   ```

2. Alternatively, if you prefer not to activate `iotdbctl`, you can execute commands directly using the absolute path:

   ```Bash
   /sbin/iotdbctl cluster check example
   ```

### Cluster Configuration Files

The cluster configuration files are stored in the `iotdbctl/config` directory as YAML files.

- Each YAML file name corresponds to a cluster name. Multiple YAML files can coexist.
- A sample configuration file (`default_cluster.yaml`) is provided in the `iotdbctl/config` directory to assist users in setting up their configurations.

#### **Structure of YAML Configuration**

The YAML file consists of the following five sections:

1. `global` – General settings, such as SSH credentials, installation paths, and JDK configurations.
2. `confignode_servers` – Configuration settings for ConfigNodes.
3. `datanode_servers` – Configuration settings for DataNodes.
4. `grafana_server` – Configuration settings for Grafana monitoring.
5. `prometheus_server` – Configuration settings for Prometheus monitoring.

A sample YAML file (`default_cluster.yaml`) is included in the `iotdbctl/config` directory.

- You can copy and rename it based on your cluster setup.
- All uncommented fields are mandatory.
- Commented fields are optional.
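To make the five-section layout concrete, here is a hedged skeleton assembled from the parameters documented in the tables of this page. It is illustrative only: the nesting and list form of the server sections are assumptions, so copy the shipped `default_cluster.yaml` for real deployments.

```YAML
# Illustrative skeleton only; field names come from the parameter tables
# in this document, and the exact structure is an assumption.
global:
  user: root             # SSH login username (mandatory)
  ssh_port: 22           # SSH port (mandatory)
  deploy_dir: /data/iotdb
confignode_servers:
  - name: confignode_1
    cn_internal_address: 192.168.1.10
datanode_servers:
  - name: datanode_1
    dn_rpc_address: 192.168.1.11
    dn_internal_address: 192.168.1.11
```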
**Example:** Checking `default_cluster.yaml`

To validate a cluster configuration, execute:

```Bash
iotdbctl cluster check default_cluster
```

For a complete list of available commands, refer to the command reference section below.

#### Parameter Reference

| **Parameter** | **Description** | **Mandatory** |
| ----------------------- | ------------------------------------------------------------ | ------------- |
| iotdb_zip_dir | IoTDB distribution directory. If empty, the package will be downloaded from `iotdb_download_url`. | NO |
| iotdb_download_url | IoTDB download URL. If `iotdb_zip_dir` is empty, the package will be retrieved from this address. | NO |
| jdk_tar_dir | Local path to the JDK package for uploading and deployment. | NO |
| jdk_deploy_dir | Remote deployment directory for the JDK. | NO |
| jdk_dir_name | JDK decompression directory name. Default: `jdk_iotdb`. | NO |
| iotdb_lib_dir | IoTDB library directory (or `.zip` package for upgrades). Commented out by default. | NO |
| user | SSH login username for deployment. | YES |
| password | SSH password (if omitted, key-based authentication will be used). | NO |
| pkey | SSH private key (used if `password` is not provided). | NO |
| ssh_port | SSH port number. | YES |
| deploy_dir | IoTDB deployment directory. | YES |
| iotdb_dir_name | IoTDB decompression directory name. Default: `iotdb`. | NO |
| datanode-env.sh | Corresponds to `iotdb/config/datanode-env.sh`. If both `global` and `datanode_servers` are configured, `datanode_servers` takes precedence. | NO |
| confignode-env.sh | Corresponds to `iotdb/config/confignode-env.sh`. If both `global` and `confignode_servers` are configured, `confignode_servers` takes precedence. | NO |
| iotdb-system.properties | Corresponds to `/config/iotdb-system.properties`. | NO |
| cn_internal_address | The inter-node communication address for ConfigNodes.
This parameter defines the address of the surviving ConfigNode, which defaults to `confignode_x`. If both `global` and `confignode_servers` are configured, the value in `confignode_servers` takes precedence. Corresponds to `cn_internal_address` in `iotdb/config/iotdb-system.properties`. | YES | +| dn_internal_address | The inter-node communication address for DataNodes. This address defaults to `confignode_x`. If both `global` and `datanode_servers` are configured, the value in `datanode_servers` takes precedence. Corresponds to `dn_internal_address` in `iotdb/config/iotdb-system.properties`. | YES | + +Both `datanode-env.sh` and `confignode-env.sh` allow **extra parameters** to be appended. These parameters can be configured using the `extra_opts` field. Example from `default_cluster.yaml`: + +```YAML +datanode-env.sh: + extra_opts: | + IOTDB_JMX_OPTS="$IOTDB_JMX_OPTS -XX:+UseG1GC" + IOTDB_JMX_OPTS="$IOTDB_JMX_OPTS -XX:MaxGCPauseMillis=200" +``` + +#### ConfigNode Configuration + +ConfigNodes can be configured in `confignode_servers`. Multiple ConfigNodes can be deployed, with the first started ConfigNode (`node1`) serving as the Seed ConfigNode by default. + +| **Parameter** | **Description** | **Mandatory** | +| ----------------------- | ------------------------------------------------------------ | ------------- | +| name | ConfigNode name. | YES | +| deploy_dir | ConfigNode deployment directory. | YES | +| cn_internal_address | Inter-node communication address for ConfigNodes, corresponding to `iotdb/config/iotdb-system.properties`. | YES | +| cn_seed_config_node | The cluster configuration address points to the surviving ConfigNode. This address defaults to `confignode_x`. 
If both `global` and `confignode_servers` are configured, the value in `confignode_servers` takes precedence. Corresponds to `cn_seed_config_node` in `iotdb/config/iotdb-system.properties`. | YES |
| cn_internal_port | Internal communication port, corresponding to `cn_internal_port` in `iotdb/config/iotdb-system.properties`. | YES |
| cn_consensus_port | Consensus communication port, corresponding to `cn_consensus_port` in `iotdb/config/iotdb-system.properties`. | NO |
| cn_data_dir | Data directory for ConfigNodes, corresponding to `cn_data_dir` in `iotdb/config/iotdb-system.properties`. | YES |
| iotdb-system.properties | ConfigNode properties file. If `global` and `confignode_servers` are both configured, values from `confignode_servers` take precedence. | NO |

#### DataNode Configuration

DataNodes can be configured in `datanode_servers`. Multiple DataNodes can be deployed, each requiring its own configuration.

| **Parameter** | **Description** | **Mandatory** |
| ----------------------- | ------------------------------------------------------------ | ------------- |
| name | DataNode name. | YES |
| deploy_dir | DataNode deployment directory. | YES |
| dn_rpc_address | RPC communication address, corresponding to `dn_rpc_address` in `iotdb/config/iotdb-system.properties`. | YES |
| dn_internal_address | Internal communication address, corresponding to `dn_internal_address` in `iotdb/config/iotdb-system.properties`. | YES |
| dn_seed_config_node | Points to the active ConfigNode. Defaults to `confignode_x`. If `global` and `datanode_servers` are both configured, values from `datanode_servers` take precedence. Corresponds to `dn_seed_config_node` in `iotdb/config/iotdb-system.properties`. | YES |
| dn_rpc_port | RPC port for DataNodes, corresponding to `dn_rpc_port` in `iotdb/config/iotdb-system.properties`. | YES |
| dn_internal_port | Internal communication port, corresponding to `dn_internal_port` in `iotdb/config/iotdb-system.properties`. | YES |
| YES | +| iotdb-system.properties | DataNode properties file. If `global` and `datanode_servers` are both configured, values from `datanode_servers` take precedence. | NO | + +#### Grafana Configuration + +Grafana can be configured in `grafana_server`, which defines the settings for deploying Grafana as a monitoring solution for IoTDB. + +| **Parameter** | **Description** | **Mandatory** | +| ---------------- | ------------------------------------------------------------ | ------------- | +| grafana_dir_name | Name of the Grafana decompression directory. Default: `grafana_iotdb`. | NO | +| host | The IP address of the machine hosting Grafana. | YES | +| grafana_port | The port Grafana listens on. Default: `3000`. | NO | +| deploy_dir | Deployment directory for Grafana. | YES | +| grafana_tar_dir | Path to the Grafana compressed package. | YES | +| dashboards | Path to pre-configured Grafana dashboards. | NO | + +#### Prometheus Configuration + +Prometheus can be configured in `prometheus_server`, which defines the settings for deploying Prometheus as a monitoring solution for IoTDB. + +| **Parameter** | **Description** | **Mandatory** | +| --------------------------- | ------------------------------------------------------------ | ------------- | +| prometheus_dir_name | Name of the Prometheus decompression directory. Default: `prometheus_iotdb`. | NO | +| host | The IP address of the machine hosting Prometheus. | YES | +| prometheus_port | The port Prometheus listens on. Default: `9090`. | NO | +| deploy_dir | Deployment directory for Prometheus. | YES | +| prometheus_tar_dir | Path to the Prometheus compressed package. | YES | +| storage_tsdb_retention_time | Number of days data is retained. Default: `15 days`. | NO | +| storage_tsdb_retention_size | Maximum data storage size per block. Default: `512M`. Units: KB, MB, GB, TB, PB, EB.
| NO | + +If metrics are enabled in `iotdb-system.properties` (in `config/xxx.yaml`), the configurations will be automatically applied to Prometheus without manual modification. + +**Special Configuration Notes** + +- **Handling Special Characters in YAML Keys**: If a YAML key value contains special characters (such as `:`), it is recommended to enclose the entire value in double quotes (`""`). +- **Avoid Spaces in File Paths**: Paths containing spaces may cause parsing errors in some configurations. + +### Usage Scenarios + +#### Data Cleanup + +This operation deletes cluster data directories, including: + +- IoTDB data directories, +- ConfigNode directories (`cn_system_dir`, `cn_consensus_dir`), +- DataNode directories (`dn_data_dirs`, `dn_consensus_dir`, `dn_system_dir`), +- Log directories and ext directories specified in the YAML configuration. + +To clean cluster data, perform the following steps: + +```Bash +# Step 1: Stop the cluster +iotdbctl cluster stop default_cluster + +# Step 2: Clean the cluster data +iotdbctl cluster clean default_cluster +``` + +#### Cluster Destruction + +The cluster destruction process completely removes the following resources: + +- Data directories, +- ConfigNode directories (`cn_system_dir`, `cn_consensus_dir`), +- DataNode directories (`dn_data_dirs`, `dn_consensus_dir`, `dn_system_dir`), +- Log and ext directories, +- IoTDB deployment directory, +- Grafana and Prometheus deployment directories. + +To destroy a cluster, follow these steps: + +```Bash +# Step 1: Stop the cluster +iotdbctl cluster stop default_cluster + +# Step 2: Destroy the cluster +iotdbctl cluster destroy default_cluster +``` + +#### Cluster Upgrade + +To upgrade the cluster, follow these steps: + +1. In `config/xxx.yaml`, set **`iotdb_lib_dir`** to the path of the JAR files to be uploaded. Example: `iotdb/lib` +2. If uploading a compressed package, compress the `iotdb/lib` directory: + +```Bash +zip -r lib.zip apache-iotdb-1.2.0/lib/* +``` + +3. 
Execute the following commands to distribute the library and restart the cluster: + +```Bash +iotdbctl cluster dist-lib default_cluster +iotdbctl cluster restart default_cluster +``` + +#### Hot Deployment + +Hot deployment allows real-time configuration updates without restarting the cluster. + +Steps: + +1. Modify the configuration in `config/xxx.yaml`. +2. Distribute the updated configuration and reload it: + +```Bash +iotdbctl cluster dist-conf default_cluster +iotdbctl cluster reload default_cluster +``` + +#### Cluster Expansion + +To expand the cluster by adding new nodes: + +1. Add a new DataNode or ConfigNode in `config/xxx.yaml`. +2. Execute the cluster expansion command: + +```Bash +iotdbctl cluster scaleout default_cluster +``` + +#### Cluster Shrinking + +To remove a node from the cluster: + +1. Identify the node name or IP:port in `config/xxx.yaml`: + 1. ConfigNode port: `cn_internal_port` + 2. DataNode port: `rpc_port` +2. Execute the following command: + +```Bash +iotdbctl cluster scalein default_cluster +``` + +#### Managing Existing IoTDB Clusters + +To manage an existing IoTDB cluster with the OpsKit tool: + +1. Configure SSH credentials: + 1. Set `user`, `password` (or `pkey`), and `ssh_port` in `config/xxx.yaml`. +2. Modify IoTDB deployment paths: For example, if IoTDB is deployed at `/home/data/apache-iotdb-1.1.1`: + +```YAML +deploy_dir: /home/data/ +iotdb_dir_name: apache-iotdb-1.1.1 +``` + +3. Configure JDK paths: If `JAVA_HOME` is not used, set the JDK deployment path: + +```YAML +jdk_deploy_dir: /home/data/ +jdk_dir_name: jdk_1.8.2 +``` + +4. 
Set cluster addresses: + +- `cn_internal_address` and `dn_internal_address` +- In `confignode_servers` → `iotdb-system.properties`, configure: + - `cn_internal_address`, `cn_internal_port`, `cn_consensus_port`, `cn_system_dir`, `cn_consensus_dir` +- In `datanode_servers` → `iotdb-system.properties`, configure: + - `dn_rpc_address`, `dn_internal_address`, `dn_data_dirs`, `dn_consensus_dir`, `dn_system_dir` + +5. Execute the initialization command: + +```Bash +iotdbctl cluster init default_cluster +``` + +#### Deploying IoTDB, Grafana, and Prometheus + +To deploy an IoTDB cluster along with Grafana and Prometheus: + +1. Enable metrics: In `iotdb-system.properties`, enable the metrics interface. +2. Configure Grafana: + +- If deploying multiple dashboards, separate names with commas. +- Ensure dashboard names are unique to prevent overwriting. + +3. Configure Prometheus: + +- If the IoTDB cluster has metrics enabled, Prometheus automatically adapts without manual configuration. + +4. Start the cluster: + +```Bash +iotdbctl cluster start default_cluster +``` + +For detailed parameters, refer to the **Cluster Configuration Files** section above. + +### Command Reference + +The basic command structure is: + +```Bash +iotdbctl cluster <key> <cluster_name> [params (Optional)] +``` + +- `key` – The specific command to execute. +- `cluster_name` – The name of the cluster (matches the YAML file name in `iotdbctl/config`). +- `params` – Optional parameters for the command. + +Example: Deploying the `default_cluster` cluster + +```Bash +iotdbctl cluster deploy default_cluster +``` + +#### Command Overview + +| **Command** | **Description** | **Parameters** | +| ---------- | ---------------------------------- | ---------------------------------------------------- | +| check | Check cluster readiness for deployment. | Cluster name | +| clean | Clean up cluster data. | Cluster name | +| deploy/dist-all | Deploy the cluster.
| Cluster name, -N module (optional: iotdb, grafana, prometheus), -op force (optional) | +| list | List cluster status. | None | +| start | Start the cluster. | Cluster name, -N node name (optional: iotdb, grafana, prometheus) | +| stop | Stop the cluster. | Cluster name, -N node name (optional), -op force (optional) | +| restart | Restart the cluster. | Cluster name, -N node name (optional), -op force (optional) | +| show | View cluster details. | Cluster name, details (optional) | +| destroy | Destroy the cluster. | Cluster name, -N module (optional: iotdb, grafana, prometheus) | +| scaleout | Expand the cluster. | Cluster name | +| scalein | Shrink the cluster. | Cluster name, -N node name or IP:port | +| reload | Hot reload cluster configuration. | Cluster name | +| dist-conf | Distribute cluster configuration. | Cluster name | +| dumplog | Backup cluster logs. | Cluster name, -N node name, -h target IP, -pw target password, -p target port, -path backup path, -startdate, -enddate, -loglevel, -l transfer speed | +| dumpdata | Backup cluster data. | Cluster name, -h target IP, -pw target password, -p target port, -path backup path, -startdate, -enddate, -l transfer speed | +| dist-lib | Upgrade the IoTDB lib package. | Cluster name | +| init | Initialize the cluster configuration. | Cluster name | +| status | View process status. | Cluster name | +| activate | Activate the cluster. | Cluster name | +| health_check | Perform a health check. | Cluster name, -N node name (optional) | +| backup | Backup the cluster. | Cluster name, -N node name (optional) | +| importschema | Import metadata. | Cluster name, -N node name, -param parameters | +| exportschema | Export metadata. | Cluster name, -N node name, -param parameters | + +### Detailed Command Execution Process + +The following examples use `default_cluster.yaml` as a reference. Users can modify the commands according to their specific cluster configuration files.
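Before walking through the commands, it helps to picture the shape of such a file. Below is a hypothetical, minimal `default_cluster.yaml` skeleton for one ConfigNode and one DataNode, built only from parameters described in the configuration tables above; all addresses, ports, and paths are placeholders, and the structure (for example, whether node sections are lists) should be taken from the bundled samples in `config/example`:

```yaml
# Illustrative skeleton only -- copy a real sample from config/example and
# adjust. All addresses, ports, and paths below are placeholders.
global:
  user: root
  ssh_port: 22
  deploy_dir: /home/iotdb/deploy

confignode_servers:
  - name: confignode_1
    deploy_dir: /home/iotdb/deploy/confignode_1
    cn_internal_address: 192.168.1.4
    cn_internal_port: 10710
    cn_consensus_port: 10720
    cn_data_dir: /home/iotdb/deploy/confignode_1/data

datanode_servers:
  - name: datanode_1
    deploy_dir: /home/iotdb/deploy/datanode_1
    dn_rpc_address: 192.168.1.5
    dn_rpc_port: 6667
    dn_internal_address: 192.168.1.5
    dn_internal_port: 10730
    dn_seed_config_node: 192.168.1.4:10710
```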
+ +#### Check Cluster Deployment Environment + +The following command checks whether the cluster environment meets the deployment requirements: + +```Bash +iotdbctl cluster check default_cluster +``` + +**Execution Steps:** + +1. Locate the corresponding YAML file (`default_cluster.yaml`) based on the cluster name. +2. Retrieve configuration information for ConfigNode and DataNode (`confignode_servers` and `datanode_servers`). +3. Verify the following conditions on the target node: + 1. SSH connectivity + 2. JDK version (must be 1.8 or above) + 3. Required system tools: unzip, lsof, netstat + +**Expected Output:** + +- If successful: `Info: example check successfully!` +- If failed: `Error: example check fail!` + +**Troubleshooting Tips:** + +- JDK version not satisfied: Specify a valid `jdk1.8+` path in the YAML file for deployment. +- Missing system tools: Install unzip, lsof, and netstat on the server. +- Port conflict: Check the error log, e.g., `Error: Server (ip:172.20.31.76) iotdb port (10713) is listening.` + +#### Deploy Cluster + +Deploy the entire cluster using the following command: + +```Bash +iotdbctl cluster deploy default_cluster +``` + +**Execution Steps:** + +1. Locate the corresponding `YAML` file based on the cluster name. +2. Upload the IoTDB and JDK compressed packages (if `jdk_tar_dir` and `jdk_deploy_dir` are configured). +3. Generate and upload the iotdb-system.properties file based on the YAML configuration. 
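The JDK condition in the environment check above ("1.8 or above") has to accept both legacy `1.x` version strings and modern single-number majors. A small shell sketch of that acceptance rule, for illustration only (iotdbctl performs this check internally; the function name here is ours):

```shell
# Sketch: accept JDK versions "1.8+", i.e. "1.8.x" or any major >= 8.
# This mirrors the check's intent; it is not iotdbctl's actual code.
jdk_ok() {
  ver="$1"
  major=${ver%%.*}
  if [ "$major" = "1" ]; then
    # legacy scheme: 1.<minor>.x
    minor=${ver#1.}
    minor=${minor%%.*}
    [ "$minor" -ge 8 ]
  else
    # modern scheme: <major>.x
    [ "$major" -ge 8 ]
  fi
}

jdk_ok 1.8.0_292 && echo "1.8.0_292 accepted"
jdk_ok 11.0.2    && echo "11.0.2 accepted"
jdk_ok 1.7.0     || echo "1.7.0 rejected"
```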
+ +**Force Deployment:** To overwrite existing deployment directories and redeploy: + +```Bash +iotdbctl cluster deploy default_cluster -op force +``` + +**Deploying Individual Modules:** + +You can deploy specific components individually: + +```Bash +# Deploy Grafana module +iotdbctl cluster deploy default_cluster -N grafana + +# Deploy Prometheus module +iotdbctl cluster deploy default_cluster -N prometheus + +# Deploy IoTDB module +iotdbctl cluster deploy default_cluster -N iotdb +``` + +#### Start Cluster + +Start the cluster using the following command: + +```Bash +iotdbctl cluster start default_cluster +``` + +**Execution Steps:** + +1. Locate the `YAML` file based on the cluster name. +2. Start ConfigNodes sequentially according to the YAML order. + 1. The first ConfigNode is treated as the Seed ConfigNode. + 2. Verify startup by checking process IDs. +3. Start DataNodes sequentially and verify their process IDs. +4. After process verification, check the cluster's service health via CLI. + 1. If the CLI connection fails, retry every 10 seconds, up to 5 times. + +**Start a Single Node:** Start specific nodes by name or IP: + +```Bash +# By node name +iotdbctl cluster start default_cluster -N datanode_1 + +# By IP and port (ConfigNode uses `cn_internal_port`, DataNode uses `rpc_port`) +iotdbctl cluster start default_cluster -N 192.168.1.5:6667 + +# Start Grafana +iotdbctl cluster start default_cluster -N grafana + +# Start Prometheus +iotdbctl cluster start default_cluster -N prometheus +``` + +**Note:** The `iotdbctl` tool relies on `start-confignode.sh` and `start-datanode.sh` scripts. If startup fails, check the cluster status using the following command: + +```Bash +iotdbctl cluster status default_cluster +``` + +#### View Cluster Status + +To view the current cluster status: + +```Bash +iotdbctl cluster show default_cluster +``` + +To view detailed information: + +```Bash +iotdbctl cluster show default_cluster details +``` + +**Execution Steps:** + +1. 
Locate the YAML file and retrieve `confignode_servers` and `datanode_servers` configuration. +2. Execute `show cluster details` via CLI. +3. If one node returns successfully, the process skips checking remaining nodes. + +#### Stop Cluster + +To stop the entire cluster: + +```Bash +iotdbctl cluster stop default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve `confignode_servers` and `datanode_servers` configuration. +2. Stop DataNodes sequentially based on the YAML configuration. +3. Stop ConfigNodes in sequence. + +**Force Stop:** To forcibly stop the cluster using `kill -9`: + +```Bash +iotdbctl cluster stop default_cluster -op force +``` + +**Stop a Single Node:** Stop nodes by name or IP: + +```Bash +# By node name +iotdbctl cluster stop default_cluster -N datanode_1 + +# By IP and port +iotdbctl cluster stop default_cluster -N 192.168.1.5:6667 + +# Stop Grafana +iotdbctl cluster stop default_cluster -N grafana + +# Stop Prometheus +iotdbctl cluster stop default_cluster -N prometheus +``` + +**Note:** If the IoTDB cluster is not fully stopped, verify its status using: + +```Bash +iotdbctl cluster status default_cluster +``` + +#### Clean Cluster Data + +To clean up cluster data, execute: + +```Bash +iotdbctl cluster clean default_cluster +``` + +**Execution Steps:** + +1. Locate the `YAML` file and retrieve `confignode_servers` and `datanode_servers` configuration. +2. Verify that no services are running. If any are active, the cleanup will not proceed. +3. Delete the following directories: + 1. IoTDB data directories, + 2. ConfigNode and DataNode system directories (`cn_system_dir`, `dn_system_dir`), + 3. Consensus directories (`cn_consensus_dir`, `dn_consensus_dir`), + 4. Logs and ext directories. + +#### Restart Cluster + +Restart the cluster using the following command: + +```Bash +iotdbctl cluster restart default_cluster +``` + +**Execution Steps:** + +1. 
Locate the YAML file and retrieve configurations for ConfigNodes, DataNodes, Grafana, and Prometheus. +2. Perform a cluster stop followed by a cluster start. + +**Force Restart:** To forcibly restart the cluster: + +```Bash +iotdbctl cluster restart default_cluster -op force +``` + +**Restart a Single Node:** Restart specific nodes by name: + +```Bash +# Restart DataNode +iotdbctl cluster restart default_cluster -N datanode_1 + +# Restart ConfigNode +iotdbctl cluster restart default_cluster -N confignode_1 + +# Restart Grafana +iotdbctl cluster restart default_cluster -N grafana + +# Restart Prometheus +iotdbctl cluster restart default_cluster -N prometheus +``` + +#### Cluster Expansion + +To add a node to the cluster: + +1. Edit `config/xxx.yaml` to add a new DataNode or ConfigNode. +2. Execute the following command: + +```Bash +iotdbctl cluster scaleout default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve node configuration. +2. Upload the IoTDB and JDK packages (if `jdk_tar_dir` and `jdk_deploy_dir` are configured). +3. Generate and upload iotdb-system.properties. +4. Start the new node and verify success. + +Tip: Only one node expansion is supported per execution. + +#### Cluster Shrinking + +To remove a node from the cluster: + +1. Identify the node name or IP:port in `config/xxx.yaml`. +2. Execute the following command: + +```Bash +# Scale down by node name +iotdbctl cluster scalein default_cluster -N nodename + +# Scale down by ip:port (a DataNode is uniquely identified by ip + dn_rpc_port, a ConfigNode by ip + cn_internal_port) +iotdbctl cluster scalein default_cluster -N ip:port +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve node configuration. +2. Ensure at least one ConfigNode and one DataNode remain. +3. Identify the node to remove, execute the scale-in command, and delete the node directory.
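As a memory aid, the port that identifies a node in the `ip:port` form depends on the node type; a tiny shell helper sketches the rule. The defaults shown (6667 for `dn_rpc_port`, 10710 for `cn_internal_port`) are IoTDB's usual defaults and are assumptions here; read the real values from your YAML:

```shell
# Sketch: choose the port that identifies a node for `-N ip:port`.
# DataNodes are matched on dn_rpc_port, ConfigNodes on cn_internal_port.
# The defaults below are assumptions; take actual values from your YAML.
node_port() {
  case "$1" in
    datanode)   echo 6667  ;;  # dn_rpc_port default
    confignode) echo 10710 ;;  # cn_internal_port default
    *) echo "unknown node type: $1" >&2; return 1 ;;
  esac
}

echo "scale-in target: 192.168.1.5:$(node_port datanode)"
```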
+ +Tip: Only one node shrinking is supported per execution. + +#### Destroy Cluster + +To destroy the entire cluster: + +```Bash +iotdbctl cluster destroy default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve node information. +2. Verify that all nodes are stopped. If any node is running, the destruction will not proceed. +3. Delete the following directories: + 1. IoTDB data directories, + 2. ConfigNode and DataNode system directories, + 3. Consensus directories, + 4. Logs, ext, and deployment directories, + 5. Grafana and Prometheus directories. + +**Destroy a Single Module:** To destroy individual modules: + +```Bash +# Destroy Grafana +iotdbctl cluster destroy default_cluster -N grafana + +# Destroy Prometheus +iotdbctl cluster destroy default_cluster -N prometheus + +# Destroy IoTDB +iotdbctl cluster destroy default_cluster -N iotdb +``` + +#### Distribute Cluster Configuration + +To distribute the cluster configuration files across nodes: + +```Bash +iotdbctl cluster dist-conf default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on `cluster-name`. +2. Retrieve configuration from `confignode_servers`, `datanode_servers`, `grafana`, and `prometheus`. +3. Generate and upload `iotdb-system.properties` to the specified nodes. + +#### Hot Load Cluster Configuration + +To reload the cluster configuration without restarting: + +```Plain +iotdbctl cluster reload default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on `cluster-name`. +2. Execute the `load configuration` command through the CLI for each node. + +#### Cluster Node Log Backup + +To back up logs from specific nodes: + +```Bash +iotdbctl cluster dumplog default_cluster -N datanode_1,confignode_1 -startdate '2023-04-11' -enddate '2023-04-26' -h 192.168.9.48 -p 36000 -u root -pw root -path '/iotdb/logs' -logs '/root/data/db/iotdb/logs' +``` + +**Execution Steps:** + +1. Locate the YAML file based on `cluster-name`. +2. 
Verify node existence (`datanode_1` and `confignode_1`). +3. Back up log data within the specified date range. +4. Save logs to `/iotdb/logs` or the default IoTDB installation path. + +| **Command** | **Description** | **Mandatory** | +| ----------- | ------------------------------------------------------------ | ------------- | +| -h | IP address of the backup server | NO | +| -u | Username for the backup server | NO | +| -pw | Password for the backup server | NO | +| -p | Backup server port (Default: `22`) | NO | +| -path | Path for backup data (Default: current path) | NO | +| -loglevel | Log level (`all`, `info`, `error`, `warn`. Default: `all`) | NO | +| -l | Speed limit (Default: unlimited; Range: 0 to 104857601 Kbit/s) | NO | +| -N | Node names (comma-separated) | YES | +| -startdate | Start date (inclusive; Default: `1970-01-01`) | NO | +| -enddate | End date (inclusive) | NO | +| -logs | IoTDB log storage path (Default: `{iotdb}/logs`) | NO | + +#### Cluster Data Backup + +To back up data from the cluster: + +```Bash +iotdbctl cluster dumpdata default_cluster -granularity partition -startdate '2023-04-11' -enddate '2023-04-26' -h 192.168.9.48 -p 36000 -u root -pw root -path '/iotdb/datas' +``` + +This command identifies the leader node from the YAML file and backs up data within the specified date range to the `/iotdb/datas` directory on the `192.168.9.48` server.
+ +| **Command** | **Description** | **Mandatory** | +| ------------ | ------------------------------------------------------------ | ------------- | +| -h | IP address of the backup server | NO | +| -u | Username for the backup server | NO | +| -pw | Password for the backup server | NO | +| -p | Backup server port (Default: `22`) | NO | +| -path | Path for storing backup data (Default: current path) | NO | +| -granularity | Data partition granularity | YES | +| -l | Speed limit (Default: unlimited; Range: 0 to 104857601 Kbit/s) | NO | +| -startdate | Start date (inclusive) | YES | +| -enddate | End date (inclusive) | YES | + +#### Cluster Upgrade + +To upgrade the cluster: + +```Bash +iotdbctl cluster dist-lib default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve the configuration of `confignode_servers` and `datanode_servers`. +3. Upload the library package. + +**Note:** After the upgrade, restart all IoTDB nodes for the changes to take effect. + +#### Cluster Initialization + +To initialize the cluster: + +```Bash +iotdbctl cluster init default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers`, `datanode_servers`, `Grafana`, and `Prometheus`. +3. Initialize the cluster configuration. + +#### View Cluster Process + +To check the cluster process status: + +```Bash +iotdbctl cluster status default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers`, `datanode_servers`, `Grafana`, and `Prometheus`. +3. Display the operational status of each node in the cluster. + +#### Cluster Authorization Activation + +**Default Activation Method:** To activate the cluster using an activation code: + +```Bash +iotdbctl cluster activate default_cluster +``` + +**Execution Steps:** + +1. 
Locate the YAML file based on the cluster name. +2. Retrieve the `confignode_servers` configuration. +3. Obtain the machine code. +4. Enter the activation code when prompted. + +Example: + +```Bash +Machine code: +Kt8NfGP73FbM8g4Vty+V9qU5lgLvwqHEF3KbLN/SGWYCJ61eFRKtqy7RS/jw03lHXt4MwdidrZJ== +JHQpXu97IKwv3rzbaDwoPLUuzNCm5aEeC9ZEBW8ndKgGXEGzMms25+u== +Please enter the activation code: +JHQpXu97IKwv3rzbaDwoPLUuzNCm5aEeC9ZEBW8ndKg=, lTF1Dur1AElXIi/5jPV9h0XCm8ziPd9/R+tMYLsze1oAPxE87+Nwws= +Activation successful. +``` + +**Activate a Specific Node:** To activate a specific node: + +```Bash +iotdbctl cluster activate default_cluster -N confignode1 +``` + +**Activate via License Path:** To activate using a license file: + +```Bash +iotdbctl cluster activate default_cluster -op license_path +``` + +#### Cluster Health Check + +To perform a cluster health check: + +```Bash +iotdbctl cluster health_check default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers` and `datanode_servers`. +3. Execute `health_check.sh` on each node. + +**Single Node Health Check:** To check a specific node: + +```Bash +iotdbctl cluster health_check default_cluster -N datanode_1 +``` + +#### Cluster Shutdown Backup + +To back up the cluster during shutdown: + +```Bash +iotdbctl cluster backup default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers` and `datanode_servers`. +3. Execute `backup.sh` on each node. + +**Single Node Backup:** To back up a specific node: + +```Bash +iotdbctl cluster backup default_cluster -N datanode_1 +``` + +**Note:** Multi-node deployment on a single machine only supports quick mode. 
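The health check and shutdown backup combine naturally into a pre-maintenance routine. The sketch below only prints the commands (via `DRY_RUN=echo`) so the sequence can be reviewed first; the node names are illustrative and must match your YAML:

```shell
# Print (rather than run) a health-check / stop / backup sequence.
# Set DRY_RUN="" to actually execute; node names below are examples.
DRY_RUN=echo
for node in confignode_1 datanode_1; do
  $DRY_RUN iotdbctl cluster health_check default_cluster -N "$node"
done
$DRY_RUN iotdbctl cluster stop default_cluster
$DRY_RUN iotdbctl cluster backup default_cluster
```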
+ +#### Cluster Metadata Import + +To import metadata: + +```Bash +iotdbctl cluster importschema default_cluster -N datanode1 -param "-s ./dump0.csv -fd ./failed/ -lpf 10000" +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name to retrieve `datanode_servers` configuration information. +2. Execute metadata import using `import-schema.sh` on `datanode1`. + +**Parameter Descriptions for `-param`:** + +| **Command** | **Description** | **Mandatory** | +| ----------- | ------------------------------------------------------------ | ------------- | +| -s | Specify the data file or directory to be imported. If a directory is specified, all files with a `.csv` extension will be imported in bulk. | YES | +| -fd | Specify a directory to store failed import files. If omitted, failed files will be saved in the source directory with the `.failed` suffix added to the original filename. | NO | +| -lpf | Specify the maximum number of lines per failed import file (Default: 10,000). | NO | + +#### Cluster Metadata Export + +To export metadata: + +```Bash +iotdbctl cluster exportschema default_cluster -N datanode1 -param "-t ./ -pf ./pattern.txt -lpf 10 -timeout 10000" +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name to retrieve `datanode_servers` configuration information. +2. Execute metadata export using `export-schema.sh` on `datanode1`. + +**Parameter Descriptions for `-param`:** + +| **Command** | **Description** | **Mandatory** | +| ----------- | ------------------------------------------------------------ | ------------- | +| -t | Specify the output path for the exported CSV file. | YES | +| -path | Specify the metadata path pattern for export. If this parameter is specified, the `-s` parameter will be ignored. Example: `root.stock.**`. | NO | +| -pf | If `-path` is not specified, use this parameter to specify the file containing metadata paths to export. The file must be in `.txt` format, with one path per line.
| NO | +| -lpf | Specify the maximum number of lines per exported file (Default: 10,000). | NO | +| -timeout | Specify the session query timeout in milliseconds. | NO | + +### Introduction to Cluster Deployment Tool Samples + +In the cluster deployment tool installation directory (`config/example`), there are three YAML configuration examples. If needed, you can copy and modify them for your deployment. + +| **Name** | **Description** | +| -------------------- | -------------------------------------------------------- | +| default_1c1d.yaml | Example configuration for 1 ConfigNode and 1 DataNode. | +| default_3c3d.yaml | Example configuration for 3 ConfigNodes and 3 DataNodes. | +| default_3c3d_grafa_prome | Example configuration for 3 ConfigNodes, 3 DataNodes, Grafana, and Prometheus. | + +## IoTDB Data Directory Overview Tool + +The IoTDB Data Directory Overview Tool provides an overview of the IoTDB data directory structure. It is located at `tools/tsfile/print-iotdb-data-dir`. + +### Usage + +- For Windows: + +```Bash +.\print-iotdb-data-dir.bat <IoTDB data dir path> (<output path>) +``` + +- For Linux or macOS: + +```Shell +./print-iotdb-data-dir.sh <IoTDB data dir path> (<output path>) +``` + +**Note:** If the output path is not specified, the default relative path `IoTDB_data_dir_overview.txt` will be used. + +### Example + +This example uses Windows: + +~~~Bash +.\print-iotdb-data-dir.bat D:\github\master\iotdb\data\datanode\data +```````````````````````` +Starting Printing the IoTDB Data Directory Overview +```````````````````````` +output save path:IoTDB_data_dir_overview.txt +data dir num:1 +143 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs.
+|============================================================== +|D:\github\master\iotdb\data\datanode\data +|--sequence +| |--root.redirect0 +| | |--1 +| | | |--0 +| |--root.redirect1 +| | |--2 +| | | |--0 +| |--root.redirect2 +| | |--3 +| | | |--0 +| |--root.redirect3 +| | |--4 +| | | |--0 +| |--root.redirect4 +| | |--5 +| | | |--0 +| |--root.redirect5 +| | |--6 +| | | |--0 +| |--root.sg1 +| | |--0 +| | | |--0 +| | | |--2760 +|--unsequence +|============================================================== +~~~ + +## TsFile Sketch Tool + +The TsFile Sketch Tool provides a summarized view of the content within a TsFile. It is located at `tools/tsfile/print-tsfile`. + +### Usage + +- For Windows: + +```Bash +.\print-tsfile-sketch.bat <TsFile path> (<output path>) +``` + +- For Linux or macOS: + +```Shell +./print-tsfile-sketch.sh <TsFile path> (<output path>) +``` + +**Note:** If the output path is not specified, the default relative path `TsFile_sketch_view.txt` will be used. + +### Example + +This example uses Windows: + +~~~Bash +.\print-tsfile.bat D:\github\master\1669359533965-1-0-0.tsfile D:\github\master\sketch.txt +```````````````````````` +Starting Printing the TsFile Sketch +```````````````````````` +TsFile path:D:\github\master\1669359533965-1-0-0.tsfile +Sketch save path:D:\github\master\sketch.txt +148 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs.
+-------------------------------- TsFile Sketch -------------------------------- +file path: D:\github\master\1669359533965-1-0-0.tsfile +file length: 2974 + + POSITION| CONTENT + -------- ------- + 0| [magic head] TsFile + 6| [version number] 3 +||||||||||||||||||||| [Chunk Group] of root.sg1.d1, num of Chunks:3 + 7| [Chunk Group Header] + | [marker] 0 + | [deviceID] root.sg1.d1 + 20| [Chunk] of root.sg1.d1.s1, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9032452783138882770,maxValue:9117677033041335123,firstValue:7068645577795875906,lastValue:-5833792328174747265,sumValue:5.795959009889246E19] + | [chunk header] marker=5, measurementID=s1, dataSize=864, dataType=INT64, compressionType=SNAPPY, encodingType=RLE + | [page] UncompressedSize:862, CompressedSize:860 + 893| [Chunk] of root.sg1.d1.s2, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-8806861312244965718,maxValue:9192550740609853234,firstValue:1150295375739457693,lastValue:-2839553973758938646,sumValue:8.2822564314572677E18] + | [chunk header] marker=5, measurementID=s2, dataSize=864, dataType=INT64, compressionType=SNAPPY, encodingType=RLE + | [page] UncompressedSize:862, CompressedSize:860 + 1766| [Chunk] of root.sg1.d1.s3, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9076669333460323191,maxValue:9175278522960949594,firstValue:2537897870994797700,lastValue:7194625271253769397,sumValue:-2.126008424849926E19] + | [chunk header] marker=5, measurementID=s3, dataSize=864, dataType=INT64, compressionType=SNAPPY, encodingType=RLE + | [page] UncompressedSize:862, CompressedSize:860 +||||||||||||||||||||| [Chunk Group] of root.sg1.d1 ends + 2656| [marker] 2 + 2657| [TimeseriesIndex] of root.sg1.d1.s1, tsDataType:INT64, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9032452783138882770,maxValue:9117677033041335123,firstValue:7068645577795875906,lastValue:-5833792328174747265,sumValue:5.795959009889246E19] + | [ChunkIndex] 
offset=20 + 2728| [TimeseriesIndex] of root.sg1.d1.s2, tsDataType:INT64, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-8806861312244965718,maxValue:9192550740609853234,firstValue:1150295375739457693,lastValue:-2839553973758938646,sumValue:8.2822564314572677E18] + | [ChunkIndex] offset=893 + 2799| [TimeseriesIndex] of root.sg1.d1.s3, tsDataType:INT64, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9076669333460323191,maxValue:9175278522960949594,firstValue:2537897870994797700,lastValue:7194625271253769397,sumValue:-2.126008424849926E19] + | [ChunkIndex] offset=1766 + 2870| [IndexOfTimerseriesIndex Node] type=LEAF_MEASUREMENT + | + | +||||||||||||||||||||| [TsFileMetadata] begins + 2891| [IndexOfTimerseriesIndex Node] type=LEAF_DEVICE + | + | + | [meta offset] 2656 + | [bloom filter] bit vector byte array length=31, filterSize=256, hashFunctionSize=5 +||||||||||||||||||||| [TsFileMetadata] ends + 2964| [TsFileMetadataSize] 73 + 2968| [magic tail] TsFile + 2974| END of TsFile +---------------------------- IndexOfTimerseriesIndex Tree ----------------------------- + [MetadataIndex:LEAF_DEVICE] + └──────[root.sg1.d1,2870] + [MetadataIndex:LEAF_MEASUREMENT] + └──────[s1,2657] +---------------------------------- TsFile Sketch End ---------------------------------- +~~~ + +Explanations: + +- The output is separated by the `|` symbol. The left side indicates the actual position within the TsFile, while the right side provides a summary of the content. +- The `"||||||||||||||||||||"` lines are added for readability and are not part of the actual TsFile data. +- The final `"IndexOfTimerseriesIndex Tree"` section reorganizes the metadata index tree at the end of the TsFile. This view aids understanding but does not represent actual stored data. + +## TsFile Resource Sketch Tool + +The TsFile Resource Sketch Tool displays details about TsFile resource files. It is located at `tools/tsfile/print-tsfile-resource-files`. 
+
+### Usage
+
+- For Windows:
+
+```Bash
+.\print-tsfile-resource-files.bat 
+```
+
+- For Linux or macOS:
+
+```Bash
+./print-tsfile-resource-files.sh 
+```
+
+### Example
+
+This example uses Windows:
+
+~~~Bash
+.\print-tsfile-resource-files.bat D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0
+````````````````````````
+Starting Printing the TsFileResources
+````````````````````````
+147 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs.
+230 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration
+231 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources.
+233 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration
+237 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources.
+Analyzing D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile ...
+
+Resource plan index range [9223372036854775807, -9223372036854775808]
+device root.sg1.d1, start time 0 (1970-01-01T08:00+08:00[GMT+08:00]), end time 99 (1970-01-01T08:00:00.099+08:00[GMT+08:00])
+
+Analyzing the resource file folder D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0 finished. 
+.\print-tsfile-resource-files.bat D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile.resource +```````````````````````` +Starting Printing the TsFileResources +```````````````````````` +178 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration +186 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs. +187 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources. +188 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration +192 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources. +Analyzing D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile ... + +Resource plan index range [9223372036854775807, -9223372036854775808] +device root.sg1.d1, start time 0 (1970-01-01T08:00+08:00[GMT+08:00]), end time 99 (1970-01-01T08:00:00.099+08:00[GMT+08:00]) + +Analyzing the resource file D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile.resource finished. +~~~ diff --git a/src/UserGuide/Master/Table/Tools-System/Monitor-Tool_timecho.md b/src/UserGuide/Master/Table/Tools-System/Monitor-Tool_timecho.md new file mode 100644 index 000000000..4b5d7d5da --- /dev/null +++ b/src/UserGuide/Master/Table/Tools-System/Monitor-Tool_timecho.md @@ -0,0 +1,175 @@ + + +## **Prometheus** **Integration** + +### **Prometheus Metric Mapping** + +The following table illustrates the mapping of IoTDB metrics to the Prometheus-compatible format. 
For a given metric with `Metric Name = name` and tags `K1=V1, ..., Kn=Vn`, the mapping follows this pattern, where `value` represents the actual measurement. + +| **Metric Type** | **Mapping** | +| ---------------- | ------------------------------------------------------------ | +| Counter | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value | +| AutoGauge, Gauge | name{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value | +| Histogram | name_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value | +| Rate | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m1"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m5"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m15"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="mean"} value | +| Timer | name_seconds_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value |

+### **Configuration File**
+
+To enable Prometheus metric collection in IoTDB, modify the configuration file as follows:
+
+1. Taking DataNode as an example, modify the iotdb-system.properties configuration file as follows:
+
+```Properties
+dn_metric_reporter_list=PROMETHEUS
+dn_metric_level=CORE
+dn_metric_prometheus_reporter_port=9091
+```
+
+2. Start the IoTDB DataNodes.
+3. Use a web browser or `curl` to access `http://server_ip:9091/metrics` to retrieve metric data, such as:
+
+```Plain
+...
+# HELP file_count
+# TYPE file_count gauge
+file_count{name="wal",} 0.0
+file_count{name="unseq",} 0.0
+file_count{name="seq",} 2.0
+...
+```
+
+### **Prometheus + Grafana Integration**
+
+IoTDB exposes monitoring data in the standard Prometheus-compatible format. Prometheus collects and stores these metrics, while Grafana is used for visualization.
+
+**Integration Workflow**
+
+The following figure describes the relationships among IoTDB, Prometheus, and Grafana:
+
+![iotdb_prometheus_grafana](/img/UserGuide/System-Tools/Metrics/iotdb_prometheus_grafana.png)
+
+IoTDB-Prometheus-Grafana Workflow
+
+1. IoTDB continuously collects monitoring metrics.
+2. Prometheus collects metrics from IoTDB at a configurable interval.
+3. Prometheus stores the collected metrics in its internal time-series database (TSDB).
+4. Grafana queries Prometheus at a configurable interval and visualizes the metrics. 
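+
+As a concrete illustration of step 4, once Prometheus is scraping IoTDB (see the configuration example that follows), the same query a Grafana panel would run can be issued directly against the Prometheus HTTP API. The `file_count` gauge comes from the sample output above; the Prometheus address and the `sum by` label grouping are assumptions for illustration:
+
+```Bash
+# Ask Prometheus (assumed to listen on localhost:9090) how many sequence
+# TsFiles each DataNode currently reports:
+curl -s 'http://localhost:9090/api/v1/query' \
+  --data-urlencode 'query=sum by (nodeId) (file_count{name="seq"})'
+```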
+
+**Prometheus Configuration Example**
+
+To configure Prometheus to collect IoTDB metrics, add a scrape job under `scrape_configs` in the `prometheus.yml` file, for example:
+
+```YAML
+scrape_configs:
+  - job_name: pull-metrics
+    honor_labels: true
+    honor_timestamps: true
+    scrape_interval: 15s
+    scrape_timeout: 10s
+    metrics_path: /metrics
+    scheme: http
+    follow_redirects: true
+    static_configs:
+      - targets:
+          - localhost:9091
+```
+
+For more details, refer to:
+
+- Prometheus Documentation:
+  - [Prometheus getting_started](https://prometheus.io/docs/prometheus/latest/getting_started/)
+  - [Prometheus scrape metrics](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config)
+- Grafana Documentation:
+  - [Grafana getting_started](https://grafana.com/docs/grafana/latest/getting-started/getting-started/)
+  - [Grafana query metrics from Prometheus](https://prometheus.io/docs/visualization/grafana/#grafana-support-for-prometheus)
+
+## **Apache IoTDB Dashboard**
+
+The Apache IoTDB Dashboard is designed for unified, centralized operations and management, and enables monitoring multiple clusters through a single panel.
+
+![Apache IoTDB Dashboard](/img/%E7%9B%91%E6%8E%A7%20default%20cluster.png)
+
+![Apache IoTDB Dashboard](/img/%E7%9B%91%E6%8E%A7%20cluster2.png)
+
+You can access the Dashboard's JSON file in TimechoDB.
+
+### **Cluster Overview**
+
+Including but not limited to:
+
+- Total number of CPU cores, memory capacity, and disk space in the cluster.
+- Number of ConfigNodes and DataNodes in the cluster.
+- Cluster uptime.
+- Cluster write throughput.
+- Current CPU, memory, and disk utilization across all nodes.
+- Detailed information for individual nodes.
+
+![](/img/%E7%9B%91%E6%8E%A7%20%E6%A6%82%E8%A7%88.png)
+
+### **Data Writing**
+
+Including but not limited to:
+
+- Average write latency, median latency, and the 99th percentile latency.
+- Number and size of WAL files.
+- WAL flush SyncBuffer latency per node. 
+ +![](/img/%E7%9B%91%E6%8E%A7%20%E5%86%99%E5%85%A5.png) + +### **Data Querying** + +Including but not limited to: + +- Time series metadata query load time per node. +- Time series data read duration per node. +- Time series metadata modification duration per node. +- Chunk metadata list loading time per node. +- Chunk metadata modification duration per node. +- Chunk metadata-based filtering duration per node. +- Average time required to construct a Chunk Reader. + +![](/img/%E7%9B%91%E6%8E%A7%20%E6%9F%A5%E8%AF%A2.png) + +### **Storage Engine** + +Including but not limited to: + +- File count and size by type. +- Number and size of TsFiles at different processing stages. +- Task count and execution duration for various operations. + +![](/img/%E7%9B%91%E6%8E%A7%20%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E.png) + +### **System Monitoring** + +Including but not limited to: + +- System memory, swap memory, and process memory usage. +- Disk space, file count, and file size statistics. +- JVM garbage collection (GC) time percentage, GC events by type, GC data volume, and heap memory utilization across generations. +- Network throughput and packet transmission rate. + +![](/img/%E7%9B%91%E6%8E%A7%20%E7%B3%BB%E7%BB%9F%20%E5%86%85%E5%AD%98%E4%B8%8E%E7%A1%AC%E7%9B%98.png) + +![](/img/%E7%9B%91%E6%8E%A7%20%E7%B3%BB%E7%BB%9Fjvm.png) + +![](/img/%E7%9B%91%E6%8E%A7%20%E7%B3%BB%E7%BB%9F%20%E7%BD%91%E7%BB%9C.png) \ No newline at end of file diff --git a/src/UserGuide/latest-Table/Tools-System/Benchmark.md b/src/UserGuide/latest-Table/Tools-System/Benchmark.md new file mode 100644 index 000000000..b46184f7d --- /dev/null +++ b/src/UserGuide/latest-Table/Tools-System/Benchmark.md @@ -0,0 +1,421 @@ + + +## **Basic Overview** + +IoT-benchmark is a time-series database benchmarking tool developed in Java for big data environments. It was developed and open-sourced by the School of Software, Tsinghua University. 
The tool is user-friendly, supports various write and query methods, allows storing test information and results for further queries or analysis, and integrates with Tableau for visualizing test results. + +Figure 1-1 illustrates the test benchmark process and its extended functionalities, all of which can be streamlined by IoT-benchmark. It supports a variety of workloads, including write-only, read-only, and mixed write-and-read operations. Additionally, it offers software and hardware system monitoring, performance metric measurement, automated database initialization, test data analysis, and system parameter optimization. + +![](/img/benchmark-English1.png) + +Figure 1-1 *IoT-benchmark Test Benchmark Process* + +IoT-benchmark adopts the modular design concept of the YCSB test tool, which separates workload generation, performance measurement, and database interface components. Its modular structure is illustrated in Figure 1-2. Unlike YCSB-based testing tools, IoT-benchmark introduces a system monitoring module that supports the persistence of both test data and system metrics. It also includes load-testing functionalities specifically designed for time-series data scenarios, such as batch writes and multiple out-of-order data insertion modes for IoT environments. 
+
+![](/img/benchmark-%20English2.png)
+
+Figure 1-2 *IoT-benchmark Modular Design*
+
+**Supported Databases**
+
+Currently, IoT-benchmark supports the following time-series databases, versions, and connection methods:
+
+| Database        | Version         | Connection method                                        |
+| :-------------- | :-------------- | :------------------------------------------------------- |
+| InfluxDB        | v1.x v2.0       | SDK                                                      |
+| TimescaleDB     | --              | JDBC                                                     |
+| OpenTSDB        | --              | HTTP Request                                             |
+| QuestDB         | v6.0.7          | JDBC                                                     |
+| TDengine        | v2.2.0.2        | JDBC                                                     |
+| VictoriaMetrics | v1.64.0         | HTTP Request                                             |
+| KairosDB        | --              | HTTP Request                                             |
+| IoTDB           | v2.0 v1.x v0.13 | JDBC, SessionByTablet, SessionByRecord, SessionByRecords |
+
+## **Installation and Operation**
+
+#### **Prerequisites**
+
+1. Java 8
+2. Maven 3.6+
+3. The appropriate version of the target database, such as Apache IoTDB 2.0
+
+#### **How to Obtain**
+
+- **Binary package:** Visit https://github.com/thulab/iot-benchmark/releases to download the installation package. Extract the compressed file into a desired folder for use.
+
+  - **Source Code Compilation (for Apache IoTDB 2.0 testing):**
+
+    - **Compile the latest IoTDB Session package:** Download the IoTDB source code from https://github.com/apache/iotdb/tree/rc/2.0.1 and run the following command in the root directory to compile the latest IoTDB Session package:
+
+      ```Bash
+      mvn clean package install -pl session -am -DskipTests
+      ```
+
+    - **Compile the IoT-benchmark test package:** Download the source code from https://github.com/thulab/iot-benchmark and run the following command in the root directory to compile the Apache IoTDB 2.0 test package:
+
+      ```Bash
+      mvn clean package install -pl iotdb-2.0 -am -DskipTests
+      ```
+
+    - The compiled test package will be located at:
+
+      ```Bash
+      ./iotdb-2.0/target/iotdb-2.0-0.0.1/iotdb-2.0-0.0.1
+      ```
+
+#### **Test Package Structure**
+
+The directory structure of the test package is shown below. 
The test configuration file is `conf/config.properties`, and the test startup scripts are `benchmark.sh` (Linux & macOS) and `benchmark.bat` (Windows). The detailed usage of the files is shown in the table below.
+
+```Shell
+-rw-r--r--. 1 root root 2881 Jan 10 01:36 benchmark.bat
+-rwxr-xr-x. 1 root root 314 Jan 10 01:36 benchmark.sh
+drwxr-xr-x. 2 root root 24 Jan 10 01:36 bin
+-rwxr-xr-x. 1 root root 1140 Jan 10 01:36 cli-benchmark.sh
+drwxr-xr-x. 2 root root 107 Jan 10 01:36 conf
+drwxr-xr-x. 2 root root 4096 Jan 10 01:38 lib
+-rw-r--r--. 1 root root 11357 Jan 10 01:36 LICENSE
+-rwxr-xr-x. 1 root root 939 Jan 10 01:36 rep-benchmark.sh
+-rw-r--r--. 1 root root 14 Jan 10 01:36 routine
+```
+
+| Name             | File              | Usage                                               |
+| :--------------- | :---------------- | :-------------------------------------------------- |
+| benchmark.bat    | -                 | Startup script on Windows                           |
+| benchmark.sh     | -                 | Startup script on Linux/Mac                         |
+| bin              | startup.sh        | Initialization script folder                        |
+| conf             | config.properties | Test scenario configuration file                    |
+| lib              | -                 | Dependency library                                  |
+| LICENSE          | -                 | License file                                        |
+| cli-benchmark.sh | -                 | One-click startup script                            |
+| routine          | -                 | Automatic execution of multiple test configurations |
+| rep-benchmark.sh | -                 | Automatic execution of multiple test scripts        |
+
+#### **Execution of Tests**
+
+1. Modify the configuration file (`conf/config.properties`) according to test requirements. For example, to test Apache IoTDB 2.0, set the following parameter:
+
+   ```Properties
+   DB_SWITCH=IoTDB-200-SESSION_BY_TABLET
+   ```
+
+2. Ensure the target time-series database is running.
+
+3. Start IoT-benchmark to execute the test. Monitor the status of both the target database and IoT-benchmark during execution.
+
+4. Upon completion, review the results and analyze the test process.
+
+#### **Results Interpretation**
+
+All test log files are stored in the `logs` folder, while test results are saved in the `data/csvOutput` folder. 
For example, the following result matrix illustrates the test outcome: + +![](/img/bm4.png) + +- **Result Matrix:** + - OkOperation: Number of successful operations. + - OkPoint: Number of successfully written points (for write operations) or successfully queried points (for query operations). + - FailOperation: Number of failed operations. + - FailPoint: Number of failed write points. +- **Latency (ms) Matrix:** + - AVG: Average operation latency. + - MIN: Minimum operation latency. + - Pn: Quantile values of the overall operation distribution (e.g., P25 represents the 25th percentile, or lower quartile). + +## **Main** **Parameters** + +#### IoTDB Service Model + +The `IoTDB_DIALECT_MODE` parameter supports two modes: `tree` and `table`. The default value is `tree`. + +- **For IoTDB 2.0 and later versions**, the `IoTDB_DIALECT_MODE` parameter must be specified, and only one mode can be set for each IoTDB instance. +- **IoTDB_DIALECT_MODE = table:** + - The number of devices must be an integer multiple of the number of tables. + - The number of tables must be an integer multiple of the number of databases. +- **IoTDB_DIALECT_MODE = tree:** + - The number of devices must be greater than or equal to the number of databases. + +Key Parameters for IoTDB Service Model + +| **Parameter name** | **Type** | **Example** | **System description** | +| :---------------------- | :------- | :---------- | :----------------------------------------------------------- | +| IoTDB_TABLE_NAME_PREFIX | String | `table_` | Prefix for table names when `IoTDB_DIALECT_MODE` is set to `table`. | +| DATA_CLIENT_NUMBER | Integer | `10` | Number of clients, must be an integer multiple of the table count. | +| SENSOR_NUMBER | Integer | `10` | Controls the number of attribute columns in the table model. | +| IoTDB_TABLE_NUMBER | Integer | `1` | Specifies the number of tables when using the table model. 
|
+
+#### **Working Mode**
+
+The `BENCHMARK_WORK_MODE` parameter supports four operational modes:
+
+1. **General Test Mode (`testWithDefaultPath`):** Configured via the `OPERATION_PROPORTION` parameter to support write-only, read-only, and mixed read-write operations.
+2. **Data Generation Mode (`generateDataMode`):** Generates a reusable dataset, which is saved to `FILE_PATH` for subsequent use in the correctness write and correctness query modes.
+3. **Single Database Correctness Write Mode (`verificationWriteMode`):** Verifies the correctness of dataset writing by writing the dataset generated in data generation mode. This mode supports only IoTDB v1.0+ and InfluxDB v1.x.
+4. **Single Database Correctness Query Mode (`verificationQueryMode`):** Verifies the correctness of dataset queries after using the correctness write mode. This mode supports only IoTDB v1.0+ and InfluxDB v1.x.
+
+Mode configurations are shown in the table below:
+
+| **Mode name**                          | **BENCHMARK_WORK_MODE** | Description                                             | Required Configuration     |
+| :------------------------------------- | :---------------------- | :------------------------------------------------------ | :------------------------- |
+| General test mode                      | testWithDefaultPath     | Supports multiple read and write mixed load operations. | `OPERATION_PROPORTION`     |
+| Generate data mode                     | generateDataMode        | Generates datasets recognizable by IoT-benchmark.       | `FILE_PATH` and `DATA_SET` |
+| Single database correctness write mode | verificationWriteMode   | Writes datasets for correctness verification.           | `FILE_PATH` and `DATA_SET` |
+| Single database correctness query mode | verificationQueryMode   | Queries datasets to verify correctness. 
| `FILE_PATH` and `DATA_SET` |
+
+#### **Server Connection Information**
+
+Once the working mode is specified, the following parameters must be configured to inform IoT-benchmark of the target time-series database:
+
+| **Parameter** | **Type** | **Example**                   | **Description**                                        |
+| :------------ | :------- | :---------------------------- | :----------------------------------------------------- |
+| DB_SWITCH     | String   | `IoTDB-200-SESSION_BY_TABLET` | Specifies the type of time-series database under test. |
+| HOST          | String   | `127.0.0.1`                   | Network address of the target time-series database.    |
+| PORT          | Integer  | `6667`                        | Network port of the target time-series database.       |
+| USERNAME      | String   | `root`                        | Login username for the time-series database.           |
+| PASSWORD      | String   | `root`                        | Password for the database login user.                  |
+| DB_NAME       | String   | `test`                        | Name of the target time-series database.               |
+| TOKEN         | String   | -                             | Authentication token (used for InfluxDB 2.0).          |
+
+#### **Write Scenario Parameters**
+
+| **Parameter**              | **Type**              | **Example**                 | **Description**                                              |
+| :------------------------- | :-------------------- | :-------------------------- | :----------------------------------------------------------- |
+| CLIENT_NUMBER              | Integer               | `100`                       | Total number of clients used for writing.                    |
+| GROUP_NUMBER               | Integer               | `20`                        | Number of databases (only applicable for IoTDB).             |
+| DEVICE_NUMBER              | Integer               | `100`                       | Total number of devices.                                     |
+| SENSOR_NUMBER              | Integer               | `300`                       | Total number of sensors per device. (Controls the number of attribute columns when using the IoTDB table model.) |
+| INSERT_DATATYPE_PROPORTION | String                | `1:1:1:1:1:1:0:0:0:0`       | Ratio of data types: `BOOLEAN:INT32:INT64:FLOAT:DOUBLE:TEXT:STRING:BLOB:TIMESTAMP:DATE`. |
+| POINT_STEP                 | Integer               | `1000`                      | Time interval (in ms) between generated data points. 
|
+| OP_MIN_INTERVAL            | Integer               | `0`                         | Minimum interval (ms) between the starts of consecutive operations: if an operation takes longer than this value, the next one starts immediately; otherwise the client waits (OP_MIN_INTERVAL - actual execution time) ms. A value of 0 disables this parameter; -1 makes it equal to POINT_STEP. |
+| IS_OUT_OF_ORDER            | Boolean               | `false`                     | Specifies whether to write data out of order.                |
+| OUT_OF_ORDER_RATIO         | Floating point number | `0.3`                       | Proportion of out-of-order data.                             |
+| BATCH_SIZE_PER_WRITE       | Integer               | `1`                         | Number of data rows written per batch.                       |
+| START_TIME                 | Time                  | `2022-10-30T00:00:00+08:00` | Start timestamp for data generation.                         |
+| LOOP                       | Integer               | `86400`                     | Total number of write operations; operations are divided among types according to `OPERATION_PROPORTION`. |
+| OPERATION_PROPORTION       | Character             | `1:0:0:0:0:0:0:0:0:0:0`     | Ratio of operation types (write:Q1:Q2:...:Q10).              |
+
+#### **Query Scenario Parameters**
+
+| Parameter            | Type      | Example                 | Description                                                  |
+| :------------------- | :-------- | :---------------------- | :----------------------------------------------------------- |
+| QUERY_DEVICE_NUM     | Integer   | `2`                     | Number of devices involved in each query statement.          |
+| QUERY_SENSOR_NUM     | Integer   | `2`                     | Number of sensors involved in each query statement.          |
+| QUERY_AGGREGATE_FUN  | Character | `count`                 | Aggregate functions used in queries (`COUNT`, `AVG`, `SUM`, etc.). |
+| STEP_SIZE            | Integer   | `1`                     | Time interval step for time filter conditions.               |
+| QUERY_INTERVAL       | Integer   | `250000`                | Time interval between query start and end times.             |
+| QUERY_LOWER_VALUE    | Integer   | `-5`                    | Threshold for conditional queries (`WHERE value > QUERY_LOWER_VALUE`). 
|
+| GROUP_BY_TIME_UNIT   | Integer   | `20000`                 | The size of each group in the `GROUP BY` statement.          |
+| LOOP                 | Integer   | `10`                    | Total number of query operations; operations are divided among types according to `OPERATION_PROPORTION`. |
+| OPERATION_PROPORTION | Character | `0:0:0:0:0:0:0:0:0:0:1` | Ratio of operation types (`write:Q1:Q2:...:Q10`).            |
+
+#### **Query Types and Example SQL**
+
+| Number | Query Type                     | IoTDB Sample SQL                                             |
+| :----- | :----------------------------- | :----------------------------------------------------------- |
+| Q1     | Precise Point Query            | `select v1 from root.db.d1 where time = ?`                   |
+| Q2     | Time Range Query               | `select v1 from root.db.d1 where time > ? and time < ?`      |
+| Q3     | Time Range with Value Filter   | `select v1 from root.db.d1 where time > ? and time < ? and v1 > ?` |
+| Q4     | Time Range Aggregation Query   | `select count(v1) from root.db.d1 where time > ? and time < ?` |
+| Q5     | Full-Time Range with Filtering | `select count(v1) from root.db.d1 where v1 > ?`              |
+| Q6     | Range Aggregation with Filter  | `select count(v1) from root.db.d1 where v1 > ? and time > ? and time < ?` |
+| Q7     | Time Grouping Aggregation      | `select count(v1) from root.db.d1 group by ([?, ?), ?, ?)`   |
+| Q8     | Latest Point Query             | `select last v1 from root.db.d1`                             |
+| Q9     | Descending Range Query         | `select v1 from root.sg.d1 where time > ? and time < ? order by time desc` |
+| Q10    | Descending Range with Filter   | `select v1 from root.sg.d1 where time > ? and time < ? and v1 > ? order by time desc` |
+
+#### **Test Process and Result Persistence**
+
+IoT-benchmark currently supports persisting the test process and test results through configuration parameters.
+
+| **Parameter**         | **Type** | **Example** | **Description**                                              |
+| :-------------------- | :------- | :---------- | :----------------------------------------------------------- |
+| TEST_DATA_PERSISTENCE | String   | `None`      | Specifies the result persistence method. 
Options: `None`, `IoTDB`, `MySQL`, `CSV`. | +| RECORD_SPLIT | Boolean | `true` | Whether to split results into multiple records. (Not supported by IoTDB currently.) | +| RECORD_SPLIT_MAX_LINE | Integer | `10000000` | Maximum number of rows per record (10 million rows per database table or CSV file). | +| TEST_DATA_STORE_IP | String | `127.0.0.1` | IP address of the database for result storage. | +| TEST_DATA_STORE_PORT | Integer | `6667` | Port number of the output database. | +| TEST_DATA_STORE_DB | String | `result` | Name of the output database. | +| TEST_DATA_STORE_USER | String | `root` | Username for accessing the output database. | +| TEST_DATA_STORE_PW | String | `root` | Password for accessing the output database. | + +**Result Persistence Details** + +- **CSV Mode:** If `TEST_DATA_PERSISTENCE` is set to `CSV`, a `data` folder is generated in the IoT-benchmark root directory during and after test execution. This folder contains: + - `csv` folder: Records the test process. + - `csvOutput` folder: Stores the test results. +- **MySQL Mode:** If `TEST_DATA_PERSISTENCE` is set to `MySQL`, IoT-benchmark creates the following tables in the specified MySQL database: + - **Test Process Table:** + 1. Created before the test starts. + 2. Named as: `testWithDefaultPath___`. + - **Configuration Table:** + 1. Named `CONFIG`. + 2. Stores the test configuration. + 3. Created if it does not exist. + - **Final Result Table:** + 1. Named `FINAL_RESULT`. + 2. Stores the test results after test completion. + 3. Created if it does not exist. + +#### Automation Script + +##### One-Click Script Startup + +The `cli-benchmark.sh` script allows one-click startup of IoTDB, IoTDB Benchmark monitoring, and IoTDB Benchmark testing. However, please note that this script will clear all existing data in IoTDB during startup, so use it with caution. + +**Steps to Run:** + +1. Edit the `IOTDB_HOME` parameter in `cli-benchmark.sh` to the local IoTDB directory. +2. 
Start the test by running the following command:
+
+```Bash
+> ./cli-benchmark.sh
+```
+
+3. After the test completes:
+   1. Check test-related logs in the `logs` folder.
+   2. Check monitoring-related logs in the `server-logs` folder.
+
+##### Automatic Execution of Multiple Tests
+
+Single tests are often insufficient without comparative results. Therefore, IoT-benchmark provides an interface for executing multiple tests in sequence.
+
+1. **Routine Configuration:** Each line in the `routine` file specifies the parameters that change for each test. For example:
+
+   ```Plain
+   LOOP=10 DEVICE_NUMBER=100 TEST
+   LOOP=20 DEVICE_NUMBER=50 TEST
+   LOOP=50 DEVICE_NUMBER=20 TEST
+   ```
+
+In this example, three tests will run sequentially with `LOOP` values of 10, 20, and 50.
+
+**Important Notes:**
+
+- Multiple parameters can be changed in each test using the format:
+
+  ```Bash
+  LOOP=20 DEVICE_NUMBER=10 TEST
+  ```
+
+- Avoid unnecessary spaces.
+
+- The `TEST` keyword marks the start of a new test.
+
+- Changed parameters persist across subsequent tests unless explicitly reset.
+
+2. **Start the Test:** After configuring the `routine` file, start multi-test execution using the following command:
+
+   ```Bash
+   > ./rep-benchmark.sh
+   ```
+
+3. Test results will be displayed in the terminal.
+
+**Important Notes:**
+
+- Closing the terminal or losing the client connection will terminate the test process.
+
+- To run the test as a background daemon, execute:
+
+  ```Bash
+  > ./rep-benchmark.sh > /dev/null 2>&1 &
+  ```
+
+- To monitor progress, check the logs:
+
+  ```Bash
+  > cd ./logs
+  > tail -f log_info.log
+  ```
+
+## Test Example
+
+This example demonstrates how to configure and run an IoT-benchmark test with IoTDB 2.0 using the table model for writing and querying. 
+ +```Properties +----------------------Main Configurations---------------------- +BENCHMARK_WORK_MODE=testWithDefaultPath +IoTDB_DIALECT_MODE=TABLE +DB_SWITCH=IoTDB-200-SESSION_BY_TABLET +GROUP_NUMBER=1 +IoTDB_TABLE_NUMBER=1 +DEVICE_NUMBER=60 +REAL_INSERT_RATE=1.0 +SENSOR_NUMBER=10 +OPERATION_PROPORTION=1:0:0:0:0:0:0:0:0:0:0:0 +SCHEMA_CLIENT_NUMBER=10 +DATA_CLIENT_NUMBER=10 +LOOP=10 +BATCH_SIZE_PER_WRITE=10 +DEVICE_NUM_PER_WRITE=1 +START_TIME=2025-01-01T00:00:00+08:00 +POINT_STEP=1000 +INSERT_DATATYPE_PROPORTION=1:1:1:1:1:1:0:0:0:0 +VECTOR=true +``` + +**Execution Steps:** + +1. Ensure the target database (IoTDB 2.0) is running. +2. Start IoT-benchmark using the configured parameters. +3. Upon completion, view the test results. + +```Shell +Create schema cost 0.88 second +Test elapsed time (not include schema creation): 4.60 second +----------------------------------------------------------Result Matrix---------------------------------------------------------- +Operation okOperation okPoint failOperation failPoint throughput(point/s) +INGESTION 600 60000 0 0 13054.42 +PRECISE_POINT 0 0 0 0 0.00 +TIME_RANGE 0 0 0 0 0.00 +VALUE_RANGE 0 0 0 0 0.00 +AGG_RANGE 0 0 0 0 0.00 +AGG_VALUE 0 0 0 0 0.00 +AGG_RANGE_VALUE 0 0 0 0 0.00 +GROUP_BY 0 0 0 0 0.00 +LATEST_POINT 0 0 0 0 0.00 +RANGE_QUERY_DESC 0 0 0 0 0.00 +VALUE_RANGE_QUERY_DESC 0 0 0 0 0.00 +GROUP_BY_DESC 0 0 0 0 0.00 +--------------------------------------------------------------------------------------------------------------------------------- + +--------------------------------------------------------------------------Latency (ms) Matrix-------------------------------------------------------------------------- +Operation AVG MIN P10 P25 MEDIAN P75 P90 P95 P99 P999 MAX SLOWEST_THREAD +INGESTION 41.77 0.95 1.41 2.27 6.76 24.14 63.42 127.18 1260.92 1265.72 1265.49 2581.91 +PRECISE_POINT 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +TIME_RANGE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
+VALUE_RANGE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +AGG_RANGE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +AGG_VALUE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +AGG_RANGE_VALUE 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +GROUP_BY 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +LATEST_POINT 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +RANGE_QUERY_DESC 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +VALUE_RANGE_QUERY_DESC 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +GROUP_BY_DESC 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 +----------------------------------------------------------------------------------------------------------------------------------------------------------------------- +``` \ No newline at end of file diff --git a/src/UserGuide/latest-Table/Tools-System/CLI.md b/src/UserGuide/latest-Table/Tools-System/CLI.md index ab2208619..9b5db7a7a 100644 --- a/src/UserGuide/latest-Table/Tools-System/CLI.md +++ b/src/UserGuide/latest-Table/Tools-System/CLI.md @@ -19,43 +19,35 @@ --> -# CLI Tool - The IoTDB Command Line Interface (CLI) tool allows users to interact with the IoTDB server. Before using the CLI tool to connect to IoTDB, ensure that the IoTDB service is running correctly. This document explains how to launch the CLI and its related parameters. -> In this manual, `$IOTDB_HOME` represents the installation directory of IoTDB. +In this manual, `$IOTDB_HOME` represents the installation directory of IoTDB. -## 1 CLI Launch +### CLI Launch The CLI client script is located in the `$IOTDB_HOME/sbin` directory. 
The common commands to start the CLI tool are as follows:

-- Linux/MacOS:
+#### **Linux / MacOS**

-```Shell
+```Bash
Shell> bash sbin/start-cli.sh -sql_dialect table
#or
Shell> bash sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root -sql_dialect table
```

-- Windows:
+#### **Windows**

-```Shell
+```Bash
Shell> sbin\start-cli.bat -sql_dialect table
#or
Shell> sbin\start-cli.bat -h 127.0.0.1 -p 6667 -u root -pw root -sql_dialect table
```

-Among them:
-
--The -h and -p items are the IP and RPC port numbers where IoTDB is located (default IP and RPC port numbers are 127.0.0.1 and 6667 if not modified locally)
-- -u and -pw are the username and password for logging into IoTDB (after installation, IoTDB has a default user, and both username and password are 'root')
-- -sql-dialect is the logged in data model (table model or tree model), where table is specified to represent entering table model mode
-
**Parameter Explanation**

| **Parameter** | **Type** | **Required** | **Description** | **Example** |
| -------------------------- | -------- | ------------ | ------------------------------------------------------------ | ------------------- |
-| -h `<host>` | string | No | The IP address of the IoTDB server. (Default: 127.0.0.1) | -h 127.0.0.1 |
+| -h `<host>` | string | No | The IP address of the IoTDB server. (Default: 127.0.0.1) | -h 127.0.0.1 |
| -p `<port>` | int | No | The RPC port of the IoTDB server. (Default: 6667) | -p 6667 |
| -u `<username>` | string | No | The username to connect to the IoTDB server. (Default: root) | -u root |
| -pw `<password>` | string | No | The password to connect to the IoTDB server. (Default: root) | -pw root |
@@ -70,11 +62,9 @@ The figure below indicates a successful startup:

![](/img/Cli-01.png)

-## 2 Execute statements in CLI
+### Example Commands

-After entering the CLI, users can directly interact by entering SQL statements in the conversation. 
For example: - -- Create Database +#### **Create a Database** ```Java create database test @@ -83,8 +73,7 @@ create database test ![](/img/Cli-02.png) -- Show Databases - +#### **Show Databases** ```Java show databases ``` @@ -92,13 +81,11 @@ show databases ![](/img/Cli-03.png) -## 3 CLI Exit +### CLI Exit To exit the CLI and terminate the session, type`quit`or`exit`. -## 4 Additional Notes - -CLI Command Usage Tips: +### Additional Notes and Shortcuts 1. **Navigate Command History:** Use the up and down arrow keys. 2. **Auto-Complete Commands:** Use the right arrow key. diff --git a/src/UserGuide/latest-Table/Tools-System/Maintenance-Tool_timecho.md b/src/UserGuide/latest-Table/Tools-System/Maintenance-Tool_timecho.md new file mode 100644 index 000000000..0c60ab937 --- /dev/null +++ b/src/UserGuide/latest-Table/Tools-System/Maintenance-Tool_timecho.md @@ -0,0 +1,1150 @@ + + +## IoTDB-OpsKit + +The IoTDB OpsKit is an easy-to-use operation and maintenance tool designed for TimechoDB (Enterprise-grade product based on Apache IoTDB). It helps address the operational and maintenance challenges of multi-node distributed IoTDB deployments by providing functionalities such as cluster deployment, start/stop management, elastic scaling, configuration updates, and data export. With one-click command execution, it simplifies the management of complex database clusters and significantly reduces operational complexity. + +This document provides guidance on remotely deploying, configuring, starting, and stopping IoTDB cluster instances using the cluster management tool. + +### Prerequisites + +The IoTDB OpsKit requires GLIBC 2.17 or later, which means the minimum supported operating system version is CentOS 7. The target machines for IoTDB deployment must have the following dependencies installed: + +- JDK 8 or later +- lsof +- netstat +- unzip + +If any of these dependencies are missing, please install them manually. 
The last section of this document provides installation commands for reference.
+
+> **Note:** The IoTDB cluster management tool requires **root privileges** to execute.
+
+### Deployment
+
+#### Download and Installation
+
+The IoTDB OpsKit is an auxiliary tool for TimechoDB. Please contact the Timecho team to obtain the download instructions.
+
+To install:
+
+1. Navigate to the `iotdb-opskit` directory and execute:
+
+```Bash
+bash install-iotdbctl.sh
+```
+
+This will activate the `iotdbctl` command in the current shell session. You can verify the installation by checking the deployment prerequisites:
+
+```Bash
+iotdbctl cluster check example
+```
+
+2. Alternatively, if you prefer not to activate `iotdbctl`, you can execute commands directly using the absolute path:
+
+```Bash
+/sbin/iotdbctl cluster check example
+```
+
+### Cluster Configuration Files
+
+The cluster configuration files are stored in the `iotdbctl/config` directory as YAML files.
+
+- Each YAML file name corresponds to a cluster name. Multiple YAML files can coexist.
+- A sample configuration file (`default_cluster.yaml`) is provided in the `iotdbctl/config` directory to assist users in setting up their configurations.
+
+#### **Structure of YAML Configuration**
+
+The YAML file consists of the following five sections:
+
+1. `global` – General settings, such as SSH credentials, installation paths, and JDK configurations.
+2. `confignode_servers` – Configuration settings for ConfigNodes.
+3. `datanode_servers` – Configuration settings for DataNodes.
+4. `grafana_server` – Configuration settings for Grafana monitoring.
+5. `prometheus_server` – Configuration settings for Prometheus monitoring.
+
+A sample YAML file (`default_cluster.yaml`) is included in the `iotdbctl/config` directory.
+
+- You can copy and rename it based on your cluster setup.
+- All uncommented fields are mandatory.
+- Commented fields are optional. 
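Since the five top-level sections are fixed, a configuration loaded into a dict (as a YAML parser would produce) can be sanity-checked before use. A minimal sketch — a hypothetical helper, not part of iotdbctl, and it treats the two monitoring sections as expected even though they are optional in practice:

```python
# Hypothetical helper (not part of iotdbctl): check a parsed cluster
# configuration dict for the five top-level sections described above.
EXPECTED_SECTIONS = (
    "global",
    "confignode_servers",
    "datanode_servers",
    "grafana_server",
    "prometheus_server",
)

def missing_sections(config):
    """Return the expected section names absent from the configuration."""
    return [name for name in EXPECTED_SECTIONS if name not in config]

config = {
    "global": {"user": "root", "ssh_port": 22, "deploy_dir": "/data/iotdb"},
    "confignode_servers": [{"name": "confignode_1"}],
    "datanode_servers": [{"name": "datanode_1"}],
}
print(missing_sections(config))  # the two monitoring sections are absent
```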
+
+**Example:** Checking `default_cluster.yaml`
+
+To validate a cluster configuration, execute:
+
+```Bash
+iotdbctl cluster check default_cluster
+```
+
+For a complete list of available commands, refer to the command reference section below.
+
+#### Parameter Reference
+
+| **Parameter** | **Description** | **Mandatory** |
+| ----------------------- | ------------------------------------------------------------ | ------------- |
+| iotdb_zip_dir | IoTDB distribution directory. If empty, the package will be downloaded from `iotdb_download_url`. | NO |
+| iotdb_download_url | IoTDB download URL. If `iotdb_zip_dir` is empty, the package will be retrieved from this address. | NO |
+| jdk_tar_dir | Local path to the JDK package for uploading and deployment. | NO |
+| jdk_deploy_dir | Remote deployment directory for the JDK. | NO |
+| jdk_dir_name | JDK decompression directory name. Default: `jdk_iotdb`. | NO |
+| iotdb_lib_dir | IoTDB library directory (or `.zip` package for upgrades). Default: commented out. | NO |
+| user | SSH login username for deployment. | YES |
+| password | SSH password (if omitted, key-based authentication will be used). | NO |
+| pkey | SSH private key (used if `password` is not provided). | NO |
+| ssh_port | SSH port number. | YES |
+| deploy_dir | IoTDB deployment directory. | YES |
+| iotdb_dir_name | IoTDB decompression directory name. Default: `iotdb`. | NO |
+| datanode-env.sh | Corresponds to `iotdb/config/datanode-env.sh`. If both `global` and `datanode_servers` are configured, `datanode_servers` takes precedence. | NO |
+| confignode-env.sh | Corresponds to `iotdb/config/confignode-env.sh`. If both `global` and `confignode_servers` are configured, `confignode_servers` takes precedence. | NO |
+| iotdb-system.properties | Corresponds to `iotdb/config/iotdb-system.properties`. | NO |
+| cn_internal_address | The inter-node communication address for ConfigNodes. 
This parameter defines the address of the surviving ConfigNode, which defaults to `confignode_x`. If both `global` and `confignode_servers` are configured, the value in `confignode_servers` takes precedence. Corresponds to `cn_internal_address` in `iotdb/config/iotdb-system.properties`. | YES | +| dn_internal_address | The inter-node communication address for DataNodes. This address defaults to `confignode_x`. If both `global` and `datanode_servers` are configured, the value in `datanode_servers` takes precedence. Corresponds to `dn_internal_address` in `iotdb/config/iotdb-system.properties`. | YES | + +Both `datanode-env.sh` and `confignode-env.sh` allow **extra parameters** to be appended. These parameters can be configured using the `extra_opts` field. Example from `default_cluster.yaml`: + +```YAML +datanode-env.sh: + extra_opts: | + IOTDB_JMX_OPTS="$IOTDB_JMX_OPTS -XX:+UseG1GC" + IOTDB_JMX_OPTS="$IOTDB_JMX_OPTS -XX:MaxGCPauseMillis=200" +``` + +#### ConfigNode Configuration + +ConfigNodes can be configured in `confignode_servers`. Multiple ConfigNodes can be deployed, with the first started ConfigNode (`node1`) serving as the Seed ConfigNode by default. + +| **Parameter** | **Description** | **Mandatory** | +| ----------------------- | ------------------------------------------------------------ | ------------- | +| name | ConfigNode name. | YES | +| deploy_dir | ConfigNode deployment directory. | YES | +| cn_internal_address | Inter-node communication address for ConfigNodes, corresponding to `iotdb/config/iotdb-system.properties`. | YES | +| cn_seed_config_node | The cluster configuration address points to the surviving ConfigNode. This address defaults to `confignode_x`. 
If both `global` and `confignode_servers` are configured, the value in `confignode_servers` takes precedence. Corresponds to `cn_seed_config_node` in `iotdb/config/iotdb-system.properties`. | YES |
+| cn_internal_port | Internal communication port, corresponding to `cn_internal_port` in `iotdb/config/iotdb-system.properties`. | YES |
+| cn_consensus_port | Consensus communication port, corresponding to `cn_consensus_port` in `iotdb/config/iotdb-system.properties`. | NO |
+| cn_data_dir | Data directory for ConfigNodes, corresponding to `cn_data_dir` in `iotdb/config/iotdb-system.properties`. | YES |
+| iotdb-system.properties | ConfigNode properties file. If `global` and `confignode_servers` are both configured, values from `confignode_servers` take precedence. | NO |
+
+#### DataNode Configuration
+
+DataNodes can be configured in `datanode_servers`. Multiple DataNodes can be deployed, each requiring unique configuration.
+
+| **Parameter** | **Description** | **Mandatory** |
+| ----------------------- | ------------------------------------------------------------ | ------------- |
+| name | DataNode name. | YES |
+| deploy_dir | DataNode deployment directory. | YES |
+| dn_rpc_address | RPC communication address, corresponding to `dn_rpc_address` in `iotdb/config/iotdb-system.properties`. | YES |
+| dn_internal_address | Internal communication address, corresponding to `dn_internal_address` in `iotdb/config/iotdb-system.properties`. | YES |
+| dn_seed_config_node | Points to the active ConfigNode. Defaults to `confignode_x`. If `global` and `datanode_servers` are both configured, values from `datanode_servers` take precedence. Corresponds to `dn_seed_config_node` in `iotdb/config/iotdb-system.properties`. | YES |
+| dn_rpc_port | RPC port for DataNodes, corresponding to `dn_rpc_port` in `iotdb/config/iotdb-system.properties`. | YES |
+| dn_internal_port | Internal communication port, corresponding to `dn_internal_port` in `iotdb/config/iotdb-system.properties`. 
| YES |
+| iotdb-system.properties | DataNode properties file. If `global` and `datanode_servers` are both configured, values from `datanode_servers` take precedence. | NO |
+
+#### Grafana Configuration
+
+Grafana can be configured in `grafana_server`. This section defines the settings for deploying Grafana as a monitoring solution for IoTDB.
+
+| **Parameter** | **Description** | **Mandatory** |
+| ---------------- | ------------------------------------------------------------ | ------------- |
+| grafana_dir_name | Name of the Grafana decompression directory. Default: `grafana_iotdb`. | NO |
+| host | The IP address of the machine hosting Grafana. | YES |
+| grafana_port | The port Grafana listens on. Default: `3000`. | NO |
+| deploy_dir | Deployment directory for Grafana. | YES |
+| grafana_tar_dir | Path to the Grafana compressed package. | YES |
+| dashboards | Path to pre-configured Grafana dashboards. | NO |
+
+#### Prometheus Configuration
+
+Prometheus can be configured in `prometheus_server`. This section defines the settings for deploying Prometheus as a monitoring solution for IoTDB.
+
+| **Parameter** | **Description** | **Mandatory** |
+| --------------------------- | ------------------------------------------------------------ | ------------- |
+| prometheus_dir_name | Name of the Prometheus decompression directory. Default: `prometheus_iotdb`. | NO |
+| host | The IP address of the machine hosting Prometheus. | YES |
+| prometheus_port | The port Prometheus listens on. Default: `9090`. | NO |
+| deploy_dir | Deployment directory for Prometheus. | YES |
+| prometheus_tar_dir | Path to the Prometheus compressed package. | YES |
+| storage_tsdb_retention_time | Number of days data is retained. Default: `15 days`. | NO |
+| storage_tsdb_retention_size | Maximum data storage size per block. Default: `512M`. Units: KB, MB, GB, TB, PB, EB. 
| NO |
+
+If metrics are enabled in `iotdb-system.properties` (in `config/xxx.yaml`), the configurations will be automatically applied to Prometheus without manual modification.
+
+**Special Configuration Notes**
+
+- **Handling Special Characters in YAML Keys**: If a YAML key value contains special characters (such as `:`), it is recommended to enclose the entire value in double quotes (`""`).
+- **Avoid Spaces in File Paths**: Paths containing spaces may cause parsing errors in some configurations.
+
+### Usage Scenarios
+
+#### Data Cleanup
+
+This operation deletes cluster data directories, including:
+
+- IoTDB data directories,
+- ConfigNode directories (`cn_system_dir`, `cn_consensus_dir`),
+- DataNode directories (`dn_data_dirs`, `dn_consensus_dir`, `dn_system_dir`),
+- Log directories and ext directories specified in the YAML configuration.
+
+To clean cluster data, perform the following steps:
+
+```Bash
+# Step 1: Stop the cluster
+iotdbctl cluster stop default_cluster
+
+# Step 2: Clean the cluster data
+iotdbctl cluster clean default_cluster
+```
+
+#### Cluster Destruction
+
+The cluster destruction process completely removes the following resources:
+
+- Data directories,
+- ConfigNode directories (`cn_system_dir`, `cn_consensus_dir`),
+- DataNode directories (`dn_data_dirs`, `dn_consensus_dir`, `dn_system_dir`),
+- Log and ext directories,
+- IoTDB deployment directory,
+- Grafana and Prometheus deployment directories.
+
+To destroy a cluster, follow these steps:
+
+```Bash
+# Step 1: Stop the cluster
+iotdbctl cluster stop default_cluster
+
+# Step 2: Destroy the cluster
+iotdbctl cluster destroy default_cluster
+```
+
+#### Cluster Upgrade
+
+To upgrade the cluster, follow these steps:
+
+1. In `config/xxx.yaml`, set **`iotdb_lib_dir`** to the path of the JAR files to be uploaded. Example: `iotdb/lib`
+2. If uploading a compressed package, compress the `iotdb/lib` directory:
+
+```Bash
+zip -r lib.zip apache-iotdb-1.2.0/lib/*
+```
+
+3. 
Execute the following commands to distribute the library and restart the cluster:
+
+```Bash
+iotdbctl cluster dist-lib default_cluster
+iotdbctl cluster restart default_cluster
+```
+
+#### Hot Deployment
+
+Hot deployment allows real-time configuration updates without restarting the cluster.
+
+Steps:
+
+1. Modify the configuration in `config/xxx.yaml`.
+2. Distribute the updated configuration and reload it:
+
+```Bash
+iotdbctl cluster dist-conf default_cluster
+iotdbctl cluster reload default_cluster
+```
+
+#### Cluster Expansion
+
+To expand the cluster by adding new nodes:
+
+1. Add a new DataNode or ConfigNode in `config/xxx.yaml`.
+2. Execute the cluster expansion command:
+
+```Bash
+iotdbctl cluster scaleout default_cluster
+```
+
+#### Cluster Shrinking
+
+To remove a node from the cluster:
+
+1. Identify the node name or IP:port in `config/xxx.yaml`:
+   1. ConfigNode port: `cn_internal_port`
+   2. DataNode port: `rpc_port`
+2. Execute the following command:
+
+```Bash
+iotdbctl cluster scalein default_cluster
+```
+
+#### Managing Existing IoTDB Clusters
+
+To manage an existing IoTDB cluster with the OpsKit tool:
+
+1. Configure SSH credentials:
+   1. Set `user`, `password` (or `pkey`), and `ssh_port` in `config/xxx.yaml`.
+2. Modify IoTDB deployment paths: For example, if IoTDB is deployed at `/home/data/apache-iotdb-1.1.1`:
+
+```YAML
+deploy_dir: /home/data/
+iotdb_dir_name: apache-iotdb-1.1.1
+```
+
+3. Configure JDK paths: If `JAVA_HOME` is not used, set the JDK deployment path:
+
+```YAML
+jdk_deploy_dir: /home/data/
+jdk_dir_name: jdk_1.8.2
+```
+
+4. 
Set cluster addresses:
+
+- `cn_internal_address` and `dn_internal_address`
+- In `confignode_servers` → `iotdb-system.properties`, configure:
+  - `cn_internal_address`, `cn_internal_port`, `cn_consensus_port`, `cn_system_dir`, `cn_consensus_dir`
+- In `datanode_servers` → `iotdb-system.properties`, configure:
+  - `dn_rpc_address`, `dn_internal_address`, `dn_data_dirs`, `dn_consensus_dir`, `dn_system_dir`
+
+5. Execute the initialization command:
+
+```Bash
+iotdbctl cluster init default_cluster
+```
+
+#### Deploying IoTDB, Grafana, and Prometheus
+
+To deploy an IoTDB cluster along with Grafana and Prometheus:
+
+1. Enable metrics: In `iotdb-system.properties`, enable the metrics interface.
+2. Configure Grafana:
+
+- If deploying multiple dashboards, separate names with commas.
+- Ensure dashboard names are unique to prevent overwriting.
+
+3. Configure Prometheus:
+
+- If the IoTDB cluster has metrics enabled, Prometheus automatically adapts without manual configuration.
+
+4. Start the cluster:
+
+```Bash
+iotdbctl cluster start default_cluster
+```
+
+For detailed parameters, refer to the **Cluster Configuration Files** section above.
+
+### Command Reference
+
+The basic command structure is:
+
+```Bash
+iotdbctl cluster <key> <cluster_name> [params (Optional)]
+```
+
+- `key` – The specific command to execute.
+- `cluster_name` – The name of the cluster (matches the YAML file name in `iotdbctl/config`).
+- `params` – Optional parameters for the command.
+
+Example: Deploying the `default_cluster` cluster
+
+```Bash
+iotdbctl cluster deploy default_cluster
+```
+
+#### Command Overview
+
+| **Command** | **Description** | **Parameters** |
+| ---------- | ---------------------------------- | ---------------------------------------------------- |
+| check | Check cluster readiness for deployment. | Cluster name |
+| clean | Clean up cluster data. | Cluster name |
+| deploy/dist-all | Deploy the cluster. 
| Cluster name, -N module (optional: iotdb, grafana, prometheus), -op force (optional) |
+| list | List cluster status. | None |
+| start | Start the cluster. | Cluster name, -N node name (optional: iotdb, grafana, prometheus) |
+| stop | Stop the cluster. | Cluster name, -N node name (optional), -op force (optional) |
+| restart | Restart the cluster. | Cluster name, -N node name (optional), -op force (optional) |
+| show | View cluster details. | Cluster name, details (optional) |
+| destroy | Destroy the cluster. | Cluster name, -N module (optional: iotdb, grafana, prometheus) |
+| scaleout | Expand the cluster. | Cluster name |
+| scalein | Shrink the cluster. | Cluster name, -N node name or IP:port |
+| reload | Hot reload cluster configuration. | Cluster name |
+| dist-conf | Distribute cluster configuration. | Cluster name |
+| dumplog | Backup cluster logs. | Cluster name, -N node name, -h target IP, -pw target password, -p target port, -path backup path, -startdate, -enddate, -loglevel, -l transfer speed |
+| dumpdata | Backup cluster data. | Cluster name, -h target IP, -pw target password, -p target port, -path backup path, -startdate, -enddate, -l transfer speed |
+| dist-lib | Upgrade the IoTDB lib package. | Cluster name |
+| init | Initialize the cluster configuration. | Cluster name |
+| status | View process status. | Cluster name |
+| activate | Activate the cluster. | Cluster name |
+| health_check | Perform a health check. | Cluster name, -N nodename (optional) |
+| backup | Backup the cluster. | Cluster name, -N nodename (optional) |
+| importschema | Import metadata. | Cluster name, -N nodename, -param parameters |
+| exportschema | Export metadata. | Cluster name, -N nodename, -param parameters |
+
+### Detailed Command Execution Process
+
+The following examples use `default_cluster.yaml` as a reference. Users can modify the commands according to their specific cluster configuration files. 
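The invocations in the examples that follow all share the shape described above: a key, the cluster name, and optional parameters. A small sketch of how such command lines could be assembled programmatically — a hypothetical helper, not part of the tool itself:

```python
# Hypothetical helper (not part of iotdbctl): assemble an invocation
# from the command key, the cluster name, and optional parameters.
def build_iotdbctl_command(key, cluster_name, params=None):
    parts = ["iotdbctl", "cluster", key, cluster_name]
    parts.extend(params or [])
    return " ".join(parts)

# Force-stop a single DataNode of default_cluster:
print(build_iotdbctl_command("stop", "default_cluster", ["-N", "datanode_1", "-op", "force"]))
# iotdbctl cluster stop default_cluster -N datanode_1 -op force
```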
+ +#### Check Cluster Deployment Environment + +The following command checks whether the cluster environment meets the deployment requirements: + +```Bash +iotdbctl cluster check default_cluster +``` + +**Execution Steps:** + +1. Locate the corresponding YAML file (`default_cluster.yaml`) based on the cluster name. +2. Retrieve configuration information for ConfigNode and DataNode (`confignode_servers` and `datanode_servers`). +3. Verify the following conditions on the target node: + 1. SSH connectivity + 2. JDK version (must be 1.8 or above) + 3. Required system tools: unzip, lsof, netstat + +**Expected Output:** + +- If successful: `Info: example check successfully!` +- If failed: `Error: example check fail!` + +**Troubleshooting Tips:** + +- JDK version not satisfied: Specify a valid `jdk1.8+` path in the YAML file for deployment. +- Missing system tools: Install unzip, lsof, and netstat on the server. +- Port conflict: Check the error log, e.g., `Error: Server (ip:172.20.31.76) iotdb port (10713) is listening.` + +#### Deploy Cluster + +Deploy the entire cluster using the following command: + +```Bash +iotdbctl cluster deploy default_cluster +``` + +**Execution Steps:** + +1. Locate the corresponding `YAML` file based on the cluster name. +2. Upload the IoTDB and JDK compressed packages (if `jdk_tar_dir` and `jdk_deploy_dir` are configured). +3. Generate and upload the iotdb-system.properties file based on the YAML configuration. 
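Step 3 turns the YAML settings into a Java-style properties file, which is a plain `key=value` rendering of each node's settings. A rough sketch of that mapping — a hypothetical helper, not part of iotdbctl, with example values:

```python
# Hypothetical helper (not part of iotdbctl): render a node's settings
# as iotdb-system.properties key=value lines.
def render_properties(settings):
    return "".join(f"{key}={value}\n" for key, value in settings.items())

# Example DataNode settings (illustrative values only)
datanode_settings = {
    "dn_rpc_address": "192.168.1.5",
    "dn_internal_address": "192.168.1.5",
    "dn_rpc_port": 6667,
}
print(render_properties(datanode_settings), end="")
```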
+ +**Force Deployment:** To overwrite existing deployment directories and redeploy: + +```Bash +iotdbctl cluster deploy default_cluster -op force +``` + +**Deploying Individual Modules:** + +You can deploy specific components individually: + +```Bash +# Deploy Grafana module +iotdbctl cluster deploy default_cluster -N grafana + +# Deploy Prometheus module +iotdbctl cluster deploy default_cluster -N prometheus + +# Deploy IoTDB module +iotdbctl cluster deploy default_cluster -N iotdb +``` + +#### Start Cluster + +Start the cluster using the following command: + +```Bash +iotdbctl cluster start default_cluster +``` + +**Execution Steps:** + +1. Locate the `YAML` file based on the cluster name. +2. Start ConfigNodes sequentially according to the YAML order. + 1. The first ConfigNode is treated as the Seed ConfigNode. + 2. Verify startup by checking process IDs. +3. Start DataNodes sequentially and verify their process IDs. +4. After process verification, check the cluster's service health via CLI. + 1. If the CLI connection fails, retry every 10 seconds, up to 5 times. + +**Start a Single Node:** Start specific nodes by name or IP: + +```Bash +# By node name +iotdbctl cluster start default_cluster -N datanode_1 + +# By IP and port (ConfigNode uses `cn_internal_port`, DataNode uses `rpc_port`) +iotdbctl cluster start default_cluster -N 192.168.1.5:6667 + +# Start Grafana +iotdbctl cluster start default_cluster -N grafana + +# Start Prometheus +iotdbctl cluster start default_cluster -N prometheus +``` + +**Note:** The `iotdbctl` tool relies on `start-confignode.sh` and `start-datanode.sh` scripts. If startup fails, check the cluster status using the following command: + +```Bash +iotdbctl cluster status default_cluster +``` + +#### View Cluster Status + +To view the current cluster status: + +```Bash +iotdbctl cluster show default_cluster +``` + +To view detailed information: + +```Bash +iotdbctl cluster show default_cluster details +``` + +**Execution Steps:** + +1. 
Locate the YAML file and retrieve `confignode_servers` and `datanode_servers` configuration. +2. Execute `show cluster details` via CLI. +3. If one node returns successfully, the process skips checking remaining nodes. + +#### Stop Cluster + +To stop the entire cluster: + +```Bash +iotdbctl cluster stop default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve `confignode_servers` and `datanode_servers` configuration. +2. Stop DataNodes sequentially based on the YAML configuration. +3. Stop ConfigNodes in sequence. + +**Force Stop:** To forcibly stop the cluster using `kill -9`: + +```Bash +iotdbctl cluster stop default_cluster -op force +``` + +**Stop a Single Node:** Stop nodes by name or IP: + +```Bash +# By node name +iotdbctl cluster stop default_cluster -N datanode_1 + +# By IP and port +iotdbctl cluster stop default_cluster -N 192.168.1.5:6667 + +# Stop Grafana +iotdbctl cluster stop default_cluster -N grafana + +# Stop Prometheus +iotdbctl cluster stop default_cluster -N prometheus +``` + +**Note:** If the IoTDB cluster is not fully stopped, verify its status using: + +```Bash +iotdbctl cluster status default_cluster +``` + +#### Clean Cluster Data + +To clean up cluster data, execute: + +```Bash +iotdbctl cluster clean default_cluster +``` + +**Execution Steps:** + +1. Locate the `YAML` file and retrieve `confignode_servers` and `datanode_servers` configuration. +2. Verify that no services are running. If any are active, the cleanup will not proceed. +3. Delete the following directories: + 1. IoTDB data directories, + 2. ConfigNode and DataNode system directories (`cn_system_dir`, `dn_system_dir`), + 3. Consensus directories (`cn_consensus_dir`, `dn_consensus_dir`), + 4. Logs and ext directories. + +#### Restart Cluster + +Restart the cluster using the following command: + +```Bash +iotdbctl cluster restart default_cluster +``` + +**Execution Steps:** + +1. 
Locate the YAML file and retrieve configurations for ConfigNodes, DataNodes, Grafana, and Prometheus. +2. Perform a cluster stop followed by a cluster start. + +**Force Restart:** To forcibly restart the cluster: + +```Bash +iotdbctl cluster restart default_cluster -op force +``` + +**Restart a Single Node:** Restart specific nodes by name: + +```Bash +# Restart DataNode +iotdbctl cluster restart default_cluster -N datanode_1 + +# Restart ConfigNode +iotdbctl cluster restart default_cluster -N confignode_1 + +# Restart Grafana +iotdbctl cluster restart default_cluster -N grafana + +# Restart Prometheus +iotdbctl cluster restart default_cluster -N prometheus +``` + +#### Cluster Expansion + +To add a node to the cluster: + +1. Edit `config/xxx.yaml` to add a new DataNode or ConfigNode. +2. Execute the following command: + +```Bash +iotdbctl cluster scaleout default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve node configuration. +2. Upload the IoTDB and JDK packages (if `jdk_tar_dir` and `jdk_deploy_dir` are configured). +3. Generate and upload iotdb-system.properties. +4. Start the new node and verify success. + +Tip: Only one node expansion is supported per execution. + +#### Cluster Shrinking + +To remove a node from the cluster: + +1. Identify the node name or IP:port in `config/xxx.yaml`. +2. Execute the following command: + +```Bash +#Scale down by node name +iotdbctl cluster scalein default_cluster -N nodename + +#Scale down according to ip+port (ip+port obtains the only node according to ip+dn_rpc_port in datanode, and obtains the only node according to ip+cn_internal_port in confignode) +iotdbctl cluster scalein default_cluster -N ip:port +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve node configuration. +2. Ensure at least one ConfigNode and one DataNode remain. +3. Identify the node to remove, execute the scale-in command, and delete the node directory. 
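The `-N` argument is matched either by node name or by `ip:port` (with `dn_rpc_port` for DataNodes and `cn_internal_port` for ConfigNodes, as noted above). A rough sketch of that resolution against the nodes parsed from the cluster YAML — a hypothetical helper, not the tool's actual code:

```python
# Hypothetical helper (not part of iotdbctl): resolve a -N argument,
# given either a node name or an ip:port pair, against parsed nodes.
def resolve_node(spec, nodes):
    for node in nodes:
        if spec == node["name"] or spec == "{}:{}".format(node["ip"], node["port"]):
            return node
    return None  # no unique match found

# Illustrative node list (values are examples only)
nodes = [
    {"name": "confignode_1", "ip": "192.168.1.4", "port": 10710},
    {"name": "datanode_1", "ip": "192.168.1.5", "port": 6667},
]
print(resolve_node("192.168.1.5:6667", nodes)["name"])  # datanode_1
```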
+ +Tip: Only one node shrinking is supported per execution. + +#### Destroy Cluster + +To destroy the entire cluster: + +```Bash +iotdbctl cluster destroy default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file and retrieve node information. +2. Verify that all nodes are stopped. If any node is running, the destruction will not proceed. +3. Delete the following directories: + 1. IoTDB data directories, + 2. ConfigNode and DataNode system directories, + 3. Consensus directories, + 4. Logs, ext, and deployment directories, + 5. Grafana and Prometheus directories. + +**Destroy a Single Module:** To destroy individual modules: + +```Bash +# Destroy Grafana +iotdbctl cluster destroy default_cluster -N grafana + +# Destroy Prometheus +iotdbctl cluster destroy default_cluster -N prometheus + +# Destroy IoTDB +iotdbctl cluster destroy default_cluster -N iotdb +``` + +#### Distribute Cluster Configuration + +To distribute the cluster configuration files across nodes: + +```Bash +iotdbctl cluster dist-conf default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on `cluster-name`. +2. Retrieve configuration from `confignode_servers`, `datanode_servers`, `grafana`, and `prometheus`. +3. Generate and upload `iotdb-system.properties` to the specified nodes. + +#### Hot Load Cluster Configuration + +To reload the cluster configuration without restarting: + +```Plain +iotdbctl cluster reload default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on `cluster-name`. +2. Execute the `load configuration` command through the CLI for each node. + +#### Cluster Node Log Backup + +To back up logs from specific nodes: + +```Bash +iotdbctl cluster dumplog default_cluster -N datanode_1,confignode_1 -startdate '2023-04-11' -enddate '2023-04-26' -h 192.168.9.48 -p 36000 -u root -pw root -path '/iotdb/logs' -logs '/root/data/db/iotdb/logs' +``` + +**Execution Steps:** + +1. Locate the YAML file based on `cluster-name`. +2. 
Verify node existence (`datanode_1` and `confignode_1`).
+3. Back up log data within the specified date range.
+4. Save logs to `/iotdb/logs` or the default IoTDB installation path.
+
+| **Command** | **Description** | **Mandatory** |
+| ----------- | ------------------------------------------------------------ | ------------- |
+| -h | IP address of the backup server | NO |
+| -u | Username for the backup server | NO |
+| -pw | Password for the backup server | NO |
+| -p | Backup server port (Default: `22`) | NO |
+| -path | Path for backup data (Default: current path) | NO |
+| -loglevel | Log level (`all`, `info`, `error`, `warn`. Default: `all`) | NO |
+| -l | Speed limit (Default: unlimited; Range: 0 to 104857601 Kbit/s) | NO |
+| -N | Node names (comma-separated) | YES |
+| -startdate | Start date (inclusive; Default: `1970-01-01`) | NO |
+| -enddate | End date (inclusive) | NO |
+| -logs | IoTDB log storage path (Default: `{iotdb}/logs`) | NO |
+
+#### Cluster Data Backup
+
+To back up data from the cluster:
+
+```Bash
+iotdbctl cluster dumpdata default_cluster -granularity partition -startdate '2023-04-11' -enddate '2023-04-26' -h 192.168.9.48 -p 36000 -u root -pw root -path '/iotdb/datas'
+```
+
+This command identifies the leader node from the YAML file and backs up data within the specified date range to the `/iotdb/datas` directory on the `192.168.9.48` server. 
+ +| **Command** | **Description** | **Mandatory** | +| ------------ | ------------------------------------------------------------ | ------------- | +| -h | IP address of the backup server | NO | +| -u | Username for the backup server | NO | +| -pw | Password for the backup server | NO | +| -p | Backup server port (Default: `22`) | NO | +| -path | Path for storing backup data (Default: current path) | NO | +| -granularity | Data partition granularity | YES | +| -l | Speed limit (Default: unlimited; Range: 0 to 104857601 Kbit/s) | NO | +| -startdate | Start date (inclusive) | YES | +| -enddate | End date (inclusive) | YES | + +#### Cluster Upgrade + +To upgrade the cluster: + +```Bash +iotdbctl cluster dist-lib default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve the configuration of `confignode_servers` and `datanode_servers`. +3. Upload the library package. + +**Note:** After the upgrade, restart all IoTDB nodes for the changes to take effect. + +#### Cluster Initialization + +To initialize the cluster: + +```Bash +iotdbctl cluster init default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers`, `datanode_servers`, `Grafana`, and `Prometheus`. +3. Initialize the cluster configuration. + +#### View Cluster Process + +To check the cluster process status: + +```Bash +iotdbctl cluster status default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers`, `datanode_servers`, `Grafana`, and `Prometheus`. +3. Display the operational status of each node in the cluster. + +#### Cluster Authorization Activation + +**Default Activation Method:** To activate the cluster using an activation code: + +```Bash +iotdbctl cluster activate default_cluster +``` + +**Execution Steps:** + +1. 
Locate the YAML file based on the cluster name. +2. Retrieve the `confignode_servers` configuration. +3. Obtain the machine code. +4. Enter the activation code when prompted. + +Example: + +```Bash +Machine code: +Kt8NfGP73FbM8g4Vty+V9qU5lgLvwqHEF3KbLN/SGWYCJ61eFRKtqy7RS/jw03lHXt4MwdidrZJ== +JHQpXu97IKwv3rzbaDwoPLUuzNCm5aEeC9ZEBW8ndKgGXEGzMms25+u== +Please enter the activation code: +JHQpXu97IKwv3rzbaDwoPLUuzNCm5aEeC9ZEBW8ndKg=, lTF1Dur1AElXIi/5jPV9h0XCm8ziPd9/R+tMYLsze1oAPxE87+Nwws= +Activation successful. +``` + +**Activate a Specific Node:** To activate a specific node: + +```Bash +iotdbctl cluster activate default_cluster -N confignode1 +``` + +**Activate via License Path:** To activate using a license file: + +```Bash +iotdbctl cluster activate default_cluster -op license_path +``` + +#### Cluster Health Check + +To perform a cluster health check: + +```Bash +iotdbctl cluster health_check default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers` and `datanode_servers`. +3. Execute `health_check.sh` on each node. + +**Single Node Health Check:** To check a specific node: + +```Bash +iotdbctl cluster health_check default_cluster -N datanode_1 +``` + +#### Cluster Shutdown Backup + +To back up the cluster during shutdown: + +```Bash +iotdbctl cluster backup default_cluster +``` + +**Execution Steps:** + +1. Locate the YAML file based on the cluster name. +2. Retrieve configuration details for `confignode_servers` and `datanode_servers`. +3. Execute `backup.sh` on each node. + +**Single Node Backup:** To back up a specific node: + +```Bash +iotdbctl cluster backup default_cluster -N datanode_1 +``` + +**Note:** Multi-node deployment on a single machine only supports quick mode. 
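Subcommands such as `backup` and `health_check` share the same per-node execution pattern: read the node list from the cluster YAML, then run a script on every node, or on a single node when `-N` is given. A hypothetical sketch of that pattern (`run_on_nodes` and the simplified config shape are illustrative, not part of iotdbctl):

```python
from typing import Optional

# Hypothetical sketch of iotdbctl's fan-out pattern: run a script
# (backup.sh, health_check.sh, ...) on every node from the cluster YAML,
# or only on the node named via -N.
def run_on_nodes(cluster_config: dict, script: str, node_name: Optional[str] = None) -> list:
    nodes = cluster_config["confignode_servers"] + cluster_config["datanode_servers"]
    if node_name is not None:  # -N restricts execution to one node
        nodes = [n for n in nodes if n["name"] == node_name]
    # A real implementation would execute `script` on each host over SSH.
    return [f"{n['name']}: run {script}" for n in nodes]

cfg = {
    "confignode_servers": [{"name": "confignode_1"}],
    "datanode_servers": [{"name": "datanode_1"}, {"name": "datanode_2"}],
}
run_on_nodes(cfg, "backup.sh")                          # all three nodes
run_on_nodes(cfg, "backup.sh", node_name="datanode_1")  # only datanode_1
```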

#### Cluster Metadata Import

To import metadata:

```Bash
iotdbctl cluster importschema default_cluster -N datanode1 -param "-s ./dump0.csv -fd ./failed/ -lpf 10000"
```

**Execution Steps:**

1. Locate the YAML file based on the cluster name to retrieve `datanode_servers` configuration information.
2. Execute metadata import using `import-schema.sh` on `datanode1`.

**Parameter Descriptions for `-param`:**

| **Command** | **Description** | **Mandatory** |
| ----------- | ------------------------------------------------------------ | ------------- |
| -s | Specify the data file or directory to be imported. If a directory is specified, all files with a `.csv` extension will be imported in bulk. | YES |
| -fd | Specify a directory to store failed import files. If omitted, failed files will be saved in the source directory with the `.failed` suffix added to the original filename. | NO |
| -lpf | Specify the maximum number of lines per failed import file (Default: 10,000). | NO |

#### Cluster Metadata Export

To export metadata:

```Bash
iotdbctl cluster exportschema default_cluster -N datanode1 -param "-t ./ -pf ./pattern.txt -lpf 10 -timeout 10000"
```

**Execution Steps:**

1. Locate the YAML file based on the cluster name to retrieve `datanode_servers` configuration information.
2. Execute metadata export using `export-schema.sh` on `datanode1`.

**Parameter Descriptions for `-param`:**

| **Command** | **Description** | **Mandatory** |
| ----------- | ------------------------------------------------------------ | ------------- |
| -t | Specify the output path for the exported CSV file. | YES |
| -path | Specify the metadata path pattern for export. If this parameter is specified, the `-pf` parameter will be ignored. Example: `root.stock.**`. | NO |
| -pf | If `-path` is not specified, use this parameter to specify the file containing metadata paths to export. The file must be in `.txt` format, with one path per line. | NO |
| -lpf | Specify the maximum number of lines per exported file (Default: 10,000). | NO |
| -timeout | Specify the session query timeout in milliseconds. | NO |

### Introduction to Cluster Deployment Tool Samples

In the cluster deployment tool installation directory (`config/example`), there are three YAML configuration examples. If needed, you can copy and modify them for your deployment.

| **Name** | **Description** |
| -------------------- | -------------------------------------------------------- |
| default_1c1d.yaml | Example configuration for 1 ConfigNode and 1 DataNode. |
| default_3c3d.yaml | Example configuration for 3 ConfigNodes and 3 DataNodes. |
| default_3c3d_grafa_prome.yaml | Example configuration for 3 ConfigNodes, 3 DataNodes, Grafana, and Prometheus. |

## IoTDB Data Directory Overview Tool

The IoTDB Data Directory Overview Tool provides an overview of the IoTDB data directory structure. It is located at `tools/tsfile/print-iotdb-data-dir`.

### Usage

- For Windows:

```Bash
.\print-iotdb-data-dir.bat <data_dir_paths> (<output_path>)
```

- For Linux or macOS:

```Shell
./print-iotdb-data-dir.sh <data_dir_paths> (<output_path>)
```

**Note:** If the output path is not specified, the default relative path `IoTDB_data_dir_overview.txt` will be used.

### Example

This example uses Windows:

~~~Bash
.\print-iotdb-data-dir.bat D:\github\master\iotdb\data\datanode\data
````````````````````````
Starting Printing the IoTDB Data Directory Overview
````````````````````````
output save path:IoTDB_data_dir_overview.txt
data dir num:1
143 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs.

|==============================================================
|D:\github\master\iotdb\data\datanode\data
|--sequence
| |--root.redirect0
| | |--1
| | | |--0
| |--root.redirect1
| | |--2
| | | |--0
| |--root.redirect2
| | |--3
| | | |--0
| |--root.redirect3
| | |--4
| | | |--0
| |--root.redirect4
| | |--5
| | | |--0
| |--root.redirect5
| | |--6
| | | |--0
| |--root.sg1
| | |--0
| | | |--0
| | | |--2760
|--unsequence
|==============================================================
~~~

## TsFile Sketch Tool

The TsFile Sketch Tool provides a summarized view of the content within a TsFile. It is located at `tools/tsfile/print-tsfile`.

### Usage

- For Windows:

```Bash
.\print-tsfile-sketch.bat <tsfile_path> (<output_path>)
```

- For Linux or macOS:

```Shell
./print-tsfile-sketch.sh <tsfile_path> (<output_path>)
```

**Note:** If the output path is not specified, the default relative path `TsFile_sketch_view.txt` will be used.

### Example

This example uses Windows:

~~~Bash
.\print-tsfile.bat D:\github\master\1669359533965-1-0-0.tsfile D:\github\master\sketch.txt
````````````````````````
Starting Printing the TsFile Sketch
````````````````````````
TsFile path:D:\github\master\1669359533965-1-0-0.tsfile
Sketch save path:D:\github\master\sketch.txt
148 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs.
+-------------------------------- TsFile Sketch -------------------------------- +file path: D:\github\master\1669359533965-1-0-0.tsfile +file length: 2974 + + POSITION| CONTENT + -------- ------- + 0| [magic head] TsFile + 6| [version number] 3 +||||||||||||||||||||| [Chunk Group] of root.sg1.d1, num of Chunks:3 + 7| [Chunk Group Header] + | [marker] 0 + | [deviceID] root.sg1.d1 + 20| [Chunk] of root.sg1.d1.s1, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9032452783138882770,maxValue:9117677033041335123,firstValue:7068645577795875906,lastValue:-5833792328174747265,sumValue:5.795959009889246E19] + | [chunk header] marker=5, measurementID=s1, dataSize=864, dataType=INT64, compressionType=SNAPPY, encodingType=RLE + | [page] UncompressedSize:862, CompressedSize:860 + 893| [Chunk] of root.sg1.d1.s2, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-8806861312244965718,maxValue:9192550740609853234,firstValue:1150295375739457693,lastValue:-2839553973758938646,sumValue:8.2822564314572677E18] + | [chunk header] marker=5, measurementID=s2, dataSize=864, dataType=INT64, compressionType=SNAPPY, encodingType=RLE + | [page] UncompressedSize:862, CompressedSize:860 + 1766| [Chunk] of root.sg1.d1.s3, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9076669333460323191,maxValue:9175278522960949594,firstValue:2537897870994797700,lastValue:7194625271253769397,sumValue:-2.126008424849926E19] + | [chunk header] marker=5, measurementID=s3, dataSize=864, dataType=INT64, compressionType=SNAPPY, encodingType=RLE + | [page] UncompressedSize:862, CompressedSize:860 +||||||||||||||||||||| [Chunk Group] of root.sg1.d1 ends + 2656| [marker] 2 + 2657| [TimeseriesIndex] of root.sg1.d1.s1, tsDataType:INT64, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9032452783138882770,maxValue:9117677033041335123,firstValue:7068645577795875906,lastValue:-5833792328174747265,sumValue:5.795959009889246E19] + | [ChunkIndex] 
offset=20 + 2728| [TimeseriesIndex] of root.sg1.d1.s2, tsDataType:INT64, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-8806861312244965718,maxValue:9192550740609853234,firstValue:1150295375739457693,lastValue:-2839553973758938646,sumValue:8.2822564314572677E18] + | [ChunkIndex] offset=893 + 2799| [TimeseriesIndex] of root.sg1.d1.s3, tsDataType:INT64, startTime: 1669359533948 endTime: 1669359534047 count: 100 [minValue:-9076669333460323191,maxValue:9175278522960949594,firstValue:2537897870994797700,lastValue:7194625271253769397,sumValue:-2.126008424849926E19] + | [ChunkIndex] offset=1766 + 2870| [IndexOfTimerseriesIndex Node] type=LEAF_MEASUREMENT + | + | +||||||||||||||||||||| [TsFileMetadata] begins + 2891| [IndexOfTimerseriesIndex Node] type=LEAF_DEVICE + | + | + | [meta offset] 2656 + | [bloom filter] bit vector byte array length=31, filterSize=256, hashFunctionSize=5 +||||||||||||||||||||| [TsFileMetadata] ends + 2964| [TsFileMetadataSize] 73 + 2968| [magic tail] TsFile + 2974| END of TsFile +---------------------------- IndexOfTimerseriesIndex Tree ----------------------------- + [MetadataIndex:LEAF_DEVICE] + └──────[root.sg1.d1,2870] + [MetadataIndex:LEAF_MEASUREMENT] + └──────[s1,2657] +---------------------------------- TsFile Sketch End ---------------------------------- +~~~ + +Explanations: + +- The output is separated by the `|` symbol. The left side indicates the actual position within the TsFile, while the right side provides a summary of the content. +- The `"||||||||||||||||||||"` lines are added for readability and are not part of the actual TsFile data. +- The final `"IndexOfTimerseriesIndex Tree"` section reorganizes the metadata index tree at the end of the TsFile. This view aids understanding but does not represent actual stored data. + +## TsFile Resource Sketch Tool + +The TsFile Resource Sketch Tool displays details about TsFile resource files. It is located at `tools/tsfile/print-tsfile-resource-files`. 

### Usage

- For Windows:

```Bash
.\print-tsfile-resource-files.bat <resource_file_or_folder_path>
```

- For Linux or macOS:

```Shell
./print-tsfile-resource-files.sh <resource_file_or_folder_path>
```

### Example

This example uses Windows:

~~~Bash
.\print-tsfile-resource-files.bat D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0
````````````````````````
Starting Printing the TsFileResources
````````````````````````
147 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs.
230 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration
231 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources.
233 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration
237 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources.
Analyzing D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile ...

Resource plan index range [9223372036854775807, -9223372036854775808]
device root.sg1.d1, start time 0 (1970-01-01T08:00+08:00[GMT+08:00]), end time 99 (1970-01-01T08:00:00.099+08:00[GMT+08:00])

Analyzing the resource file folder D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0 finished.
+.\print-tsfile-resource-files.bat D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile.resource +```````````````````````` +Starting Printing the TsFileResources +```````````````````````` +178 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration +186 [main] WARN o.a.i.t.c.conf.TSFileDescriptor - not found iotdb-system.properties, use the default configs. +187 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources. +188 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Cannot find IOTDB_HOME or IOTDB_CONF environment variable when loading config file iotdb-system.properties, use default configuration +192 [main] WARN o.a.iotdb.db.conf.IoTDBDescriptor - Couldn't load the configuration iotdb-system.properties from any of the known sources. +Analyzing D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile ... + +Resource plan index range [9223372036854775807, -9223372036854775808] +device root.sg1.d1, start time 0 (1970-01-01T08:00+08:00[GMT+08:00]), end time 99 (1970-01-01T08:00:00.099+08:00[GMT+08:00]) + +Analyzing the resource file D:\github\master\iotdb\data\datanode\data\sequence\root.sg1\0\0\1669359533489-1-0-0.tsfile.resource finished. +~~~ diff --git a/src/UserGuide/latest-Table/Tools-System/Monitor-Tool_timecho.md b/src/UserGuide/latest-Table/Tools-System/Monitor-Tool_timecho.md new file mode 100644 index 000000000..4b5d7d5da --- /dev/null +++ b/src/UserGuide/latest-Table/Tools-System/Monitor-Tool_timecho.md @@ -0,0 +1,175 @@ + + +## **Prometheus** **Integration** + +### **Prometheus Metric Mapping** + +The following table illustrates the mapping of IoTDB metrics to the Prometheus-compatible format. 
For a given metric with `Metric Name = name` and tags `K1=V1, ..., Kn=Vn`, the mapping follows this pattern, where `value` represents the actual measurement. + +| **Metric Type** | **Mapping** | +| ---------------- | ------------------------------------------------------------ | +| Counter | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value | +| AutoGauge, Gauge | name{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value | +| Histogram | name_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value | +| Rate | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m1"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m5"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m15"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="mean"} value | +| Timer | name_seconds_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value |

### **Configuration File**

To enable Prometheus metric collection in IoTDB, modify the configuration file as follows:

1. Taking a DataNode as an example, modify the `iotdb-system.properties` configuration file:

```Properties
dn_metric_reporter_list=PROMETHEUS
dn_metric_level=CORE
dn_metric_prometheus_reporter_port=9091
```

2. Start the IoTDB DataNodes.
3. Use a web browser or `curl` to access `http://server_ip:9091/metrics` to retrieve metric data, such as:

```Plain
...
# HELP file_count
# TYPE file_count gauge
file_count{name="wal",} 0.0
file_count{name="unseq",} 0.0
file_count{name="seq",} 2.0
...
```

### **Prometheus + Grafana Integration**

IoTDB exposes monitoring data in the standard Prometheus-compatible format. Prometheus collects and stores these metrics, while Grafana is used for visualization.

**Integration Workflow**

The following picture describes the relationships among IoTDB, Prometheus, and Grafana:

![iotdb_prometheus_grafana](/img/UserGuide/System-Tools/Metrics/iotdb_prometheus_grafana.png)

IoTDB-Prometheus-Grafana Workflow

1. IoTDB continuously collects monitoring metrics.
2. Prometheus collects metrics from IoTDB at a configurable interval.
3. Prometheus stores the collected metrics in its internal time-series database (TSDB).
4. Grafana queries Prometheus at a configurable interval and visualizes the metrics.
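As a rough illustration of how the Counter and Gauge rows of the mapping table produce exposition lines like the `file_count` samples shown by `/metrics` (a simplified sketch, not IoTDB's actual reporter code; label-value escaping is omitted):

```python
# Sketch of the Counter/Gauge rows of the mapping table: build one Prometheus
# exposition line from a metric type, name, tags, and value.
def to_prometheus_line(metric_type, name, tags, value,
                       cluster="clusterName", node_type="nodeType", node_id="nodeId"):
    if metric_type == "Counter":
        name += "_total"  # Counters gain a _total suffix per the mapping table
    labels = {"cluster": cluster, "nodeType": node_type, "nodeId": node_id, **tags}
    label_str = ", ".join(f'{k}="{v}"' for k, v in labels.items())
    return f"{name}{{{label_str}}} {value}"

to_prometheus_line("Gauge", "file_count", {"name": "seq"}, 2.0,
                   cluster="default_cluster", node_type="DataNode", node_id="1")
# → 'file_count{cluster="default_cluster", nodeType="DataNode", nodeId="1", name="seq"} 2.0'
```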

**Prometheus Configuration Example**

To configure Prometheus to collect IoTDB metrics, add a scrape job to the `prometheus.yml` file as follows:

```YAML
scrape_configs:
  - job_name: pull-metrics
    honor_labels: true
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    static_configs:
      - targets:
          - localhost:9091
```

For more details, refer to:

- Prometheus Documentation:
  - [Prometheus getting_started](https://prometheus.io/docs/prometheus/latest/getting_started/)
  - [Prometheus scrape metrics](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config)
- Grafana Documentation:
  - [Grafana getting_started](https://grafana.com/docs/grafana/latest/getting-started/getting-started/)
  - [Grafana query metrics from Prometheus](https://prometheus.io/docs/visualization/grafana/#grafana-support-for-prometheus)

## **Apache IoTDB Dashboard**

The Apache IoTDB Dashboard is designed for unified, centralized operations and management, enabling multiple clusters to be monitored through a single panel.

![Apache IoTDB Dashboard](/img/%E7%9B%91%E6%8E%A7%20default%20cluster.png)

![Apache IoTDB Dashboard](/img/%E7%9B%91%E6%8E%A7%20cluster2.png)

You can access the Dashboard's JSON file in TimechoDB.

### **Cluster Overview**

Including but not limited to:

- Total number of CPU cores, memory capacity, and disk space in the cluster.
- Number of ConfigNodes and DataNodes in the cluster.
- Cluster uptime.
- Cluster write throughput.
- Current CPU, memory, and disk utilization across all nodes.
- Detailed information for individual nodes.

![](/img/%E7%9B%91%E6%8E%A7%20%E6%A6%82%E8%A7%88.png)

### **Data Writing**

Including but not limited to:

- Average write latency, median latency, and the 99th percentile latency.
- Number and size of WAL files.
- WAL flush SyncBuffer latency per node.
+ +![](/img/%E7%9B%91%E6%8E%A7%20%E5%86%99%E5%85%A5.png) + +### **Data Querying** + +Including but not limited to: + +- Time series metadata query load time per node. +- Time series data read duration per node. +- Time series metadata modification duration per node. +- Chunk metadata list loading time per node. +- Chunk metadata modification duration per node. +- Chunk metadata-based filtering duration per node. +- Average time required to construct a Chunk Reader. + +![](/img/%E7%9B%91%E6%8E%A7%20%E6%9F%A5%E8%AF%A2.png) + +### **Storage Engine** + +Including but not limited to: + +- File count and size by type. +- Number and size of TsFiles at different processing stages. +- Task count and execution duration for various operations. + +![](/img/%E7%9B%91%E6%8E%A7%20%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E.png) + +### **System Monitoring** + +Including but not limited to: + +- System memory, swap memory, and process memory usage. +- Disk space, file count, and file size statistics. +- JVM garbage collection (GC) time percentage, GC events by type, GC data volume, and heap memory utilization across generations. +- Network throughput and packet transmission rate. + +![](/img/%E7%9B%91%E6%8E%A7%20%E7%B3%BB%E7%BB%9F%20%E5%86%85%E5%AD%98%E4%B8%8E%E7%A1%AC%E7%9B%98.png) + +![](/img/%E7%9B%91%E6%8E%A7%20%E7%B3%BB%E7%BB%9Fjvm.png) + +![](/img/%E7%9B%91%E6%8E%A7%20%E7%B3%BB%E7%BB%9F%20%E7%BD%91%E7%BB%9C.png) \ No newline at end of file
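The GC time percentage shown on this panel is derived from the cumulative JVM GC time counter sampled over a window. A hypothetical sketch of that calculation (not the dashboard's actual query):

```python
# Hypothetical sketch: percentage of a sampling window spent in GC, computed
# from the increase of the cumulative GC time counter over that window.
def gc_time_percentage(gc_ms_delta, window_ms):
    return 100.0 * gc_ms_delta / window_ms

gc_time_percentage(150, 60_000)  # 0.25 (% of a one-minute window spent in GC)
```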