diff --git a/src/UserGuide/Master/Table/API/Programming-JDBC_apache.md b/src/UserGuide/Master/Table/API/Programming-JDBC_apache.md index 13c16fe39..69c64c77b 100644 --- a/src/UserGuide/Master/Table/API/Programming-JDBC_apache.md +++ b/src/UserGuide/Master/Table/API/Programming-JDBC_apache.md @@ -18,19 +18,20 @@ under the License. --> +# JDBC The IoTDB JDBC provides a standardized way to interact with the IoTDB database, allowing users to execute SQL statements from Java programs for managing databases and time-series data. It supports operations such as connecting to the database, creating, querying, updating, and deleting data, as well as batch insertion and querying of time-series data. **Note:** The current JDBC implementation is designed primarily for integration with third-party tools. High-performance writing **may not be achieved** when using JDBC for insert operations. For Java applications, it is recommended to use the **JAVA Native API** for optimal performance. -## Prerequisites +## 1. Prerequisites -### **Environment Requirements** +### 1.1 **Environment Requirements** - **JDK:** Version 1.8 or higher - **Maven:** Version 3.6 or higher -### **Adding Maven Dependencies** +### 1.2 **Adding Maven Dependencies** Add the following dependency to your Maven `pom.xml` file: @@ -44,13 +45,13 @@ Add the following dependency to your Maven `pom.xml` file: ``` -## Read and Write Operations +## 2. Read and Write Operations **Write Operations:** Perform database operations such as inserting data, creating databases, and creating time-series using the `execute` method. **Read Operations:** Execute queries using the `executeQuery` method and retrieve results via the `ResultSet` object. -### Method Overview +### 2.1 Method Overview | **Method Name** | **Description** | **Parameters** | **Return Value** | | ------------------------------------------------------------ | ----------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------- | @@ -63,7 +64,7 @@ Add the following dependency to your Maven `pom.xml` file: | ResultSet.next() | Moves to the next row in the result set | None | `boolean`: Whether the move was successful | | ResultSet.getString(int columnIndex) | Retrieves the string value of a specified column | `columnIndex`: Column index (starting from 1) | `String`: Column value | -## Sample Code +## 3. Sample Code **Note:** When using the Table Model, you must specify the `sql_dialect` parameter as `table` in the URL. Example: diff --git a/src/UserGuide/Master/Table/API/Programming-Java-Native-API_apache.md b/src/UserGuide/Master/Table/API/Programming-Java-Native-API_apache.md index 2fd8718be..ae2296b47 100644 --- a/src/UserGuide/Master/Table/API/Programming-Java-Native-API_apache.md +++ b/src/UserGuide/Master/Table/API/Programming-Java-Native-API_apache.md @@ -18,17 +18,18 @@ under the License. --> +# Java Native API IoTDB provides a Java native client driver and a session pool management mechanism. These tools enable developers to interact with IoTDB using object-oriented APIs, allowing time-series objects to be directly assembled and inserted into the database without constructing SQL statements. It is recommended to use the `ITableSessionPool` for multi-threaded database operations to maximize efficiency. -## Prerequisites +## 1. 
Prerequisites -### Environment Requirements +### 1.1 Environment Requirements - **JDK**: Version 1.8 or higher - **Maven**: Version 3.6 or higher -### Adding Maven Dependencies +### 1.2 Adding Maven Dependencies ```XML @@ -40,9 +41,9 @@ IoTDB provides a Java native client driver and a session pool management mechani ``` -## Read and Write Operations +## 2. Read and Write Operations -### ITableSession Interface +### 2.1 ITableSession Interface The `ITableSession` interface defines basic operations for interacting with IoTDB, including data insertion, query execution, and session closure. Note that this interface is **not thread-safe**. @@ -124,7 +125,7 @@ public interface ITableSession extends AutoCloseable { } ``` -### TableSessionBuilder Class +### 2.2 TableSessionBuilder Class The `TableSessionBuilder` class is a builder for configuring and creating instances of the `ITableSession` interface. It allows developers to set connection parameters, query parameters, and security features. @@ -336,9 +337,9 @@ public class TableSessionBuilder { } ``` -## Session Pool +## 3. Session Pool -### ITableSessionPool Interface +### 3.1 ITableSessionPool Interface The `ITableSessionPool` interface manages a pool of `ITableSession` instances, enabling efficient reuse of connections and proper cleanup of resources. @@ -378,7 +379,7 @@ public interface ITableSessionPool { } ``` -### TableSessionPoolBuilder Class +### 3.2 TableSessionPoolBuilder Class The `TableSessionPoolBuilder` class is a builder for configuring and creating `ITableSessionPool` instances, supporting options like connection settings and pooling behavior. diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/DBeaver.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/DBeaver.md index cd28d1b38..9f0323b86 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/DBeaver.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/DBeaver.md @@ -23,11 +23,11 @@ DBeaver is a SQL client software application and a database administration tool. It can use the JDBC application programming interface (API) to interact with IoTDB via the JDBC driver. -## DBeaver Installation +## 1. DBeaver Installation * From DBeaver site: https://dbeaver.io/download/ -## IoTDB Installation +## 2. IoTDB Installation * Download binary version * From IoTDB site: https://iotdb.apache.org/Download/ @@ -35,7 +35,7 @@ DBeaver is a SQL client software application and a database administration tool. * Or compile from source code * See https://github.com/apache/iotdb -## Connect IoTDB and DBeaver +## 3. Connect IoTDB and DBeaver 1. Start IoTDB server diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/DataEase.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/DataEase.md index 021ed7d68..ddfbb62ce 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/DataEase.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/DataEase.md @@ -20,7 +20,7 @@ --> # DataEase -## Product Overview +## 1. Product Overview 1. Introduction to DataEase @@ -37,7 +37,7 @@ -## Installation Requirements +## 2. Installation Requirements | **Preparation Content** | **Version Requirements** | | :-------------------- | :----------------------------------------------------------- | @@ -46,7 +46,7 @@ | DataEase | Requires v1 series v1.18 version, please refer to the official [DataEase Installation Guide](https://dataease.io/docs/v2/installation/offline_INSTL_and_UPG/)(V2.x is currently not supported. 
For integration with other versions, please contact Timecho) | | DataEase-IoTDB Connector | Please contact Timecho for assistance | -## Installation Steps +## 3. Installation Steps Step 1: Please contact Timecho to obtain the file and unzip the installation package `iotdb-api-source-1.0.0.zip` @@ -88,16 +88,16 @@ Step 4: After startup, you can check whether the startup was successful through lsof -i:8097 // The port configured in the file where the IoTDB API Source listens -## Instructions +## 4. Instructions -### Sign in DataEase +### 4.1 Sign in DataEase 1. Sign in to DataEase, access address: `http://[target server IP address]:80`
-### Configure data source +### 4.2 Configure data source 1. Navigate to "Data Source".
@@ -153,7 +153,7 @@ Step 4: After startup, you can check whether the startup was successful through
-### Configure the Dataset +### 4.3 Configure the Dataset 1. Create API dataset: Navigate to "Data Set", click the "+" in the top left corner, select "API dataset", and choose the directory where this dataset is located to enter the New API Dataset interface.
@@ -189,7 +189,7 @@ Step 4: After startup, you can check whether the startup was successful through
-### Configure Dashboard +### 4.4 Configure Dashboard 1. Navigate to "Dashboard", click "+" to create a directory, then click the "+" on that directory and select "Create Dashboard".
diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-IoTDB.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-IoTDB.md index efb39723c..31176cd24 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-IoTDB.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-IoTDB.md @@ -23,12 +23,12 @@ IoTDB integration for [Apache Flink](https://flink.apache.org/). This module includes the IoTDB sink that allows a flink job to write events into timeseries, and the IoTDB source allowing reading data from IoTDB. -## IoTDBSink +## 1. IoTDBSink To use the `IoTDBSink`, you need construct an instance of it by specifying `IoTDBSinkOptions` and `IoTSerializationSchema` instances. The `IoTDBSink` send only one event after another by default, but you can change to batch by invoking `withBatchSize(int)`. -### Example +### 1.1 Example This example shows a case that sends data to a IoTDB server from a Flink job: @@ -115,17 +115,17 @@ public class FlinkIoTDBSink { ``` -### Usage +### 1.2 Usage * Launch the IoTDB server. * Run `org.apache.iotdb.flink.FlinkIoTDBSink.java` to run the flink job on local mini cluster. -## IoTDBSource +## 2. IoTDBSource To use the `IoTDBSource`, you need to construct an instance of `IoTDBSource` by specifying `IoTDBSourceOptions` and implementing the abstract method `convert()` in `IoTDBSource`. The `convert` methods defines how you want the row data to be transformed. -### Example +### 2.1 Example This example shows a case where data are read from IoTDB. ```java import org.apache.iotdb.flink.options.IoTDBSourceOptions; @@ -209,7 +209,7 @@ public class FlinkIoTDBSource { } ``` -### Usage +### 2.2 Usage Launch the IoTDB server. Run org.apache.iotdb.flink.FlinkIoTDBSource.java to run the flink job on local mini cluster. diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-TsFile.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-TsFile.md index e1ea626dd..79e29ab4a 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-TsFile.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Flink-TsFile.md @@ -21,7 +21,7 @@ # Apache Flink(TsFile) -## About Flink-TsFile-Connector +## 1. About Flink-TsFile-Connector Flink-TsFile-Connector implements the support of Flink for external data sources of Tsfile type. This enables users to read and write Tsfile by Flink via DataStream/DataSet API. @@ -31,9 +31,9 @@ With this connector, you can * load a single TsFile or multiple TsFiles(only for DataSet), from either the local file system or hdfs, into Flink * load all files in a specific directory, from either the local file system or hdfs, into Flink -## Quick Start +## 2. Quick Start -### TsFileInputFormat Example +### 2.1 TsFileInputFormat Example 1. create TsFileInputFormat with default RowRowRecordParser. @@ -93,7 +93,7 @@ for (String s : result) { } ``` -### Example of TSRecordOutputFormat +### 2.2 Example of TSRecordOutputFormat 1. create TSRecordOutputFormat with default RowTSRecordConverter. diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Connector.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Connector.md index 92fb176fa..718c28c07 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Connector.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Connector.md @@ -23,14 +23,14 @@ Grafana is an open source volume metrics monitoring and visualization tool, which can be used to display time series data and application runtime analysis. 
Grafana supports Graphite, InfluxDB and other major time series databases as data sources. IoTDB-Grafana-Connector is a connector which we developed to show time series data in IoTDB by reading data from IoTDB and sends to Grafana(https://grafana.com/). Before using this tool, make sure Grafana and IoTDB are correctly installed and started. -## Installation and deployment +## 1. Installation and deployment -### Install Grafana +### 1.1 Install Grafana * Download url: https://grafana.com/grafana/download * Version >= 4.4.1 -### Install data source plugin +### 1.2 Install data source plugin * Plugin name: simple-json-datasource * Download url: https://github.com/grafana/simple-json-datasource @@ -64,7 +64,7 @@ Please try to find config file of grafana(eg. customer.ini in windows, and /etc/ allow_loading_unsigned_plugins = "grafana-simple-json-datasource" ``` -### Start Grafana +### 1.3 Start Grafana If Unix is used, Grafana will start automatically after installing, or you can run `sudo service grafana-server start` command. See more information [here](http://docs.grafana.org/installation/debian/). If Mac and `homebrew` are used to install Grafana, you can use `homebrew` to start Grafana. @@ -73,17 +73,17 @@ See more information [here](http://docs.grafana.org/installation/mac/). If Windows is used, start Grafana by executing grafana-server.exe, located in the bin directory, preferably from the command line. See more information [here](http://docs.grafana.org/installation/windows/). -## IoTDB installation +## 2. IoTDB installation See https://github.com/apache/iotdb -## IoTDB-Grafana-Connector installation +## 3. IoTDB-Grafana-Connector installation ```shell git clone https://github.com/apache/iotdb.git ``` -## Start IoTDB-Grafana-Connector +## 4. Start IoTDB-Grafana-Connector * Option one @@ -117,13 +117,13 @@ $ java -jar iotdb-grafana-connector-{version}.war To configure properties, move the `grafana-connector/src/main/resources/application.properties` to the same directory as the war package (`grafana/target`) -## Explore in Grafana +## 5. Explore in Grafana The default port of Grafana is 3000, see http://localhost:3000/ Username and password are both "admin" by default. -### Add data source +### 5.1 Add data source Select `Data Sources` and then `Add data source`, select `SimpleJson` in `Type` and `URL` is http://localhost:8888. After that, make sure IoTDB has been started, click "Save & Test", and "Data Source is working" will be shown to indicate successful configuration. @@ -131,13 +131,13 @@ After that, make sure IoTDB has been started, click "Save & Test", and "Data Sou -### Design in dashboard +### 5.2 Design in dashboard Add diagrams in dashboard and customize your query. See http://docs.grafana.org/guides/getting_started/ -## config grafana +## 6. config grafana ``` # ip and port of IoTDB diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Plugin.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Plugin.md index 0597fb124..afb2eaa93 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Plugin.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Grafana-Plugin.md @@ -27,23 +27,23 @@ Grafana is an open source volume metrics monitoring and visualization tool, whic We developed the Grafana-Plugin for IoTDB, using the IoTDB REST service to present time series data and providing many visualization methods for time series data. 
Compared with previous IoTDB-Grafana-Connector, current Grafana-Plugin performs more efficiently and supports more query types. So, **we recommend using Grafana-Plugin instead of IoTDB-Grafana-Connector**. -## Installation and deployment +## 1. Installation and deployment -### Install Grafana +### 1.1 Install Grafana * Download url: https://grafana.com/grafana/download * Version >= 9.3.0 -### Acquisition method of grafana plugin +### 1.2 Acquisition method of grafana plugin #### Download apache-iotdb-datasource from Grafana's official website Download url:https://grafana.com/api/plugins/apache-iotdb-datasource/versions/1.0.0/download -### Install Grafana-Plugin +### 1.3 Install Grafana-Plugin -### Method 1: Install using the grafana cli tool (recommended) +#### Method 1: Install using the grafana cli tool (recommended) * Use the grafana cli tool to install apache-iotdb-datasource from the command line. The command content is as follows: @@ -51,11 +51,11 @@ Download url:https://grafana.com/api/plugins/apache-iotdb-datasource/versions/ grafana-cli plugins install apache-iotdb-datasource ``` -### Method 2: Install using the Grafana interface (recommended) +#### Method 2: Install using the Grafana interface (recommended) * Click on Configuration ->Plugins ->Search IoTDB from local Grafana to install the plugin -### Method 3: Manually install the grafana-plugin plugin (not recommended) +#### Method 3: Manually install the grafana-plugin plugin (not recommended) * Copy the front-end project target folder generated above to Grafana's plugin directory `${Grafana directory}\data\plugins\`。If there is no such directory, you can manually create it or start grafana and it will be created automatically. Of course, you can also modify the location of plugins. For details, please refer to the following instructions for modifying the location of Grafana's plugin directory. @@ -64,7 +64,7 @@ grafana-cli plugins install apache-iotdb-datasource For more details,please click [here](https://grafana.com/docs/grafana/latest/plugins/installation/) -### Start Grafana +### 1.4 Start Grafana Start Grafana with the following command in the Grafana directory: @@ -89,7 +89,7 @@ For more details,please click [here](https://grafana.com/docs/grafana/latest/i -### Configure IoTDB REST Service +### 1.5 Configure IoTDB REST Service * Modify `{iotdb directory}/conf/iotdb-system.properties` as following: @@ -104,9 +104,9 @@ rest_service_port=18080 Start IoTDB (restart if the IoTDB service is already started) -## How to use Grafana-Plugin +## 2. How to use Grafana-Plugin -### Access Grafana dashboard +### 2.1 Access Grafana dashboard Grafana displays data in a web page dashboard. Please open your browser and visit `http://:` when using it. @@ -115,7 +115,7 @@ Grafana displays data in a web page dashboard. Please open your browser and visi * The default login username and password are both `admin`. -### Add IoTDB as Data Source +### 2.2 Add IoTDB as Data Source Click the `Settings` icon on the left, select the `Data Source` option, and then click `Add data source`. @@ -135,7 +135,7 @@ Click `Save & Test`, and `Data source is working` will appear. -### Create a new Panel +### 2.3 Create a new Panel Click the `Dashboards` icon on the left, and select `Manage` option. 
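If `Save & Test` did not report `Data source is working`, it can help to first confirm that the IoTDB REST service configured earlier is reachable before building panels. A minimal check from the command line, assuming the default `rest_service_port=18080` shown above and the REST service's `/ping` liveness endpoint (adjust host and port to your deployment):

```shell
# Expect an HTTP 200 response with a success code from a healthy node
curl http://127.0.0.1:18080/ping
```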
@@ -195,7 +195,7 @@ Select a time series in the TIME-SERIES selection box, select a function in the -### Support for variables and template functions +### 2.4 Support for variables and template functions Both SQL: Full Customized and SQL: Drop-down List input methods support the variable and template functions of grafana. In the following example, raw input method is used, and aggregation is similar. @@ -246,7 +246,7 @@ In addition to the examples above, the following statements are supported: Tip: If the query field contains Boolean data, the result value will be converted to 1 by true and 0 by false. -### Grafana alert function +### 2.5 Grafana alert function This plugin supports Grafana alert function. @@ -293,6 +293,6 @@ For example, we have 3 conditions in the following order: Condition: B (Evaluate 10. We can also configure `Contact points` for alarms to receive alarm notifications. For more detailed operations, please refer to the official document (https://grafana.com/docs/grafana/latest/alerting/manage-notifications/create-contact-point/). -## More Details about Grafana +## 3. More Details about Grafana For more details about Grafana operation, please refer to the official Grafana documentation: http://docs.grafana.org/guides/getting_started/. diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Hive-TsFile.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Hive-TsFile.md index e8b4dc30d..227a1e383 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Hive-TsFile.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Hive-TsFile.md @@ -20,7 +20,7 @@ --> # Apache Hive(TsFile) -## About Hive-TsFile-Connector +## 1. About Hive-TsFile-Connector Hive-TsFile-Connector implements the support of Hive for external data sources of Tsfile type. This enables users to operate TsFile by Hive. @@ -31,7 +31,7 @@ With this connector, you can * Query the tsfile through HQL. * As of now, the write operation is not supported in hive-connector. So, insert operation in HQL is not allowed while operating tsfile through hive. -## System Requirements +## 2. System Requirements |Hadoop Version |Hive Version | Java Version | TsFile | |------------- |------------ | ------------ |------------ | @@ -39,7 +39,7 @@ With this connector, you can > Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/iotdb/tree/master/tsfile. -## Data Type Correspondence +## 3. Data Type Correspondence | TsFile data type | Hive field type | | ---------------- | --------------- | @@ -51,7 +51,7 @@ With this connector, you can | TEXT | STRING | -## Add Dependency For Hive +## 4. Add Dependency For Hive To use hive-connector in hive, we should add the hive-connector jar into hive. @@ -67,7 +67,7 @@ Added resources: [/Users/hive/iotdb/hive-connector/target/hive-connector-1.0.0-j ``` -## Create Tsfile-backed Hive tables +## 5. Create Tsfile-backed Hive tables To create a Tsfile-backed table, specify the `serde` as `org.apache.iotdb.hive.TsFileSerDe`, specify the `inputformat` as `org.apache.iotdb.hive.TSFHiveInputFormat`, @@ -110,7 +110,7 @@ Time taken: 0.053 seconds, Fetched: 2 row(s) ``` At this point, the Tsfile-backed table can be worked with in Hive like any other table. -## Query from TsFile-backed Hive tables +## 6. Query from TsFile-backed Hive tables Before we do any queries, we should set the `hive.input.format` in hive by executing the following command. @@ -123,7 +123,7 @@ We can use any query operations through HQL to analyse it. 
For example: -### Select Clause Example +### 6.1 Select Clause Example ``` hive> select * from only_sensor_1 limit 10; @@ -141,7 +141,7 @@ OK Time taken: 1.464 seconds, Fetched: 10 row(s) ``` -### Aggregate Clause Example +### 6.2 Aggregate Clause Example ``` hive> select count(*) from only_sensor_1; diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md index 10f07ed73..ac82207e8 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md @@ -21,7 +21,7 @@ # Ignition -## Product Overview +## 1. Product Overview 1. Introduction to Ignition @@ -38,7 +38,7 @@ ![](/img/20240703114443.png) -## Installation Requirements +## 2. Installation Requirements | **Preparation Content** | Version Requirements | | ------------------------------- | ------------------------------------------------------------ | @@ -47,15 +47,15 @@ | Ignition-IoTDB Connector module | Please contact Business to obtain | | Ignition-IoTDB With JDBC module | Download address:https://repo1.maven.org/maven2/org/apache/iotdb/iotdb-jdbc/ | -## Instruction Manual For Ignition-IoTDB Connector +## 3. Instruction Manual For Ignition-IoTDB Connector -### Introduce +### 3.1 Introduce The Ignition-IoTDB Connector module can store data in a database connection associated with the historical database provider. The data is directly stored in a table in the SQL database based on its data type, as well as a millisecond timestamp. Store data only when making changes based on the value pattern and dead zone settings on each label, thus avoiding duplicate and unnecessary data storage. The Ignition-IoTDB Connector provides the ability to store the data collected by Ignition into IoTDB. -### Installation Steps +### 3.2 Installation Steps Step 1: Enter the `Configuration` - `System` - `Modules` module and click on the `Install or Upgrade a Module` button at the bottom @@ -157,7 +157,7 @@ The configuration content is as follows: -### Instructions +### 3.3 Instructions #### Configure Historical Data Storage @@ -233,13 +233,13 @@ The configuration content is as follows: system.iotdb.query("IoTDB", "select * from root.db.Sine where time > 1709563427247") ``` -## Ignition-IoTDB With JDBC +## 4. Ignition-IoTDB With JDBC -### Introduce +### 4.1 Introduce Ignition-IoTDB With JDBC provides a JDBC driver that allows users to connect and query the Ignition IoTDB database using standard JDBC APIs -### Installation Steps +### 4.2 Installation Steps Step 1: Enter the `Configuration` - `Databases` -`Drivers` module and create the `Translator` @@ -253,7 +253,7 @@ Step 3: Enter the `Configuration` - `Databases` - `Connections` module, create a ![](/img/Ignition-IoTDBWithJDBC-3.png) -### Instructions +### 4.3 Instructions #### Data Writing diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/NiFi-IoTDB.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/NiFi-IoTDB.md index 531c5119c..b45eefb0b 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/NiFi-IoTDB.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/NiFi-IoTDB.md @@ -20,7 +20,7 @@ --> # Apache NiFi -## Apache NiFi Introduction +## 1. Apache NiFi Introduction Apache NiFi is an easy to use, powerful, and reliable system to process and distribute data. 
@@ -46,11 +46,11 @@ Apache NiFi includes the following capabilities: * Multi-tenant authorization and policy management * Standard protocols for encrypted communication including TLS and SSH -## PutIoTDBRecord +## 2. PutIoTDBRecord This is a processor that reads the content of the incoming FlowFile as individual records using the configured 'Record Reader' and writes them to Apache IoTDB using native interface. -### Properties of PutIoTDBRecord +### 2.1 Properties of PutIoTDBRecord | property | description | default value | necessary | |---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ------------- | --------- | @@ -65,7 +65,7 @@ This is a processor that reads the content of the incoming FlowFile as individua | Aligned | Whether using aligned interface? It can be updated by expression language. | false | false | | MaxRowNumber | Specifies the max row number of each tablet. It can be updated by expression language. | 1024 | false | -### Inferred Schema of Flowfile +### 2.2 Inferred Schema of Flowfile There are a couple of rules about flowfile: @@ -75,7 +75,7 @@ There are a couple of rules about flowfile: 4. Fields excepted time must start with `root.`. 5. The supported data types are `INT`, `LONG`, `FLOAT`, `DOUBLE`, `BOOLEAN`, `TEXT`. -### Convert Schema by property +### 2.3 Convert Schema by property As mentioned above, converting schema by property which is more flexible and stronger than inferred schema. @@ -108,7 +108,7 @@ The structure of property `Schema`: 6. The supported `encoding` are `PLAIN`, `DICTIONARY`, `RLE`, `DIFF`, `TS_2DIFF`, `BITMAP`, `GORILLA_V1`, `REGULAR`, `GORILLA`, `CHIMP`, `SPRINTZ`, `RLBE`. 7. The supported `compressionType` are `UNCOMPRESSED`, `SNAPPY`, `GZIP`, `LZO`, `SDT`, `PAA`, `PLA`, `LZ4`, `ZSTD`, `LZMA2`. -## Relationships +## 3. Relationships | relationship | description | | ------------ | ---------------------------------------------------- | @@ -116,11 +116,11 @@ The structure of property `Schema`: | failure | The shema or flow file is abnormal. | -## QueryIoTDBRecord +## 4. QueryIoTDBRecord This is a processor that reads the sql query from the incoming FlowFile and using it to query the result from IoTDB using native interface. Then it use the configured 'Record Writer' to generate the flowfile -### Properties of QueryIoTDBRecord +### 4.1 Properties of QueryIoTDBRecord | property | description | default value | necessary | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| --------- | @@ -133,7 +133,7 @@ This is a processor that reads the sql query from the incoming FlowFile and usin | iotdb-query-chunk-size | Chunking can be used to return results in a stream of smaller batches (each has a partial results up to a chunk size) rather than as a single response. Chunking queries can return an unlimited number of rows. Note: Chunking is enable when result chunk size is greater than 0 | 0 | false | -## Relationships +## 5. 
Relationships | relationship | description | | ------------ | ---------------------------------------------------- | diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-IoTDB.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-IoTDB.md index 7e03da5c2..3193ce505 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-IoTDB.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-IoTDB.md @@ -21,7 +21,7 @@ # Apache Spark(IoTDB) -## Supported Versions +## 1. Supported Versions Supported versions of Spark and Scala are as follows: @@ -29,16 +29,16 @@ Supported versions of Spark and Scala are as follows: |----------------|---------------| | `2.4.0-latest` | `2.11, 2.12` | -## Precautions +## 2. Precautions 1. The current version of `spark-iotdb-connector` supports Scala `2.11` and `2.12`, but not `2.13`. 2. `spark-iotdb-connector` supports usage in Spark for both Java, Scala, and PySpark. -## Deployment +## 3. Deployment `spark-iotdb-connector` has two use cases: IDE development and `spark-shell` debugging. -### IDE Development +### 3.1 IDE Development For IDE development, simply add the following dependency to the `pom.xml` file: @@ -51,7 +51,7 @@ For IDE development, simply add the following dependency to the `pom.xml` file: ``` -### `spark-shell` Debugging +### 3.2 `spark-shell` Debugging To use `spark-iotdb-connector` in `spark-shell`, you need to download the `with-dependencies` version of the jar package from the official website. After that, copy the jar package to the `${SPARK_HOME}/jars` directory. @@ -81,9 +81,9 @@ At last, copy the jar package to the ${SPARK_HOME}/jars directory. Simply execut cp iotdb-jdbc-{version}-SNAPSHOT-jar-with-dependencies.jar $SPARK_HOME/jars/ ``` -## Usage +## 4. Usage -### Parameters +### 4.1Parameters | Parameter | Description | Default Value | Scope | Can be Empty | |--------------|--------------------------------------------------------------------------------------------------------------|---------------|-------------|--------------| @@ -95,7 +95,7 @@ cp iotdb-jdbc-{version}-SNAPSHOT-jar-with-dependencies.jar $SPARK_HOME/jars/ | lowerBound | The start timestamp of the query (inclusive) | 0 | read | true | | upperBound | The end timestamp of the query (inclusive) | 0 | read | true | -### Reading Data from IoTDB +### 4.2 Reading Data from IoTDB Here is an example that demonstrates how to read data from IoTDB into a DataFrame: @@ -117,7 +117,7 @@ df.printSchema() df.show() ``` -### Writing Data to IoTDB +### 4.3 Writing Data to IoTDB Here is an example that demonstrates how to write data to IoTDB: @@ -163,7 +163,7 @@ dfWithColumn.write.format("org.apache.iotdb.spark.db") .save ``` -### Wide and Narrow Table Conversion +### 4.4 Wide and Narrow Table Conversion Here are examples of how to convert between wide and narrow tables: @@ -184,7 +184,7 @@ import org.apache.iotdb.spark.db._ val wide_df = Transformer.toWideForm(spark, narrow_df) ``` -## Wide and Narrow Tables +## 5. Wide and Narrow Tables Using the TsFile structure as an example: there are three measurements in the TsFile pattern, namely `Status`, `Temperature`, and `Hardware`. 
The basic information for each of these three measurements is as diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-TsFile.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-TsFile.md index 151d81e14..6df313fcb 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-TsFile.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Spark-TsFile.md @@ -21,7 +21,7 @@ # Apache Spark(TsFile) -## About Spark-TsFile-Connector +## 1. About Spark-TsFile-Connector Spark-TsFile-Connector implements the support of Spark for external data sources of Tsfile type. This enables users to read, write and query Tsfile by Spark. @@ -31,7 +31,7 @@ With this connector, you can * load all files in a specific directory, from either the local file system or hdfs, into Spark * write data from Spark into TsFile -## System Requirements +## 2. System Requirements |Spark Version | Scala Version | Java Version | TsFile | |:-------------: | :-------------: | :------------: |:------------: | @@ -40,8 +40,8 @@ With this connector, you can > Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/iotdb/tree/master/tsfile. > Currently we only support spark version 2.4.3 and there are some known issue on 2.4.7, do no use it -## Quick Start -### Local Mode +## 3. Quick Start +### 3.1 Local Mode Start Spark with TsFile-Spark-Connector in local mode: @@ -56,7 +56,7 @@ Note: * See https://github.com/apache/iotdb/tree/master/tsfile for how to get TsFile. -### Distributed Mode +### 3.2 Distributed Mode Start Spark with TsFile-Spark-Connector in distributed mode (That is, the spark cluster is connected by spark-shell): @@ -70,7 +70,7 @@ Note: * Multiple jar packages are separated by commas without any spaces. * See https://github.com/apache/iotdb/tree/master/tsfile for how to get TsFile. -## Data Type Correspondence +## 4. Data Type Correspondence | TsFile data type | SparkSQL data type| | --------------| -------------- | @@ -81,7 +81,7 @@ Note: | DOUBLE | DoubleType | | TEXT | StringType | -## Schema Inference +## 5. Schema Inference The way to display TsFile is dependent on the schema. Take the following TsFile structure as an example: There are three measurements in the TsFile schema: status, temperature, and hardware. The basic information of these three measurements is listed: @@ -122,7 +122,7 @@ You can also use narrow table form which as follows: (You can see part 6 about h -## Scala API +## 6. Scala API NOTE: Remember to assign necessary read and write permissions in advance. diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Thingsboard.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Thingsboard.md index 024d3ed46..f304c277f 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Thingsboard.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Thingsboard.md @@ -20,7 +20,7 @@ --> # ThingsBoard -## Product Overview +## 1. Product Overview 1. Introduction to ThingsBoard @@ -32,11 +32,11 @@ ThingsBoard IoTDB provides the ability to store data from ThingsBoard to IoTDB, and also supports reading data information from the `root.thingsboard` database in ThingsBoard. The detailed architecture diagram is shown in yellow in the following figure. -### Relationship Diagram +### 1.1 Relationship Diagram ![](/img/Thingsboard-2.png) -## Installation Requirements +## 2. 
Installation Requirements | **Preparation Content** | **Version Requirements** | | :---------------------------------------- | :----------------------------------------------------------- | @@ -44,7 +44,7 @@ | IoTDB |IoTDB v1.3.0 or above. Please refer to the [Deployment guidance](../Deployment-and-Maintenance/IoTDB-Package.md) | | ThingsBoard
(IoTDB adapted version) | Please contact Timecho staff to obtain the installation package. Detailed installation steps are provided below. | -## Installation Steps +## 3. Installation Steps Please refer to the installation steps on [ThingsBoard Official Website](https://thingsboard.io/docs/user-guide/install/ubuntu/),wherein: @@ -73,7 +73,7 @@ export IoTDB_MAX_SIZE=200 ## The maximum number of sessions in the session export IoTDB_DATABASE=root.thingsboard ## Thingsboard data is written to the database stored in IoTDB, supporting customization ``` -## Instructions +## 4. Instructions 1. Set up devices and connect datasource: Add a new device under "Entities" - "Devices" in Thingsboard and send data to the specified devices through gateway. diff --git a/src/UserGuide/Master/Tree/Ecosystem-Integration/Zeppelin-IoTDB.md b/src/UserGuide/Master/Tree/Ecosystem-Integration/Zeppelin-IoTDB.md index a572cdfca..f72578e84 100644 --- a/src/UserGuide/Master/Tree/Ecosystem-Integration/Zeppelin-IoTDB.md +++ b/src/UserGuide/Master/Tree/Ecosystem-Integration/Zeppelin-IoTDB.md @@ -21,7 +21,7 @@ # Apache Zeppelin -## About Zeppelin +## 1. About Zeppelin Zeppelin is a web-based notebook that enables interactive data analytics. You can connect to data sources and perform interactive operations with SQL, Scala, etc. The operations can be saved as documents, just like Jupyter. Zeppelin has already supported many data sources, including Spark, ElasticSearch, Cassandra, and InfluxDB. Now, we have enabled Zeppelin to operate IoTDB via SQL. @@ -29,9 +29,9 @@ Zeppelin is a web-based notebook that enables interactive data analytics. You ca -## Zeppelin-IoTDB Interpreter +## 2. Zeppelin-IoTDB Interpreter -### System Requirements +### 2.1 System Requirements | IoTDB Version | Java Version | Zeppelin Version | | :-----------: | :-----------: | :--------------: | @@ -46,7 +46,7 @@ Install Zeppelin: Suppose Zeppelin is placed at `$Zeppelin_HOME`. -### Build Interpreter +### 2.2 Build Interpreter ``` cd $IoTDB_HOME @@ -61,7 +61,7 @@ The interpreter will be in the folder: -### Install Interpreter +### 2.3 Install Interpreter Once you have built your interpreter, create a new folder under the Zeppelin interpreter directory and put the built interpreter into it. @@ -71,7 +71,7 @@ Once you have built your interpreter, create a new folder under the Zeppelin int cp $IoTDB_HOME/zeppelin-interpreter/target/zeppelin-{version}-SNAPSHOT-jar-with-dependencies.jar $Zeppelin_HOME/interpreter/iotdb ``` -### Modify Configuration +### 2.4 Modify Configuration Enter `$Zeppelin_HOME/conf` and use template to create Zeppelin configuration file: @@ -82,7 +82,7 @@ cp zeppelin-site.xml.template zeppelin-site.xml Open the zeppelin-site.xml file and change the `zeppelin.server.addr` item to `0.0.0.0` -### Running Zeppelin and IoTDB +### 2.5 Running Zeppelin and IoTDB Go to `$Zeppelin_HOME` and start Zeppelin by running: @@ -110,7 +110,7 @@ Go to `$IoTDB_HOME` and start IoTDB server: -## Use Zeppelin-IoTDB +## 3. Use Zeppelin-IoTDB Wait for Zeppelin server to start, then visit http://127.0.0.1:8080/ @@ -164,7 +164,7 @@ The above demo notebook can be found at `$IoTDB_HOME/zeppelin-interpreter/Zeppe -## Configuration +## 4. 
Configuration You can configure the connection parameters in http://127.0.0.1:8080/#/interpreter : diff --git a/src/UserGuide/Master/Tree/Reference/Common-Config-Manual.md b/src/UserGuide/Master/Tree/Reference/Common-Config-Manual.md index 57d7c1e54..ca38812b8 100644 --- a/src/UserGuide/Master/Tree/Reference/Common-Config-Manual.md +++ b/src/UserGuide/Master/Tree/Reference/Common-Config-Manual.md @@ -26,16 +26,16 @@ IoTDB common files for ConfigNode and DataNode are under `conf`. * `iotdb-system.properties`:IoTDB system configurations. -## Effective +## 1. Effective Different configuration parameters take effect in the following three ways: + **Only allowed to be modified in first start up:** Can't be modified after first start, otherwise the ConfigNode/DataNode cannot start. + **After restarting system:** Can be modified after the ConfigNode/DataNode first start, but take effect after restart. + **hot-load:** Can be modified while the ConfigNode/DataNode is running, and trigger through sending the command(sql) `load configuration` or `set configuration` to the IoTDB server by client or session. -## Configuration File +## 2. Configuration File -### Replication Configuration +### 2.1 Replication Configuration * config\_node\_consensus\_protocol\_class @@ -82,7 +82,7 @@ Different configuration parameters take effect in the following three ways: | Default | org.apache.iotdb.consensus.simple.SimpleConsensus | | Effective | Only allowed to be modified in first start up | -### Load balancing Configuration +### 2.2 Load balancing Configuration * series\_partition\_slot\_num @@ -192,7 +192,7 @@ Different configuration parameters take effect in the following three ways: | Default | true | | Effective | After restarting system | -### Cluster Management +### 2.3 Cluster Management * cluster\_name @@ -233,7 +233,7 @@ Different configuration parameters take effect in the following three ways: | Default | 0.05 | | Effective | After restarting system | -### Memory Control Configuration +### 2.4 Memory Control Configuration * datanode\_memory\_proportion @@ -370,7 +370,7 @@ Different configuration parameters take effect in the following three ways: |Default| 1000 | |Effective|After restarting system| -### Schema Engine Configuration +### 2.5 Schema Engine Configuration * schema\_engine\_mode @@ -435,7 +435,7 @@ Different configuration parameters take effect in the following three ways: |Default| 10000 | |Effective|After restarting system| -### Configurations for creating schema automatically +### 2.6 Configurations for creating schema automatically * enable\_auto\_create\_schema @@ -491,7 +491,7 @@ Different configuration parameters take effect in the following three ways: | Default | FLOAT | | Effective | After restarting system | -### Query Configurations +### 2.7 Query Configurations * read\_consistency\_level @@ -636,7 +636,7 @@ Different configuration parameters take effect in the following three ways: |Default| 100000 | |Effective|After restarting system| -### TTL Configuration +### 2.8 TTL Configuration * ttl\_check\_interval | Name | ttl\_check\_interval | @@ -665,7 +665,7 @@ Different configuration parameters take effect in the following three ways: | Effective | After restarting system | -### Storage Engine Configuration +### 2.9 Storage Engine Configuration * timestamp\_precision @@ -839,7 +839,7 @@ Different configuration parameters take effect in the following three ways: | Default | 10 | | Effective | After restarting system | -### Compaction Configurations +### 2.10 Compaction 
Configurations * enable\_seq\_space\_compaction @@ -1138,7 +1138,7 @@ Different configuration parameters take effect in the following three ways: |Default| 4 | |Effective| hot-load | -### Write Ahead Log Configuration +### 2.11 Write Ahead Log Configuration * wal\_mode @@ -1239,7 +1239,7 @@ Different configuration parameters take effect in the following three ways: | Default | 20000 | | Effective | hot-load | -### TsFile Configurations +### 2.12 TsFile Configurations * group\_size\_in\_byte @@ -1332,7 +1332,7 @@ Different configuration parameters take effect in the following three ways: | Effective | After restarting system | -### Authorization Configuration +### 2.13 Authorization Configuration * authorizer\_provider\_class @@ -1389,7 +1389,7 @@ Different configuration parameters take effect in the following three ways: | Default | 30 | | Effective | After restarting system | -### UDF Configuration +### 2.14 UDF Configuration * udf\_initial\_byte\_array\_length\_for\_memory\_control @@ -1436,7 +1436,7 @@ Different configuration parameters take effect in the following three ways: | Default | ext/udf(Windows:ext\\udf) | | Effective | After restarting system | -### Trigger Configuration +### 2.15 Trigger Configuration * trigger\_lib\_dir @@ -1458,7 +1458,7 @@ Different configuration parameters take effect in the following three ways: | Effective | After restarting system | -### SELECT-INTO +### 2.16 SELECT-INTO * into\_operation\_buffer\_size\_in\_byte @@ -1488,7 +1488,7 @@ Different configuration parameters take effect in the following three ways: | Default | 2 | | Effective | After restarting system | -### Continuous Query +### 2.17 Continuous Query * continuous\_query\_execution\_thread @@ -1508,7 +1508,7 @@ Different configuration parameters take effect in the following three ways: | Default | 1s | | Effective | After restarting system | -### PIPE Configuration +### 2.18 PIPE Configuration * pipe_lib_dir @@ -1582,7 +1582,7 @@ Different configuration parameters take effect in the following three ways: | Default Value | -1 | | Effective | Can be hot-loaded | -### IOTConsensus Configuration +### 2.19 IOTConsensus Configuration * data_region_iot_max_log_entries_num_per_batch @@ -1620,7 +1620,7 @@ Different configuration parameters take effect in the following three ways: | Default | 0.6 | | Effective | After restarting system | -### RatisConsensus Configuration +### 2.20 RatisConsensus Configuration * config\_node\_ratis\_log\_appender\_buffer\_size\_max @@ -2074,7 +2074,7 @@ Different configuration parameters take effect in the following three ways: | Default | 86400 (seconds) | | Effective | After restarting system | -### Procedure Configuration +### 2.21 Procedure Configuration * procedure\_core\_worker\_thread\_count @@ -2105,7 +2105,7 @@ Different configuration parameters take effect in the following three ways: | Default | 800 | | Effective | After restarting system | -### MQTT Broker Configuration +### 2.22 MQTT Broker Configuration * enable\_mqtt\_service @@ -2164,7 +2164,7 @@ Different configuration parameters take effect in the following three ways: -#### TsFile Active Listening&Loading Function Configuration +### 2.23 TsFile Active Listening&Loading Function Configuration * load\_active\_listening\_enable diff --git a/src/UserGuide/Master/Tree/Reference/ConfigNode-Config-Manual.md b/src/UserGuide/Master/Tree/Reference/ConfigNode-Config-Manual.md index 80c2cbaf7..7a2b8e6f6 100644 --- a/src/UserGuide/Master/Tree/Reference/ConfigNode-Config-Manual.md +++ 
b/src/UserGuide/Master/Tree/Reference/ConfigNode-Config-Manual.md @@ -27,7 +27,7 @@ IoTDB ConfigNode files are under `conf`. * `iotdb-system.properties`:IoTDB system configurations. -## Environment Configuration File(confignode-env.sh/bat) +## 1. Environment Configuration File(confignode-env.sh/bat) The environment configuration file is mainly used to configure the Java environment related parameters when ConfigNode is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the ConfigNode starts. @@ -61,11 +61,11 @@ The details of each parameter are as follows: |Effective|After restarting system| -## ConfigNode Configuration File (iotdb-system.properties) +## 2. ConfigNode Configuration File (iotdb-system.properties) The global configuration of cluster is in ConfigNode. -### Config Node RPC Configuration +### 2.1 Config Node RPC Configuration * cn\_internal\_address @@ -85,7 +85,7 @@ The global configuration of cluster is in ConfigNode. |Default| 10710 | |Effective|Only allowed to be modified in first start up| -### Consensus +### 2.2 Consensus * cn\_consensus\_port @@ -96,7 +96,7 @@ The global configuration of cluster is in ConfigNode. |Default| 10720 | |Effective|Only allowed to be modified in first start up| -### SeedConfigNode +### 2.3 SeedConfigNode * cn\_seed\_config\_node @@ -107,7 +107,7 @@ The global configuration of cluster is in ConfigNode. |Default| 127.0.0.1:10710 | |Effective| Only allowed to be modified in first start up | -### Directory configuration +### 2.4 Directory configuration * cn\_system\_dir @@ -127,7 +127,7 @@ The global configuration of cluster is in ConfigNode. |Default| data/confignode/consensus(Windows:data\\confignode\\consensus) | |Effective| After restarting system | -### Thrift RPC configuration +### 2.5 Thrift RPC configuration * cn\_rpc\_thrift\_compression\_enable @@ -220,4 +220,4 @@ The global configuration of cluster is in ConfigNode. | Default | 300 | | Effective | After restarting system | -### Metric Configuration +### 2.6 Metric Configuration diff --git a/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_apache.md b/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_apache.md index b568ab7ad..d452a6673 100644 --- a/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_apache.md +++ b/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_apache.md @@ -27,14 +27,14 @@ We use the same configuration files for IoTDB DataNode and Standalone version, a * `iotdb-system.properties`:IoTDB system configurations. -## Hot Modification Configuration +## 1. ot Modification Configuration For the convenience of users, IoTDB provides users with hot modification function, that is, modifying some configuration parameters in `iotdb-system.properties` during the system operation and applying them to the system immediately. In the parameters described below, these parameters whose way of `Effective` is `hot-load` support hot modification. Trigger way: The client sends the command(sql) `load configuration` or `set configuration` to the IoTDB server. -## Environment Configuration File(datanode-env.sh/bat) +## 2. Environment Configuration File(datanode-env.sh/bat) The environment configuration file is mainly used to configure the Java environment related parameters when DataNode is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the DataNode starts. 
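As a quick illustration of the kind of JVM-related settings this file carries, the snippet below pins the DataNode memory budget before start-up. This is only a sketch: the variable names are assumptions based on recent 1.x releases of `datanode-env.sh`, so confirm them against the parameter table that follows.

```shell
# In conf/datanode-env.sh (names assumed; verify against the table below)
ON_HEAP_MEMORY=8G     # heap handed to the DataNode JVM
OFF_HEAP_MEMORY=2G    # direct (off-heap) memory for the DataNode JVM
```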
@@ -94,7 +94,7 @@ The details of each parameter are as follows: |Default|127.0.0.1| |Effective|After restarting system| -## JMX Authorization +## 3. JMX Authorization We **STRONGLY RECOMMENDED** you CHANGE the PASSWORD for the JMX remote connection. @@ -102,9 +102,9 @@ The user and passwords are in ${IOTDB\_CONF}/conf/jmx.password. The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. -## DataNode/Standalone Configuration File (iotdb-system.properties) +## 4. DataNode/Standalone Configuration File (iotdb-system.properties) -### Data Node RPC Configuration +### 4.1 Data Node RPC Configuration * dn\_rpc\_address @@ -178,7 +178,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 5000 | |Effective| After restarting system | -### SSL Configuration +### 4.2 SSL Configuration * enable\_thrift\_ssl @@ -216,7 +216,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| "" | |Effective| After restarting system | -### SeedConfigNode +### 4.3 SeedConfigNode * dn\_seed\_config\_node @@ -227,7 +227,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 127.0.0.1:10710 | |Effective| Only allowed to be modified in first start up | -### Connection Configuration +### 4.4 Connection Configuration * dn\_rpc\_thrift\_compression\_enable @@ -320,7 +320,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | 300 | | Effective | After restarting system | -### Dictionary Configuration +### 4.5 Dictionary Configuration * dn\_system\_dir @@ -385,9 +385,9 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | data/datanode/sync | | Effective | After restarting system | -### Metric Configuration +### 4.6 Metric Configuration -## Enable GC log +## 5. Enable GC log GC log is off by default. For performance tuning, you may want to collect the GC info. @@ -405,7 +405,7 @@ sbin\start-datanode.bat printgc GC log is stored at `IOTDB_HOME/logs/gc.log`. There will be at most 10 gc.log.* files and each one can reach to 10MB. -### REST Service Configuration +### 5.1 REST Service Configuration * enable\_rest\_service diff --git a/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_timecho.md b/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_timecho.md index 94ede5013..4dbe77f53 100644 --- a/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_timecho.md +++ b/src/UserGuide/Master/Tree/Reference/DataNode-Config-Manual_timecho.md @@ -27,14 +27,14 @@ We use the same configuration files for IoTDB DataNode and Standalone version, a * `iotdb-system.properties`:IoTDB system configurations. -## Hot Modification Configuration +## 1. Hot Modification Configuration For the convenience of users, IoTDB provides users with hot modification function, that is, modifying some configuration parameters in `iotdb-system.properties` during the system operation and applying them to the system immediately. In the parameters described below, these parameters whose way of `Effective` is `hot-load` support hot modification. Trigger way: The client sends the command(sql) `load configuration` or `set configuration` to the IoTDB server. -## Environment Configuration File(datanode-env.sh/bat) +## 2. Environment Configuration File(datanode-env.sh/bat) The environment configuration file is mainly used to configure the Java environment related parameters when DataNode is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the DataNode starts. 
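For the hot modification mechanism described at the start of this manual, a hedged example of triggering a reload from the CLI is shown below. The host, port, and credentials are illustrative defaults; only parameters whose `Effective` way is `hot-load` are picked up without a restart.

```shell
# Reload hot-loadable parameters after editing conf/iotdb-system.properties
./sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root -e "load configuration"
# A single hot-loadable item can likewise be changed with the `set configuration` statement
```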
@@ -94,7 +94,7 @@ The details of each parameter are as follows: |Default|127.0.0.1| |Effective|After restarting system| -## JMX Authorization +## 3. JMX Authorization We **STRONGLY RECOMMENDED** you CHANGE the PASSWORD for the JMX remote connection. @@ -102,9 +102,9 @@ The user and passwords are in ${IOTDB\_CONF}/conf/jmx.password. The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. -## DataNode/Standalone Configuration File (iotdb-system.properties) +## 4. DataNode/Standalone Configuration File (iotdb-system.properties) -### Data Node RPC Configuration +### 4.1 Data Node RPC Configuration * dn\_rpc\_address @@ -178,7 +178,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 5000 | |Effective| After restarting system | -### SSL Configuration +### 4.2 SSL Configuration * enable\_thrift\_ssl @@ -216,7 +216,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| "" | |Effective| After restarting system | -### SeedConfigNode +### 4.3 SeedConfigNode * dn\_seed\_config\_node @@ -227,7 +227,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 127.0.0.1:10710 | |Effective| Only allowed to be modified in first start up | -### Connection Configuration +### 4.4 Connection Configuration * dn\_rpc\_thrift\_compression\_enable @@ -320,7 +320,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | 300 | | Effective | After restarting system | -### Dictionary Configuration +### 4.5 Dictionary Configuration * dn\_system\_dir @@ -385,9 +385,9 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | data/datanode/sync | | Effective | After restarting system | -### Metric Configuration +### 4.6 Metric Configuration -## Enable GC log +## 5. Enable GC log GC log is off by default. For performance tuning, you may want to collect the GC info. @@ -405,7 +405,7 @@ sbin\start-datanode.bat printgc GC log is stored at `IOTDB_HOME/logs/gc.log`. There will be at most 10 gc.log.* files and each one can reach to 10MB. -### REST Service Configuration +### 5.1 REST Service Configuration * enable\_rest\_service diff --git a/src/UserGuide/Master/Tree/SQL-Manual/Function-and-Expression.md b/src/UserGuide/Master/Tree/SQL-Manual/Function-and-Expression.md index c208d6f17..3c315f44f 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/Function-and-Expression.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/Function-and-Expression.md @@ -19,11 +19,11 @@ specific language governing permissions and limitations under the License. ---> +--> -## Arithmetic Operators and Functions +## 1. Arithmetic Operators and Functions -### Arithmetic Operators +### 1.1 Arithmetic Operators #### Unary Arithmetic Operators @@ -65,7 +65,7 @@ Total line number = 5 It costs 0.014s ``` -### Arithmetic Functions +### 1.2 Arithmetic Functions Currently, IoTDB supports the following mathematical functions. The behavior of these mathematical functions is consistent with the behavior of these functions in the Java Math standard library. @@ -156,9 +156,9 @@ It costs 0.059s --> -## Comparison Operators and Functions +## 2. 
Comparison Operators and Functions -### Basic comparison operators +### 2.1 Basic comparison operators Supported operators `>`, `>=`, `<`, `<=`, `==`, `!=` (or `<>` ) @@ -188,7 +188,7 @@ IoTDB> select a, b, a > 10, a <= b, !(a <= b), a > 10 && a > b from root.test; +-----------------------------+-----------+-----------+----------------+--------------------------+---------------------------+------------------------------------------------+ ``` -### `BETWEEN ... AND ...` operator +### 2.2 `BETWEEN ... AND ...` operator |operator |meaning| |-----------------------------|-----------| @@ -205,7 +205,7 @@ select temperature from root.sg1.d1 where temperature between 36.5 and 40; select temperature from root.sg1.d1 where temperature not between 36.5 and 40; ``` -### Fuzzy matching operator +### 2.3 Fuzzy matching operator For TEXT type data, support fuzzy matching of data using `Like` and `Regexp` operators. @@ -311,7 +311,7 @@ operation result +-----------------------------+-----------+------- ------------------+--------------------------+ ``` -### `IS NULL` operator +### 2.4 `IS NULL` operator |operator |meaning| |-----------------------------|-----------| @@ -330,7 +330,7 @@ select code from root.sg1.d1 where temperature is null; select code from root.sg1.d1 where temperature is not null; ``` -### `IN` operator +### 2.5 `IN` operator |operator |meaning| |-----------------------------|-----------| @@ -377,7 +377,7 @@ Output 2: +-----------------------------+-----------+------- -------------+ ``` -### Condition Functions +### 2.6 Condition Functions Condition functions are used to check whether timeseries data points satisfy some specific condition. @@ -462,9 +462,9 @@ IoTDB> select ts, in_range(ts,'lower'='2', 'upper'='3.1') from root.test; --> -## Logical Operators +## 3. Logical Operators -### Unary Logical Operators +### 3.1 Unary Logical Operators Supported operator `!` @@ -474,7 +474,7 @@ Output data type: `BOOLEAN` Hint: the priority of `!` is the same as `-`. Remember to use brackets to modify priority. -### Binary Logical Operators +### 3.2 Binary Logical Operators Supported operators AND:`and`,`&`, `&&`; OR:`or`,`|`,`||` @@ -526,7 +526,7 @@ IoTDB> select a, b, a > 10, a <= b, !(a <= b), a > 10 && a > b from root.test; --> -## Aggregate Functions +## 4. Aggregate Functions Aggregate functions are many-to-one functions. They perform aggregate calculations on a set of values, resulting in a single aggregated result. @@ -553,7 +553,7 @@ The aggregate functions supported by IoTDB are as follows: | COUNT_TIME | The number of timestamps in the query data set. When used with `align by device`, the result is the number of timestamps in the data set per device. | All data Types, the input parameter can only be `*` | / | INT64 | -### COUNT +### 4.1 COUNT #### example @@ -572,7 +572,7 @@ Total line number = 1 It costs 0.016s ``` -### COUNT_IF +### 4.2 COUNT_IF #### Grammar ```sql @@ -637,7 +637,7 @@ Result: +------------------------------------------------------------------------+------------------------------------------------------------------------+ ``` -### TIME_DURATION +### 4.3 TIME_DURATION #### Grammar ```sql time_duration(Path) @@ -691,7 +691,7 @@ Result: ``` > Note: Returns 0 if there is only one data point, or null if the data point is null. -### COUNT_TIME +### 4.4 COUNT_TIME #### Grammar ```sql count_time(*) @@ -821,9 +821,9 @@ Result --> -## String Processing +## 5. 
String Processing -### STRING_CONTAINS +### 5.1 STRING_CONTAINS #### Function introduction @@ -856,7 +856,7 @@ Total line number = 3 It costs 0.007s ``` -### STRING_MATCHES +### 5.2 STRING_MATCHES #### Function introduction @@ -889,7 +889,7 @@ Total line number = 3 It costs 0.007s ``` -### Length +### 5.3 Length #### Usage @@ -933,7 +933,7 @@ Output series: +-----------------------------+--------------+----------------------+ ``` -### Locate +### 5.4 Locate #### Usage @@ -999,7 +999,7 @@ Output series: +-----------------------------+--------------+------------------------------------------------------+ ``` -### StartsWith +### 5.5 StartsWith #### Usage @@ -1046,7 +1046,7 @@ Output series: +-----------------------------+--------------+----------------------------------------+ ``` -### EndsWith +### 5.6 EndsWith #### Usage @@ -1093,7 +1093,7 @@ Output series: +-----------------------------+--------------+--------------------------------------+ ``` -### Concat +### 5.7 Concat #### Usage @@ -1161,7 +1161,7 @@ Output series: +-----------------------------+--------------+--------------+-----------------------------------------------------------------------------------------------+ ``` -### substring +### 5.8 substring #### Usage @@ -1209,7 +1209,7 @@ Output series: +-----------------------------+--------------+--------------------------------------+ ``` -### replace +### 5.9 replace #### Usage @@ -1257,7 +1257,7 @@ Output series: +-----------------------------+--------------+-----------------------------------+ ``` -### Upper +### 5.10 Upper #### Usage @@ -1301,7 +1301,7 @@ Output series: +-----------------------------+--------------+---------------------+ ``` -### Lower +### 5.11 Lower #### Usage @@ -1345,7 +1345,7 @@ Output series: +-----------------------------+--------------+---------------------+ ``` -### Trim +### 5.12 Trim #### Usage @@ -1389,7 +1389,7 @@ Output series: +-----------------------------+--------------+--------------------+ ``` -### StrCmp +### 5.13 StrCmp #### Usage @@ -1435,7 +1435,7 @@ Output series: ``` -### StrReplace +### 5.14 StrReplace #### Usage @@ -1514,7 +1514,7 @@ Output series: +-----------------------------+-----------------------------------------------------+ ``` -### RegexMatch +### 5.15 RegexMatch #### Usage @@ -1573,7 +1573,7 @@ Output series: +-----------------------------+----------------------------------------------------------------------+ ``` -### RegexReplace +### 5.16 RegexReplace #### Usage @@ -1632,7 +1632,7 @@ Output series: +-----------------------------+-----------------------------------------------------------+ ``` -### RegexSplit +### 5.17 RegexSplit #### Usage @@ -1733,7 +1733,7 @@ Output series: --> -## Data Type Conversion Function +## 6. Data Type Conversion Function The IoTDB currently supports 6 data types, including INT32, INT64 ,FLOAT, DOUBLE, BOOLEAN, TEXT. When we query or evaluate data, we may need to convert data types, such as TEXT to INT32, or FLOAT to DOUBLE. IoTDB supports cast function to convert data types. @@ -1754,7 +1754,7 @@ The syntax of the cast function is consistent with that of PostgreSQL. The data | **BOOLEAN** | true: 1
false: 0 | true: 1L
false: 0 | true: 1.0f
false: 0 | true: 1.0
false: 0 | No need to cast | true: "true"
false: "false" | | **TEXT** | Integer.parseInt() | Long.parseLong() | Float.parseFloat() | Double.parseDouble() | text.toLowerCase =="true" : true
text.toLowerCase =="false" : false
Otherwise: throw Exception | No need to cast | -### Examples +### 6.1 Examples ``` // timeseries @@ -1833,7 +1833,7 @@ IoTDB> select cast(s6 as BOOLEAN) from root.sg.d1 where time >= 2 --> -## Constant Timeseries Generating Functions +## 7. Constant Timeseries Generating Functions The constant timeseries generating function is used to generate a timeseries in which the values of all data points are the same. @@ -1891,7 +1891,7 @@ It costs 0.005s --> -## Selector Functions +## 8. Selector Functions Currently, IoTDB supports the following selector functions: @@ -1943,7 +1943,7 @@ It costs 0.006s --> -## Continuous Interval Functions +## 9. Continuous Interval Functions The continuous interval functions are used to query all continuous intervals that meet specified conditions. They can be divided into two categories according to return value: @@ -1957,7 +1957,7 @@ They can be divided into two categories according to return value: | ZERO_COUNT | INT32/ INT64/ FLOAT/ DOUBLE/ BOOLEAN | `min`:Optional with default value `1L`
`max`:Optional with default value `Long.MAX_VALUE` | Long | Return intervals' start times and the number of data points in the interval in which the value is always 0(false). Data points number `n` satisfy `n >= min && n <= max` | | NON_ZERO_COUNT | INT32/ INT64/ FLOAT/ DOUBLE/ BOOLEAN | `min`:Optional with default value `1L`
`max`:Optional with default value `Long.MAX_VALUE` | Long | Return intervals' start times and the number of data points in the interval in which the value is always not 0(false). Data points number `n` satisfy `n >= min && n <= max` | -### Demonstrate +### 9.1 Demonstrate Example data: ``` IoTDB> select s1,s2,s3,s4,s5 from root.sg.d2; @@ -2017,7 +2017,7 @@ Result: --> -## Variation Trend Calculation Functions +## 10. Variation Trend Calculation Functions Currently, IoTDB supports the following variation trend calculation functions: @@ -2052,7 +2052,7 @@ Total line number = 5 It costs 0.014s ``` -### Example +### 10.1 Example #### RawData @@ -2132,9 +2132,9 @@ Result: --> -## Sample Functions +## 11. Sample Functions -### Equal Size Bucket Sample Function +### 11.1 Equal Size Bucket Sample Function This function samples the input sequence in equal size buckets, that is, according to the downsampling ratio and downsampling method given by the user, the input sequence is equally divided into several buckets according to a fixed number of points. Sampling by the given sampling method within each bucket. - `proportion`: sample ratio, the value range is `(0, 1]`. @@ -2360,7 +2360,7 @@ Total line number = 10 It costs 0.041s ``` -### M4 Function +### 11.2 M4 Function M4 is used to sample the `first, last, bottom, top` points for each sliding window: @@ -2533,9 +2533,9 @@ It is worth noting that both functions sort and deduplicate the aggregated point --> -## Time Series Processing +## 12. Time Series Processing -### CHANGE_POINTS +### 12.1 CHANGE_POINTS #### Usage @@ -2604,9 +2604,9 @@ Output series: --> -## Lambda Expression +## 13. Lambda Expression -### JEXL Function +### 13.1 JEXL Function Java Expression Language (JEXL) is an expression language engine. We use JEXL to extend UDFs, which are implemented on the command line with simple lambda expressions. See the link for [operators supported in jexl lambda expressions](https://commons.apache.org/proper/commons-jexl/apidocs/org/apache/commons/jexl3/package-summary.html#customization). @@ -2683,9 +2683,9 @@ It costs 0.118s --> -## Conditional Expressions +## 14. Conditional Expressions -### CASE +### 14.1 CASE The CASE expression is a kind of conditional expression that can be used to return different values based on specific conditions, similar to the if-else statements in other languages. diff --git a/src/UserGuide/Master/Tree/SQL-Manual/Keywords.md b/src/UserGuide/Master/Tree/SQL-Manual/Keywords.md index c098b3e99..dae343532 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/Keywords.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/Keywords.md @@ -21,13 +21,13 @@ # Keywords -Reserved words(Can not be used as identifier): +## 1. Reserved words(Can not be used as identifier): - ROOT - TIME - TIMESTAMP -Common Keywords: +## 2. Common Keywords: - ADD - AFTER diff --git a/src/UserGuide/Master/Tree/SQL-Manual/Operator-and-Expression.md b/src/UserGuide/Master/Tree/SQL-Manual/Operator-and-Expression.md index 1b6fd667f..ee905b41f 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/Operator-and-Expression.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/Operator-and-Expression.md @@ -27,9 +27,9 @@ A list of all available functions, both built-in and custom, can be displayed wi See the documentation [Select-Expression](../SQL-Manual/Function-and-Expression.md#selector-functions) for the behavior of operators and functions in SQL. -## OPERATORS +## 1. 
OPERATORS -### Arithmetic Operators +### 1.1 Arithmetic Operators | Operator | Meaning | | -------- | ------------------------- | @@ -43,7 +43,7 @@ See the documentation [Select-Expression](../SQL-Manual/Function-and-Expression. For details and examples, see the document [Arithmetic Operators and Functions](../SQL-Manual/Function-and-Expression.md#arithmetic-functions). -### Comparison Operators +### 1.2 Comparison Operators | Operator | Meaning | | ------------------------- | ------------------------------------ | @@ -66,7 +66,7 @@ For details and examples, see the document [Arithmetic Operators and Functions]( For details and examples, see the document [Comparison Operators and Functions](../SQL-Manual/Function-and-Expression.md#comparison-operators-and-functions). -### Logical Operators +### 1.3 Logical Operators | Operator | Meaning | | --------------------------- | --------------------------------- | @@ -76,7 +76,7 @@ For details and examples, see the document [Comparison Operators and Functions]( For details and examples, see the document [Logical Operators](../SQL-Manual/Function-and-Expression.md#logical-operators). -### Operator Precedence +### 1.4 Operator Precedence The precedence of operators is arranged as shown below from high to low, and operators on the same row have the same precedence. @@ -93,11 +93,11 @@ AND, &, && OR, |, || ``` -## BUILT-IN FUNCTIONS +## 2. BUILT-IN FUNCTIONS The built-in functions can be used in IoTDB without registration, and the functions in the data quality function library need to be registered by referring to the registration steps in the next chapter before they can be used. -### Aggregate Functions +### 2.1 Aggregate Functions | Function Name | Description | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------| @@ -125,7 +125,7 @@ The built-in functions can be used in IoTDB without registration, and the functi For details and examples, see the document [Aggregate Functions](../SQL-Manual/Function-and-Expression.md#aggregate-functions). 
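As a quick, hedged illustration of how these aggregate functions are invoked (the series `root.ln.wf01.wt01.temperature` is assumed to exist and is used here only as an example), several of them can be combined in one query:

```sql
select count(temperature), avg(temperature), max_value(temperature), min_value(temperature) from root.ln.wf01.wt01;
```

Without a `GROUP BY` clause each column is aggregated over the whole queried time range; adding `GROUP BY` time windows, levels, or tags produces one aggregated value per group.
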
-### Arithmetic Functions +### 2.2 Arithmetic Functions | Function Name | Allowed Input Series Data Types | Output Series Data Type | Required Attributes | Corresponding Implementation in the Java Standard Library | | ------------- | ------------------------------- | ----------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | @@ -152,7 +152,7 @@ For details and examples, see the document [Aggregate Functions](../SQL-Manual/F For details and examples, see the document [Arithmetic Operators and Functions](../SQL-Manual/Function-and-Expression.md#arithmetic-operators-and-functions). -### Comparison Functions +### 2.3 Comparison Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------- | ----------------------------------------- | ----------------------- | --------------------------------------------- | @@ -161,7 +161,7 @@ For details and examples, see the document [Arithmetic Operators and Functions]( For details and examples, see the document [Comparison Operators and Functions](../SQL-Manual/Function-and-Expression.md#comparison-operators-and-functions). -### String Processing Functions +### 2.4 String Processing Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | --------------- |---------------------------------| ------------------------------------------------------------ | ----------------------- | ------------------------------------------------------------ | @@ -181,7 +181,7 @@ For details and examples, see the document [Comparison Operators and Functions]( For details and examples, see the document [String Processing](../SQL-Manual/Function-and-Expression.md#string-processing). -### Data Type Conversion Function +### 2.5 Data Type Conversion Function | Function Name | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------------------------------------ | ----------------------- | ------------------------------------------------------------ | @@ -189,7 +189,7 @@ For details and examples, see the document [String Processing](../SQL-Manual/Fun For details and examples, see the document [Data Type Conversion Function](../SQL-Manual/Function-and-Expression.md#data-type-conversion-function). -### Constant Timeseries Generating Functions +### 2.6 Constant Timeseries Generating Functions | Function Name | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------------------------------------ | -------------------------------------------- | ------------------------------------------------------------ | @@ -199,7 +199,7 @@ For details and examples, see the document [Data Type Conversion Function](../SQ For details and examples, see the document [Constant Timeseries Generating Functions](../SQL-Manual/Function-and-Expression.md#constant-timeseries-generating-functions). 
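As a hedged sketch (the series `root.sg1.d1.s1` and `root.sg1.d1.s2` are assumed to exist; the same statement appears as an example later in the SQL manual), a constant series can be generated alongside ordinary columns:

```sql
select s1, s2, const(s1, 'value'='1024', 'type'='INT64'), pi(s2), e(s1, s2) from root.sg1.d1;
```

The input series essentially supply the timestamps at which the constant values are produced.
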
-### Selector Functions +### 2.7 Selector Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ------------- |-------------------------------------------------------------------| ------------------------------------------------------------ | ----------------------------- | ------------------------------------------------------------ | @@ -208,7 +208,7 @@ For details and examples, see the document [Constant Timeseries Generating Funct For details and examples, see the document [Selector Functions](../SQL-Manual/Function-and-Expression.md#selector-functions). -### Continuous Interval Functions +### 2.8 Continuous Interval Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ----------------- | ------------------------------------ | ------------------------------------------------------------ | ----------------------- | ------------------------------------------------------------ | @@ -219,7 +219,7 @@ For details and examples, see the document [Selector Functions](../SQL-Manual/Fu For details and examples, see the document [Continuous Interval Functions](../SQL-Manual/Function-and-Expression.md#continuous-interval-functions). -### Variation Trend Calculation Functions +### 2.9 Variation Trend Calculation Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ----------------------- | ----------------------------------------------- | ------------------------------------------------------------ | ----------------------------- | ------------------------------------------------------------ | @@ -232,7 +232,7 @@ For details and examples, see the document [Continuous Interval Functions](../SQ For details and examples, see the document [Variation Trend Calculation Functions](../SQL-Manual/Function-and-Expression.md#variation-trend-calculation-functions). -### Sample Functions +### 2.10 Sample Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | -------------------------------- | ------------------------------- | ------------------------------------------------------------ | ------------------------------ | ------------------------------------------------------------ | @@ -244,7 +244,7 @@ For details and examples, see the document [Variation Trend Calculation Function For details and examples, see the document [Sample Functions](../SQL-Manual/Function-and-Expression.md#sample-functions). -### Change Points Function +### 2.11 Change Points Function | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------- | ------------------- | ----------------------------- | ----------------------------------------------------------- | @@ -253,7 +253,7 @@ For details and examples, see the document [Sample Functions](../SQL-Manual/Func For details and examples, see the document [Time-Series](../SQL-Manual/Function-and-Expression.md#time-series-processing). -## LAMBDA EXPRESSION +## 3. 
LAMBDA EXPRESSION | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Series Data Type Description | | ------------- | ----------------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------- | ------------------------------------------------------------ | @@ -261,7 +261,7 @@ For details and examples, see the document [Time-Series](../SQL-Manual/Function- For details and examples, see the document [Lambda](../SQL-Manual/Function-and-Expression.md#lambda-expression). -## CONDITIONAL EXPRESSION +## 4. CONDITIONAL EXPRESSION | Expression Name | Description | | --------------- | -------------------- | @@ -269,7 +269,7 @@ For details and examples, see the document [Lambda](../SQL-Manual/Function-and-E For details and examples, see the document [Conditional Expressions](../SQL-Manual/Function-and-Expression.md#conditional-expressions). -## SELECT EXPRESSION +## 5. SELECT EXPRESSION The `SELECT` clause specifies the output of the query, consisting of several `selectExpr`. Each `selectExpr` defines one or more columns in the query result. @@ -285,7 +285,7 @@ The `SELECT` clause specifies the output of the query, consisting of several `se - Time series generation functions (including built-in functions and user-defined functions) - constant -### Use Alias +### 5.1 Use Alias Since the unique data model of IoTDB, lots of additional information like device will be carried before each sensor. Sometimes, we want to query just one specific device, then these prefix information show frequently will be redundant in this situation, influencing the analysis of result set. At this time, we can use `AS` function provided by IoTDB, assign an alias to time series selected in query. @@ -302,11 +302,11 @@ The result set is: | ... | ... | ... | -### Operator +### 5.2 Operator See this documentation for a list of operators supported in IoTDB. -### Function +### 5.3 Function #### Aggregate Functions @@ -339,7 +339,7 @@ See this documentation for a list of built-in functions supported in IoTDB. IoTDB supports function extension through User Defined Function (click for [User-Defined Function](../User-Manual/Database-Programming.md#udtfuser-defined-timeseries-generating-function)) capability. -### Nested Expressions +### 5.4 Nested Expressions IoTDB supports the calculation of arbitrary nested expressions. Since time series query and aggregation query can not be used in a query statement at the same time, we divide nested expressions into two types, which are nested expressions with time series query and nested expressions with aggregation query. diff --git a/src/UserGuide/Master/Tree/SQL-Manual/SQL-Manual.md b/src/UserGuide/Master/Tree/SQL-Manual/SQL-Manual.md index f86e66790..9e0b3582e 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/SQL-Manual.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/SQL-Manual.md @@ -21,25 +21,25 @@ # SQL Manual -## DATABASE MANAGEMENT +## 1. DATABASE MANAGEMENT For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). 
-### Create Database +### 1.1 Create Database ```sql IoTDB > create database root.ln IoTDB > create database root.sgcc ``` -### Show Databases +### 1.2 Show Databases ```sql IoTDB> SHOW DATABASES IoTDB> SHOW DATABASES root.** ``` -### Delete Database +### 1.3 Delete Database ```sql IoTDB > DELETE DATABASE root.ln @@ -48,7 +48,7 @@ IoTDB > DELETE DATABASE root.sgcc IoTDB > DELETE DATABASE root.** ``` -### Count Databases +### 1.4 Count Databases ```sql IoTDB> count databases @@ -57,7 +57,7 @@ IoTDB> count databases root.sgcc.* IoTDB> count databases root.sgcc ``` -### Setting up heterogeneous databases (Advanced operations) +### 1.5 Setting up heterogeneous databases (Advanced operations) #### Set heterogeneous parameters when creating a Database @@ -77,7 +77,7 @@ ALTER DATABASE root.db WITH SCHEMA_REGION_GROUP_NUM=1, DATA_REGION_GROUP_NUM=2; SHOW DATABASES DETAILS ``` -### TTL +### 1.6 TTL #### Set TTL @@ -103,7 +103,7 @@ IoTDB> SHOW TTL ON StorageGroupNames IoTDB> SHOW DEVICES ``` -## DEVICE TEMPLATE +## 2. DEVICE TEMPLATE For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). @@ -115,7 +115,7 @@ For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metad ![img](/img/templateEN.jpg) -### Create Device Template +### 2.1 Create Device Template **Example 1:** Create a template containing two non-aligned timeseires @@ -131,13 +131,13 @@ IoTDB> create device template t2 aligned (lat FLOAT encoding=Gorilla, lon FLOAT The` lat` and `lon` measurements are aligned. -### Set Device Template +### 2.2 Set Device Template ```sql IoTDB> set device template t1 to root.sg1.d1 ``` -### Activate Device Template +### 2.3 Activate Device Template ```sql IoTDB> set device template t1 to root.sg1.d1 @@ -146,7 +146,7 @@ IoTDB> create timeseries using device template on root.sg1.d1 IoTDB> create timeseries using device template on root.sg1.d2 ``` -### Show Device Template +### 2.4 Show Device Template ```sql IoTDB> show device templates @@ -155,7 +155,7 @@ IoTDB> show paths set device template t1 IoTDB> show paths using device template t1 ``` -### Deactivate Device Template +### 2.5 Deactivate Device Template ```sql IoTDB> delete timeseries of device template t1 from root.sg1.d1 @@ -164,29 +164,29 @@ IoTDB> delete timeseries of device template t1 from root.sg1.*, root.sg2.* IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.* ``` -### Unset Device Template +### 2.6 Unset Device Template ```sql IoTDB> unset device template t1 from root.sg1.d1 ``` -### Drop Device Template +### 2.7 Drop Device Template ```sql IoTDB> drop device template t1 ``` -### Alter Device Template +### 2.8 Alter Device Template ```sql IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, FLOAT TEXT encoding=PLAIN compression=SNAPPY) ``` -## TIMESERIES MANAGEMENT +## 3. TIMESERIES MANAGEMENT For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). 
-### Create Timeseries +### 3.1 Create Timeseries ```sql IoTDB > create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN,encoding=PLAIN @@ -215,13 +215,13 @@ IoTDB > create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODI error: encoding TS_2DIFF does not support BOOLEAN ``` -### Create Aligned Timeseries +### 3.2 Create Aligned Timeseries ```sql IoTDB> CREATE ALIGNED TIMESERIES root.ln.wf01.GPS(latitude FLOAT encoding=PLAIN compressor=SNAPPY, longitude FLOAT encoding=PLAIN compressor=SNAPPY) ``` -### Delete Timeseries +### 3.3 Delete Timeseries ```sql IoTDB> delete timeseries root.ln.wf01.wt01.status @@ -230,7 +230,7 @@ IoTDB> delete timeseries root.ln.wf02.* IoTDB> drop timeseries root.ln.wf02.* ``` -### Show Timeseries +### 3.4 Show Timeseries ```sql IoTDB> show timeseries root.** @@ -240,7 +240,7 @@ IoTDB> show timeseries root.ln.** where timeseries contains 'wf01.wt' IoTDB> show timeseries root.ln.** where dataType=FLOAT ``` -### Count Timeseries +### 3.5 Count Timeseries ```sql IoTDB > COUNT TIMESERIES root.** @@ -257,7 +257,7 @@ IoTDB > COUNT TIMESERIES root.ln.** GROUP BY LEVEL=2 IoTDB > COUNT TIMESERIES root.ln.wf01.* GROUP BY LEVEL=2 ``` -### Tag and Attribute Management +### 3.6 Tag and Attribute Management ```sql create timeseries root.turbine.d1.s1(temprature) with datatype=FLOAT, encoding=RLE, compression=SNAPPY tags(tag1=v1, tag2=v2) attributes(attr1=v1, attr2=v2) @@ -362,23 +362,23 @@ IoTDB> show timeseries where TAGS(tag1)='v1' The above operations are supported for timeseries tag, attribute updates, etc. -## NODE MANAGEMENT +## 4. NODE MANAGEMENT For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). -### Show Child Paths +### 4.1 Show Child Paths ```SQL SHOW CHILD PATHS pathPattern ``` -### Show Child Nodes +### 4.2 Show Child Nodes ```SQL SHOW CHILD NODES pathPattern ``` -### Count Nodes +### 4.3 Count Nodes ```SQL IoTDB > COUNT NODES root.** LEVEL=2 @@ -387,7 +387,7 @@ IoTDB > COUNT NODES root.ln.wf01.** LEVEL=3 IoTDB > COUNT NODES root.**.temperature LEVEL=3 ``` -### Show Devices +### 4.4 Show Devices ```SQL IoTDB> show devices @@ -397,7 +397,7 @@ IoTDB> show devices with database IoTDB> show devices root.ln.** with database ``` -### Count Devices +### 4.5 Count Devices ```SQL IoTDB> show devices @@ -405,9 +405,9 @@ IoTDB> count devices IoTDB> count devices root.ln.** ``` -## INSERT & LOAD DATA +## 5. INSERT & LOAD DATA -### Insert Data +### 5.1 Insert Data For more details, see document [Write-Delete-Data](../Basic-Concept/Write-Delete-Data.md). @@ -442,7 +442,7 @@ IoTDB > insert into root.sg1.d1(time, s1, s2) aligned values(2, 2, 2), (3, 3, 3) IoTDB > select * from root.sg1.d1 ``` -### Load External TsFile Tool +### 5.2 Load External TsFile Tool For more details, see document [Data Import](../Tools-System/Data-Import-Tool.md). @@ -469,11 +469,11 @@ For more details, see document [Data Import](../Tools-System/Data-Import-Tool.md ./load-rewrite.bat -f D:\IoTDB\data -h 192.168.0.101 -p 6667 -u root -pw root ``` -## DELETE DATA +## 6. DELETE DATA For more details, see document [Write-Delete-Data](../Basic-Concept/Write-Delete-Data.md). 
-### Delete Single Timeseries +### 6.1 Delete Single Timeseries ```sql IoTDB > delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; @@ -491,7 +491,7 @@ expressions like : time > XXX, time <= XXX, or two atomic expressions connected IoTDB > delete from root.ln.wf02.wt02.status ``` -### Delete Multiple Timeseries +### 6.2 Delete Multiple Timeseries ```sql IoTDB > delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; @@ -500,13 +500,13 @@ IoTDB> delete from root.ln.wf03.wt02.status where time < now() Msg: The statement is executed successfully. ``` -### Delete Time Partition (experimental) +### 6.3 Delete Time Partition (experimental) ```sql IoTDB > DELETE PARTITION root.ln 0,1,2 ``` -## QUERY DATA +## 7. QUERY DATA For more details, see document [Query-Data](../Basic-Concept/Query-Data.md). @@ -532,7 +532,7 @@ SELECT [LAST] selectExpr [, selectExpr] ... [ALIGN BY {TIME | DEVICE}] ``` -### Basic Examples +### 7.1 Basic Examples #### Select a Column of Data Based on a Time Interval @@ -564,7 +564,7 @@ IoTDB > select wf01.wt01.status,wf02.wt02.hardware from root.ln where (time > 20 IoTDB > select * from root.ln.** where time > 1 order by time desc limit 10; ``` -### `SELECT` CLAUSE +### 7.2 `SELECT` CLAUSE #### Use Alias @@ -623,7 +623,7 @@ IoTDB > select last * from root.ln.wf01.wt01 order by timeseries desc; IoTDB > select last * from root.ln.wf01.wt01 order by dataType desc; ``` -### `WHERE` CLAUSE +### 7.3 `WHERE` CLAUSE #### Time Filter @@ -662,7 +662,7 @@ IoTDB > select * from root.sg.d1 where value regexp '^[A-Za-z]+$' IoTDB > select * from root.sg.d1 where value regexp '^[a-z]+$' and time > 100 ``` -### `GROUP BY` CLAUSE +### 7.4 `GROUP BY` CLAUSE - Aggregate By Time without Specifying the Sliding Step Length @@ -754,7 +754,7 @@ IoTDB > SELECT avg(temperature) FROM root.factory1.** GROUP BY TAGS(city, worksh IoTDB > SELECT avg(temperature) FROM root.factory1.** GROUP BY ([1000, 10000), 5s), TAGS(city, workshop); ``` -### `HAVING` CLAUSE +### 7.5 `HAVING` CLAUSE Correct: @@ -772,7 +772,7 @@ IoTDB > select count(s1) from root.** group by ([1,3),1ms), level=1 having sum(d IoTDB > select count(d1.s1) from root.** group by ([1,3),1ms), level=1 having sum(s1) > 1 ``` -### `FILL` CLAUSE +### 7.6 `FILL` CLAUSE #### `PREVIOUS` Fill @@ -798,7 +798,7 @@ IoTDB > select temperature, status from root.sgcc.wf03.wt01 where time >= 2017-1 IoTDB > select temperature, status from root.sgcc.wf03.wt01 where time >= 2017-11-01T16:37:00.000 and time <= 2017-11-01T16:40:00.000 fill(true); ``` -### `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) +### 7.7 `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) #### Row Control over Query Results @@ -823,7 +823,7 @@ IoTDB > select max_value(*) from root.ln.wf01.wt01 group by ([2017-11-01T00:00:0 IoTDB > select * from root.ln.wf01.wt01 limit 10 offset 100 slimit 2 soffset 0 ``` -### `ORDER BY` CLAUSE +### 7.8 `ORDER BY` CLAUSE #### Order by in ALIGN BY TIME mode @@ -855,7 +855,7 @@ IoTDB > select min_value(total),max_value(base) from root.** order by max_value( IoTDB > select score from root.** order by device asc, score desc, time asc align by device ``` -### `ALIGN BY` CLAUSE +### 7.9 `ALIGN BY` CLAUSE #### Align by Device @@ -863,7 +863,7 @@ IoTDB > select score from root.** order by device asc, score desc, time asc alig IoTDB > select * from root.ln.** where time <= 2017-11-01T00:01:00 align by device; ``` -### `INTO` CLAUSE (QUERY WRITE-BACK) +### 7.10 `INTO` CLAUSE (QUERY WRITE-BACK) ```sql IoTDB > select s1, s2 into root.sg_copy.d1(t1), 
root.sg_copy.d2(t1, t2), root.sg_copy.d1(t2) from root.sg.d1, root.sg.d2; @@ -900,7 +900,7 @@ IoTDB > select * into ::(backup_${4}) from root.sg.** align by device; IoTDB > select s1, s2 into root.sg_copy.d1(t1, t2), aligned root.sg_copy.d2(t1, t2) from root.sg.d1, root.sg.d2 align by device; ``` -## Maintennance +## 8. Maintennance Generate the corresponding query plan: ``` explain select s1,s2 from root.sg.d1 @@ -909,11 +909,11 @@ Execute the corresponding SQL, analyze the execution and output: ``` explain analyze select s1,s2 from root.sg.d1 order by s1 ``` -## OPERATOR +## 9. OPERATOR For more details, see document [Operator-and-Expression](./Operator-and-Expression.md). -### Arithmetic Operators +### 9.1 Arithmetic Operators For details and examples, see the document [Arithmetic Operators and Functions](./Operator-and-Expression.md#arithmetic-operators). @@ -921,7 +921,7 @@ For details and examples, see the document [Arithmetic Operators and Functions]( select s1, - s1, s2, + s2, s1 + s2, s1 - s2, s1 * s2, s1 / s2, s1 % s2 from root.sg.d1 ``` -### Comparison Operators +### 9.2 Comparison Operators For details and examples, see the document [Comparison Operators and Functions](./Operator-and-Expression.md#comparison-operators). @@ -952,7 +952,7 @@ select code from root.sg1.d1 where code not in ('200', '300', '400', '500'); select a, a in (1, 2) from root.test; ``` -### Logical Operators +### 9.3 Logical Operators For details and examples, see the document [Logical Operators](./Operator-and-Expression.md#logical-operators). @@ -960,11 +960,11 @@ For details and examples, see the document [Logical Operators](./Operator-and-Ex select a, b, a > 10, a <= b, !(a <= b), a > 10 && a > b from root.test; ``` -## BUILT-IN FUNCTIONS +## 10. BUILT-IN FUNCTIONS For more details, see document [Operator-and-Expression](./Operator-and-Expression.md#built-in-functions). -### Aggregate Functions +### 10.1 Aggregate Functions For details and examples, see the document [Aggregate Functions](./Operator-and-Expression.md#aggregate-functions). @@ -977,7 +977,7 @@ select count_if(s1=0 & s2=0, 3, 'ignoreNull'='false'), count_if(s1=1 & s2=0, 3, select time_duration(s1) from root.db.d1; ``` -### Arithmetic Functions +### 10.2 Arithmetic Functions For details and examples, see the document [Arithmetic Operators and Functions](./Operator-and-Expression.md#arithmetic-functions). @@ -986,7 +986,7 @@ select s1, sin(s1), cos(s1), tan(s1) from root.sg1.d1 limit 5 offset 1000; select s4,round(s4),round(s4,2),round(s4,-1) from root.sg1.d1; ``` -### Comparison Functions +### 10.3 Comparison Functions For details and examples, see the document [Comparison Operators and Functions](./Operator-and-Expression.md#comparison-functions). @@ -995,7 +995,7 @@ select ts, on_off(ts, 'threshold'='2') from root.test; select ts, in_range(ts, 'lower'='2', 'upper'='3.1') from root.test; ``` -### String Processing Functions +### 10.4 String Processing Functions For details and examples, see the document [String Processing](./Operator-and-Expression.md#string-processing-functions). @@ -1023,7 +1023,7 @@ select regexsplit(s1, "regex"=",", "index"="-1") from root.test.d1 select regexsplit(s1, "regex"=",", "index"="3") from root.test.d1 ``` -### Data Type Conversion Function +### 10.5 Data Type Conversion Function For details and examples, see the document [Data Type Conversion Function](./Operator-and-Expression.md#data-type-conversion-function). 
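Beyond the integer cast shown in the example below, other target types use the same syntax. A hedged sketch, assuming `root.sg.d1.s1` is a numeric series and `root.sg.d1.s6` is a TEXT series holding 'true'/'false' values:

```sql
select cast(s1 as TEXT), cast(s6 as BOOLEAN) from root.sg.d1;
```

TEXT values that cannot be parsed for the requested target type cause the cast to throw an exception, as described in the conversion table of the Function-and-Expression document.
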
@@ -1031,7 +1031,7 @@ For details and examples, see the document [Data Type Conversion Function](./Ope SELECT cast(s1 as INT32) from root.sg ``` -### Constant Timeseries Generating Functions +### 10.6 Constant Timeseries Generating Functions For details and examples, see the document [Constant Timeseries Generating Functions](./Operator-and-Expression.md#constant-timeseries-generating-functions). @@ -1039,7 +1039,7 @@ For details and examples, see the document [Constant Timeseries Generating Funct select s1, s2, const(s1, 'value'='1024', 'type'='INT64'), pi(s2), e(s1, s2) from root.sg1.d1; ``` -### Selector Functions +### 10.7 Selector Functions For details and examples, see the document [Selector Functions](./Operator-and-Expression.md#selector-functions). @@ -1047,7 +1047,7 @@ For details and examples, see the document [Selector Functions](./Operator-and-E select s1, top_k(s1, 'k'='2'), bottom_k(s1, 'k'='2') from root.sg1.d2 where time > 2020-12-10T20:36:15.530+08:00; ``` -### Continuous Interval Functions +### 10.8 Continuous Interval Functions For details and examples, see the document [Continuous Interval Functions](./Operator-and-Expression.md#continuous-interval-functions). @@ -1055,7 +1055,7 @@ For details and examples, see the document [Continuous Interval Functions](./Ope select s1, zero_count(s1), non_zero_count(s2), zero_duration(s3), non_zero_duration(s4) from root.sg.d2; ``` -### Variation Trend Calculation Functions +### 10.9 Variation Trend Calculation Functions For details and examples, see the document [Variation Trend Calculation Functions](./Operator-and-Expression.md#variation-trend-calculation-functions). @@ -1066,7 +1066,7 @@ SELECT DIFF(s1), DIFF(s2) from root.test; SELECT DIFF(s1, 'ignoreNull'='false'), DIFF(s2, 'ignoreNull'='false') from root.test; ``` -### Sample Functions +### 10.10 Sample Functions For details and examples, see the document [Sample Functions](./Operator-and-Expression.md#sample-functions). @@ -1080,7 +1080,7 @@ select M4(s1,'timeInterval'='25','displayWindowBegin'='0','displayWindowEnd'='10 select M4(s1,'windowSize'='10') from root.vehicle.d1 ``` -### Change Points Function +### 10.11 Change Points Function For details and examples, see the document [Time-Series](./Operator-and-Expression.md#change-points-function). @@ -1088,11 +1088,11 @@ For details and examples, see the document [Time-Series](./Operator-and-Expressi select change_points(s1), change_points(s2), change_points(s3), change_points(s4), change_points(s5), change_points(s6) from root.testChangePoints.d1 ``` -## DATA QUALITY FUNCTION LIBRARY +## 11. DATA QUALITY FUNCTION LIBRARY For more details, see document [Operator-and-Expression](../SQL-Manual/UDF-Libraries.md). -### Data Quality +### 11.1 Data Quality For details and examples, see the document [Data-Quality](../SQL-Manual/UDF-Libraries.md#data-quality). @@ -1117,7 +1117,7 @@ select Validity(s1,"window"="15") from root.test.d1 where time <= 2020-01-01 00: select Accuracy(t1,t2,t3,m1,m2,m3) from root.test ``` -### Data Profiling +### 11.2 Data Profiling For details and examples, see the document [Data-Profiling](../SQL-Manual/UDF-Libraries.md#data-profiling). @@ -1197,7 +1197,7 @@ select stddev(s1) from root.test.d1 select zscore(s1) from root.test ``` -### Anomaly Detection +### 11.3 Anomaly Detection For details and examples, see the document [Anomaly-Detection](../SQL-Manual/UDF-Libraries.md#anomaly-detection). 
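As one hedged example of calling an anomaly detection UDF (assuming the library functions have been registered as described in the UDF Libraries document, and that a series `root.test.d1.s1` exists; the parameter `'k'`, the multiple of the standard deviation used as the threshold, follows the UDF library reference), a k-sigma check can be issued like this:

```sql
select ksigma(s1, 'k'='1.0') from root.test.d1;
```
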
@@ -1232,7 +1232,7 @@ select MasterDetect(lo,la,m_lo,m_la,model,'output_type'='repair','p'='3','k'='3' select MasterDetect(lo,la,m_lo,m_la,model,'output_type'='anomaly','p'='3','k'='3','eta'='1.0') from root.test ``` -### Frequency Domain +### 11.4 Frequency Domain For details and examples, see the document [Frequency-Domain](../SQL-Manual/UDF-Libraries.md#frequency-domain-analysis). @@ -1264,7 +1264,7 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1 select envelope(s1) from root.test.d1 ``` -### Data Matching +### 11.5 Data Matching For details and examples, see the document [Data-Matching](../SQL-Manual/UDF-Libraries.md#data-matching). @@ -1285,7 +1285,7 @@ select ptnsym(s4, 'window'='5', 'threshold'='0') from root.test.d1 select xcorr(s1, s2) from root.test.d1 where time <= 2020-01-01 00:00:05 ``` -### Data Repairing +### 11.6 Data Repairing For details and examples, see the document [Data-Repairing](../SQL-Manual/UDF-Libraries.md#data-repairing). @@ -1310,7 +1310,7 @@ select seasonalrepair(s1,'period'=3,'k'=2) from root.test.d2 select seasonalrepair(s1,'method'='improved','period'=3) from root.test.d2 ``` -### Series Discovery +### 11.7 Series Discovery For details and examples, see the document [Series-Discovery](../SQL-Manual/UDF-Libraries.md#series-discovery). @@ -1323,7 +1323,7 @@ select consecutivesequences(s1,s2) from root.test.d1 select consecutivewindows(s1,s2,'length'='10m') from root.test.d1 ``` -### Machine Learning +### 11.8 Machine Learning For details and examples, see the document [Machine-Learning](../SQL-Manual/UDF-Libraries.md#machine-learning). @@ -1338,7 +1338,7 @@ select representation(s0,"tb"="3","vb"="2") from root.test.d0 select rm(s0, s1,"tb"="3","vb"="2") from root.test.d0 ``` -## LAMBDA EXPRESSION +## 12. LAMBDA EXPRESSION For details and examples, see the document [Lambda](../SQL-Manual/UDF-Libraries.md#lambda-expression). @@ -1346,7 +1346,7 @@ For details and examples, see the document [Lambda](../SQL-Manual/UDF-Libraries. select jexl(temperature, 'expr'='x -> {x + x}') as jexl1, jexl(temperature, 'expr'='x -> {x * 3}') as jexl2, jexl(temperature, 'expr'='x -> {x * x}') as jexl3, jexl(temperature, 'expr'='x -> {multiply(x, 100)}') as jexl4, jexl(temperature, st, 'expr'='(x, y) -> {x + y}') as jexl5, jexl(temperature, st, str, 'expr'='(x, y, z) -> {x + y + z}') as jexl6 from root.ln.wf01.wt01;``` ``` -## CONDITIONAL EXPRESSION +## 13. CONDITIONAL EXPRESSION For details and examples, see the document [Conditional Expressions](../SQL-Manual/UDF-Libraries.md#conditional-expressions). @@ -1384,11 +1384,11 @@ end as `result` from root.test4 ``` -## TRIGGER +## 14. TRIGGER For more details, see document [Database-Programming](../User-Manual/Database-Programming.md). -### Create Trigger +### 14.1 Create Trigger ```sql // Create Trigger @@ -1421,7 +1421,7 @@ triggerAttribute ; ``` -### Drop Trigger +### 14.2 Drop Trigger ```sql // Drop Trigger @@ -1430,13 +1430,13 @@ dropTrigger ; ``` -### Show Trigger +### 14.3 Show Trigger ```sql SHOW TRIGGERS ``` -## CONTINUOUS QUERY (CQ) +## 15. CONTINUOUS QUERY (CQ) For more details, see document [Operator-and-Expression](./Operator-and-Expression.md). 
@@ -1461,7 +1461,7 @@ BEGIN END ``` -### Configuring execution intervals +### 15.1 Configuring execution intervals ```sql CREATE CONTINUOUS QUERY cq1 @@ -1474,7 +1474,7 @@ SELECT max_value(temperature) END ``` -### Configuring time range for resampling +### 15.2 Configuring time range for resampling ```sql CREATE CONTINUOUS QUERY cq2 @@ -1487,7 +1487,7 @@ BEGIN END ``` -### Configuring execution intervals and CQ time ranges +### 15.3 Configuring execution intervals and CQ time ranges ```sql CREATE CONTINUOUS QUERY cq3 @@ -1501,7 +1501,7 @@ BEGIN END ``` -### Configuring end_time_offset for CQ time range +### 15.4 Configuring end_time_offset for CQ time range ```sql CREATE CONTINUOUS QUERY cq4 @@ -1515,7 +1515,7 @@ BEGIN END ``` -### CQ without group by clause +### 15.5 CQ without group by clause ```sql CREATE CONTINUOUS QUERY cq5 @@ -1528,7 +1528,7 @@ BEGIN END ``` -### CQ Management +### 15.6 CQ Management #### Listing continuous queries @@ -1546,23 +1546,23 @@ DROP (CONTINUOUS QUERY | CQ) CQs can't be altered once they're created. To change a CQ, you must `DROP` and re`CREATE` it with the updated settings. -## USER-DEFINED FUNCTION (UDF) +## 16. USER-DEFINED FUNCTION (UDF) For more details, see document [Operator-and-Expression](../SQL-Manual/UDF-Libraries.md). -### UDF Registration +### 16.1 UDF Registration ```sql CREATE FUNCTION AS (USING URI URI-STRING)? ``` -### UDF Deregistration +### 16.2 UDF Deregistration ```sql DROP FUNCTION ``` -### UDF Queries +### 16.3 UDF Queries ```sql SELECT example(*) from root.sg.d1 @@ -1578,17 +1578,17 @@ SELECT s1 * example(* / s1 + s2) FROM root.sg.d1; SELECT s1, s2, s1 + example(s1, s2), s1 - example(s1 + example(s1, s2) / s2) FROM root.sg.d1; ``` -### Show All Registered UDFs +### 16.4 Show All Registered UDFs ```sql SHOW FUNCTIONS ``` -## ADMINISTRATION MANAGEMENT +## 17. ADMINISTRATION MANAGEMENT For more details, see document [Operator-and-Expression](./Operator-and-Expression.md). -### SQL Statements +### 17.1 SQL Statements - Create user (Requires MANAGE_USER permission) @@ -1679,7 +1679,7 @@ ALTER USER SET PASSWORD ; eg: ALTER USER tempuser SET PASSWORD 'newpwd'; ``` -### Authorization and Deauthorization +### 17.2 Authorization and Deauthorization ```sql diff --git a/src/UserGuide/Master/Tree/SQL-Manual/Syntax-Rule.md b/src/UserGuide/Master/Tree/SQL-Manual/Syntax-Rule.md index 38dffc6ac..97ab27ca0 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/Syntax-Rule.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/Syntax-Rule.md @@ -21,11 +21,11 @@ # Identifiers -## Literal Values +## 1. Literal Values This section describes how to write literal values in IoTDB. These include strings, numbers, timestamp values, boolean values, and NULL. -### String Literals +### 1.1 String Literals in IoTDB, **A string is a sequence of bytes or characters, enclosed within either single quote (`'`) or double quote (`"`) characters.** Examples: @@ -130,7 +130,7 @@ The following examples demonstrate how quoting and escaping work: """string" // "string ``` -### Numeric Literals +### 1.2 Numeric Literals Number literals include integer (exact-value) literals and floating-point (approximate-value) literals. @@ -144,27 +144,27 @@ The `FLOAT` and `DOUBLE` data types are floating-point types and calculations ar An integer may be used in floating-point context; it is interpreted as the equivalent floating-point number. -### Timestamp Literals +### 1.3 Timestamp Literals The timestamp is the time point at which data is produced. 
It includes absolute timestamps and relative timestamps in IoTDB. For information about timestamp support in IoTDB, see [Data Type Doc](../Background-knowledge/Data-Type.md). Specially, `NOW()` represents a constant timestamp that indicates the system time at which the statement began to execute. -### Boolean Literals +### 1.4 Boolean Literals The constants `TRUE` and `FALSE` evaluate to 1 and 0, respectively. The constant names can be written in any lettercase. -### NULL Values +### 1.5 NULL Values The `NULL` value means “no data.” `NULL` can be written in any lettercase. -## Identifier +## 2. Identifier -### Usage scenarios +### 2.1 Usage scenarios Certain objects within IoTDB, including `TRIGGER`, `FUNCTION`(UDF), `CONTINUOUS QUERY`, `SCHEMA TEMPLATE`, `USER`, `ROLE`,`Pipe`,`PipeSink`,`alias` and other object names are known as identifiers. -### Constraints +### 2.2 Constraints Below are basic constraints of identifiers, specific identifiers may have other constraints, for example, `user` should consists of more than 4 characters. @@ -172,7 +172,7 @@ Below are basic constraints of identifiers, specific identifiers may have other - [0-9 a-z A-Z _ ] (letters, digits and underscore) - ['\u2E80'..'\u9FFF'] (UNICODE Chinese characters) -### Reverse quotation marks +### 2.3 Reverse quotation marks **If the following situations occur, the identifier needs to be quoted using reverse quotes:** diff --git a/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_apache.md b/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_apache.md index c2a0dcd54..675c870b7 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_apache.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_apache.md @@ -17,9 +17,7 @@ ​ specific language governing permissions and limitations ​ under the License. ---> - -# UDF Libraries +--> # UDF Libraries @@ -27,7 +25,7 @@ Based on the ability of user-defined functions, IoTDB provides a series of funct > Note: The functions in the current UDF library only support millisecond level timestamp accuracy. -## Installation steps +## 1. Installation steps 1. Please obtain the compressed file of the UDF library JAR package that is compatible with the IoTDB version. @@ -46,9 +44,9 @@ Based on the ability of user-defined functions, IoTDB provides a series of funct - All SQL statements - Open the SQl file in the compressed package, copy all SQL statements, and in the SQL operation interface of IoTDB's SQL command line terminal (CLI), execute all SQl statements to batch register UDFs -## Data Quality +## 2. Data Quality -### Completeness +### 2.1 Completeness #### Registration statement @@ -179,7 +177,7 @@ Output series: +-----------------------------+--------------------------------------------+ ``` -### Consistency +### 2.2 Consistency #### Registration statement @@ -309,7 +307,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### Timeliness +### 2.3 Timeliness #### Registration statement @@ -439,7 +437,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Validity +### 2.4 Validity #### Registration statement @@ -592,9 +590,9 @@ Output series: --> -## Data Profiling +## 3. 
Data Profiling -### ACF +### 3.1 ACF #### Registration statement @@ -659,7 +657,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Distinct +### 3.2 Distinct #### Registration statement @@ -718,7 +716,7 @@ Output series: +-----------------------------+-------------------------+ ``` -### Histogram +### 3.3 Histogram #### Registration statement @@ -803,7 +801,7 @@ Output series: +-----------------------------+---------------------------------------------------------------+ ``` -### Integral +### 3.4 Integral #### Registration statement @@ -900,7 +898,7 @@ Output series: Calculation expression: $$\frac{1}{2\times 60}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] = 0.958$$ -### IntegralAvg +### 3.5 IntegralAvg #### Registration statement @@ -967,7 +965,7 @@ Output series: Calculation expression: $$\frac{1}{2}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] / 10 = 5.75$$ -### Mad +### 3.6 Mad #### Registration statement @@ -1066,7 +1064,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### Median +### 3.7 Median #### Registration statement @@ -1136,7 +1134,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### MinMax +### 3.8 MinMax #### Registration statement @@ -1227,7 +1225,7 @@ Output series: ``` -### MvAvg +### 3.9 MvAvg #### Registration statement @@ -1313,7 +1311,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### PACF +### 3.10 PACF #### Registration statement @@ -1371,7 +1369,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### Percentile +### 3.11 Percentile #### Registration statement @@ -1444,7 +1442,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### Quantile +### 3.12 Quantile #### Registration statement @@ -1506,7 +1504,7 @@ Output series: +-----------------------------+------------------------------------------------+ ``` -### Period +### 3.13 Period #### Registration statement @@ -1561,7 +1559,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### QLB +### 3.14 QLB #### Registration statement @@ -1651,7 +1649,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Resample +### 3.15 Resample #### Registration statement @@ -1786,7 +1784,7 @@ Output series: +-----------------------------+-----------------------------------------------------------------------+ ``` -### Sample +### 3.16 Sample #### Registration statement @@ -1890,7 +1888,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### Segment +### 3.17 Segment #### Registration statement @@ -1988,7 +1986,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Skew +### 3.18 Skew #### Registration statement @@ -2055,7 +2053,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### Spline +### 3.19 Spline #### Registration statement @@ -2266,7 +2264,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Spread +### 3.20 Spread #### Registration statement @@ -2330,7 +2328,7 @@ Output series: -### ZScore +### 3.21 ZScore #### Registration statement @@ -2440,9 +2438,9 @@ Output series: --> -## Anomaly Detection +## 4. 
Anomaly Detection -### IQR +### 4.1 IQR #### Registration statement @@ -2515,7 +2513,7 @@ Output series: +-----------------------------+-----------------+ ``` -### KSigma +### 4.2 KSigma #### Registration statement @@ -2586,7 +2584,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### LOF +### 4.3 LOF #### Registration statement @@ -2718,7 +2716,7 @@ Output series: +-----------------------------+--------------------+ ``` -### MissDetect +### 4.4 MissDetect #### Registration statement @@ -2812,7 +2810,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Range +### 4.5 Range #### Registration statement @@ -2883,7 +2881,7 @@ Output series: +-----------------------------+------------------------------------------------------------------+ ``` -### TwoSidedFilter +### 4.6 TwoSidedFilter #### Registration statement @@ -2982,7 +2980,7 @@ Output series: +-----------------------------+------------+ ``` -### Outlier +### 4.7 Outlier #### Registration statement @@ -3057,7 +3055,7 @@ Output series: ``` -### MasterTrain +### 4.8 MasterTrain #### Usage @@ -3140,7 +3138,7 @@ Output series: +-----------------------------+---------------------------------------------------------------------------------------------+ ``` -### MasterDetect +### 4.9 MasterDetect #### Usage @@ -3309,9 +3307,9 @@ Output series: --> -## Frequency Domain Analysis +## 5. Frequency Domain Analysis -### Conv +### 5.1 Conv #### Registration statement @@ -3364,7 +3362,7 @@ Output series: +-----------------------------+--------------------------------------+ ``` -### Deconv +### 5.2 Deconv #### Registration statement @@ -3450,7 +3448,7 @@ Output series: +-----------------------------+--------------------------------------------------------------+ ``` -### DWT +### 5.3 DWT #### Registration statement @@ -3537,7 +3535,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### FFT +### 5.4 FFT #### Registration statement @@ -3667,7 +3665,7 @@ Note: Based on the conjugation of the Fourier transform result, only the first h According to the given parameter, data points are reserved from low frequency to high frequency until the reserved energy ratio exceeds it. The last data point is reserved to indicate the length of the series. -### HighPass +### 5.5 HighPass #### Registration statement @@ -3760,7 +3758,7 @@ Output series: Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, the output is $y=sin(2\pi t/4)$ after high-pass filtering. -### IFFT +### 5.6 IFFT #### Registration statement @@ -3843,7 +3841,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### LowPass +### 5.7 LowPass #### Registration statement @@ -3957,9 +3955,9 @@ Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, t --> -## Data Matching +## 6. 
Data Matching -### Cov +### 6.1 Cov #### Registration statement @@ -4026,7 +4024,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### DTW +### 6.2 DTW #### Registration statement @@ -4097,7 +4095,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### Pearson +### 6.3 Pearson #### Registration statement @@ -4164,7 +4162,7 @@ Output series: +-----------------------------+-----------------------------------------+ ``` -### PtnSym +### 6.4 PtnSym #### Registration statement @@ -4230,7 +4228,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### XCorr +### 6.5 XCorr #### Registration statement @@ -4323,9 +4321,9 @@ Output series: --> -## Data Repairing +## 7. Data Repairing -### TimestampRepair +### 7.1 TimestampRepair #### Registration statement @@ -4434,7 +4432,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### ValueFill +### 7.2 ValueFill #### Registration statement @@ -4552,7 +4550,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### ValueRepair +### 7.3 ValueRepair #### Registration statement @@ -4678,7 +4676,7 @@ Output series: +-----------------------------+-------------------------------------------------+ ``` -### MasterRepair +### 7.4 MasterRepair #### Usage @@ -4739,7 +4737,7 @@ Output series: +-----------------------------+-------------------------------------------------------------------------------------------+ ``` -### SeasonalRepair +### 7.5 SeasonalRepair #### Usage This function is used to repair the value of the seasonal time series via decomposition. Currently, two methods are supported: **Classical** - detect irregular fluctuations through residual component decomposed by classical decomposition, and repair them through moving average; **Improved** - detect irregular fluctuations through residual component decomposed by improved decomposition, and repair them through moving median. @@ -4864,9 +4862,9 @@ Output series: --> -## Series Discovery +## 8. Series Discovery -### ConsecutiveSequences +### 8.1 ConsecutiveSequences #### Registration statement @@ -4960,7 +4958,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### ConsecutiveWindows +### 8.2 ConsecutiveWindows #### Registration statement @@ -5050,9 +5048,9 @@ Output series: --> -## Machine Learning +## 9. 
Machine Learning -### AR +### 9.1 AR #### Registration statement @@ -5119,7 +5117,7 @@ Output Series: +-----------------------------+---------------------------+ ``` -### Representation +### 9.2 Representation #### Usage @@ -5183,7 +5181,7 @@ Output Series: +-----------------------------+-------------------------------------------------+ ``` -### RM +### 9.3 RM #### Usage diff --git a/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_timecho.md b/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_timecho.md index d4ee30c76..21151b7f0 100644 --- a/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_timecho.md +++ b/src/UserGuide/Master/Tree/SQL-Manual/UDF-Libraries_timecho.md @@ -21,13 +21,11 @@ # UDF Libraries -# UDF Libraries - Based on the ability of user-defined functions, IoTDB provides a series of functions for temporal data processing, including data quality, data profiling, anomaly detection, frequency domain analysis, data matching, data repairing, sequence discovery, machine learning, etc., which can meet the needs of industrial fields for temporal data processing. > Note: The functions in the current UDF library only support millisecond level timestamp accuracy. -## Installation steps +## 1. Installation steps 1. Please obtain the compressed file of the UDF library JAR package that is compatible with the IoTDB version. @@ -46,9 +44,9 @@ Based on the ability of user-defined functions, IoTDB provides a series of funct - All SQL statements - Open the SQl file in the compressed package, copy all SQL statements, and execute all SQl statements in the SQL command line terminal (CLI) of IoTDB or the SQL operation interface of the visualization console (Workbench) to batch register UDF -## Data Quality +## 2. Data Quality -### Completeness +### 2.1 Completeness #### Registration statement @@ -179,7 +177,7 @@ Output series: +-----------------------------+--------------------------------------------+ ``` -### Consistency +### 2.2 Consistency #### Registration statement @@ -309,7 +307,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### Timeliness +### 2.3 Timeliness #### Registration statement @@ -439,7 +437,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Validity +### 2.4 Validity #### Registration statement @@ -592,9 +590,9 @@ Output series: --> -## Data Profiling +## 3. 
Data Profiling -### ACF +### 3.1 ACF #### Registration statement @@ -659,7 +657,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Distinct +### 3.2 Distinct #### Registration statement @@ -718,7 +716,7 @@ Output series: +-----------------------------+-------------------------+ ``` -### Histogram +### 3.3 Histogram #### Registration statement @@ -803,7 +801,7 @@ Output series: +-----------------------------+---------------------------------------------------------------+ ``` -### Integral +### 3.4 Integral #### Registration statement @@ -900,7 +898,7 @@ Output series: Calculation expression: $$\frac{1}{2\times 60}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] = 0.958$$ -### IntegralAvg +### 3.5 IntegralAvg #### Registration statement @@ -967,7 +965,7 @@ Output series: Calculation expression: $$\frac{1}{2}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] / 10 = 5.75$$ -### Mad +### 3.6 Mad #### Registration statement @@ -1066,7 +1064,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### Median +### 3.7 Median #### Registration statement @@ -1136,7 +1134,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### MinMax +### 3.8 MinMax #### Registration statement @@ -1227,7 +1225,7 @@ Output series: ``` -### MvAvg +### 3.9 MvAvg #### Registration statement @@ -1313,7 +1311,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### PACF +### 3.10 PACF #### Registration statement @@ -1371,7 +1369,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### Percentile +### 3.11 Percentile #### Registration statement @@ -1444,7 +1442,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### Quantile +### 3.12 Quantile #### Registration statement @@ -1506,7 +1504,7 @@ Output series: +-----------------------------+------------------------------------------------+ ``` -### Period +### 3.13 Period #### Registration statement @@ -1561,7 +1559,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### QLB +### 3.14 QLB #### Registration statement @@ -1651,7 +1649,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Resample +### 3.15 Resample #### Registration statement @@ -1786,7 +1784,7 @@ Output series: +-----------------------------+-----------------------------------------------------------------------+ ``` -### Sample +### 3.16 Sample #### Registration statement @@ -1890,7 +1888,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### Segment +### 3.17 Segment #### Registration statement @@ -1988,7 +1986,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Skew +### 3.18 Skew #### Registration statement @@ -2055,7 +2053,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### Spline +### 3.19 Spline #### Registration statement @@ -2266,7 +2264,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Spread +### 3.20 Spread #### Registration statement @@ -2330,7 +2328,7 @@ Output series: -### ZScore +### 3.21 ZScore #### Registration statement @@ -2440,9 +2438,9 @@ Output series: --> -## Anomaly Detection +## 4. 
Anomaly Detection -### IQR +### 4.1 IQR #### Registration statement @@ -2515,7 +2513,7 @@ Output series: +-----------------------------+-----------------+ ``` -### KSigma +### 4.2 KSigma #### Registration statement @@ -2586,7 +2584,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### LOF +### 4.3 LOF #### Registration statement @@ -2718,7 +2716,7 @@ Output series: +-----------------------------+--------------------+ ``` -### MissDetect +### 4.4 MissDetect #### Registration statement @@ -2812,7 +2810,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Range +### 4.5 Range #### Registration statement @@ -2883,7 +2881,7 @@ Output series: +-----------------------------+------------------------------------------------------------------+ ``` -### TwoSidedFilter +### 4.6 TwoSidedFilter #### Registration statement @@ -2982,7 +2980,7 @@ Output series: +-----------------------------+------------+ ``` -### Outlier +### 4.7 Outlier #### Registration statement @@ -3057,7 +3055,7 @@ Output series: ``` -### MasterTrain +### 4.8 MasterTrain #### Usage @@ -3140,7 +3138,7 @@ Output series: +-----------------------------+---------------------------------------------------------------------------------------------+ ``` -### MasterDetect +### 4.9 MasterDetect #### Usage @@ -3309,9 +3307,9 @@ Output series: --> -## Frequency Domain Analysis +## 5. Frequency Domain Analysis -### Conv +### 5.1 Conv #### Registration statement @@ -3364,7 +3362,7 @@ Output series: +-----------------------------+--------------------------------------+ ``` -### Deconv +### 5.2 Deconv #### Registration statement @@ -3450,7 +3448,7 @@ Output series: +-----------------------------+--------------------------------------------------------------+ ``` -### DWT +### 5.3 DWT #### Registration statement @@ -3537,7 +3535,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### FFT +### 5.4 FFT #### Registration statement @@ -3667,7 +3665,7 @@ Note: Based on the conjugation of the Fourier transform result, only the first h According to the given parameter, data points are reserved from low frequency to high frequency until the reserved energy ratio exceeds it. The last data point is reserved to indicate the length of the series. -### HighPass +### 5.5 HighPass #### Registration statement @@ -3760,7 +3758,7 @@ Output series: Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, the output is $y=sin(2\pi t/4)$ after high-pass filtering. -### IFFT +### 5.6 IFFT #### Registration statement @@ -3843,7 +3841,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### LowPass +### 5.7 LowPass #### Registration statement @@ -3937,7 +3935,7 @@ Output series: Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, the output is $y=2sin(2\pi t/5)$ after low-pass filtering. -### Envelope +### 5.8 Envelope #### Registration statement @@ -4017,9 +4015,9 @@ Output series: ``` -## Data Matching +## 6. 
Data Matching -### Cov +### 6.1 Cov #### Registration statement @@ -4086,7 +4084,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### DTW +### 6.2 DTW #### Registration statement @@ -4157,7 +4155,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### Pearson +### 6.3 Pearson #### Registration statement @@ -4224,7 +4222,7 @@ Output series: +-----------------------------+-----------------------------------------+ ``` -### PtnSym +### 6.4 PtnSym #### Registration statement @@ -4290,7 +4288,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### XCorr +### 6.5 XCorr #### Registration statement @@ -4383,9 +4381,9 @@ Output series: --> -## Data Repairing +## 7. Data Repairing -### TimestampRepair +### 7.1 TimestampRepair #### Registration statement @@ -4494,7 +4492,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### ValueFill +### 7.2 ValueFill #### Registration statement @@ -4612,7 +4610,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### ValueRepair +### 7.3 ValueRepair #### Registration statement @@ -4738,7 +4736,7 @@ Output series: +-----------------------------+-------------------------------------------------+ ``` -### MasterRepair +### 7.4 MasterRepair #### Usage @@ -4799,7 +4797,7 @@ Output series: +-----------------------------+-------------------------------------------------------------------------------------------+ ``` -### SeasonalRepair +### 7.5 SeasonalRepair #### Usage This function is used to repair the value of the seasonal time series via decomposition. Currently, two methods are supported: **Classical** - detect irregular fluctuations through residual component decomposed by classical decomposition, and repair them through moving average; **Improved** - detect irregular fluctuations through residual component decomposed by improved decomposition, and repair them through moving median. @@ -4924,9 +4922,9 @@ Output series: --> -## Series Discovery +## 8. Series Discovery -### ConsecutiveSequences +### 8.1 ConsecutiveSequences #### Registration statement @@ -5020,7 +5018,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### ConsecutiveWindows +### 8.2 ConsecutiveWindows #### Registration statement @@ -5110,9 +5108,9 @@ Output series: --> -## Machine Learning +## 9. Machine Learning -### AR +### 9.1 AR #### Registration statement @@ -5179,7 +5177,7 @@ Output Series: +-----------------------------+---------------------------+ ``` -### Representation +### 9.2 Representation #### Usage @@ -5243,7 +5241,7 @@ Output Series: +-----------------------------+-------------------------------------------------+ ``` -### RM +### 9.3 RM #### Usage diff --git a/src/UserGuide/Master/Tree/Tools-System/Benchmark.md b/src/UserGuide/Master/Tree/Tools-System/Benchmark.md index 3ffd64c78..8172be84c 100644 --- a/src/UserGuide/Master/Tree/Tools-System/Benchmark.md +++ b/src/UserGuide/Master/Tree/Tools-System/Benchmark.md @@ -52,22 +52,22 @@ Currently IoT-benchmark supports the following time series databases, versions a Table 1-1 Comparison of big data test benchmarks -## Software Installation and Environment Setup +## 1. Software Installation and Environment Setup -### Prerequisites +### 1.1 Prerequisites 1. Java 8 2. Maven 3.6+ 3. 
The corresponding appropriate version of the database, such as Apache IoTDB 1.0 -### How to Get IoT Benchmark +### 1.2 How to Get IoT Benchmark - **Get the binary package**: Enter https://github.com/thulab/iot-benchmark/releases to download the required installation package. Download it as a compressed file, select a folder to decompress and use it. - Compiled from source (can be tested with Apache IoTDB 1.0): - The first step (compile the latest IoTDB Session package): Enter the official website https://github.com/apache/iotdb/tree/rel/1.0 to download the IoTDB source code, and run the command `mvn clean package install -pl session -am -DskipTests` in the root directory to compiles the latest package for IoTDB Session. - The second step (compile the IoTDB Benchmark test package): Enter the official website https://github.com/thulab/iot-benchmark to download the source code, run `mvn clean package install -pl iotdb-1.0 -am -DskipTests` in the root directory to compile Apache IoTDB version 1.0 test package. The relative path between the test package and the root directory is `./iotdb-1.0/target/iotdb-1.0-0.0.1/iotdb-1.0-0.0.1`. -### IoT Benchmark's Test Package Structure +### 1.3 IoT Benchmark's Test Package Structure The directory structure of the test package is shown in Figure 1-3 below. The test configuration file is conf/config.properties, and the test startup scripts are benchmark\.sh (Linux & MacOS) and benchmark.bat (Windows). The detailed usage of the files is shown in Table 1-2. @@ -89,14 +89,14 @@ Figure 1-3 List of files and folders Table 1-2 Usage list of files and folders -### IoT Benchmark Execution Test +### 1.4 IoT Benchmark Execution Test 1. Modify the configuration file according to the test requirements. For the main parameters, see next chapter. The corresponding configuration file is conf/config.properties. For example, to test Apache IoTDB 1.0, you need to modify DB_SWITCH=IoTDB-100-SESSION_BY_TABLET. 2. Start the time series database under test. 3. Running. 4. Start IoT-benchmark to execute the test. Observe the status of the time series database and IoT-benchmark under test during execution, and view the results and analyze the test process after execution. -### IoT Benchmark Results Interpretation +### 1.5 IoT Benchmark Results Interpretation All the log files of the test are stored in the logs folder, and the test results are stored in the data/csvOutput folder after the test is completed. For example, after the test, we get the following result matrix: @@ -113,11 +113,11 @@ All the log files of the test are stored in the logs folder, and the test result - MIN: minimum operation time - Pn: the quantile value of the overall distribution of operations, for example, P25 is the lower quartile. -## Main Parameters +## 2. Main Parameters This chapter mainly explains the purpose and configuration method of the main parameters. -### Working Mode and Operation Proportion +### 2.1 Working Mode and Operation Proportion - The working mode parameter "BENCHMARK_WORK_MODE" can be selected as "default mode" and "server monitoring"; the "server monitoring" mode can be started directly by executing the ser-benchmark\.sh script, and the script will automatically modify this parameter. "Default mode" is a commonly used test mode, combined with the configuration of the OPERATION_PROPORTION parameter to achieve the definition of test operation proportions of "pure write", "pure query" and "read-write mix". 
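For illustration, building on the working-mode bullet above, a minimal `conf/config.properties` sketch for a read-write mixed run in default mode might look like the following. The mode and database identifiers are the values quoted elsewhere in this guide; the proportion figures are only an assumed example, not a recommended setting.

```properties
# Run the standard test workload ("default mode", see Table 1-3)
BENCHMARK_WORK_MODE=testWithDefaultPath
# Time series database under test, e.g. Apache IoTDB 1.0 via the tablet session interface
DB_SWITCH=IoTDB-100-SESSION_BY_TABLET
# Ratio of write operations to each query type listed in Table 1-6;
# the figures below only sketch an assumed read-write mix
OPERATION_PROPORTION=5:1:1:1:1:1:1:1:1:1:1
```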
@@ -130,11 +130,11 @@ Table 1-3 Test mode | default mode | testWithDefaultPath | Supports mixed workloads with multiple read and write operations | | server mode | serverMODE | Server resource usage monitoring mode (running in this mode is started by the ser-benchmark\.sh script, no need to manually configure this parameter) | -### Server Connection Information +### 2.2 Server Connection Information After the working mode is specified, how to inform IoT-benchmark of the information of the time series database under test? Currently, the type of the time-series database under test is informed through "DB_SWITCH"; the network address of the time-series database under test is informed through "HOST"; the network port of the time-series database under test is informed through "PORT"; the login user name of the time-series database under test is informed through "USERNAME"; "PASSWORD" informs the password of the login user of the time series database under test; informs the name of the time series database under test through "DB_NAME"; informs the connection authentication token of the time series database under test through "TOKEN" (used by InfluxDB 2.0). -### Write Scene Setup Parameters +### 2.3 Write Scene Setup Parameters Table 1-4 Write scene setup parameters @@ -156,7 +156,7 @@ Table 1-4 Write scene setup parameters According to the configuration parameters in Table 1-4, the test scenario can be described as follows: write 30,000 (100 devices, 300 sensors for each device) time series sequential data for a day on October 30, 2022 to the time series database under test, in total 2.592 billion data points. The 300 sensor data types of each device are 50 Booleans, 50 integers, 50 long integers, 50 floats, 50 doubles, and 50 characters. If we change the value of IS_OUT_OF_ORDER in the table to true, then the scenario is: write 30,000 time series data on October 30, 2022 to the measured time series database, and there are 30% out of order data ( arrives in the time series database later than other data points whose generation time is later than itself). -### Query Scene Setup Parameters +### 2.4 Query Scene Setup Parameters Table 1-5 Query scene setup parameters @@ -189,13 +189,13 @@ Table 1-6 Query types and example SQL According to the configuration parameters in Table 1-5, the test scenario can be described as follows: Execute 10 reverse order time range queries with value filtering for 2 devices and 2 sensors from the time series database under test. The SQL statement is: `select s_0,s_31from data where time >2022-10-30T00:00:00+08:00 and time < 2022-10-30T00:04:10+08:00 and s_0 > -5 and device in d_21,d_46 order by time desc`. 
-### Persistence of Test Process and Test Results +### 2.5 Persistence of Test Process and Test Results IoT-benchmark currently supports persisting the test process and test results to IoTDB, MySQL, and CSV through the configuration parameter "TEST_DATA_PERSISTENCE"; writing to MySQL and CSV can define the upper limit of the number of rows in the sub-database and sub-table, such as "RECORD_SPLIT=true, RECORD_SPLIT_MAX_LINE=10000000" means that each database table or CSV file is divided and stored according to the total number of 10 million rows; if the records are recorded to MySQL or IoTDB, database link information needs to be provided, including "TEST_DATA_STORE_IP" the IP address of the database, "TEST_DATA_STORE_PORT" the port number of the database, "TEST_DATA_STORE_DB" the name of the database, "TEST_DATA_STORE_USER" the database user name, and "TEST_DATA_STORE_PW" the database user password. If we set "TEST_DATA_PERSISTENCE=CSV", we can see the newly generated data folder under the IoT-benchmark root directory during and after the test execution, which contains the csv folder to record the test process; the csvOutput folder to record the test results . If we set "TEST_DATA_PERSISTENCE=MySQL", it will create a data table named "testWithDefaultPath_tested database name_remarks_test start time" in the specified MySQL database before the test starts to record the test process; it will record the test process in the "CONFIG" data table (create the table if it does not exist), write the configuration information of this test; when the test is completed, the result of this test will be written in the data table named "FINAL_RESULT" (create the table if it does not exist). -## Use Case +## 3. Use Case We take the application of CRRC Qingdao Sifang Vehicle Research Institute Co., Ltd. as an example, and refer to the scene described in "Apache IoTDB in Intelligent Operation and Maintenance Platform Storage" for practical operation instructions. @@ -222,7 +222,7 @@ Table 2-2 Virtual machine usage | 172.21.4.4 | KaiosDB | | 172.21.4.5 | MySQL | -### Write Test +### 3.1 Write Test Scenario description: Create 100 clients to simulate 100 trains, each train has 3000 sensors, the data type is DOUBLE, the data time interval is 500ms (2Hz), and they are sent sequentially. Referring to the above requirements, we need to modify the IoT-benchmark configuration parameters as listed in Table 2-3. @@ -297,7 +297,7 @@ So what is the resource usage of each server during the test? What is the specif Figure 2-6 Visualization of testing process in Tableau -### Query Test +### 3.2 Query Test Scenario description: In the writing test scenario, 10 clients are simulated to perform all types of query tasks on the data stored in the time series database Apache-IoTDB. The configuration is as follows. @@ -322,7 +322,7 @@ Results: Figure 2-7 Query test results -### Description of Other Parameters +### 3.3 Description of Other Parameters In the previous chapters, the write performance comparison between Apache-IoTDB and KairosDB was performed, but if the user wants to perform a simulated real write rate test, how to configure it? How to control if the test time is too long? Are there any regularities in the generated simulated data? If the IoT-Benchmark server configuration is low, can multiple machines be used to simulate pressure output? 
diff --git a/src/UserGuide/Master/Tree/Tools-System/CLI.md b/src/UserGuide/Master/Tree/Tools-System/CLI.md index 47a038952..9dd4a1248 100644 --- a/src/UserGuide/Master/Tree/Tools-System/CLI.md +++ b/src/UserGuide/Master/Tree/Tools-System/CLI.md @@ -26,7 +26,7 @@ IoTDB provides Cli/shell tools for users to interact with IoTDB server in comman > Note: In this document, \$IOTDB\_HOME represents the path of the IoTDB installation directory. -## Installation +## 1. Installation If you use the source code version of IoTDB, then under the root path of IoTDB, execute: @@ -38,9 +38,9 @@ After build, the IoTDB Cli will be in the folder "cli/target/iotdb-cli-{project. If you download the binary version, then the Cli can be used directly in sbin folder. -## Running +## 2. Running -### Running Cli +### 2.1 Running Cli After installation, there is a default user in IoTDB: `root`, and the default password is `root`. Users can use this username to try IoTDB Cli/Shell tool. The cli startup script is the `start-cli` file under the \$IOTDB\_HOME/bin folder. When starting the script, you need to specify the IP and PORT. (Make sure the IoTDB cluster is running properly when you use Cli/Shell tool to connect to it.) @@ -80,7 +80,7 @@ IoTDB> Enter ```quit``` or `exit` can exit Cli. -### Cli Parameters +### 2.2 Cli Parameters | Parameter name | Parameter type | Required | Description | Example | | :--------------------------- | :------------------------- | :------- | :----------------------------------------------------------- | :------------------ | @@ -109,7 +109,7 @@ The Windows system startup commands are as follows: Shell > sbin\start-cli.bat -h 10.129.187.21 -p 6667 -u root -pw root -disableISO8601 -maxPRC 10 ``` -### CLI Special Command +### 2.3 CLI Special Command Special commands of Cli are below. @@ -125,7 +125,7 @@ Special commands of Cli are below. | `help` | Get hints for CLI special commands | | `exit/quit` | Exit CLI | -### Note on using the CLI with OpenID Connect Auth enabled on Server side +### 2.4 Note on using the CLI with OpenID Connect Auth enabled on Server side Openid connect (oidc) uses keycloack as the authority authentication service of oidc service @@ -235,7 +235,7 @@ The response looks something like The interesting part here is the access token with the key `access_token`. This has to be passed as username (with parameter `-u`) and empty password to the CLI. -### Batch Operation of Cli +### 2.5 Batch Operation of Cli -e parameter is designed for the Cli/shell tool in the situation where you would like to manipulate IoTDB in batches through scripts. By using the -e parameter, you can operate IoTDB without entering the cli's input mode. diff --git a/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_apache.md b/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_apache.md index 98b69f17c..1587518da 100644 --- a/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_apache.md +++ b/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_apache.md @@ -20,11 +20,11 @@ --> # Cluster management tool -## IoTDB Data Directory Overview Tool +## 1. IoTDB Data Directory Overview Tool IoTDB data directory overview tool is used to print an overview of the IoTDB data directory structure. The location is tools/tsfile/print-iotdb-data-dir. 
-### Usage +### 1.1 Usage - For Windows: @@ -40,7 +40,7 @@ IoTDB data directory overview tool is used to print an overview of the IoTDB dat Note: if the storage path of the output overview file is not set, the default relative path "IoTDB_data_dir_overview.txt" will be used. -### Example +### 1.2 Example Use Windows in this example: @@ -81,11 +81,11 @@ data dir num:1 |============================================================== ````````````````````````` -## TsFile Sketch Tool +## 2. TsFile Sketch Tool TsFile sketch tool is used to print the content of a TsFile in sketch mode. The location is tools/tsfile/print-tsfile. -### Usage +### 2.1 Usage - For Windows: @@ -101,7 +101,7 @@ TsFile sketch tool is used to print the content of a TsFile in sketch mode. The Note: if the storage path of the output sketch file is not set, the default relative path "TsFile_sketch_view.txt" will be used. -### Example +### 2.2 Example Use Windows in this example: @@ -169,11 +169,11 @@ Explanations: - "||||||||||||||||||||" is the guide information added to enhance readability, not the actual data stored in TsFile. - The last printed "IndexOfTimerseriesIndex Tree" is a reorganization of the metadata index tree at the end of the TsFile, which is convenient for intuitive understanding, and again not the actual data stored in TsFile. -## TsFile Resource Sketch Tool +## 3. TsFile Resource Sketch Tool TsFile resource sketch tool is used to print the content of a TsFile resource file. The location is tools/tsfile/print-tsfile-resource-files. -### Usage +### 3.1 Usage - For Windows: @@ -187,7 +187,7 @@ TsFile resource sketch tool is used to print the content of a TsFile resource fi ./print-tsfile-resource-files.sh ``` -### Example +### 3.2 Example Use Windows in this example: diff --git a/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_timecho.md b/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_timecho.md index d77787934..651154352 100644 --- a/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_timecho.md +++ b/src/UserGuide/Master/Tree/Tools-System/Maintenance-Tool_timecho.md @@ -20,14 +20,14 @@ --> # Cluster management tool -## IoTDB-OpsKit +## 1. IoTDB-OpsKit The IoTDB OpsKit is an easy-to-use operation and maintenance tool (enterprise version tool). It is designed to solve the operation and maintenance problems of multiple nodes in the IoTDB distributed system. It mainly includes cluster deployment, cluster start and stop, elastic expansion, configuration update, data export and other functions, thereby realizing one-click command issuance for complex database clusters, which greatly Reduce management difficulty. This document will explain how to remotely deploy, configure, start and stop IoTDB cluster instances with cluster management tools. -### Environment dependence +### 1.1 Environment dependence This tool is a supporting tool for TimechoDB(Enterprise Edition based on IoTDB). You can contact your sales representative to obtain the tool download method. @@ -35,7 +35,7 @@ The machine where IoTDB is to be deployed needs to rely on jdk 8 and above, lsof Tip: The IoTDB cluster management tool requires an account with root privileges -### Deployment method +### 1.2 Deployment method #### Download and install @@ -61,7 +61,7 @@ iotdbctl cluster check example /sbin/iotdbctl cluster check example ``` -### Introduction to cluster configuration files +### 1.3 Introduction to cluster configuration files * There is a cluster configuration yaml file in the `iotdbctl/config` directory. 
The yaml file name is the cluster name. There can be multiple yaml files. In order to facilitate users to configure yaml files, a `default_cluster.yaml` example is provided under the iotdbctl/config directory. * The yaml file configuration consists of five major parts: `global`, `confignode_servers`, `datanode_servers`, `grafana_server`, and `prometheus_server` @@ -152,7 +152,7 @@ If metrics are configured in `iotdb-system.properties` and `iotdb-system.propert Note: How to configure the value corresponding to the yaml key to contain special characters such as: etc. It is recommended to use double quotes for the entire value, and do not use paths containing spaces in the corresponding file paths to prevent abnormal recognition problems. -### scenes to be used +### 1.4 scenes to be used #### Clean data @@ -247,7 +247,7 @@ iotdbctl cluster start default_cluster For more detailed parameters, please refer to the cluster configuration file introduction above -### Command +### 1.5 Command The basic usage of this tool is: ```bash @@ -295,7 +295,7 @@ iotdbctl cluster deploy default_cluster -### Detailed command execution process +### 1.6 Detailed command execution process The following commands are executed using default_cluster.yaml as an example, and users can modify them to their own cluster files to execute @@ -741,7 +741,7 @@ iotdbctl cluster exportschema default_cluster -N datanode1 -param "-t ./ -pf ./p -### Introduction to Cluster Deployment Tool Samples +### 1.7 Introduction to Cluster Deployment Tool Samples In the cluster deployment tool installation directory config/example, there are three yaml examples. If necessary, you can copy them to config and modify them. @@ -752,11 +752,11 @@ In the cluster deployment tool installation directory config/example, there are | default\_3c3d\_grafa\_prome | 3 confignode and 3 datanode, Grafana, Prometheus configuration examples | -## IoTDB Data Directory Overview Tool +## 2. IoTDB Data Directory Overview Tool IoTDB data directory overview tool is used to print an overview of the IoTDB data directory structure. The location is tools/tsfile/print-iotdb-data-dir. -### Usage +### 2.1 Usage - For Windows: @@ -772,7 +772,7 @@ IoTDB data directory overview tool is used to print an overview of the IoTDB dat Note: if the storage path of the output overview file is not set, the default relative path "IoTDB_data_dir_overview.txt" will be used. -### Example +### 2.2 Example Use Windows in this example: @@ -813,11 +813,11 @@ data dir num:1 |============================================================== ````````````````````````` -## TsFile Sketch Tool +## 3. TsFile Sketch Tool TsFile sketch tool is used to print the content of a TsFile in sketch mode. The location is tools/tsfile/print-tsfile. -### Usage +### 3.1 Usage - For Windows: @@ -833,7 +833,7 @@ TsFile sketch tool is used to print the content of a TsFile in sketch mode. The Note: if the storage path of the output sketch file is not set, the default relative path "TsFile_sketch_view.txt" will be used. -### Example +### 3.2 Example Use Windows in this example: @@ -901,11 +901,11 @@ Explanations: - "||||||||||||||||||||" is the guide information added to enhance readability, not the actual data stored in TsFile. - The last printed "IndexOfTimerseriesIndex Tree" is a reorganization of the metadata index tree at the end of the TsFile, which is convenient for intuitive understanding, and again not the actual data stored in TsFile. -## TsFile Resource Sketch Tool +## 4. 
TsFile Resource Sketch Tool TsFile resource sketch tool is used to print the content of a TsFile resource file. The location is tools/tsfile/print-tsfile-resource-files. -### Usage +### 4.1 Usage - For Windows: @@ -919,7 +919,7 @@ TsFile resource sketch tool is used to print the content of a TsFile resource fi ./print-tsfile-resource-files.sh ``` -### Example +### 4.2 Example Use Windows in this example: diff --git a/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_apache.md b/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_apache.md index 9aa65e5dd..fcb72d515 100644 --- a/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_apache.md +++ b/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_apache.md @@ -23,9 +23,9 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel Deployment](../Deployment-and-Maintenance/Monitoring-panel-deployment.md) section. -## Prometheus +## 1. Prometheus -### The mapping from metric type to prometheus format +### 1.1 The mapping from metric type to prometheus format > For metrics whose Metric Name is name and Tags are K1=V1, ..., Kn=Vn, the mapping is as follows, where value is a > specific value @@ -38,7 +38,7 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel D | Rate | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m1"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m5"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m15"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="mean"} value | | Timer | name_seconds_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value | -### Config File +### 1.2 Config File 1) Taking DataNode as an example, modify the iotdb-system.properties configuration file as follows: @@ -64,7 +64,7 @@ file_count{name="seq",} 2.0 ... ``` -### Prometheus + Grafana +### 1.3 Prometheus + Grafana As shown above, IoTDB exposes monitoring metrics data in the standard Prometheus format to the outside world. Prometheus can be used to collect and store monitoring indicators, and Grafana can be used to visualize monitoring indicators. @@ -107,7 +107,7 @@ The following documents may help you have a good journey with Prometheus and Gra [Grafana query metrics from Prometheus](https://prometheus.io/docs/visualization/grafana/#grafana-support-for-prometheus) -## Apache IoTDB Dashboard +## 2. Apache IoTDB Dashboard `Apache IoTDB Dashboard` is available as a supplement to IoTDB Enterprise Edition, designed for unified centralized operations and management. With it, multiple clusters can be monitored through a single panel. You can access the Dashboard's Json file by contacting Commerce. @@ -118,7 +118,7 @@ The following documents may help you have a good journey with Prometheus and Gra -### Cluster Overview +### 2.1 Cluster Overview Including but not limited to: @@ -132,7 +132,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%A6%82%E8%A7%88.png) -### Data Writing +### 2.2 Data Writing Including but not limited to: @@ -142,7 +142,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%86%99%E5%85%A5.png) -### Data Querying +### 2.3 Data Querying Including but not limited to: @@ -156,7 +156,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%9F%A5%E8%AF%A2.png) -### Storage Engine +### 2.4 Storage Engine Including but not limited to: @@ -166,7 +166,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E.png) -### System Monitoring +### 2.5 System Monitoring Including but not limited to: diff --git a/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_timecho.md b/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_timecho.md index 5e0964932..aa3301a6c 100644 --- a/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_timecho.md +++ b/src/UserGuide/Master/Tree/Tools-System/Monitor-Tool_timecho.md @@ -23,9 +23,9 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel Deployment](../Deployment-and-Maintenance/Monitoring-panel-deployment.md) section. -## Prometheus +## 1. Prometheus -### The mapping from metric type to prometheus format +### 1.1 The mapping from metric type to prometheus format > For metrics whose Metric Name is name and Tags are K1=V1, ..., Kn=Vn, the mapping is as follows, where value is a > specific value @@ -38,7 +38,7 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel D | Rate | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m1"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m5"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m15"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="mean"} value | | Timer | name_seconds_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value | -### Config File +### 1.2 Config File 1) Taking DataNode as an example, modify the iotdb-system.properties configuration file as follows: @@ -64,7 +64,7 @@ file_count{name="seq",} 2.0 ... ``` -### Prometheus + Grafana +### 1.3 Prometheus + Grafana As shown above, IoTDB exposes monitoring metrics data in the standard Prometheus format to the outside world. Prometheus can be used to collect and store monitoring indicators, and Grafana can be used to visualize monitoring indicators. @@ -107,7 +107,7 @@ The following documents may help you have a good journey with Prometheus and Gra [Grafana query metrics from Prometheus](https://prometheus.io/docs/visualization/grafana/#grafana-support-for-prometheus) -## Apache IoTDB Dashboard +## 2. Apache IoTDB Dashboard We introduce the Apache IoTDB Dashboard, designed for unified centralized operations and management. With it, multiple clusters can be monitored through a single panel. @@ -118,7 +118,7 @@ We introduce the Apache IoTDB Dashboard, designed for unified centralized operat You can access the Dashboard's Json file in the enterprise edition. -### Cluster Overview +### 2.1 Cluster Overview Including but not limited to: @@ -132,7 +132,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%A6%82%E8%A7%88.png) -### Data Writing +### 2.2 Data Writing Including but not limited to: @@ -142,7 +142,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%86%99%E5%85%A5.png) -### Data Querying +### 2.3 Data Querying Including but not limited to: @@ -156,7 +156,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%9F%A5%E8%AF%A2.png) -### Storage Engine +### 2.4 Storage Engine Including but not limited to: @@ -166,7 +166,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E.png) -### System Monitoring +### 2.5 System Monitoring Including but not limited to: diff --git a/src/UserGuide/Master/Tree/Tools-System/Workbench_timecho.md b/src/UserGuide/Master/Tree/Tools-System/Workbench_timecho.md index 8b124a643..cb0d17f1e 100644 --- a/src/UserGuide/Master/Tree/Tools-System/Workbench_timecho.md +++ b/src/UserGuide/Master/Tree/Tools-System/Workbench_timecho.md @@ -2,10 +2,10 @@ The deployment of the visualization console can refer to the document [Workbench Deployment](../Deployment-and-Maintenance/workbench-deployment_timecho.md) chapter. -## Product Introduction +## 1. Product Introduction IoTDB Visualization Console is an extension component developed for industrial scenarios based on the IoTDB Enterprise Edition time series database. It integrates real-time data collection, storage, and analysis, aiming to provide users with efficient and reliable real-time data storage and query solutions. It features lightweight, high performance, and ease of use, seamlessly integrating with the Hadoop and Spark ecosystems. It is suitable for high-speed writing and complex analytical queries of massive time series data in industrial IoT applications. -## Instructions for Use +## 2. 
Instructions for Use | **Functional Module** | **Functional Description** | | ---------------------- | ------------------------------------------------------------ | | Instance Management | Support unified management of connected instances, support creation, editing, and deletion, while visualizing the relationships between multiple instances, helping customers manage multiple database instances more clearly | diff --git a/src/UserGuide/Master/Tree/User-Manual/AINode_apache.md b/src/UserGuide/Master/Tree/User-Manual/AINode_apache.md index 3045fa625..20f5e1192 100644 --- a/src/UserGuide/Master/Tree/User-Manual/AINode_apache.md +++ b/src/UserGuide/Master/Tree/User-Manual/AINode_apache.md @@ -33,7 +33,7 @@ The responsibilities of the three nodes are as follows: - **DataNode**: responsible for receiving and parsing SQL requests from users; responsible for storing time-series data; responsible for preprocessing computation of data. - **AINode**: responsible for model file import creation and model inference. -## Advantageous features +## 1. Advantageous features Compared with building a machine learning service alone, it has the following advantages: @@ -50,7 +50,7 @@ Compared with building a machine learning service alone, it has the following ad -## Basic Concepts +## 2. Basic Concepts - **Model**: a machine learning model that takes time-series data as input and outputs the results or decisions of an analysis task. Model is the basic management unit of AINode, which supports adding (registration), deleting, checking, and using (inference) of models. - **Create**: Load externally designed or trained model files or algorithms into MLNode for unified management and use by IoTDB. @@ -61,16 +61,16 @@ Compared with building a machine learning service alone, it has the following ad :::: -## Installation and Deployment +## 3. Installation and Deployment The deployment of AINode can be found in the document [Deployment Guidelines](../Deployment-and-Maintenance/AINode_Deployment_apache.md#ainode-deployment) . -## Usage Guidelines +## 4. Usage Guidelines AINode provides model creation and deletion process for deep learning models related to timing data. Built-in models do not need to be created and deleted, they can be used directly, and the built-in model instances created after inference is completed will be destroyed automatically. -### Registering Models +### 4.1 Registering Models A trained deep learning model can be registered by specifying the vector dimensions of the model's inputs and outputs, which can be used for model inference. @@ -156,7 +156,7 @@ After the SQL is executed, the registration process will be carried out asynchro Once the model registration is complete, you can call specific functions and perform model inference by using normal queries. -### Viewing Models +### 4.2 Viewing Models Successfully registered models can be queried for model-specific information through the show models command. The SQL definition is as follows: @@ -204,7 +204,7 @@ IoTDB> show models We have registered the corresponding model earlier, you can view the model status through the corresponding designation, active indicates that the model is successfully registered and can be used for inference. -### Delete Model +### 4.3 Delete Model For a successfully registered model, the user can delete it via SQL. In addition to deleting the meta information on the configNode, this operation also deletes all the related model files under the AINode. 
The SQL is as follows: @@ -214,7 +214,7 @@ drop model You need to specify the model model_name that has been successfully registered to delete the corresponding model. Since model deletion involves the deletion of data on multiple nodes, the operation will not be completed immediately, and the state of the model at this time is DROPPING, and the model in this state cannot be used for model inference. -### Using Built-in Model Reasoning +### 4.4 Using Built-in Model Reasoning The SQL syntax is as follows: @@ -284,7 +284,7 @@ IoTDB> call inference(_Stray, "select s0 from root.eg.airline", k=2) Total line number = 144 ``` -### Reasoning with Deep Learning Models +### 4.5 Reasoning with Deep Learning Models The SQL syntax is as follows: @@ -444,7 +444,7 @@ Total line number = 4 In the result set, each row's label corresponds to the output of the anomaly detection model after inputting each group of 24 rows of data. -## Privilege Management +## 5. Privilege Management When using AINode related functions, the authentication of IoTDB itself can be used to do a permission management, users can only use the model management related functions when they have the USE_MODEL permission. When using the inference function, the user needs to have the permission to access the source sequence corresponding to the SQL of the input model. @@ -453,9 +453,9 @@ When using AINode related functions, the authentication of IoTDB itself can be u | USE_MODEL | create model/show models/drop model | √ | √ | x | | READ_DATA| call inference | √ | √|√ | -## Practical Examples +## 6. Practical Examples -### Power Load Prediction +### 6.1 Power Load Prediction In some industrial scenarios, there is a need to predict power loads, which can be used to optimise power supply, conserve energy and resources, support planning and expansion, and enhance power system reliability. @@ -526,7 +526,7 @@ The data before 10/24 00:00 represents the past data input to the model, the blu As can be seen, we have used the relationship between the six load information and the corresponding time oil temperatures for the past 96 hours (4 days) to model the possible changes in this data for the oil temperature for the next 48 hours (2 days) based on the inter-relationships between the sequences learned previously, and it can be seen that the predicted curves maintain a high degree of consistency in trend with the actual results after visualisation. -### Power Prediction +### 6.2 Power Prediction Power monitoring of current, voltage and power data is required in substations for detecting potential grid problems, identifying faults in the power system, effectively managing grid loads and analysing power system performance and trends. @@ -592,7 +592,7 @@ The data before 02/14 20:48 represents the past data input to the model, the blu It can be seen that we used the voltage data from the past 10 minutes and, based on the previously learned inter-sequence relationships, modeled the possible changes in the phase C voltage data for the next 5 minutes. The visualized forecast curve shows a certain degree of synchronicity with the actual results in terms of trend. -### Anomaly Detection +### 6.3 Anomaly Detection In the civil aviation and transport industry, there exists a need for anomaly detection of the number of passengers travelling on an aircraft. The results of anomaly detection can be used to guide the adjustment of flight scheduling to make the organisation more efficient. 
diff --git a/src/UserGuide/Master/Tree/User-Manual/AINode_timecho.md b/src/UserGuide/Master/Tree/User-Manual/AINode_timecho.md index b797c8dfb..9c593ad07 100644 --- a/src/UserGuide/Master/Tree/User-Manual/AINode_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/AINode_timecho.md @@ -33,7 +33,7 @@ The responsibilities of the three nodes are as follows: - **DataNode**: responsible for receiving and parsing SQL requests from users; responsible for storing time-series data; responsible for preprocessing computation of data. - **AINode**: responsible for model file import creation and model inference. -## Advantageous features +## 1. Advantageous features Compared with building a machine learning service alone, it has the following advantages: @@ -50,7 +50,7 @@ Compared with building a machine learning service alone, it has the following ad -## Basic Concepts +## 2. Basic Concepts - **Model**: a machine learning model that takes time-series data as input and outputs the results or decisions of an analysis task. Model is the basic management unit of AINode, which supports adding (registration), deleting, checking, and using (inference) of models. - **Create**: Load externally designed or trained model files or algorithms into MLNode for unified management and use by IoTDB. @@ -61,16 +61,16 @@ Compared with building a machine learning service alone, it has the following ad :::: -## Installation and Deployment +## 3. Installation and Deployment The deployment of AINode can be found in the document [Deployment Guidelines](../Deployment-and-Maintenance/AINode_Deployment_timecho.md#AINode-部署) . -## Usage Guidelines +## 4. Usage Guidelines AINode provides model creation and deletion process for deep learning models related to timing data. Built-in models do not need to be created and deleted, they can be used directly, and the built-in model instances created after inference is completed will be destroyed automatically. -### Registering Models +### 4.1 Registering Models A trained deep learning model can be registered by specifying the vector dimensions of the model's inputs and outputs, which can be used for model inference. @@ -156,7 +156,7 @@ After the SQL is executed, the registration process will be carried out asynchro Once the model registration is complete, you can call specific functions and perform model inference by using normal queries. -### Viewing Models +### 4.2 Viewing Models Successfully registered models can be queried for model-specific information through the show models command. The SQL definition is as follows: @@ -204,7 +204,7 @@ IoTDB> show models We have registered the corresponding model earlier, you can view the model status through the corresponding designation, active indicates that the model is successfully registered and can be used for inference. -### Delete Model +### 4.3 Delete Model For a successfully registered model, the user can delete it via SQL. In addition to deleting the meta information on the configNode, this operation also deletes all the related model files under the AINode. The SQL is as follows: @@ -214,7 +214,7 @@ drop model You need to specify the model model_name that has been successfully registered to delete the corresponding model. Since model deletion involves the deletion of data on multiple nodes, the operation will not be completed immediately, and the state of the model at this time is DROPPING, and the model in this state cannot be used for model inference. 
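As a concrete illustration of the statement above, a deletion might be issued as follows; the model name here is hypothetical and stands in for whatever name was used at registration time:

```SQL
-- "dlinear_example" is a placeholder for a previously registered model name
drop model dlinear_example
```

After the drop completes, the model no longer appears in the `show models` output.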
-### Using Built-in Model Reasoning +### 4.4 Using Built-in Model Reasoning The SQL syntax is as follows: @@ -284,7 +284,7 @@ IoTDB> call inference(_Stray, "select s0 from root.eg.airline", k=2) Total line number = 144 ``` -### Reasoning with Deep Learning Models +### 4.5 Reasoning with Deep Learning Models The SQL syntax is as follows: @@ -444,7 +444,7 @@ Total line number = 4 In the result set, each row's label corresponds to the output of the anomaly detection model after inputting each group of 24 rows of data. -## Privilege Management +## 5. Privilege Management When using AINode related functions, the authentication of IoTDB itself can be used to do a permission management, users can only use the model management related functions when they have the USE_MODEL permission. When using the inference function, the user needs to have the permission to access the source sequence corresponding to the SQL of the input model. @@ -453,9 +453,9 @@ When using AINode related functions, the authentication of IoTDB itself can be u | USE_MODEL | create model/show models/drop model | √ | √ | x | | READ_DATA| call inference | √ | √|√ | -## Practical Examples +## 6. Practical Examples -### Power Load Prediction +### 6.1 Power Load Prediction In some industrial scenarios, there is a need to predict power loads, which can be used to optimise power supply, conserve energy and resources, support planning and expansion, and enhance power system reliability. @@ -526,7 +526,7 @@ The data before 10/24 00:00 represents the past data input to the model, the blu As can be seen, we have used the relationship between the six load information and the corresponding time oil temperatures for the past 96 hours (4 days) to model the possible changes in this data for the oil temperature for the next 48 hours (2 days) based on the inter-relationships between the sequences learned previously, and it can be seen that the predicted curves maintain a high degree of consistency in trend with the actual results after visualisation. -### Power Prediction +### 6.2 Power Prediction Power monitoring of current, voltage and power data is required in substations for detecting potential grid problems, identifying faults in the power system, effectively managing grid loads and analysing power system performance and trends. @@ -592,7 +592,7 @@ The data before 02/14 20:48 represents the past data input to the model, the blu It can be seen that we used the voltage data from the past 10 minutes and, based on the previously learned inter-sequence relationships, modeled the possible changes in the phase C voltage data for the next 5 minutes. The visualized forecast curve shows a certain degree of synchronicity with the actual results in terms of trend. -### Anomaly Detection +### 6.3 Anomaly Detection In the civil aviation and transport industry, there exists a need for anomaly detection of the number of passengers travelling on an aircraft. The results of anomaly detection can be used to guide the adjustment of flight scheduling to make the organisation more efficient. diff --git a/src/UserGuide/Master/Tree/User-Manual/Audit-Log_timecho.md b/src/UserGuide/Master/Tree/User-Manual/Audit-Log_timecho.md index ea72a56e0..78744238c 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Audit-Log_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/Audit-Log_timecho.md @@ -21,14 +21,14 @@ # Audit log -## Background of the function +## 1. 
Background of the function Audit log is the record credentials of a database, which can be queried by the audit log function to ensure information security by various operations such as user add, delete, change and check in the database. With the audit log function of IoTDB, the following scenarios can be achieved: - We can decide whether to record audit logs according to the source of the link ( human operation or not), such as: non-human operation such as hardware collector write data no need to record audit logs, human operation such as ordinary users through cli, workbench and other tools to operate the data need to record audit logs. - Filter out system-level write operations, such as those recorded by the IoTDB monitoring system itself. -### Scene Description +### 1.1 Scene Description #### Logging all operations (add, delete, change, check) of all users @@ -43,7 +43,7 @@ Client Sources: No audit logs are required for data written by the hardware collector via Session/JDBC/MQTT if it is a non-human action. -## Function Definition +## 2. Function Definition It is available through through configurations: @@ -57,7 +57,7 @@ It is available through through configurations: 2. data and metadata query operations 3. metadata class adding, modifying, and deleting operations. -### configuration item +### 2.1 configuration item In iotdb-system.properties, change the following configurations: diff --git a/src/UserGuide/Master/Tree/User-Manual/Authority-Management.md b/src/UserGuide/Master/Tree/User-Manual/Authority-Management.md index 0724d13c9..ea58dc7bd 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Authority-Management.md +++ b/src/UserGuide/Master/Tree/User-Manual/Authority-Management.md @@ -25,41 +25,41 @@ IoTDB provides permission management operations, offering users the ability to m This article introduces the basic concepts of the permission module in IoTDB, including user definition, permission management, authentication logic, and use cases. In the JAVA programming environment, you can use the [JDBC API](https://chat.openai.com/API/Programming-JDBC.md) to execute permission management statements individually or in batches. -## Basic Concepts +## 1. Basic Concepts -### User +### 1.1 User A user is a legitimate user of the database. Each user corresponds to a unique username and has a password as a means of authentication. Before using the database, a person must provide a valid (i.e., stored in the database) username and password for a successful login. -### Permission +### 1.2 Permission The database provides various operations, but not all users can perform all operations. If a user can perform a certain operation, they are said to have permission to execute that operation. Permissions are typically limited in scope by a path, and [path patterns](https://chat.openai.com/Basic-Concept/Data-Model-and-Terminology.md) can be used to manage permissions flexibly. -### Role +### 1.3 Role A role is a collection of multiple permissions and has a unique role name as an identifier. Roles often correspond to real-world identities (e.g., a traffic dispatcher), and a real-world identity may correspond to multiple users. Users with the same real-world identity often have the same permissions, and roles are abstractions for unified management of such permissions. -### Default Users and Roles +### 1.4 Default Users and Roles After installation and initialization, IoTDB includes a default user: root, with the default password root. 
This user is an administrator with fixed permissions, which cannot be granted or revoked and cannot be deleted. There is only one administrator user in the database. A newly created user or role does not have any permissions initially. -## User Definition +## 2. User Definition Users with MANAGE_USER and MANAGE_ROLE permissions or administrators can create users or roles. Creating a user must meet the following constraints. -### Username Constraints +### 2.1 Username Constraints 4 to 32 characters, supports the use of uppercase and lowercase English letters, numbers, and special characters (`!@#$%^&*()_+-=`). Users cannot create users with the same name as the administrator. -### Password Constraints +### 2.2 Password Constraints 4 to 32 characters, can use uppercase and lowercase letters, numbers, and special characters (`!@#$%^&*()_+-=`). Passwords are encrypted by default using MD5. -### Role Name Constraints +### 2.3 Role Name Constraints 4 to 32 characters, supports the use of uppercase and lowercase English letters, numbers, and special characters (`!@#$%^&*()_+-=`). @@ -67,11 +67,11 @@ Users cannot create roles with the same name as the administrator. -## Permission Management +## 3. Permission Management IoTDB primarily has two types of permissions: series permissions and global permissions. -### Series Permissions +### 3.1 Series Permissions Series permissions constrain the scope and manner in which users access data. IOTDB support authorization for both absolute paths and prefix-matching paths, and can be effective at the timeseries granularity. @@ -87,7 +87,7 @@ The table below describes the types and scope of these permissions: | WRITE_SCHEMA | Allows obtaining detailed information about the metadata tree under the authorized path.
Allows creating, deleting, and modifying time series, templates, views, etc. under the authorized path. When creating or modifying views, it checks the WRITE_SCHEMA permission for the view path and READ_SCHEMA permission for the data source. When querying and inserting data into views, it checks the READ_DATA and WRITE_DATA permissions for the view path.
Allows setting, unsetting, and viewing TTL under the authorized path.
Allows attaching or detaching templates under the authorized path. | -### Global Permissions +### 3.2 Global Permissions Global permissions constrain the database functions that users can use and restrict commands that change the system and task state. Once a user obtains global authorization, they can manage the database. The table below describes the types of system permissions: @@ -115,7 +115,7 @@ Regarding template permissions: -### Granting and Revoking Permissions +### 3.3 Granting and Revoking Permissions In IoTDB, users can obtain permissions through three methods: @@ -135,7 +135,7 @@ Revoking a user's permissions can be done through the following methods: -## Authentication +## 4. Authentication User permissions mainly consist of three parts: permission scope (path), permission type, and the "with grant option" flag: @@ -160,7 +160,7 @@ Please note that the following operations require checking multiple permissions: 3. View permissions and data source permissions are independent. Performing read and write operations on a view will only check the permissions of the view itself and will not perform permission validation on the source path. -## Function Syntax and Examples +## 5. Function Syntax and Examples IoTDB provides composite permissions for user authorization: @@ -174,7 +174,7 @@ Composite permissions are not specific permissions themselves but a shorthand wa The following series of specific use cases will demonstrate the usage of permission statements. Non-administrator users executing the following statements require obtaining the necessary permissions, which are indicated after the operation description. -### User and Role Related +### 5.1 User and Role Related - Create user (Requires MANAGE_USER permission) @@ -273,7 +273,7 @@ ALTER USER SET PASSWORD ; eg: ALTER USER tempuser SET PASSWORD 'newpwd'; ``` -### Authorization and Deauthorization +### 5.2 Authorization and Deauthorization Users can use authorization statements to grant permissions to other users. The syntax is as follows: @@ -337,7 +337,7 @@ eg: REVOKE ALL ON ROOT.** FROM USER user1; -## Examples +## 6. Examples Based on the described [sample data](https://github.com/thulab/iotdb/files/4438687/OtherMaterial-Sample.Data.txt), IoTDB's sample data may belong to different power generation groups such as ln, sgcc, and so on. Different power generation groups do not want other groups to access their database data, so we need to implement data isolation at the group level. @@ -439,7 +439,7 @@ IoTDB> INSERT INTO root.ln.wf01.wt01(timestamp, status) values(1509465600000, tr Msg: 803: No permissions for this operation, please add privilege WRITE_DATA on [root.ln.wf01.wt01.status] ``` -## Other Explanations +## 7. Other Explanations Roles are collections of permissions, and both permissions and roles are attributes of users. In other words, a role can have multiple permissions, and a user can have multiple roles and permissions (referred to as the user's self-permissions). @@ -451,7 +451,7 @@ At the same time, changes to roles will be immediately reflected in all users wh -## Upgrading from a previous version +## 8. Upgrading from a previous version Before version 1.3, there were many different permission types. In 1.3 version's implementation, we have streamlined the permission types. 
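To tie the permission statements above together, here is a minimal sketch of the group-level isolation scenario discussed in the examples, assuming a hypothetical user `ln_write_user` whose access is limited to the `ln` group (the user name and password are illustrative only):

```sql
-- Create a user for the ln group (requires MANAGE_USER)
CREATE USER ln_write_user 'write_pwd';

-- Limit the user to reading and writing data under the ln group's paths
GRANT READ_DATA, WRITE_DATA ON root.ln.** TO USER ln_write_user;

-- Withdraw the write permission later if it is no longer needed
REVOKE WRITE_DATA ON root.ln.** FROM USER ln_write_user;
```

With only these series permissions, such a user cannot read or write data under other groups such as `root.sgcc.**`, which is the isolation effect described above.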
diff --git a/src/UserGuide/Master/Tree/User-Manual/Data-Recovery.md b/src/UserGuide/Master/Tree/User-Manual/Data-Recovery.md index ab0cd6ea0..a1e374dd6 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Data-Recovery.md +++ b/src/UserGuide/Master/Tree/User-Manual/Data-Recovery.md @@ -19,11 +19,11 @@ --> -## Data Recovery +# Data Recovery Used to fix issues in data, such as data in the sequential space not being arranged in chronological order. -### START REPAIR DATA +## 1. START REPAIR DATA Start a repair task to scan all files created before the current time. The repair task will scan all tsfiles and repair some bad files. @@ -34,7 +34,7 @@ IoTDB> START REPAIR DATA ON LOCAL IoTDB> START REPAIR DATA ON CLUSTER ``` -### STOP REPAIR DATA +## 2. STOP REPAIR DATA Stop the running repair task. To restart a stopped repair task and recover its repair progress, execute the SQL `START REPAIR DATA` again. diff --git a/src/UserGuide/Master/Tree/User-Manual/Data-Sync_apache.md b/src/UserGuide/Master/Tree/User-Manual/Data-Sync_apache.md index a2ffa995e..1db4190cd 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Data-Sync_apache.md +++ b/src/UserGuide/Master/Tree/User-Manual/Data-Sync_apache.md @@ -23,9 +23,9 @@ Data synchronization is a typical requirement in industrial Internet of Things (IoT). Through data synchronization mechanisms, it is possible to achieve data sharing between IoTDB instances, and to establish a complete data link to meet the needs for internal and external network data interconnectivity, edge-cloud synchronization, data migration, and data backup. -## Function Overview +## 1. Function Overview -### Data Synchronization +### 1.1 Data Synchronization A data synchronization task consists of three stages: @@ -77,7 +77,7 @@ By declaratively configuring the specific content of the three parts through SQL -### Functional limitations and instructions +### 1.2 Functional limitations and instructions The schema and auth synchronization functions have the following limitations: @@ -89,7 +89,7 @@ The schema and auth synchronization functions have the following limitations: - During data synchronization tasks, please avoid performing any deletion operations to prevent inconsistent states between the two ends. -## Usage Instructions +## 2. Usage Instructions Data synchronization tasks have three states: RUNNING, STOPPED, and DROPPED. The task state transitions are shown in the following diagram: @@ -99,7 +99,7 @@ After creation, the task will start directly, and when the task stops abnormally The following SQL statements are provided for state management of synchronization tasks. -### Create Task +### 2.1 Create Task Use the `CREATE PIPE` statement to create a data synchronization task. The `PipeId` and `sink` attributes are required, while `source` and `processor` are optional. When entering the SQL, note that the order of the `SOURCE` and `SINK` plugins cannot be swapped. @@ -123,7 +123,7 @@ WITH SINK ( **IF NOT EXISTS semantics**: Used in creation operations to ensure that the create command is executed when the specified Pipe does not exist, preventing errors caused by attempting to create an existing Pipe.
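As a concrete illustration of the skeleton above, the following minimal sketch creates a synchronization task using only the pre-installed plugins described later in this document; the pipe name and the target address are placeholders:

```sql
create pipe if not exists a2b
with source (
  'source' = 'iotdb-source'            -- default built-in source plugin
)
with processor (
  'processor' = 'do-nothing-processor' -- pass events through unchanged
)
with sink (
  'node-urls' = '127.0.0.1:6668'       -- data service port of a DataNode on the target IoTDB
)
```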
-### Start Task +### 2.2 Start Task Start processing data: @@ -131,7 +131,7 @@ Start processing data: START PIPE ``` -### Stop Task +### 2.3 Stop Task Stop processing data: @@ -139,7 +139,7 @@ Stop processing data: STOP PIPE ``` -### Delete Task +### 2.4 Delete Task Deletes the specified task: @@ -150,7 +150,7 @@ DROP PIPE [IF EXISTS] Deleting a task does not require stopping the synchronization task first. -### View Task +### 2.5 View Task View all tasks: @@ -186,7 +186,7 @@ The meanings of each column are as follows: - **RemainingEventCount (Statistics with Delay)**: The number of remaining events, which is the total count of all events in the current data synchronization task, including data and schema synchronization events, as well as system and user-defined events. - **EstimatedRemainingSeconds (Statistics with Delay)**: The estimated remaining time, based on the current number of events and the rate at the pipe, to complete the transfer. -### Synchronization Plugins +### 2.6 Synchronization Plugins To make the overall architecture more flexible to match different synchronization scenario requirements, we support plugin assembly within the synchronization task framework. The system comes with some pre-installed common plugins that you can use directly. At the same time, you can also customize processor plugins and Sink plugins, and load them into the IoTDB system for use. You can view the plugins in the system (including custom and built-in plugins) with the following statement: @@ -257,9 +257,9 @@ Detailed introduction of pre-installed plugins is as follows (for detailed param For importing custom plugins, please refer to the [Stream Processing](./Streaming_apache.md#custom-stream-processing-plugin-management) section. -## Use examples +## 3. Use examples -### Full data synchronisation +### 3.1 Full data synchronisation This example is used to demonstrate the synchronisation of all data from one IoTDB to another IoTDB with the data link as shown below: @@ -274,7 +274,7 @@ with sink ( 'node-urls' = '127.0.0.1:6668', -- The URL of the data service port of the DataNode node on the target IoTDB ``` -### Partial data synchronization +### 3.2 Partial data synchronization This example is used to demonstrate the synchronisation of data from a certain historical time range (8:00pm 23 August 2023 to 8:00pm 23 October 2023) to another IoTDB, the data link is shown below: @@ -298,7 +298,7 @@ with SINK ( ) ``` -### Edge-cloud data transfer +### 3.3 Edge-cloud data transfer This example is used to demonstrate the scenario where data from multiple IoTDB is transferred to the cloud, with data from clusters B, C, and D all synchronized to cluster A, as shown in the figure below: @@ -350,7 +350,7 @@ with sink ( ) ``` -### Cascading data transfer +### 3.4 Cascading data transfer This example is used to demonstrate the scenario where data is transferred in a cascading manner between multiple IoTDB, with data from cluster A synchronized to cluster B, and then to cluster C, as shown in the figure below: @@ -384,7 +384,7 @@ with sink ( ``` -### Compression Synchronization (V1.3.3+) +### 3.5 Compression Synchronization (V1.3.3+) IoTDB supports specifying data compression methods during synchronization. Real time compression and transmission of data can be achieved by configuring the `compressor` parameter. 
`Compressor` currently supports 5 optional algorithms: snappy/gzip/lz4/zstd/lzma2, and multiple compression algorithms can be combined and applied in the order in which they are configured. `rate-limit-bytes-per-second` (supported in V1.3.3 and later versions) is the maximum number of bytes allowed to be transmitted per second, calculated as compressed bytes. If it is less than 0, there is no limit. @@ -398,7 +398,7 @@ with sink ( ) ``` -### Encrypted Synchronization (V1.3.1+) +### 3.6 Encrypted Synchronization (V1.3.1+) IoTDB supports the use of SSL encryption during the synchronization process, ensuring the secure transfer of data between different IoTDB instances. By configuring SSL-related parameters, such as the certificate address and password (`ssl.trust-store-path`, `ssl.trust-store-pwd`), data can be protected by SSL encryption during the synchronization process. @@ -414,7 +414,7 @@ with sink ( ) ``` -## Reference: Notes +## 4. Reference: Notes You can adjust the parameters for data synchronization by modifying the IoTDB configuration file (`iotdb-system.properties`), such as the directory for storing synchronized data. The complete configuration is as follows: @@ -478,9 +478,9 @@ pipe_sink_max_client_number=16 pipe_all_sinks_rate_limit_bytes_per_second=-1 ``` -## Reference: parameter description +## 5. Reference: parameter description -### source parameter(V1.3.3) +### 5.1 source parameter(V1.3.3) | key | value | value range | required or not | default value | | :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | @@ -504,7 +504,7 @@ pipe_all_sinks_rate_limit_bytes_per_second=-1 > - **batch**: In this mode, tasks process and send data in batches (according to the underlying data files). It is characterized by low timeliness and high throughput. -## sink parameter +### 5.2 sink parameter > In versions 1.3.3 and above, when only the sink is included, the additional "with sink" prefix is no longer required. diff --git a/src/UserGuide/Master/Tree/User-Manual/Data-Sync_timecho.md b/src/UserGuide/Master/Tree/User-Manual/Data-Sync_timecho.md index 5b7075004..4db269933 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Data-Sync_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/Data-Sync_timecho.md @@ -23,9 +23,9 @@ Data synchronization is a typical requirement in industrial Internet of Things (IoT). Through data synchronization mechanisms, it is possible to achieve data sharing between IoTDB instances, and to establish a complete data link to meet the needs for internal and external network data interconnectivity, edge-cloud synchronization, data migration, and data backup. -## Function Overview +## 1. Function Overview -### Data Synchronization +### 1.1 Data Synchronization A data synchronization task consists of three stages: @@ -77,7 +77,7 @@ By declaratively configuring the specific content of the three parts through SQL -### Functional limitations and instructions +### 1.2 Functional limitations and instructions The schema and auth synchronization functions have the following limitations: @@ -91,7 +91,7 @@ The schema and auth synchronization functions have the following limitations: - During data synchronization tasks, please avoid performing any deletion operations to prevent inconsistent states between the two ends. -## Usage Instructions +## 2. Usage Instructions Data synchronization tasks have three states: RUNNING, STOPPED, and DROPPED.
The task state transitions are shown in the following diagram: @@ -101,7 +101,7 @@ After creation, the task will start directly, and when the task stops abnormally Provide the following SQL statements for state management of synchronization tasks. -### Create Task +### 2.1 Create Task Use the `CREATE PIPE` statement to create a data synchronization task. The `PipeId` and `sink` attributes are required, while `source` and `processor` are optional. When entering the SQL, note that the order of the `SOURCE` and `SINK` plugins cannot be swapped. @@ -125,7 +125,7 @@ WITH SINK ( **IF NOT EXISTS semantics**: Used in creation operations to ensure that the create command is executed when the specified Pipe does not exist, preventing errors caused by attempting to create an existing Pipe. -### Start Task +### 2.2 Start Task Start processing data: @@ -133,7 +133,7 @@ Start processing data: START PIPE ``` -### Stop Task +### 2.3 Stop Task Stop processing data: @@ -141,7 +141,7 @@ Stop processing data: STOP PIPE ``` -### Delete Task +### 2.4 Delete Task Deletes the specified task: @@ -152,7 +152,7 @@ DROP PIPE [IF EXISTS] Deleting a task does not require stopping the synchronization task first. -### View Task +### 2.5 View Task View all tasks: @@ -188,7 +188,7 @@ The meanings of each column are as follows: - **RemainingEventCount (Statistics with Delay)**: The number of remaining events, which is the total count of all events in the current data synchronization task, including data and schema synchronization events, as well as system and user-defined events. - **EstimatedRemainingSeconds (Statistics with Delay)**: The estimated remaining time, based on the current number of events and the rate at the pipe, to complete the transfer. -### Synchronization Plugins +### 2.6 Synchronization Plugins To make the overall architecture more flexible to match different synchronization scenario requirements, we support plugin assembly within the synchronization task framework. The system comes with some pre-installed common plugins that you can use directly. At the same time, you can also customize processor plugins and Sink plugins, and load them into the IoTDB system for use. You can view the plugins in the system (including custom and built-in plugins) with the following statement: @@ -265,9 +265,9 @@ Detailed introduction of pre-installed plugins is as follows (for detailed param For importing custom plugins, please refer to the [Stream Processing](./Streaming_timecho.md#custom-stream-processing-plugin-management) section. -## Use examples +## 3. 
Use examples -### Full data synchronisation +### 3.1 Full data synchronisation This example is used to demonstrate the synchronisation of all data from one IoTDB to another IoTDB with the data link as shown below: @@ -282,7 +282,7 @@ with sink ( 'node-urls' = '127.0.0.1:6668', -- The URL of the data service port of the DataNode node on the target IoTDB ``` -### Partial data synchronization +### 3.2 Partial data synchronization This example is used to demonstrate the synchronisation of data from a certain historical time range (8:00pm 23 August 2023 to 8:00pm 23 October 2023) to another IoTDB, the data link is shown below: @@ -306,7 +306,7 @@ with SINK ( ) ``` -### Bidirectional data transfer +### 3.3 Bidirectional data transfer This example is used to demonstrate the scenario where two IoTDB act as active-active pairs, with the data link shown in the figure below: @@ -344,7 +344,7 @@ with sink ( ) ``` -### Edge-cloud data transfer +### 3.4 Edge-cloud data transfer This example is used to demonstrate the scenario where data from multiple IoTDB is transferred to the cloud, with data from clusters B, C, and D all synchronized to cluster A, as shown in the figure below: @@ -396,7 +396,7 @@ with sink ( ) ``` -### Cascading data transfer +### 3.5 Cascading data transfer This example is used to demonstrate the scenario where data is transferred in a cascading manner between multiple IoTDB, with data from cluster A synchronized to cluster B, and then to cluster C, as shown in the figure below: @@ -429,7 +429,7 @@ with sink ( ) ``` -### Cross-gate data transfer +### 3.6 Cross-gate data transfer This example is used to demonstrate the scenario where data from one IoTDB is synchronized to another IoTDB through a unidirectional gateway, as shown in the figure below: @@ -458,7 +458,7 @@ with sink ( XL—GAP | No Limit | No Limit | -### Compression Synchronization (V1.3.3+) +### 3.7 Compression Synchronization (V1.3.3+) IoTDB supports specifying data compression methods during synchronization. Real time compression and transmission of data can be achieved by configuring the `compressor` parameter. `Compressor` currently supports 5 optional algorithms: snappy/gzip/lz4/zstd/lzma2, and can choose multiple compression algorithm combinations to compress in the order of configuration `rate-limit-bytes-per-second`(supported in V1.3.3 and later versions) is the maximum number of bytes allowed to be transmitted per second, calculated as compressed bytes. If it is less than 0, there is no limit. @@ -472,7 +472,7 @@ with sink ( ) ``` -### Encrypted Synchronization (V1.3.1+) +### 3.8 Encrypted Synchronization (V1.3.1+) IoTDB supports the use of SSL encryption during the synchronization process, ensuring the secure transfer of data between different IoTDB instances. By configuring SSL-related parameters, such as the certificate address and password (`ssl.trust-store-path`)、(`ssl.trust-store-pwd`), data can be protected by SSL encryption during the synchronization process. @@ -488,7 +488,7 @@ with sink ( ) ``` -## Reference: Notes +## 4. Reference: Notes You can adjust the parameters for data synchronization by modifying the IoTDB configuration file (`iotdb-system.properties`), such as the directory for storing synchronized data. The complete configuration is as follows: @@ -563,9 +563,9 @@ pipe_air_gap_receiver_port=9780 pipe_all_sinks_rate_limit_bytes_per_second=-1 ``` -## Reference: parameter description +## 5. 
Reference: parameter description -### source parameter(V1.3.3) +### 5.1 source parameter(V1.3.3) | key | value | value range | required or not | default value | | :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | @@ -589,7 +589,7 @@ pipe_all_sinks_rate_limit_bytes_per_second=-1 > - **batch**: In this mode, tasks process and send data in batches (according to the underlying data files). It is characterized by low timeliness and high throughput. -## sink parameter +### 5.2 sink parameter > In versions 1.3.3 and above, when only the sink is included, the additional "with sink" prefix is no longer required. diff --git a/src/UserGuide/Master/Tree/User-Manual/Database-Programming.md b/src/UserGuide/Master/Tree/User-Manual/Database-Programming.md index b524ca274..40c1426c8 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Database-Programming.md +++ b/src/UserGuide/Master/Tree/User-Manual/Database-Programming.md @@ -22,13 +22,13 @@ # CONTINUOUS QUERY(CQ) -## Introduction +## 1. Introduction Continuous queries(CQ) are queries that run automatically and periodically on realtime data and store query results in other specified time series. Users can implement sliding window streaming computing through continuous query, such as calculating the hourly average temperature of a sequence and writing it into a new sequence. Users can customize the `RESAMPLE` clause to create different sliding windows, which can achieve a certain degree of tolerance for out-of-order data. -## Syntax +## 2. Syntax ```sql CREATE (CONTINUOUS QUERY | CQ) @@ -57,7 +57,7 @@ END > 2. GROUP BY TIME CLAUSE is different, it doesn't contain its original first display window parameter which is [start_time, end_time). It's still because IoTDB will automatically generate a time range for the query each time it's executed. > 3. If there is no group by time clause in query, EVERY clause is required, otherwise IoTDB will throw an error. -### Descriptions of parameters in CQ syntax +### 2.1 Descriptions of parameters in CQ syntax - `` specifies the globally unique id of CQ. - `` specifies the query execution time interval. We currently support the units of ns, us, ms, s, m, h, d, w, and its value should not be lower than the minimum threshold configured by the user, which is `continuous_query_min_every_interval`. It's an optional parameter, default value is set to `group_by_interval` in group by clause. @@ -101,7 +101,7 @@ END - `DISCARD` means that we just discard the current cq execution task and wait for the next execution time and do the next time interval cq task. If using `DISCARD` policy, some time intervals won't be executed when the execution time of one cq task is longer than the ``. However, once a cq task is executed, it will use the latest time interval, so it can catch up at the sacrifice of some time intervals being discarded. -## Examples of CQ +## 3. Examples of CQ The examples below use the following sample data. It's a real time data stream and we can assume that the data arrives on time. @@ -121,7 +121,7 @@ The examples below use the following sample data. 
It's a real time data stream a +-----------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ ```` -### Configuring execution intervals +### 3.1 Configuring execution intervals Use an `EVERY` interval in the `RESAMPLE` clause to specify the CQ’s execution interval, if not specific, default value is equal to `group_by_interval`. @@ -179,7 +179,7 @@ At **2021-05-11T22:19:00.000+08:00**, `cq1` executes a query within the time ran +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### Configuring time range for resampling +### 3.2 Configuring time range for resampling Use `start_time_offset` in the `RANGE` clause to specify the start time of the CQ’s time range, if not specific, default value is equal to `EVERY` interval. @@ -254,7 +254,7 @@ At **2021-05-11T22:19:00.000+08:00**, `cq2` executes a query within the time ran +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### Configuring execution intervals and CQ time ranges +### 3.3 Configuring execution intervals and CQ time ranges Use an `EVERY` interval and `RANGE` interval in the `RESAMPLE` clause to specify the CQ’s execution interval and the length of the CQ’s time range. And use `fill()` to change the value reported for time intervals with no data. @@ -319,7 +319,7 @@ Notice that `cq3` will calculate the results for some time interval many times, +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### Configuring end_time_offset for CQ time range +### 3.4 Configuring end_time_offset for CQ time range Use an `EVERY` interval and `RANGE` interval in the RESAMPLE clause to specify the CQ’s execution interval and the length of the CQ’s time range. And use `fill()` to change the value reported for time intervals with no data. @@ -378,7 +378,7 @@ Notice that `cq4` will calculate the results for all time intervals only once af +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### CQ without group by clause +### 3.5 CQ without group by clause Use an `EVERY` interval in the `RESAMPLE` clause to specify the CQ’s execution interval and the length of the CQ’s time range. @@ -484,9 +484,9 @@ At **2021-05-11T22:19:00.000+08:00**, `cq5` executes a query within the time ran +-----------------------------+-------------------------------+-----------+ ```` -## CQ Management +## 4. CQ Management -### Listing continuous queries +### 4.1 Listing continuous queries List every CQ on the IoTDB Cluster with: @@ -509,7 +509,7 @@ we will get: | s1_count_cq | CREATE CQ s1_count_cq
BEGIN
SELECT count(s1)
INTO root.sg_count.d.count_s1
FROM root.sg.d
GROUP BY(30m)
END | active | -### Dropping continuous queries +### 4.2 Dropping continuous queries Drop a CQ with a specific `cq_id`: @@ -527,24 +527,24 @@ Drop the CQ named `s1_count_cq`: DROP CONTINUOUS QUERY s1_count_cq; ``` -### Altering continuous queries +### 4.3 Altering continuous queries CQs can't be altered once they're created. To change a CQ, you must `DROP` and re`CREATE` it with the updated settings. -## CQ Use Cases +## 5. CQ Use Cases -### Downsampling and Data Retention +### 5.1 Downsampling and Data Retention Use CQs with `TTL` set on database in IoTDB to mitigate storage concerns. Combine CQs and `TTL` to automatically downsample high precision data to a lower precision and remove the dispensable, high precision data from the database. -### Recalculating expensive queries +### 5.2 Recalculating expensive queries Shorten query runtimes by pre-calculating expensive queries with CQs. Use a CQ to automatically downsample commonly-queried, high precision data to a lower precision. Queries on lower precision data require fewer resources and return faster. > Pre-calculate queries for your preferred graphing tool to accelerate the population of graphs and dashboards. -### Substituting for sub-query +### 5.3 Substituting for sub-query IoTDB does not support sub queries. We can get the same functionality by creating a CQ as a sub query and store its result into other time series and then querying from those time series again will be like doing nested sub query. @@ -583,7 +583,7 @@ SELECT avg(count_s1) from root.sg_count.d; ``` -## System Parameter Configuration +## 6. System Parameter Configuration | Name | Description | Data Type | Default Value | | :------------------------------------------ | ------------------------------------------------------------ | --------- | ------------- | diff --git a/src/UserGuide/Master/Tree/User-Manual/IoTDB-View_timecho.md b/src/UserGuide/Master/Tree/User-Manual/IoTDB-View_timecho.md index 195847395..b136941e2 100644 --- a/src/UserGuide/Master/Tree/User-Manual/IoTDB-View_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/IoTDB-View_timecho.md @@ -21,9 +21,9 @@ # View -## Sequence View Application Background +## 1. Sequence View Application Background -## Application Scenario 1 Time Series Renaming (PI Asset Management) +## 2. Application Scenario 1 Time Series Renaming (PI Asset Management) In practice, the equipment collecting data may be named with identification numbers that are difficult to be understood by human beings, which brings difficulties in querying to the business layer. @@ -33,19 +33,19 @@ The Sequence View, on the other hand, is able to re-organise the management of t It is difficult for the user to understand. However, at this point, the user is able to rename it using the sequence view feature, map it to a sequence view, and use `root.view.device001.temperature` to access the captured data. -### Application Scenario 2 Simplifying business layer query logic +### 2.1 Application Scenario 2 Simplifying business layer query logic Sometimes users have a large number of devices that manage a large number of time series. When conducting a certain business, the user wants to deal with only some of these sequences. At this time, the focus of attention can be picked out by the sequence view function, which is convenient for repeated querying and writing. **For example**: Users manage a product assembly line with a large number of time series for each segment of the equipment. 
The temperature inspector only needs to focus on the temperature of the equipment, so he can extract the temperature-related sequences and compose the sequence view. -### Application Scenario 3 Auxiliary Rights Management +### 2.2 Application Scenario 3 Auxiliary Rights Management In the production process, different operations are generally responsible for different scopes. For security reasons, it is often necessary to restrict the access scope of the operations staff through permission management. **For example**: The safety management department now only needs to monitor the temperature of each device in a production line, but these data are stored in the same database with other confidential data. At this point, it is possible to create a number of new views that contain only temperature-related time series on the production line, and then to give the security officer access to only these sequence views, thus achieving the purpose of permission restriction. -### Motivation for designing sequence view functionality +### 2.3 Motivation for designing sequence view functionality Combining the above two types of usage scenarios, the motivations for designing sequence view functionality, are: @@ -53,13 +53,13 @@ Combining the above two types of usage scenarios, the motivations for designing 2. to simplify the query logic at the business level. 3. Auxiliary rights management, open data to specific users through the view. -## Sequence View Concepts +## 3. Sequence View Concepts -### Terminology Concepts +### 3.1 Terminology Concepts Concept: If not specified, the views specified in this document are **Sequence Views**, and new features such as device views may be introduced in the future. -### Sequence view +### 3.2 Sequence view A sequence view is a way of organising the management of time series. @@ -69,7 +69,7 @@ A sequence view is a virtual time series, and each virtual time series is like a Users can create views using complex SQL queries, where the sequence view acts as a stored query statement, and when data is read from the view, the stored query statement is used as the source of the data in the FROM clause. -### Alias Sequences +### 3.3 Alias Sequences There is a special class of beings in a sequence view that satisfy all of the following conditions: @@ -81,13 +81,13 @@ Such a sequence view is called an **alias sequence**, or alias sequence view. A ** All sequence views, including aliased sequences, do not currently support Trigger functionality. ** -### Nested Views +### 3.4 Nested Views A user may want to select a number of sequences from an existing sequence view to form a new sequence view, called a nested view. **The current version does not support the nested view feature**. -### Some constraints on sequence views in IoTDB +### 3.5 Some constraints on sequence views in IoTDB #### Constraint 1 A sequence view must depend on one or several time series @@ -130,9 +130,9 @@ However, their metadata such as tags and attributes are not shared. This is because the business query, view-oriented users are concerned about the structure of the current view, and if you use group by tag and other ways to do the query, obviously want to get the view contains the corresponding tag grouping effect, rather than the time series of the tag grouping effect (the user is not even aware of those time series). -## Sequence view functionality +## 4. 
Sequence view functionality -### Creating a view +### 4.1 Creating a view Creating a sequence view is similar to creating a time series, the difference is that you need to specify the data source, i.e., the original sequence, through the AS keyword. @@ -340,7 +340,7 @@ The SELECT clause used when creating a serial view is subject to certain restric Simply put, after `AS` you can only use `SELECT ... FROM ... ` and the results of this query must form a time series. -### View Data Queries +### 4.2 View Data Queries For the data query functions that can be supported, the sequence view and time series can be used indiscriminately with identical behaviour when performing time series data queries. @@ -361,7 +361,7 @@ However, if the user wants to query the metadata of the sequence, such as tag, a In addition, for aliased sequences, if the user wants to get information about the time series such as tags, attributes, etc., the user needs to query the mapping of the view columns to find the corresponding time series, and then query the time series for the tags, attributes, etc. The method of querying the mapping of the view columns will be explained in section 3.5. -### Modify Views +### 4.3 Modify Views The modification operations supported by the view include: modifying its calculation logic,modifying tag/attributes, and deleting. @@ -432,7 +432,7 @@ Since a view is a sequence, a view can be deleted as if it were a time series. DELETE VIEW root.view.device.avg_temperatue ``` -### View Synchronisation +### 4.4 View Synchronisation #### If the dependent original sequence is deleted @@ -452,7 +452,7 @@ Please refer to the previous section 2.1.6 Restrictions2 for more details. Please refer to the previous section 2.1.6 Restriction 5 for details. -### View Metadata Queries +### 4.5 View Metadata Queries View metadata query specifically refers to querying the metadata of the view itself (e.g., how many columns the view has), as well as information about the views in the database (e.g., what views are available). @@ -518,7 +518,7 @@ The last column, `SOURCE`, shows the data source for the sequence view, listing Both of the above queries involve the data type of the view. The data type of a view is inferred from the original time series type of the query statement or alias sequence that defines the view. This data type is computed in real time based on the current state of the system, so the data type queried at different moments may be changing. -## FAQ +## 5. FAQ #### Q1: I want the view to implement the function of type conversion. For example, a time series of type int32 was originally placed in the same view as other series of type int64. I now want all the data queried through the view to be automatically converted to int64 type. diff --git a/src/UserGuide/Master/Tree/User-Manual/Streaming_apache.md b/src/UserGuide/Master/Tree/User-Manual/Streaming_apache.md index cfb6a3433..da81e199f 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Streaming_apache.md +++ b/src/UserGuide/Master/Tree/User-Manual/Streaming_apache.md @@ -43,9 +43,9 @@ Users can configure the specific attributes of these three subtasks declarativel Using the stream processing framework, it is possible to build a complete data pipeline to fulfill various requirements such as *edge-to-cloud synchronization, remote disaster recovery, and read/write load balancing across multiple databases*. -## Custom Stream Processing Plugin Development +## 1. 
Custom Stream Processing Plugin Development -### Programming development dependencies +### 1.1 Programming development dependencies It is recommended to use Maven to build the project. Add the following dependencies in the `pom.xml` file. Please make sure to choose dependencies with the same version as the IoTDB server version. @@ -58,7 +58,7 @@ It is recommended to use Maven to build the project. Add the following dependenc ``` -### Event-Driven Programming Model +### 1.2 Event-Driven Programming Model The design of user programming interfaces for stream processing plugins follows the principles of the event-driven programming model. In this model, events serve as the abstraction of data in the user programming interface. The programming interface is decoupled from the specific execution method, allowing the focus to be on describing how the system expects events (data) to be processed upon arrival. @@ -128,7 +128,7 @@ public interface TsFileInsertionEvent extends Event { } ``` -### Custom Stream Processing Plugin Programming Interface Definition +### 1.3 Custom Stream Processing Plugin Programming Interface Definition Based on the custom stream processing plugin programming interface, users can easily write data extraction plugins, data processing plugins, and data sending plugins, allowing the stream processing functionality to adapt flexibly to various industrial scenarios. #### Data Extraction Plugin Interface @@ -434,11 +434,11 @@ public interface PipeSink extends PipePlugin { } ``` -## Custom Stream Processing Plugin Management +## 2. Custom Stream Processing Plugin Management To ensure the flexibility and usability of user-defined plugins in production environments, the system needs to provide the capability to dynamically manage plugins. This section introduces the management statements for stream processing plugins, which enable the dynamic and unified management of plugins. -### Load Plugin Statement +### 2.1 Load Plugin Statement In IoTDB, to dynamically load a user-defined plugin into the system, you first need to implement a specific plugin class based on PipeSource, PipeProcessor, or PipeSink. Then, you need to compile and package the plugin class into an executable jar file. Finally, you can use the loading plugin management statement to load the plugin into IoTDB. @@ -460,7 +460,7 @@ AS 'edu.tsinghua.iotdb.pipe.ExampleProcessor' USING URI ``` -### Delete Plugin Statement +### 2.2 Delete Plugin Statement When user no longer wants to use a plugin and needs to uninstall the plugin from the system, you can use the Remove plugin statement as shown below. ```sql @@ -469,16 +469,16 @@ DROP PIPEPLUGIN [IF EXISTS] **IF EXISTS semantics**: Used in deletion operations to ensure that when a specified Pipe Plugin exists, the delete command is executed to prevent errors caused by attempting to delete a non-existent Pipe Plugin. -### Show Plugin Statement +### 2.3 Show Plugin Statement User can also view the plugin in the system on need. The statement to view plugin is as follows. ```sql SHOW PIPEPLUGINS ``` -## System Pre-installed Stream Processing Plugin +## 3. System Pre-installed Stream Processing Plugin -### Pre-built Source Plugin +### 3.1 Pre-built Source Plugin #### iotdb-source @@ -528,7 +528,7 @@ Function: Extract historical or realtime data inside IoTDB into pipe. > > The historical data transmission phase and the realtime data transmission phase are executed serially. 
Only when the historical data transmission phase is completed, the realtime data transmission phase is executed.** -### Pre-built Processor Plugin +### 3.2 Pre-built Processor Plugin #### do-nothing-processor @@ -538,7 +538,7 @@ Function: Do not do anything with the events passed in by the source. | key | value | value range | required or optional with default | |-----------|----------------------|------------------------------|-----------------------------------| | processor | do-nothing-processor | String: do-nothing-processor | required | -### Pre-built Sink Plugin +### 3.3 Pre-built Sink Plugin #### do-nothing-sink @@ -549,9 +549,9 @@ Function: Does not do anything with the events passed in by the processor. |------|-----------------|-------------------------|-----------------------------------| | sink | do-nothing-sink | String: do-nothing-sink | required | -## Stream Processing Task Management +## 4. Stream Processing Task Management -### Create Stream Processing Task +### 4.1 Create Stream Processing Task A stream processing task can be created using the `CREATE PIPE` statement, a sample SQL statement is shown below: @@ -643,7 +643,7 @@ The expressed semantics are: synchronise the full amount of historical data and - IoTDB A -> IoTDB B -> IoTDB A - IoTDB A -> IoTDB A -### Start Stream Processing Task +### 4.2 Start Stream Processing Task After the successful execution of the CREATE PIPE statement, task-related instances will be created. However, the overall task's running status will be set to STOPPED(V1.3.0), meaning the task will not immediately process data. In version 1.3.1 and later, the status of the task will be set to RUNNING after CREATE. @@ -652,7 +652,7 @@ You can use the START PIPE statement to make the stream processing task start pr START PIPE ``` -### Stop Stream Processing Task +### 4.3 Stop Stream Processing Task Use the STOP PIPE statement to stop the stream processing task from processing data: @@ -660,7 +660,7 @@ Use the STOP PIPE statement to stop the stream processing task from processing d STOP PIPE ``` -### Delete Stream Processing Task +### 4.4 Delete Stream Processing Task If a stream processing task is in the RUNNING state, you can use the DROP PIPE statement to stop it and delete the entire task: @@ -670,7 +670,7 @@ DROP PIPE Before deleting a stream processing task, there is no need to execute the STOP operation. -### Show Stream Processing Task +### 4.5 Show Stream Processing Task Use the SHOW PIPES statement to view all stream processing tasks: ```sql @@ -701,7 +701,7 @@ SHOW PIPES WHERE SINK USED BY ``` -### Stream Processing Task Running Status Migration +### 4.6 Stream Processing Task Running Status Migration A stream processing task status can transition through several states during the lifecycle of a data synchronization pipe: @@ -717,9 +717,9 @@ The following diagram illustrates the different states and their transitions: ![state migration diagram](/img/%E7%8A%B6%E6%80%81%E8%BF%81%E7%A7%BB%E5%9B%BE.png) -## Authority Management +## 5. 
Authority Management -### Stream Processing Task +### 5.1 Stream Processing Task | Authority Name | Description | |----------------|---------------------------------| @@ -728,7 +728,7 @@ The following diagram illustrates the different states and their transitions: | USE_PIPE | Stop task,path-independent | | USE_PIPE | Uninstall task,path-independent | | USE_PIPE | Query task,path-independent | -### Stream Processing Task Plugin +### 5.2 Stream Processing Task Plugin | Authority Name | Description | @@ -737,7 +737,7 @@ The following diagram illustrates the different states and their transitions: | USE_PIPE | Delete stream processing task plugin,path-independent | | USE_PIPE | Query stream processing task plugin,path-independent | -## Configure Parameters +## 6. Configure Parameters In iotdb-system.properties : diff --git a/src/UserGuide/Master/Tree/User-Manual/Streaming_timecho.md b/src/UserGuide/Master/Tree/User-Manual/Streaming_timecho.md index 8d6f50e44..3f72c1f8f 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Streaming_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/Streaming_timecho.md @@ -42,9 +42,9 @@ Users can declaratively configure the specific attributes of the three subtasks Using the stream processing framework, a complete data link can be built to meet the needs of end-side-cloud synchronization, off-site disaster recovery, and read-write load sub-library*. -## Custom stream processing plugin development +## 1. Custom stream processing plugin development -### Programming development dependencies +### 1.1 Programming development dependencies It is recommended to use maven to build the project and add the following dependencies in `pom.xml`. Please be careful to select the same dependency version as the IoTDB server version. @@ -57,7 +57,7 @@ It is recommended to use maven to build the project and add the following depend ``` -### Event-driven programming model +### 1.2 Event-driven programming model The user programming interface design of the stream processing plugin refers to the general design concept of the event-driven programming model. Events are data abstractions in the user programming interface, and the programming interface is decoupled from the specific execution method. It only needs to focus on describing the processing method expected by the system after the event (data) reaches the system. @@ -127,7 +127,7 @@ public interface TsFileInsertionEvent extends Event { } ``` -### Custom stream processing plugin programming interface definition +### 1.3 Custom stream processing plugin programming interface definition Based on the custom stream processing plugin programming interface, users can easily write data extraction plugins, data processing plugins and data sending plugins, so that the stream processing function can be flexibly adapted to various industrial scenarios. @@ -438,12 +438,12 @@ public interface PipeSink extends PipePlugin { } ``` -## Custom stream processing plugin management +## 2. Custom stream processing plugin management In order to ensure the flexibility and ease of use of user-defined plugins in actual production, the system also needs to provide the ability to dynamically and uniformly manage plugins. The stream processing plugin management statements introduced in this chapter provide an entry point for dynamic unified management of plugins. 
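The management statements introduced in the following subsections roughly follow the shape sketched below; the plugin alias and JAR URI are hypothetical placeholders, and the class name reuses the example class shown later in this section:

```sql
-- Load a custom processor plugin packaged as a jar (the URI below is a placeholder)
CREATE PIPEPLUGIN example_processor
AS 'edu.tsinghua.iotdb.pipe.ExampleProcessor'
USING URI 'http://example.com/pipe-plugins/example-processor.jar'

-- List all plugins (built-in and custom) currently known to the system
SHOW PIPEPLUGINS

-- Uninstall the plugin once it is no longer needed
DROP PIPEPLUGIN IF EXISTS example_processor
```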
-### Load plugin statement +### 2.1 Load plugin statement In IoTDB, if you want to dynamically load a user-defined plugin in the system, you first need to implement a specific plugin class based on PipeSource, PipeProcessor or PipeSink. Then the plugin class needs to be compiled and packaged into a jar executable file, and finally the plugin is loaded into IoTDB using the management statement for loading the plugin. @@ -483,7 +483,7 @@ AS 'edu.tsinghua.iotdb.pipe.ExampleProcessor' USING URI ``` -### Delete plugin statement +### 2.2 Delete plugin statement When the user no longer wants to use a plugin and needs to uninstall the plugin from the system, he can use the delete plugin statement as shown in the figure. @@ -493,16 +493,16 @@ DROP PIPEPLUGIN [IF EXISTS] **IF EXISTS semantics**: Used in deletion operations to ensure that when a specified Pipe Plugin exists, the delete command is executed to prevent errors caused by attempting to delete a non-existent Pipe Plugin. -### View plugin statements +### 2.3 View plugin statements Users can also view plugins in the system on demand. View the statement of the plugin as shown in the figure. ```sql SHOW PIPEPLUGINS ``` -## System preset stream processing plugin +## 3. System preset stream processing plugin -### Pre-built Source Plugin +### 3.1 Pre-built Source Plugin #### iotdb-source @@ -567,7 +567,7 @@ Function: Extract historical or realtime data inside IoTDB into pipe. > * If you want to use pipe to build data synchronization of A -> B -> C, then the pipe of B -> C needs to set this parameter to true, so that the data written by A to B through the pipe in A -> B can be forwarded correctly. to C > * If you want to use pipe to build two-way data synchronization (dual-active) of A \<-> B, then the pipes of A -> B and B -> A need to set this parameter to false, otherwise the data will be endless. inter-cluster round-robin forwarding -### Preset processor plugin +### 3.2 Preset processor plugin #### do-nothing-processor @@ -578,7 +578,7 @@ Function: No processing is done on the events passed in by the source. |-----------|----------------------|------------------------------|-----------------------------------| | processor | do-nothing-processor | String: do-nothing-processor | required | -### Preset sink plugin +### 3.3 Preset sink plugin #### do-nothing-sink @@ -588,9 +588,9 @@ Function: No processing is done on the events passed in by the processor. |------|-----------------|-------------------------|-----------------------------------| | sink | do-nothing-sink | String: do-nothing-sink | required | -## Stream processing task management +## 4. Stream processing task management -### Create a stream processing task +### 4.1 Create a stream processing task Use the `CREATE PIPE` statement to create a stream processing task. Taking the creation of a data synchronization stream processing task as an example, the sample SQL statement is as follows: @@ -681,7 +681,7 @@ The semantics expressed are: synchronize all historical data in this database in - IoTDB A -> IoTDB B -> IoTDB A - IoTDB A -> IoTDB A -### Start the stream processing task +### 4.2 Start the stream processing task After the CREATE PIPE statement is successfully executed, the stream processing task-related instance will be created, but the running status of the entire stream processing task will be set to STOPPED(V1.3.0), that is, the stream processing task will not process data immediately. In version 1.3.1 and later, the status of the task will be set to RUNNING after CREATE. 
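A minimal sketch of this lifecycle is shown below, assuming a pipe named `a2b` pointing at a placeholder address; on V1.3.0 the explicit `START PIPE` is required before the task processes data, while on V1.3.1 and later the task is already RUNNING after creation:

```sql
-- Create the stream processing task (STOPPED on V1.3.0, RUNNING on V1.3.1+)
create pipe a2b
with sink (
  'node-urls' = '127.0.0.1:6668' -- placeholder target address
)

-- Start processing data if the task is still STOPPED
START PIPE a2b

-- Inspect the task list and its current state
SHOW PIPES
```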
@@ -691,7 +691,7 @@ You can use the START PIPE statement to cause a stream processing task to start START PIPE ``` -### Stop the stream processing task +### 4.3 Stop the stream processing task Use the STOP PIPE statement to stop the stream processing task from processing data: @@ -699,7 +699,7 @@ Use the STOP PIPE statement to stop the stream processing task from processing d STOP PIPE ``` -### Delete stream processing tasks +### 4.4 Delete stream processing tasks Use the DROP PIPE statement to stop the stream processing task from processing data (when the stream processing task status is RUNNING), and then delete the entire stream processing task: @@ -709,7 +709,7 @@ DROP PIPE Users do not need to perform a STOP operation before deleting the stream processing task. -### Display stream processing tasks +### 4.5 Display stream processing tasks Use the SHOW PIPES statement to view all stream processing tasks: @@ -742,7 +742,7 @@ SHOW PIPES WHERE SINK USED BY ``` -### Stream processing task running status migration +### 4.6 Stream processing task running status migration A stream processing pipe will pass through various states during its managed life cycle: @@ -758,9 +758,9 @@ The following diagram shows all states and state transitions: ![State migration diagram](/img/%E7%8A%B6%E6%80%81%E8%BF%81%E7%A7%BB%E5%9B%BE.png) -## authority management +## 5. authority management -### Stream processing tasks +### 5.1 Stream processing tasks | Permission name | Description | @@ -771,7 +771,7 @@ The following diagram shows all states and state transitions: | USE_PIPE | Offload stream processing tasks. The path is irrelevant. | | USE_PIPE | Query stream processing tasks. The path is irrelevant. | -### Stream processing task plugin +### 5.2 Stream processing task plugin | Permission name | Description | @@ -780,7 +780,7 @@ The following diagram shows all states and state transitions: | USE_PIPE | Uninstall the stream processing task plugin. The path is irrelevant. | | USE_PIPE | Query stream processing task plugin. The path is irrelevant. | -## Configuration parameters +## 6. Configuration parameters In iotdb-system.properties: diff --git a/src/UserGuide/Master/Tree/User-Manual/Tiered-Storage_timecho.md b/src/UserGuide/Master/Tree/User-Manual/Tiered-Storage_timecho.md index 1cb50b1ee..3826db94d 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Tiered-Storage_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/Tiered-Storage_timecho.md @@ -20,11 +20,11 @@ --> # Tiered Storage -## Overview +## 1. Overview The Tiered storage functionality allows users to define multiple layers of storage, spanning across multiple types of storage media (Memory mapped directory, SSD, rotational hard discs or cloud storage). While memory and cloud storage is usually singular, the local file system storages can consist of multiple directories joined together into one tier. Meanwhile, users can classify data based on its hot or cold nature and store data of different categories in specified "tier". Currently, IoTDB supports the classification of hot and cold data through TTL (Time to live / age) of data. When the data in one tier does not meet the TTL rules defined in the current tier, the data will be automatically migrated to the next tier. -## Parameter Definition +## 2. Parameter Definition To enable tiered storage in IoTDB, you need to configure the following aspects: @@ -48,7 +48,7 @@ The specific parameter definitions and their descriptions are as follows. 
| remote_tsfile_cache_page_size_in_kb | 20480 | Block size of locally cached files stored in the cloud | If remote storage is not used, no configuration required | | remote_tsfile_cache_max_disk_usage_in_mb | 51200 | Maximum Disk Occupancy Size for Cloud Storage Local Cache | If remote storage is not used, no configuration required | -## local tiered storag configuration example +## 3. Local Tiered Storage Configuration Example The following is an example of a local two-level storage configuration. @@ -66,7 +66,7 @@ In this example, two levels of storage are configured, specifically: | tier 1 | path 1:/data1/data | data for last 1 day | 20% | | tier 2 | path 2:/data2/data path 2:/data3/data | data from 1 day ago | 10% | -## remote tiered storag configuration example +## 4. Remote Tiered Storage Configuration Example The following takes three-level storage as an example: diff --git a/src/UserGuide/Master/Tree/User-Manual/Trigger.md b/src/UserGuide/Master/Tree/User-Manual/Trigger.md index 7c4e163fb..5e05607da 100644 --- a/src/UserGuide/Master/Tree/User-Manual/Trigger.md +++ b/src/UserGuide/Master/Tree/User-Manual/Trigger.md @@ -21,7 +21,7 @@ # TRIGGER -## Instructions +## 1. Instructions The trigger provides a mechanism for listening to changes in time series data. With user-defined logic, tasks such as alerting and data forwarding can be conducted. The trigger is implemented based on the reflection mechanism. Users can monitor The document will help you learn to define and manage triggers. -### Pattern for Listening +### 1.1 Pattern for Listening A single trigger can be used to listen for data changes in a time series that match a specific pattern. For example, a trigger can listen for the data changes of time series `root.sg.a`, or time series that match the pattern `root.sg.*`. When you register a trigger, you can specify the path pattern that the trigger listens on through an SQL statement. -### Trigger Type +### 1.2 Trigger Type There are currently two types of triggers, and you can specify the type through an SQL statement when registering a trigger: - Stateful triggers: The execution logic of this type of trigger may depend on data from multiple insertion statements. The framework will aggregate the data written by different nodes into the same trigger instance for calculation to retain context information. This type of trigger is usually used for sampling or for statistical aggregation of data over a period of time. Only one node in the cluster holds an instance of a stateful trigger. - Stateless triggers: The execution logic of the trigger is only related to the current input data. The framework does not need to aggregate the data of different nodes into the same trigger instance. This type of trigger is usually used for calculations on single rows of data and for anomaly detection. Each node in the cluster holds an instance of a stateless trigger. -### Trigger Event +### 1.3 Trigger Event There are currently two trigger events for the trigger, and other trigger events will be expanded in the future. When you register a trigger, you can specify the trigger event through an SQL statement: - BEFORE INSERT: Fires before the data is persisted. **Please note that currently the trigger does not support data cleaning and will not change the data to be persisted itself.** - AFTER INSERT: Fires after the data is persisted. -## How to Implement a Trigger +## 2. 
How to Implement a Trigger You need to implement the trigger by writing a Java class, where the dependency shown below is required. If you use [Maven](http://search.maven.org/), you can search for them directly from the [Maven repository](http://search.maven.org/). -### Dependency +### 2.1 Dependency ```xml @@ -64,7 +64,7 @@ You need to implement the trigger by writing a Java class, where the dependency Note that the dependency version should be correspondent to the target server version. -### Interface Description +### 2.2 Interface Description To implement a trigger, you need to implement the `org.apache.iotdb.trigger.api.Trigger` class. @@ -208,7 +208,7 @@ When the trigger fails to fire, we will take corresponding actions according to } ``` -### Example +### 2.3 Example If you use [Maven](http://search.maven.org/), you can refer to our sample project **trigger-example**. @@ -318,13 +318,13 @@ public class ClusterAlertingExample implements Trigger { } ``` -## Trigger Management +## 3. Trigger Management You can create and drop a trigger through an SQL statement, and you can also query all registered triggers through an SQL statement. **We recommend that you stop insertion while creating triggers.** -### Create Trigger +### 3.1 Create Trigger Triggers can be registered on arbitrary path patterns. The time series registered with the trigger will be listened to by the trigger. When there is data change on the series, the corresponding fire method in the trigger will be called. @@ -400,7 +400,7 @@ The above SQL statement creates a trigger named triggerTest: - The JAR package URI is http://jar/ClusterAlertingExample.jar - When creating the trigger instance, two parameters, name and limit, are passed in. -### Drop Trigger +### 3.2 Drop Trigger The trigger can be dropped by specifying the trigger ID. During the process of dropping the trigger, the `onDrop` interface of the trigger will be called only once. @@ -421,7 +421,7 @@ DROP TRIGGER triggerTest1 The above statement will drop the trigger with ID triggerTest1. -### Show Trigger +### 3.3 Show Trigger You can query information about triggers that exist in the cluster through an SQL statement. @@ -437,7 +437,7 @@ The result set format of this statement is as follows: | ------------ | ---------------------------- | -------------------- | ------------------------------------------- | ----------- | --------------------------------------- | --------------------------------------- | | triggerTest1 | BEFORE_INSERT / AFTER_INSERT | STATELESS / STATEFUL | INACTIVE / ACTIVE / DROPPING / TRANSFFERING | root.** | org.apache.iotdb.trigger.TriggerExample | ALL(STATELESS) / DATA_NODE_ID(STATEFUL) | -### Trigger State +### 3.4 Trigger State During the process of creating and dropping triggers in the cluster, we maintain the states of the triggers. The following is a description of these states: @@ -448,7 +448,7 @@ During the process of creating and dropping triggers in the cluster, we maintain | DROPPING | Intermediate state of executing `DROP TRIGGER`, the cluster is in the process of dropping the trigger. | NO | | TRANSFERRING | The cluster is migrating the location of this trigger instance. | NO | -## Notes +## 4. Notes - The trigger takes effect from the time of registration, and does not process the existing historical data. **That is, only insertion requests that occur after the trigger is successfully registered will be listened to by the trigger. 
** - The fire process of trigger is synchronous currently, so you need to ensure the efficiency of the trigger, otherwise the writing performance may be greatly affected. **You need to guarantee concurrency safety of triggers yourself**. @@ -458,7 +458,7 @@ During the process of creating and dropping triggers in the cluster, we maintain - The trigger JAR package has a size limit, which must be less than min(`config_node_ratis_log_appender_buffer_size_max`, 2G), where `config_node_ratis_log_appender_buffer_size_max` is a configuration item. For the specific meaning, please refer to the IOTDB configuration item description. - **It is better not to have classes with the same full class name but different function implementations in different JAR packages.** For example, trigger1 and trigger2 correspond to resources trigger1.jar and trigger2.jar respectively. If two JAR packages contain a `org.apache.iotdb.trigger.example.AlertListener` class, when `CREATE TRIGGER` uses this class, the system will randomly load the class in one of the JAR packages, which will eventually leads the inconsistent behavior of trigger and other issues. -## Configuration Parameters +## 5. Configuration Parameters | Parameter | Meaning | | ------------------------------------------------- | ------------------------------------------------------------ | diff --git a/src/UserGuide/Master/Tree/User-Manual/UDF-development.md b/src/UserGuide/Master/Tree/User-Manual/UDF-development.md index 0a3efb6bb..a866f4df6 100644 --- a/src/UserGuide/Master/Tree/User-Manual/UDF-development.md +++ b/src/UserGuide/Master/Tree/User-Manual/UDF-development.md @@ -15,7 +15,7 @@ If you use [Maven](http://search.maven.org/), you can search for the development ``` -## 1.2 UDTF(User Defined Timeseries Generating Function) +### 1.2 UDTF(User Defined Timeseries Generating Function) To write a UDTF, you need to inherit the `org.apache.iotdb.udf.api.UDTF` class, and at least implement the `beforeStart` method and a `transform` method. @@ -698,7 +698,7 @@ If you use Maven, you can build your own UDF project referring to our **udf-exam This part mainly introduces how external users can contribute their own UDFs to the IoTDB community. -#### 2.1 Prerequisites +### 2.1 Prerequisites 1. UDFs must be universal. @@ -709,7 +709,7 @@ This part mainly introduces how external users can contribute their own UDFs to 2. The UDF you are going to contribute has been well tested and can run normally in the production environment. -#### 2.2 What you need to prepare +### 2.2 What you need to prepare 1. UDF source code 2. Test cases diff --git a/src/UserGuide/Master/Tree/User-Manual/White-List_timecho.md b/src/UserGuide/Master/Tree/User-Manual/White-List_timecho.md index 5194f7051..ae49c1648 100644 --- a/src/UserGuide/Master/Tree/User-Manual/White-List_timecho.md +++ b/src/UserGuide/Master/Tree/User-Manual/White-List_timecho.md @@ -21,17 +21,17 @@ # White List -**function description** +## 1. **Function Description** Allow which client addresses can connect to IoTDB -**configuration file** +## 2. **Configuration File** conf/iotdb-system.properties conf/white.list -**configuration item** +## 3. **Configuration Item** iotdb-system.properties: @@ -57,7 +57,7 @@ Decide which IP addresses can connect to IoTDB 10.100.0.* ``` -**note** +## 4. **Note** 1. If the white list itself is cancelled via the session client, the current connection is not immediately disconnected. It is rejected the next time the connection is created. 2. 
If white.list is modified directly, it takes effect within one minute. If modified via the session client, it takes effect immediately, updating the values in memory and the white.list disk file. diff --git a/src/UserGuide/latest-Table/API/Programming-JDBC_apache.md b/src/UserGuide/latest-Table/API/Programming-JDBC_apache.md index 13c16fe39..69c64c77b 100644 --- a/src/UserGuide/latest-Table/API/Programming-JDBC_apache.md +++ b/src/UserGuide/latest-Table/API/Programming-JDBC_apache.md @@ -18,19 +18,20 @@ under the License. --> +# JDBC The IoTDB JDBC provides a standardized way to interact with the IoTDB database, allowing users to execute SQL statements from Java programs for managing databases and time-series data. It supports operations such as connecting to the database, creating, querying, updating, and deleting data, as well as batch insertion and querying of time-series data. **Note:** The current JDBC implementation is designed primarily for integration with third-party tools. High-performance writing **may not be achieved** when using JDBC for insert operations. For Java applications, it is recommended to use the **JAVA Native API** for optimal performance. -## Prerequisites +## 1. Prerequisites -### **Environment Requirements** +### 1.1 **Environment Requirements** - **JDK:** Version 1.8 or higher - **Maven:** Version 3.6 or higher -### **Adding Maven Dependencies** +### 1.2 **Adding Maven Dependencies** Add the following dependency to your Maven `pom.xml` file: @@ -44,13 +45,13 @@ Add the following dependency to your Maven `pom.xml` file: ``` -## Read and Write Operations +## 2. Read and Write Operations **Write Operations:** Perform database operations such as inserting data, creating databases, and creating time-series using the `execute` method. **Read Operations:** Execute queries using the `executeQuery` method and retrieve results via the `ResultSet` object. -### Method Overview +### 2.1 Method Overview | **Method Name** | **Description** | **Parameters** | **Return Value** | | ------------------------------------------------------------ | ----------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------- | @@ -63,7 +64,7 @@ Add the following dependency to your Maven `pom.xml` file: | ResultSet.next() | Moves to the next row in the result set | None | `boolean`: Whether the move was successful | | ResultSet.getString(int columnIndex) | Retrieves the string value of a specified column | `columnIndex`: Column index (starting from 1) | `String`: Column value | -## Sample Code +## 3. Sample Code **Note:** When using the Table Model, you must specify the `sql_dialect` parameter as `table` in the URL. Example: diff --git a/src/UserGuide/latest-Table/API/Programming-Java-Native-API_apache.md b/src/UserGuide/latest-Table/API/Programming-Java-Native-API_apache.md index 2fd8718be..ae2296b47 100644 --- a/src/UserGuide/latest-Table/API/Programming-Java-Native-API_apache.md +++ b/src/UserGuide/latest-Table/API/Programming-Java-Native-API_apache.md @@ -18,17 +18,18 @@ under the License. --> +# Java Native API IoTDB provides a Java native client driver and a session pool management mechanism. These tools enable developers to interact with IoTDB using object-oriented APIs, allowing time-series objects to be directly assembled and inserted into the database without constructing SQL statements. 
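As a quick orientation before the interface and builder classes are described in detail below, the following minimal sketch opens a single session, writes one row, and reads it back. It assumes a locally running IoTDB instance at `127.0.0.1:6667` with the default `root` user; the database and table names are illustrative, plain SQL statements are used for brevity (the SQL-free write path is `insert(Tablet)`), and the exact package names, builder methods, and column-category keywords should be verified against the client version declared under Prerequisites.

```java
import java.util.Collections;

import org.apache.iotdb.isession.ITableSession;
import org.apache.iotdb.isession.SessionDataSet;
import org.apache.iotdb.rpc.IoTDBConnectionException;
import org.apache.iotdb.rpc.StatementExecutionException;
import org.apache.iotdb.session.TableSessionBuilder;

public class TableModelQuickStart {

  public static void main(String[] args)
      throws IoTDBConnectionException, StatementExecutionException {
    // Connection settings are placeholders; point nodeUrls and credentials at your own deployment.
    try (ITableSession session =
        new TableSessionBuilder()
            .nodeUrls(Collections.singletonList("127.0.0.1:6667"))
            .username("root")
            .password("root")
            .build()) {

      // Plain SQL keeps the sketch short; insert(Tablet) is the SQL-free write path of ITableSession.
      session.executeNonQueryStatement("CREATE DATABASE IF NOT EXISTS example_db");
      session.executeNonQueryStatement("USE example_db");
      // The TAG/FIELD column categories below are assumptions about the table-model DDL;
      // adjust them to the syntax of the server version you run.
      session.executeNonQueryStatement(
          "CREATE TABLE IF NOT EXISTS sensor (device STRING TAG, temperature FLOAT FIELD)");
      session.executeNonQueryStatement(
          "INSERT INTO sensor(time, device, temperature) VALUES (1, 'd1', 36.5)");

      // Read the row back and print every returned record.
      try (SessionDataSet dataSet = session.executeQueryStatement("SELECT * FROM sensor")) {
        while (dataSet.hasNext()) {
          System.out.println(dataSet.next());
        }
      }
    }
  }
}
```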
It is recommended to use the `ITableSessionPool` for multi-threaded database operations to maximize efficiency. -## Prerequisites +## 1. Prerequisites -### Environment Requirements +### 1.1 Environment Requirements - **JDK**: Version 1.8 or higher - **Maven**: Version 3.6 or higher -### Adding Maven Dependencies +### 1.2 Adding Maven Dependencies ```XML @@ -40,9 +41,9 @@ IoTDB provides a Java native client driver and a session pool management mechani ``` -## Read and Write Operations +## 2. Read and Write Operations -### ITableSession Interface +### 2.1 ITableSession Interface The `ITableSession` interface defines basic operations for interacting with IoTDB, including data insertion, query execution, and session closure. Note that this interface is **not thread-safe**. @@ -124,7 +125,7 @@ public interface ITableSession extends AutoCloseable { } ``` -### TableSessionBuilder Class +### 2.2 TableSessionBuilder Class The `TableSessionBuilder` class is a builder for configuring and creating instances of the `ITableSession` interface. It allows developers to set connection parameters, query parameters, and security features. @@ -336,9 +337,9 @@ public class TableSessionBuilder { } ``` -## Session Pool +## 3. Session Pool -### ITableSessionPool Interface +### 3.1 ITableSessionPool Interface The `ITableSessionPool` interface manages a pool of `ITableSession` instances, enabling efficient reuse of connections and proper cleanup of resources. @@ -378,7 +379,7 @@ public interface ITableSessionPool { } ``` -### TableSessionPoolBuilder Class +### 3.2 TableSessionPoolBuilder Class The `TableSessionPoolBuilder` class is a builder for configuring and creating `ITableSessionPool` instances, supporting options like connection settings and pooling behavior. diff --git a/src/UserGuide/latest/Ecosystem-Integration/DBeaver.md b/src/UserGuide/latest/Ecosystem-Integration/DBeaver.md index cd28d1b38..9f0323b86 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/DBeaver.md +++ b/src/UserGuide/latest/Ecosystem-Integration/DBeaver.md @@ -23,11 +23,11 @@ DBeaver is a SQL client software application and a database administration tool. It can use the JDBC application programming interface (API) to interact with IoTDB via the JDBC driver. -## DBeaver Installation +## 1. DBeaver Installation * From DBeaver site: https://dbeaver.io/download/ -## IoTDB Installation +## 2. IoTDB Installation * Download binary version * From IoTDB site: https://iotdb.apache.org/Download/ @@ -35,7 +35,7 @@ DBeaver is a SQL client software application and a database administration tool. * Or compile from source code * See https://github.com/apache/iotdb -## Connect IoTDB and DBeaver +## 3. Connect IoTDB and DBeaver 1. Start IoTDB server diff --git a/src/UserGuide/latest/Ecosystem-Integration/DataEase.md b/src/UserGuide/latest/Ecosystem-Integration/DataEase.md index 021ed7d68..ddfbb62ce 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/DataEase.md +++ b/src/UserGuide/latest/Ecosystem-Integration/DataEase.md @@ -20,7 +20,7 @@ --> # DataEase -## Product Overview +## 1. Product Overview 1. Introduction to DataEase @@ -37,7 +37,7 @@
-## Installation Requirements +## 2. Installation Requirements | **Preparation Content** | **Version Requirements** | | :-------------------- | :----------------------------------------------------------- | @@ -46,7 +46,7 @@ | DataEase | Requires v1 series v1.18 version, please refer to the official [DataEase Installation Guide](https://dataease.io/docs/v2/installation/offline_INSTL_and_UPG/)(V2.x is currently not supported. For integration with other versions, please contact Timecho) | | DataEase-IoTDB Connector | Please contact Timecho for assistance | -## Installation Steps +## 3. Installation Steps Step 1: Please contact Timecho to obtain the file and unzip the installation package `iotdb-api-source-1.0.0.zip` @@ -88,16 +88,16 @@ Step 4: After startup, you can check whether the startup was successful through lsof -i:8097 // The port configured in the file where the IoTDB API Source listens ``` -## Instructions +## 4. Instructions -### Sign in DataEase +### 4.1 Sign in DataEase 1. Sign in DataEase,access address: `http://[target server IP address]:80`
-### Configure data source +### 4.2 Configure data source 1. Navigate to "Data Source".
@@ -153,7 +153,7 @@ Step 4: After startup, you can check whether the startup was successful through
-### Configure the Dataset +### 4.3 Configure the Dataset 1. Create API dataset: Navigate to "Data Set", click on the "+" on the top left corner, select "API dataset", and choose the directory where this dataset is located to enter the New API Dataset interface.
@@ -189,7 +189,7 @@ Step 4: After startup, you can check whether the startup was successful through
-### Configure Dashboard +### 4.4 Configure Dashboard 1. Navigate to "Dashboard", click on "+" to create a directory, then click on "+" of the directory and select "Create Dashboard".
diff --git a/src/UserGuide/latest/Ecosystem-Integration/Flink-IoTDB.md b/src/UserGuide/latest/Ecosystem-Integration/Flink-IoTDB.md index efb39723c..31176cd24 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Flink-IoTDB.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Flink-IoTDB.md @@ -23,12 +23,12 @@ IoTDB integration for [Apache Flink](https://flink.apache.org/). This module includes the IoTDB sink that allows a flink job to write events into timeseries, and the IoTDB source allowing reading data from IoTDB. -## IoTDBSink +## 1. IoTDBSink To use the `IoTDBSink`, you need construct an instance of it by specifying `IoTDBSinkOptions` and `IoTSerializationSchema` instances. The `IoTDBSink` send only one event after another by default, but you can change to batch by invoking `withBatchSize(int)`. -### Example +### 1.1 Example This example shows a case that sends data to a IoTDB server from a Flink job: @@ -115,17 +115,17 @@ public class FlinkIoTDBSink { ``` -### Usage +### 1.2 Usage * Launch the IoTDB server. * Run `org.apache.iotdb.flink.FlinkIoTDBSink.java` to run the flink job on local mini cluster. -## IoTDBSource +## 2. IoTDBSource To use the `IoTDBSource`, you need to construct an instance of `IoTDBSource` by specifying `IoTDBSourceOptions` and implementing the abstract method `convert()` in `IoTDBSource`. The `convert` methods defines how you want the row data to be transformed. -### Example +### 2.1 Example This example shows a case where data are read from IoTDB. ```java import org.apache.iotdb.flink.options.IoTDBSourceOptions; @@ -209,7 +209,7 @@ public class FlinkIoTDBSource { } ``` -### Usage +### 2.2 Usage Launch the IoTDB server. Run org.apache.iotdb.flink.FlinkIoTDBSource.java to run the flink job on local mini cluster. diff --git a/src/UserGuide/latest/Ecosystem-Integration/Flink-TsFile.md b/src/UserGuide/latest/Ecosystem-Integration/Flink-TsFile.md index e1ea626dd..79e29ab4a 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Flink-TsFile.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Flink-TsFile.md @@ -21,7 +21,7 @@ # Apache Flink(TsFile) -## About Flink-TsFile-Connector +## 1. About Flink-TsFile-Connector Flink-TsFile-Connector implements the support of Flink for external data sources of Tsfile type. This enables users to read and write Tsfile by Flink via DataStream/DataSet API. @@ -31,9 +31,9 @@ With this connector, you can * load a single TsFile or multiple TsFiles(only for DataSet), from either the local file system or hdfs, into Flink * load all files in a specific directory, from either the local file system or hdfs, into Flink -## Quick Start +## 2. Quick Start -### TsFileInputFormat Example +### 2.1 TsFileInputFormat Example 1. create TsFileInputFormat with default RowRowRecordParser. @@ -93,7 +93,7 @@ for (String s : result) { } ``` -### Example of TSRecordOutputFormat +### 2.2 Example of TSRecordOutputFormat 1. create TSRecordOutputFormat with default RowTSRecordConverter. diff --git a/src/UserGuide/latest/Ecosystem-Integration/Grafana-Connector.md b/src/UserGuide/latest/Ecosystem-Integration/Grafana-Connector.md index 92fb176fa..718c28c07 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Grafana-Connector.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Grafana-Connector.md @@ -23,14 +23,14 @@ Grafana is an open source volume metrics monitoring and visualization tool, which can be used to display time series data and application runtime analysis. 
Grafana supports Graphite, InfluxDB and other major time series databases as data sources. IoTDB-Grafana-Connector is a connector which we developed to show time series data in IoTDB by reading data from IoTDB and sends to Grafana(https://grafana.com/). Before using this tool, make sure Grafana and IoTDB are correctly installed and started. -## Installation and deployment +## 1. Installation and deployment -### Install Grafana +### 1.1 Install Grafana * Download url: https://grafana.com/grafana/download * Version >= 4.4.1 -### Install data source plugin +### 1.2 Install data source plugin * Plugin name: simple-json-datasource * Download url: https://github.com/grafana/simple-json-datasource @@ -64,7 +64,7 @@ Please try to find config file of grafana(eg. customer.ini in windows, and /etc/ allow_loading_unsigned_plugins = "grafana-simple-json-datasource" ``` -### Start Grafana +### 1.3 Start Grafana If Unix is used, Grafana will start automatically after installing, or you can run `sudo service grafana-server start` command. See more information [here](http://docs.grafana.org/installation/debian/). If Mac and `homebrew` are used to install Grafana, you can use `homebrew` to start Grafana. @@ -73,17 +73,17 @@ See more information [here](http://docs.grafana.org/installation/mac/). If Windows is used, start Grafana by executing grafana-server.exe, located in the bin directory, preferably from the command line. See more information [here](http://docs.grafana.org/installation/windows/). -## IoTDB installation +## 2. IoTDB installation See https://github.com/apache/iotdb -## IoTDB-Grafana-Connector installation +## 3. IoTDB-Grafana-Connector installation ```shell git clone https://github.com/apache/iotdb.git ``` -## Start IoTDB-Grafana-Connector +## 4. Start IoTDB-Grafana-Connector * Option one @@ -117,13 +117,13 @@ $ java -jar iotdb-grafana-connector-{version}.war To configure properties, move the `grafana-connector/src/main/resources/application.properties` to the same directory as the war package (`grafana/target`) -## Explore in Grafana +## 5. Explore in Grafana The default port of Grafana is 3000, see http://localhost:3000/ Username and password are both "admin" by default. -### Add data source +### 5.1 Add data source Select `Data Sources` and then `Add data source`, select `SimpleJson` in `Type` and `URL` is http://localhost:8888. After that, make sure IoTDB has been started, click "Save & Test", and "Data Source is working" will be shown to indicate successful configuration. @@ -131,13 +131,13 @@ After that, make sure IoTDB has been started, click "Save & Test", and "Data Sou -### Design in dashboard +### 5.2 Design in dashboard Add diagrams in dashboard and customize your query. See http://docs.grafana.org/guides/getting_started/ -## config grafana +## 6. config grafana ``` # ip and port of IoTDB diff --git a/src/UserGuide/latest/Ecosystem-Integration/Grafana-Plugin.md b/src/UserGuide/latest/Ecosystem-Integration/Grafana-Plugin.md index 0597fb124..afb2eaa93 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Grafana-Plugin.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Grafana-Plugin.md @@ -27,23 +27,23 @@ Grafana is an open source volume metrics monitoring and visualization tool, whic We developed the Grafana-Plugin for IoTDB, using the IoTDB REST service to present time series data and providing many visualization methods for time series data. Compared with previous IoTDB-Grafana-Connector, current Grafana-Plugin performs more efficiently and supports more query types. 
So, **we recommend using Grafana-Plugin instead of IoTDB-Grafana-Connector**. -## Installation and deployment +## 1. Installation and deployment -### Install Grafana +### 1.1 Install Grafana * Download url: https://grafana.com/grafana/download * Version >= 9.3.0 -### Acquisition method of grafana plugin +### 1.2 Acquisition method of grafana plugin #### Download apache-iotdb-datasource from Grafana's official website Download url:https://grafana.com/api/plugins/apache-iotdb-datasource/versions/1.0.0/download -### Install Grafana-Plugin +### 1.3 Install Grafana-Plugin -### Method 1: Install using the grafana cli tool (recommended) +#### Method 1: Install using the grafana cli tool (recommended) * Use the grafana cli tool to install apache-iotdb-datasource from the command line. The command content is as follows: @@ -51,11 +51,11 @@ Download url:https://grafana.com/api/plugins/apache-iotdb-datasource/versions/ grafana-cli plugins install apache-iotdb-datasource ``` -### Method 2: Install using the Grafana interface (recommended) +#### Method 2: Install using the Grafana interface (recommended) * Click on Configuration ->Plugins ->Search IoTDB from local Grafana to install the plugin -### Method 3: Manually install the grafana-plugin plugin (not recommended) +#### Method 3: Manually install the grafana-plugin plugin (not recommended) * Copy the front-end project target folder generated above to Grafana's plugin directory `${Grafana directory}\data\plugins\`。If there is no such directory, you can manually create it or start grafana and it will be created automatically. Of course, you can also modify the location of plugins. For details, please refer to the following instructions for modifying the location of Grafana's plugin directory. @@ -64,7 +64,7 @@ grafana-cli plugins install apache-iotdb-datasource For more details,please click [here](https://grafana.com/docs/grafana/latest/plugins/installation/) -### Start Grafana +### 1.4 Start Grafana Start Grafana with the following command in the Grafana directory: @@ -89,7 +89,7 @@ For more details,please click [here](https://grafana.com/docs/grafana/latest/i -### Configure IoTDB REST Service +### 1.5 Configure IoTDB REST Service * Modify `{iotdb directory}/conf/iotdb-system.properties` as following: @@ -104,9 +104,9 @@ rest_service_port=18080 Start IoTDB (restart if the IoTDB service is already started) -## How to use Grafana-Plugin +## 2. How to use Grafana-Plugin -### Access Grafana dashboard +### 2.1 Access Grafana dashboard Grafana displays data in a web page dashboard. Please open your browser and visit `http://:` when using it. @@ -115,7 +115,7 @@ Grafana displays data in a web page dashboard. Please open your browser and visi * The default login username and password are both `admin`. -### Add IoTDB as Data Source +### 2.2 Add IoTDB as Data Source Click the `Settings` icon on the left, select the `Data Source` option, and then click `Add data source`. @@ -135,7 +135,7 @@ Click `Save & Test`, and `Data source is working` will appear. -### Create a new Panel +### 2.3 Create a new Panel Click the `Dashboards` icon on the left, and select `Manage` option. @@ -195,7 +195,7 @@ Select a time series in the TIME-SERIES selection box, select a function in the -### Support for variables and template functions +### 2.4 Support for variables and template functions Both SQL: Full Customized and SQL: Drop-down List input methods support the variable and template functions of grafana. 
In the following example, raw input method is used, and aggregation is similar. @@ -246,7 +246,7 @@ In addition to the examples above, the following statements are supported: Tip: If the query field contains Boolean data, the result value will be converted to 1 by true and 0 by false. -### Grafana alert function +### 2.5 Grafana alert function This plugin supports Grafana alert function. @@ -293,6 +293,6 @@ For example, we have 3 conditions in the following order: Condition: B (Evaluate 10. We can also configure `Contact points` for alarms to receive alarm notifications. For more detailed operations, please refer to the official document (https://grafana.com/docs/grafana/latest/alerting/manage-notifications/create-contact-point/). -## More Details about Grafana +## 3. More Details about Grafana For more details about Grafana operation, please refer to the official Grafana documentation: http://docs.grafana.org/guides/getting_started/. diff --git a/src/UserGuide/latest/Ecosystem-Integration/Hive-TsFile.md b/src/UserGuide/latest/Ecosystem-Integration/Hive-TsFile.md index e8b4dc30d..227a1e383 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Hive-TsFile.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Hive-TsFile.md @@ -20,7 +20,7 @@ --> # Apache Hive(TsFile) -## About Hive-TsFile-Connector +## 1. About Hive-TsFile-Connector Hive-TsFile-Connector implements the support of Hive for external data sources of Tsfile type. This enables users to operate TsFile by Hive. @@ -31,7 +31,7 @@ With this connector, you can * Query the tsfile through HQL. * As of now, the write operation is not supported in hive-connector. So, insert operation in HQL is not allowed while operating tsfile through hive. -## System Requirements +## 2. System Requirements |Hadoop Version |Hive Version | Java Version | TsFile | |------------- |------------ | ------------ |------------ | @@ -39,7 +39,7 @@ With this connector, you can > Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/iotdb/tree/master/tsfile. -## Data Type Correspondence +## 3. Data Type Correspondence | TsFile data type | Hive field type | | ---------------- | --------------- | @@ -51,7 +51,7 @@ With this connector, you can | TEXT | STRING | -## Add Dependency For Hive +## 4. Add Dependency For Hive To use hive-connector in hive, we should add the hive-connector jar into hive. @@ -67,7 +67,7 @@ Added resources: [/Users/hive/iotdb/hive-connector/target/hive-connector-1.0.0-j ``` -## Create Tsfile-backed Hive tables +## 5. Create Tsfile-backed Hive tables To create a Tsfile-backed table, specify the `serde` as `org.apache.iotdb.hive.TsFileSerDe`, specify the `inputformat` as `org.apache.iotdb.hive.TSFHiveInputFormat`, @@ -110,7 +110,7 @@ Time taken: 0.053 seconds, Fetched: 2 row(s) ``` At this point, the Tsfile-backed table can be worked with in Hive like any other table. -## Query from TsFile-backed Hive tables +## 6. Query from TsFile-backed Hive tables Before we do any queries, we should set the `hive.input.format` in hive by executing the following command. @@ -123,7 +123,7 @@ We can use any query operations through HQL to analyse it. 
For example: -### Select Clause Example +### 6.1 Select Clause Example ``` hive> select * from only_sensor_1 limit 10; @@ -141,7 +141,7 @@ OK Time taken: 1.464 seconds, Fetched: 10 row(s) ``` -### Aggregate Clause Example +### 6.2 Aggregate Clause Example ``` hive> select count(*) from only_sensor_1; diff --git a/src/UserGuide/latest/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md b/src/UserGuide/latest/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md index 10f07ed73..ac82207e8 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Ignition-IoTDB-plugin_timecho.md @@ -21,7 +21,7 @@ # Ignition -## Product Overview +## 1. Product Overview 1. Introduction to Ignition @@ -38,7 +38,7 @@ ![](/img/20240703114443.png) -## Installation Requirements +## 2. Installation Requirements | **Preparation Content** | Version Requirements | | ------------------------------- | ------------------------------------------------------------ | @@ -47,15 +47,15 @@ | Ignition-IoTDB Connector module | Please contact Business to obtain | | Ignition-IoTDB With JDBC module | Download address:https://repo1.maven.org/maven2/org/apache/iotdb/iotdb-jdbc/ | -## Instruction Manual For Ignition-IoTDB Connector +## 3. Instruction Manual For Ignition-IoTDB Connector -### Introduce +### 3.1 Introduce The Ignition-IoTDB Connector module can store data in a database connection associated with the historical database provider. The data is directly stored in a table in the SQL database based on its data type, as well as a millisecond timestamp. Store data only when making changes based on the value pattern and dead zone settings on each label, thus avoiding duplicate and unnecessary data storage. The Ignition-IoTDB Connector provides the ability to store the data collected by Ignition into IoTDB. -### Installation Steps +### 3.2 Installation Steps Step 1: Enter the `Configuration` - `System` - `Modules` module and click on the `Install or Upgrade a Module` button at the bottom @@ -157,7 +157,7 @@ The configuration content is as follows: -### Instructions +### 3.3 Instructions #### Configure Historical Data Storage @@ -233,13 +233,13 @@ The configuration content is as follows: system.iotdb.query("IoTDB", "select * from root.db.Sine where time > 1709563427247") ``` -## Ignition-IoTDB With JDBC +## 4. Ignition-IoTDB With JDBC -### Introduce +### 4.1 Introduce Ignition-IoTDB With JDBC provides a JDBC driver that allows users to connect and query the Ignition IoTDB database using standard JDBC APIs -### Installation Steps +### 4.2 Installation Steps Step 1: Enter the `Configuration` - `Databases` -`Drivers` module and create the `Translator` @@ -253,7 +253,7 @@ Step 3: Enter the `Configuration` - `Databases` - `Connections` module, create a ![](/img/Ignition-IoTDBWithJDBC-3.png) -### Instructions +### 4.3 Instructions #### Data Writing diff --git a/src/UserGuide/latest/Ecosystem-Integration/NiFi-IoTDB.md b/src/UserGuide/latest/Ecosystem-Integration/NiFi-IoTDB.md index 531c5119c..b45eefb0b 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/NiFi-IoTDB.md +++ b/src/UserGuide/latest/Ecosystem-Integration/NiFi-IoTDB.md @@ -20,7 +20,7 @@ --> # Apache NiFi -## Apache NiFi Introduction +## 1. Apache NiFi Introduction Apache NiFi is an easy to use, powerful, and reliable system to process and distribute data. 
@@ -46,11 +46,11 @@ Apache NiFi includes the following capabilities: * Multi-tenant authorization and policy management * Standard protocols for encrypted communication including TLS and SSH -## PutIoTDBRecord +## 2. PutIoTDBRecord This is a processor that reads the content of the incoming FlowFile as individual records using the configured 'Record Reader' and writes them to Apache IoTDB using native interface. -### Properties of PutIoTDBRecord +### 2.1 Properties of PutIoTDBRecord | property | description | default value | necessary | |---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| ------------- | --------- | @@ -65,7 +65,7 @@ This is a processor that reads the content of the incoming FlowFile as individua | Aligned | Whether using aligned interface? It can be updated by expression language. | false | false | | MaxRowNumber | Specifies the max row number of each tablet. It can be updated by expression language. | 1024 | false | -### Inferred Schema of Flowfile +### 2.2 Inferred Schema of Flowfile There are a couple of rules about flowfile: @@ -75,7 +75,7 @@ There are a couple of rules about flowfile: 4. Fields excepted time must start with `root.`. 5. The supported data types are `INT`, `LONG`, `FLOAT`, `DOUBLE`, `BOOLEAN`, `TEXT`. -### Convert Schema by property +### 2.3 Convert Schema by property As mentioned above, converting schema by property which is more flexible and stronger than inferred schema. @@ -108,7 +108,7 @@ The structure of property `Schema`: 6. The supported `encoding` are `PLAIN`, `DICTIONARY`, `RLE`, `DIFF`, `TS_2DIFF`, `BITMAP`, `GORILLA_V1`, `REGULAR`, `GORILLA`, `CHIMP`, `SPRINTZ`, `RLBE`. 7. The supported `compressionType` are `UNCOMPRESSED`, `SNAPPY`, `GZIP`, `LZO`, `SDT`, `PAA`, `PLA`, `LZ4`, `ZSTD`, `LZMA2`. -## Relationships +## 3. Relationships | relationship | description | | ------------ | ---------------------------------------------------- | @@ -116,11 +116,11 @@ The structure of property `Schema`: | failure | The shema or flow file is abnormal. | -## QueryIoTDBRecord +## 4. QueryIoTDBRecord This is a processor that reads the sql query from the incoming FlowFile and using it to query the result from IoTDB using native interface. Then it use the configured 'Record Writer' to generate the flowfile -### Properties of QueryIoTDBRecord +### 4.1 Properties of QueryIoTDBRecord | property | description | default value | necessary | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| --------- | @@ -133,7 +133,7 @@ This is a processor that reads the sql query from the incoming FlowFile and usin | iotdb-query-chunk-size | Chunking can be used to return results in a stream of smaller batches (each has a partial results up to a chunk size) rather than as a single response. Chunking queries can return an unlimited number of rows. Note: Chunking is enable when result chunk size is greater than 0 | 0 | false | -## Relationships +## 5. 
Relationships | relationship | description | | ------------ | ---------------------------------------------------- | diff --git a/src/UserGuide/latest/Ecosystem-Integration/Spark-IoTDB.md b/src/UserGuide/latest/Ecosystem-Integration/Spark-IoTDB.md index 7e03da5c2..3193ce505 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Spark-IoTDB.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Spark-IoTDB.md @@ -21,7 +21,7 @@ # Apache Spark(IoTDB) -## Supported Versions +## 1. Supported Versions Supported versions of Spark and Scala are as follows: @@ -29,16 +29,16 @@ Supported versions of Spark and Scala are as follows: |----------------|---------------| | `2.4.0-latest` | `2.11, 2.12` | -## Precautions +## 2. Precautions 1. The current version of `spark-iotdb-connector` supports Scala `2.11` and `2.12`, but not `2.13`. 2. `spark-iotdb-connector` supports usage in Spark with Java, Scala, and PySpark. -## Deployment +## 3. Deployment `spark-iotdb-connector` has two use cases: IDE development and `spark-shell` debugging. -### IDE Development +### 3.1 IDE Development For IDE development, simply add the following dependency to the `pom.xml` file: @@ -51,7 +51,7 @@ For IDE development, simply add the following dependency to the `pom.xml` file: ``` -### `spark-shell` Debugging +### 3.2 `spark-shell` Debugging To use `spark-iotdb-connector` in `spark-shell`, you need to download the `with-dependencies` version of the jar package from the official website. After that, copy the jar package to the `${SPARK_HOME}/jars` directory. @@ -81,9 +81,9 @@ At last, copy the jar package to the ${SPARK_HOME}/jars directory. Simply execut cp iotdb-jdbc-{version}-SNAPSHOT-jar-with-dependencies.jar $SPARK_HOME/jars/ ``` -## Usage +## 4. Usage -### Parameters +### 4.1 Parameters | Parameter | Description | Default Value | Scope | Can be Empty | |--------------|--------------------------------------------------------------------------------------------------------------|---------------|-------------|--------------| @@ -95,7 +95,7 @@ cp iotdb-jdbc-{version}-SNAPSHOT-jar-with-dependencies.jar $SPARK_HOME/jars/ | lowerBound | The start timestamp of the query (inclusive) | 0 | read | true | | upperBound | The end timestamp of the query (inclusive) | 0 | read | true | -### Reading Data from IoTDB +### 4.2 Reading Data from IoTDB Here is an example that demonstrates how to read data from IoTDB into a DataFrame: @@ -117,7 +117,7 @@ df.printSchema() df.show() ``` -### Writing Data to IoTDB +### 4.3 Writing Data to IoTDB Here is an example that demonstrates how to write data to IoTDB: @@ -163,7 +163,7 @@ dfWithColumn.write.format("org.apache.iotdb.spark.db") .save ``` -### Wide and Narrow Table Conversion +### 4.4 Wide and Narrow Table Conversion Here are examples of how to convert between wide and narrow tables: @@ -184,7 +184,7 @@ import org.apache.iotdb.spark.db._ val wide_df = Transformer.toWideForm(spark, narrow_df) ``` -## Wide and Narrow Tables +## 5. Wide and Narrow Tables Using the TsFile structure as an example: there are three measurements in the TsFile pattern, namely `Status`, `Temperature`, and `Hardware`. 
The basic information for each of these three measurements is as diff --git a/src/UserGuide/latest/Ecosystem-Integration/Spark-TsFile.md b/src/UserGuide/latest/Ecosystem-Integration/Spark-TsFile.md index 151d81e14..6df313fcb 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Spark-TsFile.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Spark-TsFile.md @@ -21,7 +21,7 @@ # Apache Spark(TsFile) -## About Spark-TsFile-Connector +## 1. About Spark-TsFile-Connector Spark-TsFile-Connector implements the support of Spark for external data sources of Tsfile type. This enables users to read, write and query Tsfile by Spark. @@ -31,7 +31,7 @@ With this connector, you can * load all files in a specific directory, from either the local file system or hdfs, into Spark * write data from Spark into TsFile -## System Requirements +## 2. System Requirements |Spark Version | Scala Version | Java Version | TsFile | |:-------------: | :-------------: | :------------: |:------------: | @@ -40,8 +40,8 @@ With this connector, you can > Note: For more information about how to download and use TsFile, please see the following link: https://github.com/apache/iotdb/tree/master/tsfile. > Currently we only support spark version 2.4.3 and there are some known issue on 2.4.7, do no use it -## Quick Start -### Local Mode +## 3. Quick Start +### 3.1 Local Mode Start Spark with TsFile-Spark-Connector in local mode: @@ -56,7 +56,7 @@ Note: * See https://github.com/apache/iotdb/tree/master/tsfile for how to get TsFile. -### Distributed Mode +### 3.2 Distributed Mode Start Spark with TsFile-Spark-Connector in distributed mode (That is, the spark cluster is connected by spark-shell): @@ -70,7 +70,7 @@ Note: * Multiple jar packages are separated by commas without any spaces. * See https://github.com/apache/iotdb/tree/master/tsfile for how to get TsFile. -## Data Type Correspondence +## 4. Data Type Correspondence | TsFile data type | SparkSQL data type| | --------------| -------------- | @@ -81,7 +81,7 @@ Note: | DOUBLE | DoubleType | | TEXT | StringType | -## Schema Inference +## 5. Schema Inference The way to display TsFile is dependent on the schema. Take the following TsFile structure as an example: There are three measurements in the TsFile schema: status, temperature, and hardware. The basic information of these three measurements is listed: @@ -122,7 +122,7 @@ You can also use narrow table form which as follows: (You can see part 6 about h -## Scala API +## 6. Scala API NOTE: Remember to assign necessary read and write permissions in advance. diff --git a/src/UserGuide/latest/Ecosystem-Integration/Thingsboard.md b/src/UserGuide/latest/Ecosystem-Integration/Thingsboard.md index 024d3ed46..f304c277f 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Thingsboard.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Thingsboard.md @@ -20,7 +20,7 @@ --> # ThingsBoard -## Product Overview +## 1. Product Overview 1. Introduction to ThingsBoard @@ -32,11 +32,11 @@ ThingsBoard IoTDB provides the ability to store data from ThingsBoard to IoTDB, and also supports reading data information from the `root.thingsboard` database in ThingsBoard. The detailed architecture diagram is shown in yellow in the following figure. -### Relationship Diagram +### 1.1 Relationship Diagram ![](/img/Thingsboard-2.png) -## Installation Requirements +## 2. 
Installation Requirements | **Preparation Content** | **Version Requirements** | | :---------------------------------------- | :----------------------------------------------------------- | @@ -44,7 +44,7 @@ | IoTDB |IoTDB v1.3.0 or above. Please refer to the [Deployment guidance](../Deployment-and-Maintenance/IoTDB-Package.md) | | ThingsBoard
(IoTDB adapted version) | Please contact Timecho staff to obtain the installation package. Detailed installation steps are provided below. | -## Installation Steps +## 3. Installation Steps Please refer to the installation steps on [ThingsBoard Official Website](https://thingsboard.io/docs/user-guide/install/ubuntu/),wherein: @@ -73,7 +73,7 @@ export IoTDB_MAX_SIZE=200 ## The maximum number of sessions in the session export IoTDB_DATABASE=root.thingsboard ## Thingsboard data is written to the database stored in IoTDB, supporting customization ``` -## Instructions +## 4. Instructions 1. Set up devices and connect datasource: Add a new device under "Entities" - "Devices" in Thingsboard and send data to the specified devices through gateway. diff --git a/src/UserGuide/latest/Ecosystem-Integration/Zeppelin-IoTDB.md b/src/UserGuide/latest/Ecosystem-Integration/Zeppelin-IoTDB.md index a572cdfca..f72578e84 100644 --- a/src/UserGuide/latest/Ecosystem-Integration/Zeppelin-IoTDB.md +++ b/src/UserGuide/latest/Ecosystem-Integration/Zeppelin-IoTDB.md @@ -21,7 +21,7 @@ # Apache Zeppelin -## About Zeppelin +## 1. About Zeppelin Zeppelin is a web-based notebook that enables interactive data analytics. You can connect to data sources and perform interactive operations with SQL, Scala, etc. The operations can be saved as documents, just like Jupyter. Zeppelin has already supported many data sources, including Spark, ElasticSearch, Cassandra, and InfluxDB. Now, we have enabled Zeppelin to operate IoTDB via SQL. @@ -29,9 +29,9 @@ Zeppelin is a web-based notebook that enables interactive data analytics. You ca -## Zeppelin-IoTDB Interpreter +## 2. Zeppelin-IoTDB Interpreter -### System Requirements +### 2.1 System Requirements | IoTDB Version | Java Version | Zeppelin Version | | :-----------: | :-----------: | :--------------: | @@ -46,7 +46,7 @@ Install Zeppelin: Suppose Zeppelin is placed at `$Zeppelin_HOME`. -### Build Interpreter +### 2.2 Build Interpreter ``` cd $IoTDB_HOME @@ -61,7 +61,7 @@ The interpreter will be in the folder: -### Install Interpreter +### 2.3 Install Interpreter Once you have built your interpreter, create a new folder under the Zeppelin interpreter directory and put the built interpreter into it. @@ -71,7 +71,7 @@ Once you have built your interpreter, create a new folder under the Zeppelin int cp $IoTDB_HOME/zeppelin-interpreter/target/zeppelin-{version}-SNAPSHOT-jar-with-dependencies.jar $Zeppelin_HOME/interpreter/iotdb ``` -### Modify Configuration +### 2.4 Modify Configuration Enter `$Zeppelin_HOME/conf` and use template to create Zeppelin configuration file: @@ -82,7 +82,7 @@ cp zeppelin-site.xml.template zeppelin-site.xml Open the zeppelin-site.xml file and change the `zeppelin.server.addr` item to `0.0.0.0` -### Running Zeppelin and IoTDB +### 2.5 Running Zeppelin and IoTDB Go to `$Zeppelin_HOME` and start Zeppelin by running: @@ -110,7 +110,7 @@ Go to `$IoTDB_HOME` and start IoTDB server: -## Use Zeppelin-IoTDB +## 3. Use Zeppelin-IoTDB Wait for Zeppelin server to start, then visit http://127.0.0.1:8080/ @@ -164,7 +164,7 @@ The above demo notebook can be found at `$IoTDB_HOME/zeppelin-interpreter/Zeppe -## Configuration +## 4. 
Configuration You can configure the connection parameters in http://127.0.0.1:8080/#/interpreter : diff --git a/src/UserGuide/latest/Reference/Common-Config-Manual.md b/src/UserGuide/latest/Reference/Common-Config-Manual.md index 57d7c1e54..ca38812b8 100644 --- a/src/UserGuide/latest/Reference/Common-Config-Manual.md +++ b/src/UserGuide/latest/Reference/Common-Config-Manual.md @@ -26,16 +26,16 @@ IoTDB common files for ConfigNode and DataNode are under `conf`. * `iotdb-system.properties`:IoTDB system configurations. -## Effective +## 1. Effective Different configuration parameters take effect in the following three ways: + **Only allowed to be modified in first start up:** Can't be modified after first start, otherwise the ConfigNode/DataNode cannot start. + **After restarting system:** Can be modified after the ConfigNode/DataNode first start, but take effect after restart. + **hot-load:** Can be modified while the ConfigNode/DataNode is running, and trigger through sending the command(sql) `load configuration` or `set configuration` to the IoTDB server by client or session. -## Configuration File +## 2. Configuration File -### Replication Configuration +### 2.1 Replication Configuration * config\_node\_consensus\_protocol\_class @@ -82,7 +82,7 @@ Different configuration parameters take effect in the following three ways: | Default | org.apache.iotdb.consensus.simple.SimpleConsensus | | Effective | Only allowed to be modified in first start up | -### Load balancing Configuration +### 2.2 Load balancing Configuration * series\_partition\_slot\_num @@ -192,7 +192,7 @@ Different configuration parameters take effect in the following three ways: | Default | true | | Effective | After restarting system | -### Cluster Management +### 2.3 Cluster Management * cluster\_name @@ -233,7 +233,7 @@ Different configuration parameters take effect in the following three ways: | Default | 0.05 | | Effective | After restarting system | -### Memory Control Configuration +### 2.4 Memory Control Configuration * datanode\_memory\_proportion @@ -370,7 +370,7 @@ Different configuration parameters take effect in the following three ways: |Default| 1000 | |Effective|After restarting system| -### Schema Engine Configuration +### 2.5 Schema Engine Configuration * schema\_engine\_mode @@ -435,7 +435,7 @@ Different configuration parameters take effect in the following three ways: |Default| 10000 | |Effective|After restarting system| -### Configurations for creating schema automatically +### 2.6 Configurations for creating schema automatically * enable\_auto\_create\_schema @@ -491,7 +491,7 @@ Different configuration parameters take effect in the following three ways: | Default | FLOAT | | Effective | After restarting system | -### Query Configurations +### 2.7 Query Configurations * read\_consistency\_level @@ -636,7 +636,7 @@ Different configuration parameters take effect in the following three ways: |Default| 100000 | |Effective|After restarting system| -### TTL Configuration +### 2.8 TTL Configuration * ttl\_check\_interval | Name | ttl\_check\_interval | @@ -665,7 +665,7 @@ Different configuration parameters take effect in the following three ways: | Effective | After restarting system | -### Storage Engine Configuration +### 2.9 Storage Engine Configuration * timestamp\_precision @@ -839,7 +839,7 @@ Different configuration parameters take effect in the following three ways: | Default | 10 | | Effective | After restarting system | -### Compaction Configurations +### 2.10 Compaction Configurations * 
enable\_seq\_space\_compaction @@ -1138,7 +1138,7 @@ Different configuration parameters take effect in the following three ways: |Default| 4 | |Effective| hot-load | -### Write Ahead Log Configuration +### 2.11 Write Ahead Log Configuration * wal\_mode @@ -1239,7 +1239,7 @@ Different configuration parameters take effect in the following three ways: | Default | 20000 | | Effective | hot-load | -### TsFile Configurations +### 2.12 TsFile Configurations * group\_size\_in\_byte @@ -1332,7 +1332,7 @@ Different configuration parameters take effect in the following three ways: | Effective | After restarting system | -### Authorization Configuration +### 2.13 Authorization Configuration * authorizer\_provider\_class @@ -1389,7 +1389,7 @@ Different configuration parameters take effect in the following three ways: | Default | 30 | | Effective | After restarting system | -### UDF Configuration +### 2.14 UDF Configuration * udf\_initial\_byte\_array\_length\_for\_memory\_control @@ -1436,7 +1436,7 @@ Different configuration parameters take effect in the following three ways: | Default | ext/udf(Windows:ext\\udf) | | Effective | After restarting system | -### Trigger Configuration +### 2.15 Trigger Configuration * trigger\_lib\_dir @@ -1458,7 +1458,7 @@ Different configuration parameters take effect in the following three ways: | Effective | After restarting system | -### SELECT-INTO +### 2.16 SELECT-INTO * into\_operation\_buffer\_size\_in\_byte @@ -1488,7 +1488,7 @@ Different configuration parameters take effect in the following three ways: | Default | 2 | | Effective | After restarting system | -### Continuous Query +### 2.17 Continuous Query * continuous\_query\_execution\_thread @@ -1508,7 +1508,7 @@ Different configuration parameters take effect in the following three ways: | Default | 1s | | Effective | After restarting system | -### PIPE Configuration +### 2.18 PIPE Configuration * pipe_lib_dir @@ -1582,7 +1582,7 @@ Different configuration parameters take effect in the following three ways: | Default Value | -1 | | Effective | Can be hot-loaded | -### IOTConsensus Configuration +### 2.19 IOTConsensus Configuration * data_region_iot_max_log_entries_num_per_batch @@ -1620,7 +1620,7 @@ Different configuration parameters take effect in the following three ways: | Default | 0.6 | | Effective | After restarting system | -### RatisConsensus Configuration +### 2.20 RatisConsensus Configuration * config\_node\_ratis\_log\_appender\_buffer\_size\_max @@ -2074,7 +2074,7 @@ Different configuration parameters take effect in the following three ways: | Default | 86400 (seconds) | | Effective | After restarting system | -### Procedure Configuration +### 2.21 Procedure Configuration * procedure\_core\_worker\_thread\_count @@ -2105,7 +2105,7 @@ Different configuration parameters take effect in the following three ways: | Default | 800 | | Effective | After restarting system | -### MQTT Broker Configuration +### 2.22 MQTT Broker Configuration * enable\_mqtt\_service @@ -2164,7 +2164,7 @@ Different configuration parameters take effect in the following three ways: -#### TsFile Active Listening&Loading Function Configuration +### 2.23 TsFile Active Listening&Loading Function Configuration * load\_active\_listening\_enable diff --git a/src/UserGuide/latest/Reference/ConfigNode-Config-Manual.md b/src/UserGuide/latest/Reference/ConfigNode-Config-Manual.md index 80c2cbaf7..7a2b8e6f6 100644 --- a/src/UserGuide/latest/Reference/ConfigNode-Config-Manual.md +++ b/src/UserGuide/latest/Reference/ConfigNode-Config-Manual.md 
@@ -27,7 +27,7 @@ IoTDB ConfigNode files are under `conf`. * `iotdb-system.properties`:IoTDB system configurations. -## Environment Configuration File(confignode-env.sh/bat) +## 1. Environment Configuration File (confignode-env.sh/bat) The environment configuration file is mainly used to configure the Java environment related parameters when ConfigNode is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the ConfigNode starts. @@ -61,11 +61,11 @@ The details of each parameter are as follows: |Effective|After restarting system| -## ConfigNode Configuration File (iotdb-system.properties) +## 2. ConfigNode Configuration File (iotdb-system.properties) The global configuration of cluster is in ConfigNode. -### Config Node RPC Configuration +### 2.1 Config Node RPC Configuration * cn\_internal\_address @@ -85,7 +85,7 @@ The global configuration of cluster is in ConfigNode. |Default| 10710 | |Effective|Only allowed to be modified in first start up| -### Consensus +### 2.2 Consensus * cn\_consensus\_port @@ -96,7 +96,7 @@ The global configuration of cluster is in ConfigNode. |Default| 10720 | |Effective|Only allowed to be modified in first start up| -### SeedConfigNode +### 2.3 SeedConfigNode * cn\_seed\_config\_node @@ -107,7 +107,7 @@ The global configuration of cluster is in ConfigNode. |Default| 127.0.0.1:10710 | |Effective| Only allowed to be modified in first start up | -### Directory configuration +### 2.4 Directory Configuration * cn\_system\_dir @@ -127,7 +127,7 @@ The global configuration of cluster is in ConfigNode. |Default| data/confignode/consensus(Windows:data\\confignode\\consensus) | |Effective| After restarting system | -### Thrift RPC configuration +### 2.5 Thrift RPC Configuration * cn\_rpc\_thrift\_compression\_enable @@ -220,4 +220,4 @@ The global configuration of cluster is in ConfigNode. | Default | 300 | | Effective | After restarting system | -### Metric Configuration +### 2.6 Metric Configuration diff --git a/src/UserGuide/latest/Reference/DataNode-Config-Manual_apache.md b/src/UserGuide/latest/Reference/DataNode-Config-Manual_apache.md index b568ab7ad..d452a6673 100644 --- a/src/UserGuide/latest/Reference/DataNode-Config-Manual_apache.md +++ b/src/UserGuide/latest/Reference/DataNode-Config-Manual_apache.md @@ -27,14 +27,14 @@ We use the same configuration files for IoTDB DataNode and Standalone version, a * `iotdb-system.properties`:IoTDB system configurations. -## Hot Modification Configuration +## 1. Hot Modification Configuration For the convenience of users, IoTDB provides users with hot modification function, that is, modifying some configuration parameters in `iotdb-system.properties` during the system operation and applying them to the system immediately. In the parameters described below, these parameters whose way of `Effective` is `hot-load` support hot modification. Trigger way: The client sends the command(sql) `load configuration` or `set configuration` to the IoTDB server. -## Environment Configuration File(datanode-env.sh/bat) +## 2. Environment Configuration File (datanode-env.sh/bat) The environment configuration file is mainly used to configure the Java environment related parameters when DataNode is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the DataNode starts. @@ -94,7 +94,7 @@ The details of each parameter are as follows: |Default|127.0.0.1| |Effective|After restarting system| -## JMX Authorization +## 3. 
JMX Authorization We **STRONGLY RECOMMENDED** you CHANGE the PASSWORD for the JMX remote connection. @@ -102,9 +102,9 @@ The user and passwords are in ${IOTDB\_CONF}/conf/jmx.password. The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. -## DataNode/Standalone Configuration File (iotdb-system.properties) +## 4. DataNode/Standalone Configuration File (iotdb-system.properties) -### Data Node RPC Configuration +### 4.1 Data Node RPC Configuration * dn\_rpc\_address @@ -178,7 +178,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 5000 | |Effective| After restarting system | -### SSL Configuration +### 4.2 SSL Configuration * enable\_thrift\_ssl @@ -216,7 +216,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| "" | |Effective| After restarting system | -### SeedConfigNode +### 4.3 SeedConfigNode * dn\_seed\_config\_node @@ -227,7 +227,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 127.0.0.1:10710 | |Effective| Only allowed to be modified in first start up | -### Connection Configuration +### 4.4 Connection Configuration * dn\_rpc\_thrift\_compression\_enable @@ -320,7 +320,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | 300 | | Effective | After restarting system | -### Dictionary Configuration +### 4.5 Dictionary Configuration * dn\_system\_dir @@ -385,9 +385,9 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | data/datanode/sync | | Effective | After restarting system | -### Metric Configuration +### 4.6 Metric Configuration -## Enable GC log +## 5. Enable GC log GC log is off by default. For performance tuning, you may want to collect the GC info. @@ -405,7 +405,7 @@ sbin\start-datanode.bat printgc GC log is stored at `IOTDB_HOME/logs/gc.log`. There will be at most 10 gc.log.* files and each one can reach to 10MB. -### REST Service Configuration +### 5.1 REST Service Configuration * enable\_rest\_service diff --git a/src/UserGuide/latest/Reference/DataNode-Config-Manual_timecho.md b/src/UserGuide/latest/Reference/DataNode-Config-Manual_timecho.md index 94ede5013..4dbe77f53 100644 --- a/src/UserGuide/latest/Reference/DataNode-Config-Manual_timecho.md +++ b/src/UserGuide/latest/Reference/DataNode-Config-Manual_timecho.md @@ -27,14 +27,14 @@ We use the same configuration files for IoTDB DataNode and Standalone version, a * `iotdb-system.properties`:IoTDB system configurations. -## Hot Modification Configuration +## 1. Hot Modification Configuration For the convenience of users, IoTDB provides users with hot modification function, that is, modifying some configuration parameters in `iotdb-system.properties` during the system operation and applying them to the system immediately. In the parameters described below, these parameters whose way of `Effective` is `hot-load` support hot modification. Trigger way: The client sends the command(sql) `load configuration` or `set configuration` to the IoTDB server. -## Environment Configuration File(datanode-env.sh/bat) +## 2. Environment Configuration File(datanode-env.sh/bat) The environment configuration file is mainly used to configure the Java environment related parameters when DataNode is running, such as JVM related configuration. This part of the configuration is passed to the JVM when the DataNode starts. @@ -94,7 +94,7 @@ The details of each parameter are as follows: |Default|127.0.0.1| |Effective|After restarting system| -## JMX Authorization +## 3. 
JMX Authorization We **STRONGLY RECOMMENDED** you CHANGE the PASSWORD for the JMX remote connection. @@ -102,9 +102,9 @@ The user and passwords are in ${IOTDB\_CONF}/conf/jmx.password. The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. -## DataNode/Standalone Configuration File (iotdb-system.properties) +## 4. DataNode/Standalone Configuration File (iotdb-system.properties) -### Data Node RPC Configuration +### 4.1 Data Node RPC Configuration * dn\_rpc\_address @@ -178,7 +178,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 5000 | |Effective| After restarting system | -### SSL Configuration +### 4.2 SSL Configuration * enable\_thrift\_ssl @@ -216,7 +216,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| "" | |Effective| After restarting system | -### SeedConfigNode +### 4.3 SeedConfigNode * dn\_seed\_config\_node @@ -227,7 +227,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. |Default| 127.0.0.1:10710 | |Effective| Only allowed to be modified in first start up | -### Connection Configuration +### 4.4 Connection Configuration * dn\_rpc\_thrift\_compression\_enable @@ -320,7 +320,7 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | 300 | | Effective | After restarting system | -### Dictionary Configuration +### 4.5 Dictionary Configuration * dn\_system\_dir @@ -385,9 +385,9 @@ The permission definitions are in ${IOTDB\_CONF}/conf/jmx.access. | Default | data/datanode/sync | | Effective | After restarting system | -### Metric Configuration +### 4.6 Metric Configuration -## Enable GC log +## 5. Enable GC log GC log is off by default. For performance tuning, you may want to collect the GC info. @@ -405,7 +405,7 @@ sbin\start-datanode.bat printgc GC log is stored at `IOTDB_HOME/logs/gc.log`. There will be at most 10 gc.log.* files and each one can reach to 10MB. -### REST Service Configuration +### 5.1 REST Service Configuration * enable\_rest\_service diff --git a/src/UserGuide/latest/SQL-Manual/Function-and-Expression.md b/src/UserGuide/latest/SQL-Manual/Function-and-Expression.md index c208d6f17..3c315f44f 100644 --- a/src/UserGuide/latest/SQL-Manual/Function-and-Expression.md +++ b/src/UserGuide/latest/SQL-Manual/Function-and-Expression.md @@ -19,11 +19,11 @@ specific language governing permissions and limitations under the License. ---> +--> -## Arithmetic Operators and Functions +## 1. Arithmetic Operators and Functions -### Arithmetic Operators +### 1.1 Arithmetic Operators #### Unary Arithmetic Operators @@ -65,7 +65,7 @@ Total line number = 5 It costs 0.014s ``` -### Arithmetic Functions +### 1.2 Arithmetic Functions Currently, IoTDB supports the following mathematical functions. The behavior of these mathematical functions is consistent with the behavior of these functions in the Java Math standard library. @@ -156,9 +156,9 @@ It costs 0.059s --> -## Comparison Operators and Functions +## 2. Comparison Operators and Functions -### Basic comparison operators +### 2.1 Basic comparison operators Supported operators `>`, `>=`, `<`, `<=`, `==`, `!=` (or `<>` ) @@ -188,7 +188,7 @@ IoTDB> select a, b, a > 10, a <= b, !(a <= b), a > 10 && a > b from root.test; +-----------------------------+-----------+-----------+----------------+--------------------------+---------------------------+------------------------------------------------+ ``` -### `BETWEEN ... AND ...` operator +### 2.2 `BETWEEN ... 
AND ...` operator |operator |meaning| |-----------------------------|-----------| @@ -205,7 +205,7 @@ select temperature from root.sg1.d1 where temperature between 36.5 and 40; select temperature from root.sg1.d1 where temperature not between 36.5 and 40; ``` -### Fuzzy matching operator +### 2.3 Fuzzy matching operator For TEXT type data, support fuzzy matching of data using `Like` and `Regexp` operators. @@ -311,7 +311,7 @@ operation result +-----------------------------+-----------+------- ------------------+--------------------------+ ``` -### `IS NULL` operator +### 2.4 `IS NULL` operator |operator |meaning| |-----------------------------|-----------| @@ -330,7 +330,7 @@ select code from root.sg1.d1 where temperature is null; select code from root.sg1.d1 where temperature is not null; ``` -### `IN` operator +### 2.5 `IN` operator |operator |meaning| |-----------------------------|-----------| @@ -377,7 +377,7 @@ Output 2: +-----------------------------+-----------+------- -------------+ ``` -### Condition Functions +### 2.6 Condition Functions Condition functions are used to check whether timeseries data points satisfy some specific condition. @@ -462,9 +462,9 @@ IoTDB> select ts, in_range(ts,'lower'='2', 'upper'='3.1') from root.test; --> -## Logical Operators +## 3. Logical Operators -### Unary Logical Operators +### 3.1 Unary Logical Operators Supported operator `!` @@ -474,7 +474,7 @@ Output data type: `BOOLEAN` Hint: the priority of `!` is the same as `-`. Remember to use brackets to modify priority. -### Binary Logical Operators +### 3.2 Binary Logical Operators Supported operators AND:`and`,`&`, `&&`; OR:`or`,`|`,`||` @@ -526,7 +526,7 @@ IoTDB> select a, b, a > 10, a <= b, !(a <= b), a > 10 && a > b from root.test; --> -## Aggregate Functions +## 4. Aggregate Functions Aggregate functions are many-to-one functions. They perform aggregate calculations on a set of values, resulting in a single aggregated result. @@ -553,7 +553,7 @@ The aggregate functions supported by IoTDB are as follows: | COUNT_TIME | The number of timestamps in the query data set. When used with `align by device`, the result is the number of timestamps in the data set per device. | All data Types, the input parameter can only be `*` | / | INT64 | -### COUNT +### 4.1 COUNT #### example @@ -572,7 +572,7 @@ Total line number = 1 It costs 0.016s ``` -### COUNT_IF +### 4.2 COUNT_IF #### Grammar ```sql @@ -637,7 +637,7 @@ Result: +------------------------------------------------------------------------+------------------------------------------------------------------------+ ``` -### TIME_DURATION +### 4.3 TIME_DURATION #### Grammar ```sql time_duration(Path) @@ -691,7 +691,7 @@ Result: ``` > Note: Returns 0 if there is only one data point, or null if the data point is null. -### COUNT_TIME +### 4.4 COUNT_TIME #### Grammar ```sql count_time(*) @@ -821,9 +821,9 @@ Result --> -## String Processing +## 5. 
String Processing -### STRING_CONTAINS +### 5.1 STRING_CONTAINS #### Function introduction @@ -856,7 +856,7 @@ Total line number = 3 It costs 0.007s ``` -### STRING_MATCHES +### 5.2 STRING_MATCHES #### Function introduction @@ -889,7 +889,7 @@ Total line number = 3 It costs 0.007s ``` -### Length +### 5.3 Length #### Usage @@ -933,7 +933,7 @@ Output series: +-----------------------------+--------------+----------------------+ ``` -### Locate +### 5.4 Locate #### Usage @@ -999,7 +999,7 @@ Output series: +-----------------------------+--------------+------------------------------------------------------+ ``` -### StartsWith +### 5.5 StartsWith #### Usage @@ -1046,7 +1046,7 @@ Output series: +-----------------------------+--------------+----------------------------------------+ ``` -### EndsWith +### 5.6 EndsWith #### Usage @@ -1093,7 +1093,7 @@ Output series: +-----------------------------+--------------+--------------------------------------+ ``` -### Concat +### 5.7 Concat #### Usage @@ -1161,7 +1161,7 @@ Output series: +-----------------------------+--------------+--------------+-----------------------------------------------------------------------------------------------+ ``` -### substring +### 5.8 substring #### Usage @@ -1209,7 +1209,7 @@ Output series: +-----------------------------+--------------+--------------------------------------+ ``` -### replace +### 5.9 replace #### Usage @@ -1257,7 +1257,7 @@ Output series: +-----------------------------+--------------+-----------------------------------+ ``` -### Upper +### 5.10 Upper #### Usage @@ -1301,7 +1301,7 @@ Output series: +-----------------------------+--------------+---------------------+ ``` -### Lower +### 5.11 Lower #### Usage @@ -1345,7 +1345,7 @@ Output series: +-----------------------------+--------------+---------------------+ ``` -### Trim +### 5.12 Trim #### Usage @@ -1389,7 +1389,7 @@ Output series: +-----------------------------+--------------+--------------------+ ``` -### StrCmp +### 5.13 StrCmp #### Usage @@ -1435,7 +1435,7 @@ Output series: ``` -### StrReplace +### 5.14 StrReplace #### Usage @@ -1514,7 +1514,7 @@ Output series: +-----------------------------+-----------------------------------------------------+ ``` -### RegexMatch +### 5.15 RegexMatch #### Usage @@ -1573,7 +1573,7 @@ Output series: +-----------------------------+----------------------------------------------------------------------+ ``` -### RegexReplace +### 5.16 RegexReplace #### Usage @@ -1632,7 +1632,7 @@ Output series: +-----------------------------+-----------------------------------------------------------+ ``` -### RegexSplit +### 5.17 RegexSplit #### Usage @@ -1733,7 +1733,7 @@ Output series: --> -## Data Type Conversion Function +## 6. Data Type Conversion Function The IoTDB currently supports 6 data types, including INT32, INT64 ,FLOAT, DOUBLE, BOOLEAN, TEXT. When we query or evaluate data, we may need to convert data types, such as TEXT to INT32, or FLOAT to DOUBLE. IoTDB supports cast function to convert data types. @@ -1754,7 +1754,7 @@ The syntax of the cast function is consistent with that of PostgreSQL. The data | **BOOLEAN** | true: 1
false: 0 | true: 1L
false: 0 | true: 1.0f
false: 0 | true: 1.0
false: 0 | No need to cast | true: "true"
false: "false" | | **TEXT** | Integer.parseInt() | Long.parseLong() | Float.parseFloat() | Double.parseDouble() | text.toLowerCase =="true" : true
text.toLowerCase =="false" : false
Otherwise: throw Exception | No need to cast | -### Examples +### 6.1 Examples ``` // timeseries @@ -1833,7 +1833,7 @@ IoTDB> select cast(s6 as BOOLEAN) from root.sg.d1 where time >= 2 --> -## Constant Timeseries Generating Functions +## 7. Constant Timeseries Generating Functions The constant timeseries generating function is used to generate a timeseries in which the values of all data points are the same. @@ -1891,7 +1891,7 @@ It costs 0.005s --> -## Selector Functions +## 8. Selector Functions Currently, IoTDB supports the following selector functions: @@ -1943,7 +1943,7 @@ It costs 0.006s --> -## Continuous Interval Functions +## 9. Continuous Interval Functions The continuous interval functions are used to query all continuous intervals that meet specified conditions. They can be divided into two categories according to return value: @@ -1957,7 +1957,7 @@ They can be divided into two categories according to return value: | ZERO_COUNT | INT32/ INT64/ FLOAT/ DOUBLE/ BOOLEAN | `min`:Optional with default value `1L`
`max`:Optional with default value `Long.MAX_VALUE` | Long | Return intervals' start times and the number of data points in the interval in which the value is always 0 (false). The number of data points `n` satisfies `n >= min && n <= max` | | NON_ZERO_COUNT | INT32/ INT64/ FLOAT/ DOUBLE/ BOOLEAN | `min`:Optional with default value `1L`
`max`:Optional with default value `Long.MAX_VALUE` | Long | Return intervals' start times and the number of data points in the interval in which the value is always not 0(false). Data points number `n` satisfy `n >= min && n <= max` | -### Demonstrate +### 9.1 Demonstrate Example data: ``` IoTDB> select s1,s2,s3,s4,s5 from root.sg.d2; @@ -2017,7 +2017,7 @@ Result: --> -## Variation Trend Calculation Functions +## 10. Variation Trend Calculation Functions Currently, IoTDB supports the following variation trend calculation functions: @@ -2052,7 +2052,7 @@ Total line number = 5 It costs 0.014s ``` -### Example +### 10.1 Example #### RawData @@ -2132,9 +2132,9 @@ Result: --> -## Sample Functions +## 11. Sample Functions -### Equal Size Bucket Sample Function +### 11.1 Equal Size Bucket Sample Function This function samples the input sequence in equal size buckets, that is, according to the downsampling ratio and downsampling method given by the user, the input sequence is equally divided into several buckets according to a fixed number of points. Sampling by the given sampling method within each bucket. - `proportion`: sample ratio, the value range is `(0, 1]`. @@ -2360,7 +2360,7 @@ Total line number = 10 It costs 0.041s ``` -### M4 Function +### 11.2 M4 Function M4 is used to sample the `first, last, bottom, top` points for each sliding window: @@ -2533,9 +2533,9 @@ It is worth noting that both functions sort and deduplicate the aggregated point --> -## Time Series Processing +## 12. Time Series Processing -### CHANGE_POINTS +### 12.1 CHANGE_POINTS #### Usage @@ -2604,9 +2604,9 @@ Output series: --> -## Lambda Expression +## 13. Lambda Expression -### JEXL Function +### 13.1 JEXL Function Java Expression Language (JEXL) is an expression language engine. We use JEXL to extend UDFs, which are implemented on the command line with simple lambda expressions. See the link for [operators supported in jexl lambda expressions](https://commons.apache.org/proper/commons-jexl/apidocs/org/apache/commons/jexl3/package-summary.html#customization). @@ -2683,9 +2683,9 @@ It costs 0.118s --> -## Conditional Expressions +## 14. Conditional Expressions -### CASE +### 14.1 CASE The CASE expression is a kind of conditional expression that can be used to return different values based on specific conditions, similar to the if-else statements in other languages. diff --git a/src/UserGuide/latest/SQL-Manual/Keywords.md b/src/UserGuide/latest/SQL-Manual/Keywords.md index c098b3e99..dae343532 100644 --- a/src/UserGuide/latest/SQL-Manual/Keywords.md +++ b/src/UserGuide/latest/SQL-Manual/Keywords.md @@ -21,13 +21,13 @@ # Keywords -Reserved words(Can not be used as identifier): +## 1. Reserved words(Can not be used as identifier): - ROOT - TIME - TIMESTAMP -Common Keywords: +## 2. Common Keywords: - ADD - AFTER diff --git a/src/UserGuide/latest/SQL-Manual/Operator-and-Expression.md b/src/UserGuide/latest/SQL-Manual/Operator-and-Expression.md index 1b6fd667f..ee905b41f 100644 --- a/src/UserGuide/latest/SQL-Manual/Operator-and-Expression.md +++ b/src/UserGuide/latest/SQL-Manual/Operator-and-Expression.md @@ -27,9 +27,9 @@ A list of all available functions, both built-in and custom, can be displayed wi See the documentation [Select-Expression](../SQL-Manual/Function-and-Expression.md#selector-functions) for the behavior of operators and functions in SQL. -## OPERATORS +## 1. 
OPERATORS -### Arithmetic Operators +### 1.1 Arithmetic Operators | Operator | Meaning | | -------- | ------------------------- | @@ -43,7 +43,7 @@ See the documentation [Select-Expression](../SQL-Manual/Function-and-Expression. For details and examples, see the document [Arithmetic Operators and Functions](../SQL-Manual/Function-and-Expression.md#arithmetic-functions). -### Comparison Operators +### 1.2 Comparison Operators | Operator | Meaning | | ------------------------- | ------------------------------------ | @@ -66,7 +66,7 @@ For details and examples, see the document [Arithmetic Operators and Functions]( For details and examples, see the document [Comparison Operators and Functions](../SQL-Manual/Function-and-Expression.md#comparison-operators-and-functions). -### Logical Operators +### 1.3 Logical Operators | Operator | Meaning | | --------------------------- | --------------------------------- | @@ -76,7 +76,7 @@ For details and examples, see the document [Comparison Operators and Functions]( For details and examples, see the document [Logical Operators](../SQL-Manual/Function-and-Expression.md#logical-operators). -### Operator Precedence +### 1.4 Operator Precedence The precedence of operators is arranged as shown below from high to low, and operators on the same row have the same precedence. @@ -93,11 +93,11 @@ AND, &, && OR, |, || ``` -## BUILT-IN FUNCTIONS +## 2. BUILT-IN FUNCTIONS The built-in functions can be used in IoTDB without registration, and the functions in the data quality function library need to be registered by referring to the registration steps in the next chapter before they can be used. -### Aggregate Functions +### 2.1 Aggregate Functions | Function Name | Description | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | |---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------| @@ -125,7 +125,7 @@ The built-in functions can be used in IoTDB without registration, and the functi For details and examples, see the document [Aggregate Functions](../SQL-Manual/Function-and-Expression.md#aggregate-functions). 
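To make the operator precedence listing above concrete, here is a minimal sketch; the device `root.sg.d1` and series `s1`, `s2`, `s3` are hypothetical, and the expression is evaluated as `((s1 + (s2 * 2)) > 10) && (s3 < 5)` according to that precedence:

```sql
-- sketch only: * binds before +, the comparisons bind before &&
select s1 + s2 * 2 > 10 && s3 < 5 from root.sg.d1;
```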
-### Arithmetic Functions +### 2.2 Arithmetic Functions | Function Name | Allowed Input Series Data Types | Output Series Data Type | Required Attributes | Corresponding Implementation in the Java Standard Library | | ------------- | ------------------------------- | ----------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | @@ -152,7 +152,7 @@ For details and examples, see the document [Aggregate Functions](../SQL-Manual/F For details and examples, see the document [Arithmetic Operators and Functions](../SQL-Manual/Function-and-Expression.md#arithmetic-operators-and-functions). -### Comparison Functions +### 2.3 Comparison Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------- | ----------------------------------------- | ----------------------- | --------------------------------------------- | @@ -161,7 +161,7 @@ For details and examples, see the document [Arithmetic Operators and Functions]( For details and examples, see the document [Comparison Operators and Functions](../SQL-Manual/Function-and-Expression.md#comparison-operators-and-functions). -### String Processing Functions +### 2.4 String Processing Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | --------------- |---------------------------------| ------------------------------------------------------------ | ----------------------- | ------------------------------------------------------------ | @@ -181,7 +181,7 @@ For details and examples, see the document [Comparison Operators and Functions]( For details and examples, see the document [String Processing](../SQL-Manual/Function-and-Expression.md#string-processing). -### Data Type Conversion Function +### 2.5 Data Type Conversion Function | Function Name | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------------------------------------ | ----------------------- | ------------------------------------------------------------ | @@ -189,7 +189,7 @@ For details and examples, see the document [String Processing](../SQL-Manual/Fun For details and examples, see the document [Data Type Conversion Function](../SQL-Manual/Function-and-Expression.md#data-type-conversion-function). -### Constant Timeseries Generating Functions +### 2.6 Constant Timeseries Generating Functions | Function Name | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------------------------------------ | -------------------------------------------- | ------------------------------------------------------------ | @@ -199,7 +199,7 @@ For details and examples, see the document [Data Type Conversion Function](../SQ For details and examples, see the document [Constant Timeseries Generating Functions](../SQL-Manual/Function-and-Expression.md#constant-timeseries-generating-functions). 
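A short combined sketch of the function families tabulated above; the series names are hypothetical, and `round`, `sin`, and `cast` follow the forms shown elsewhere in this manual:

```sql
-- sketch: round a FLOAT series to two decimals, take its sine, and cast it to INT32
select s1, round(s1, 2), sin(s1), cast(s1 as INT32) from root.sg1.d1;
```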
-### Selector Functions +### 2.7 Selector Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ------------- |-------------------------------------------------------------------| ------------------------------------------------------------ | ----------------------------- | ------------------------------------------------------------ | @@ -208,7 +208,7 @@ For details and examples, see the document [Constant Timeseries Generating Funct For details and examples, see the document [Selector Functions](../SQL-Manual/Function-and-Expression.md#selector-functions). -### Continuous Interval Functions +### 2.8 Continuous Interval Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ----------------- | ------------------------------------ | ------------------------------------------------------------ | ----------------------- | ------------------------------------------------------------ | @@ -219,7 +219,7 @@ For details and examples, see the document [Selector Functions](../SQL-Manual/Fu For details and examples, see the document [Continuous Interval Functions](../SQL-Manual/Function-and-Expression.md#continuous-interval-functions). -### Variation Trend Calculation Functions +### 2.9 Variation Trend Calculation Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ----------------------- | ----------------------------------------------- | ------------------------------------------------------------ | ----------------------------- | ------------------------------------------------------------ | @@ -232,7 +232,7 @@ For details and examples, see the document [Continuous Interval Functions](../SQ For details and examples, see the document [Variation Trend Calculation Functions](../SQL-Manual/Function-and-Expression.md#variation-trend-calculation-functions). -### Sample Functions +### 2.10 Sample Functions | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | -------------------------------- | ------------------------------- | ------------------------------------------------------------ | ------------------------------ | ------------------------------------------------------------ | @@ -244,7 +244,7 @@ For details and examples, see the document [Variation Trend Calculation Function For details and examples, see the document [Sample Functions](../SQL-Manual/Function-and-Expression.md#sample-functions). -### Change Points Function +### 2.11 Change Points Function | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Description | | ------------- | ------------------------------- | ------------------- | ----------------------------- | ----------------------------------------------------------- | @@ -253,7 +253,7 @@ For details and examples, see the document [Sample Functions](../SQL-Manual/Func For details and examples, see the document [Time-Series](../SQL-Manual/Function-and-Expression.md#time-series-processing). -## LAMBDA EXPRESSION +## 3. 
LAMBDA EXPRESSION | Function Name | Allowed Input Series Data Types | Required Attributes | Output Series Data Type | Series Data Type Description | | ------------- | ----------------------------------------------- | ------------------------------------------------------------ | ----------------------------------------------- | ------------------------------------------------------------ | @@ -261,7 +261,7 @@ For details and examples, see the document [Time-Series](../SQL-Manual/Function- For details and examples, see the document [Lambda](../SQL-Manual/Function-and-Expression.md#lambda-expression). -## CONDITIONAL EXPRESSION +## 4. CONDITIONAL EXPRESSION | Expression Name | Description | | --------------- | -------------------- | @@ -269,7 +269,7 @@ For details and examples, see the document [Lambda](../SQL-Manual/Function-and-E For details and examples, see the document [Conditional Expressions](../SQL-Manual/Function-and-Expression.md#conditional-expressions). -## SELECT EXPRESSION +## 5. SELECT EXPRESSION The `SELECT` clause specifies the output of the query, consisting of several `selectExpr`. Each `selectExpr` defines one or more columns in the query result. @@ -285,7 +285,7 @@ The `SELECT` clause specifies the output of the query, consisting of several `se - Time series generation functions (including built-in functions and user-defined functions) - constant -### Use Alias +### 5.1 Use Alias Since the unique data model of IoTDB, lots of additional information like device will be carried before each sensor. Sometimes, we want to query just one specific device, then these prefix information show frequently will be redundant in this situation, influencing the analysis of result set. At this time, we can use `AS` function provided by IoTDB, assign an alias to time series selected in query. @@ -302,11 +302,11 @@ The result set is: | ... | ... | ... | -### Operator +### 5.2 Operator See this documentation for a list of operators supported in IoTDB. -### Function +### 5.3 Function #### Aggregate Functions @@ -339,7 +339,7 @@ See this documentation for a list of built-in functions supported in IoTDB. IoTDB supports function extension through User Defined Function (click for [User-Defined Function](../User-Manual/Database-Programming.md#udtfuser-defined-timeseries-generating-function)) capability. -### Nested Expressions +### 5.4 Nested Expressions IoTDB supports the calculation of arbitrary nested expressions. Since time series query and aggregation query can not be used in a query statement at the same time, we divide nested expressions into two types, which are nested expressions with time series query and nested expressions with aggregation query. diff --git a/src/UserGuide/latest/SQL-Manual/SQL-Manual.md b/src/UserGuide/latest/SQL-Manual/SQL-Manual.md index f86e66790..9e0b3582e 100644 --- a/src/UserGuide/latest/SQL-Manual/SQL-Manual.md +++ b/src/UserGuide/latest/SQL-Manual/SQL-Manual.md @@ -21,25 +21,25 @@ # SQL Manual -## DATABASE MANAGEMENT +## 1. DATABASE MANAGEMENT For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). 
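As a quick illustration of the `AS` alias and nested-expression support summarized above, a sketch with hypothetical paths:

```sql
-- alias a selected series, and nest an arithmetic expression inside function calls
select temperature as temp from root.ln.wf01.wt01;
select sin(s1 + s2) + cos(s1) from root.sg.d1;
```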
-### Create Database +### 1.1 Create Database ```sql IoTDB > create database root.ln IoTDB > create database root.sgcc ``` -### Show Databases +### 1.2 Show Databases ```sql IoTDB> SHOW DATABASES IoTDB> SHOW DATABASES root.** ``` -### Delete Database +### 1.3 Delete Database ```sql IoTDB > DELETE DATABASE root.ln @@ -48,7 +48,7 @@ IoTDB > DELETE DATABASE root.sgcc IoTDB > DELETE DATABASE root.** ``` -### Count Databases +### 1.4 Count Databases ```sql IoTDB> count databases @@ -57,7 +57,7 @@ IoTDB> count databases root.sgcc.* IoTDB> count databases root.sgcc ``` -### Setting up heterogeneous databases (Advanced operations) +### 1.5 Setting up heterogeneous databases (Advanced operations) #### Set heterogeneous parameters when creating a Database @@ -77,7 +77,7 @@ ALTER DATABASE root.db WITH SCHEMA_REGION_GROUP_NUM=1, DATA_REGION_GROUP_NUM=2; SHOW DATABASES DETAILS ``` -### TTL +### 1.6 TTL #### Set TTL @@ -103,7 +103,7 @@ IoTDB> SHOW TTL ON StorageGroupNames IoTDB> SHOW DEVICES ``` -## DEVICE TEMPLATE +## 2. DEVICE TEMPLATE For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). @@ -115,7 +115,7 @@ For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metad ![img](/img/templateEN.jpg) -### Create Device Template +### 2.1 Create Device Template **Example 1:** Create a template containing two non-aligned timeseires @@ -131,13 +131,13 @@ IoTDB> create device template t2 aligned (lat FLOAT encoding=Gorilla, lon FLOAT The` lat` and `lon` measurements are aligned. -### Set Device Template +### 2.2 Set Device Template ```sql IoTDB> set device template t1 to root.sg1.d1 ``` -### Activate Device Template +### 2.3 Activate Device Template ```sql IoTDB> set device template t1 to root.sg1.d1 @@ -146,7 +146,7 @@ IoTDB> create timeseries using device template on root.sg1.d1 IoTDB> create timeseries using device template on root.sg1.d2 ``` -### Show Device Template +### 2.4 Show Device Template ```sql IoTDB> show device templates @@ -155,7 +155,7 @@ IoTDB> show paths set device template t1 IoTDB> show paths using device template t1 ``` -### Deactivate Device Template +### 2.5 Deactivate Device Template ```sql IoTDB> delete timeseries of device template t1 from root.sg1.d1 @@ -164,29 +164,29 @@ IoTDB> delete timeseries of device template t1 from root.sg1.*, root.sg2.* IoTDB> deactivate device template t1 from root.sg1.*, root.sg2.* ``` -### Unset Device Template +### 2.6 Unset Device Template ```sql IoTDB> unset device template t1 from root.sg1.d1 ``` -### Drop Device Template +### 2.7 Drop Device Template ```sql IoTDB> drop device template t1 ``` -### Alter Device Template +### 2.8 Alter Device Template ```sql IoTDB> alter device template t1 add (speed FLOAT encoding=RLE, FLOAT TEXT encoding=PLAIN compression=SNAPPY) ``` -## TIMESERIES MANAGEMENT +## 3. TIMESERIES MANAGEMENT For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). 
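Tying the device template statements above together, a minimal end-to-end sketch; the template and path names are hypothetical and follow the same statement forms as the examples above:

```sql
-- create an aligned template, bind it to a device path, activate it, then inspect it
create device template env aligned (temperature FLOAT encoding=Gorilla, humidity FLOAT encoding=Gorilla)
set device template env to root.factory.line1.d1
create timeseries using device template on root.factory.line1.d1
show paths using device template env
```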
-### Create Timeseries +### 3.1 Create Timeseries ```sql IoTDB > create timeseries root.ln.wf01.wt01.status with datatype=BOOLEAN,encoding=PLAIN @@ -215,13 +215,13 @@ IoTDB > create timeseries root.ln.wf02.wt02.status WITH DATATYPE=BOOLEAN, ENCODI error: encoding TS_2DIFF does not support BOOLEAN ``` -### Create Aligned Timeseries +### 3.2 Create Aligned Timeseries ```sql IoTDB> CREATE ALIGNED TIMESERIES root.ln.wf01.GPS(latitude FLOAT encoding=PLAIN compressor=SNAPPY, longitude FLOAT encoding=PLAIN compressor=SNAPPY) ``` -### Delete Timeseries +### 3.3 Delete Timeseries ```sql IoTDB> delete timeseries root.ln.wf01.wt01.status @@ -230,7 +230,7 @@ IoTDB> delete timeseries root.ln.wf02.* IoTDB> drop timeseries root.ln.wf02.* ``` -### Show Timeseries +### 3.4 Show Timeseries ```sql IoTDB> show timeseries root.** @@ -240,7 +240,7 @@ IoTDB> show timeseries root.ln.** where timeseries contains 'wf01.wt' IoTDB> show timeseries root.ln.** where dataType=FLOAT ``` -### Count Timeseries +### 3.5 Count Timeseries ```sql IoTDB > COUNT TIMESERIES root.** @@ -257,7 +257,7 @@ IoTDB > COUNT TIMESERIES root.ln.** GROUP BY LEVEL=2 IoTDB > COUNT TIMESERIES root.ln.wf01.* GROUP BY LEVEL=2 ``` -### Tag and Attribute Management +### 3.6 Tag and Attribute Management ```sql create timeseries root.turbine.d1.s1(temprature) with datatype=FLOAT, encoding=RLE, compression=SNAPPY tags(tag1=v1, tag2=v2) attributes(attr1=v1, attr2=v2) @@ -362,23 +362,23 @@ IoTDB> show timeseries where TAGS(tag1)='v1' The above operations are supported for timeseries tag, attribute updates, etc. -## NODE MANAGEMENT +## 4. NODE MANAGEMENT For more details, see document [Operate-Metadata](../Basic-Concept/Operate-Metadata.md). -### Show Child Paths +### 4.1 Show Child Paths ```SQL SHOW CHILD PATHS pathPattern ``` -### Show Child Nodes +### 4.2 Show Child Nodes ```SQL SHOW CHILD NODES pathPattern ``` -### Count Nodes +### 4.3 Count Nodes ```SQL IoTDB > COUNT NODES root.** LEVEL=2 @@ -387,7 +387,7 @@ IoTDB > COUNT NODES root.ln.wf01.** LEVEL=3 IoTDB > COUNT NODES root.**.temperature LEVEL=3 ``` -### Show Devices +### 4.4 Show Devices ```SQL IoTDB> show devices @@ -397,7 +397,7 @@ IoTDB> show devices with database IoTDB> show devices root.ln.** with database ``` -### Count Devices +### 4.5 Count Devices ```SQL IoTDB> show devices @@ -405,9 +405,9 @@ IoTDB> count devices IoTDB> count devices root.ln.** ``` -## INSERT & LOAD DATA +## 5. INSERT & LOAD DATA -### Insert Data +### 5.1 Insert Data For more details, see document [Write-Delete-Data](../Basic-Concept/Write-Delete-Data.md). @@ -442,7 +442,7 @@ IoTDB > insert into root.sg1.d1(time, s1, s2) aligned values(2, 2, 2), (3, 3, 3) IoTDB > select * from root.sg1.d1 ``` -### Load External TsFile Tool +### 5.2 Load External TsFile Tool For more details, see document [Data Import](../Tools-System/Data-Import-Tool.md). @@ -469,11 +469,11 @@ For more details, see document [Data Import](../Tools-System/Data-Import-Tool.md ./load-rewrite.bat -f D:\IoTDB\data -h 192.168.0.101 -p 6667 -u root -pw root ``` -## DELETE DATA +## 6. DELETE DATA For more details, see document [Write-Delete-Data](../Basic-Concept/Write-Delete-Data.md). 
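The `SHOW CHILD PATHS` and `SHOW CHILD NODES` statements above take a concrete path pattern; a sketch using the `root.ln` paths from the other examples in this manual:

```sql
-- sketch: list the children directly under a database and under a device prefix
SHOW CHILD PATHS root.ln
SHOW CHILD NODES root.ln.wf01
```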
-### Delete Single Timeseries +### 6.1 Delete Single Timeseries ```sql IoTDB > delete from root.ln.wf02.wt02.status where time<=2017-11-01T16:26:00; @@ -491,7 +491,7 @@ expressions like : time > XXX, time <= XXX, or two atomic expressions connected IoTDB > delete from root.ln.wf02.wt02.status ``` -### Delete Multiple Timeseries +### 6.2 Delete Multiple Timeseries ```sql IoTDB > delete from root.ln.wf02.wt02 where time <= 2017-11-01T16:26:00; @@ -500,13 +500,13 @@ IoTDB> delete from root.ln.wf03.wt02.status where time < now() Msg: The statement is executed successfully. ``` -### Delete Time Partition (experimental) +### 6.3 Delete Time Partition (experimental) ```sql IoTDB > DELETE PARTITION root.ln 0,1,2 ``` -## QUERY DATA +## 7. QUERY DATA For more details, see document [Query-Data](../Basic-Concept/Query-Data.md). @@ -532,7 +532,7 @@ SELECT [LAST] selectExpr [, selectExpr] ... [ALIGN BY {TIME | DEVICE}] ``` -### Basic Examples +### 7.1 Basic Examples #### Select a Column of Data Based on a Time Interval @@ -564,7 +564,7 @@ IoTDB > select wf01.wt01.status,wf02.wt02.hardware from root.ln where (time > 20 IoTDB > select * from root.ln.** where time > 1 order by time desc limit 10; ``` -### `SELECT` CLAUSE +### 7.2 `SELECT` CLAUSE #### Use Alias @@ -623,7 +623,7 @@ IoTDB > select last * from root.ln.wf01.wt01 order by timeseries desc; IoTDB > select last * from root.ln.wf01.wt01 order by dataType desc; ``` -### `WHERE` CLAUSE +### 7.3 `WHERE` CLAUSE #### Time Filter @@ -662,7 +662,7 @@ IoTDB > select * from root.sg.d1 where value regexp '^[A-Za-z]+$' IoTDB > select * from root.sg.d1 where value regexp '^[a-z]+$' and time > 100 ``` -### `GROUP BY` CLAUSE +### 7.4 `GROUP BY` CLAUSE - Aggregate By Time without Specifying the Sliding Step Length @@ -754,7 +754,7 @@ IoTDB > SELECT avg(temperature) FROM root.factory1.** GROUP BY TAGS(city, worksh IoTDB > SELECT avg(temperature) FROM root.factory1.** GROUP BY ([1000, 10000), 5s), TAGS(city, workshop); ``` -### `HAVING` CLAUSE +### 7.5 `HAVING` CLAUSE Correct: @@ -772,7 +772,7 @@ IoTDB > select count(s1) from root.** group by ([1,3),1ms), level=1 having sum(d IoTDB > select count(d1.s1) from root.** group by ([1,3),1ms), level=1 having sum(s1) > 1 ``` -### `FILL` CLAUSE +### 7.6 `FILL` CLAUSE #### `PREVIOUS` Fill @@ -798,7 +798,7 @@ IoTDB > select temperature, status from root.sgcc.wf03.wt01 where time >= 2017-1 IoTDB > select temperature, status from root.sgcc.wf03.wt01 where time >= 2017-11-01T16:37:00.000 and time <= 2017-11-01T16:40:00.000 fill(true); ``` -### `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) +### 7.7 `LIMIT` and `SLIMIT` CLAUSES (PAGINATION) #### Row Control over Query Results @@ -823,7 +823,7 @@ IoTDB > select max_value(*) from root.ln.wf01.wt01 group by ([2017-11-01T00:00:0 IoTDB > select * from root.ln.wf01.wt01 limit 10 offset 100 slimit 2 soffset 0 ``` -### `ORDER BY` CLAUSE +### 7.8 `ORDER BY` CLAUSE #### Order by in ALIGN BY TIME mode @@ -855,7 +855,7 @@ IoTDB > select min_value(total),max_value(base) from root.** order by max_value( IoTDB > select score from root.** order by device asc, score desc, time asc align by device ``` -### `ALIGN BY` CLAUSE +### 7.9 `ALIGN BY` CLAUSE #### Align by Device @@ -863,7 +863,7 @@ IoTDB > select score from root.** order by device asc, score desc, time asc alig IoTDB > select * from root.ln.** where time <= 2017-11-01T00:01:00 align by device; ``` -### `INTO` CLAUSE (QUERY WRITE-BACK) +### 7.10 `INTO` CLAUSE (QUERY WRITE-BACK) ```sql IoTDB > select s1, s2 into root.sg_copy.d1(t1), 
root.sg_copy.d2(t1, t2), root.sg_copy.d1(t2) from root.sg.d1, root.sg.d2; @@ -900,7 +900,7 @@ IoTDB > select * into ::(backup_${4}) from root.sg.** align by device; IoTDB > select s1, s2 into root.sg_copy.d1(t1, t2), aligned root.sg_copy.d2(t1, t2) from root.sg.d1, root.sg.d2 align by device; ``` -## Maintennance +## 8. Maintennance Generate the corresponding query plan: ``` explain select s1,s2 from root.sg.d1 @@ -909,11 +909,11 @@ Execute the corresponding SQL, analyze the execution and output: ``` explain analyze select s1,s2 from root.sg.d1 order by s1 ``` -## OPERATOR +## 9. OPERATOR For more details, see document [Operator-and-Expression](./Operator-and-Expression.md). -### Arithmetic Operators +### 9.1 Arithmetic Operators For details and examples, see the document [Arithmetic Operators and Functions](./Operator-and-Expression.md#arithmetic-operators). @@ -921,7 +921,7 @@ For details and examples, see the document [Arithmetic Operators and Functions]( select s1, - s1, s2, + s2, s1 + s2, s1 - s2, s1 * s2, s1 / s2, s1 % s2 from root.sg.d1 ``` -### Comparison Operators +### 9.2 Comparison Operators For details and examples, see the document [Comparison Operators and Functions](./Operator-and-Expression.md#comparison-operators). @@ -952,7 +952,7 @@ select code from root.sg1.d1 where code not in ('200', '300', '400', '500'); select a, a in (1, 2) from root.test; ``` -### Logical Operators +### 9.3 Logical Operators For details and examples, see the document [Logical Operators](./Operator-and-Expression.md#logical-operators). @@ -960,11 +960,11 @@ For details and examples, see the document [Logical Operators](./Operator-and-Ex select a, b, a > 10, a <= b, !(a <= b), a > 10 && a > b from root.test; ``` -## BUILT-IN FUNCTIONS +## 10. BUILT-IN FUNCTIONS For more details, see document [Operator-and-Expression](./Operator-and-Expression.md#built-in-functions). -### Aggregate Functions +### 10.1 Aggregate Functions For details and examples, see the document [Aggregate Functions](./Operator-and-Expression.md#aggregate-functions). @@ -977,7 +977,7 @@ select count_if(s1=0 & s2=0, 3, 'ignoreNull'='false'), count_if(s1=1 & s2=0, 3, select time_duration(s1) from root.db.d1; ``` -### Arithmetic Functions +### 10.2 Arithmetic Functions For details and examples, see the document [Arithmetic Operators and Functions](./Operator-and-Expression.md#arithmetic-functions). @@ -986,7 +986,7 @@ select s1, sin(s1), cos(s1), tan(s1) from root.sg1.d1 limit 5 offset 1000; select s4,round(s4),round(s4,2),round(s4,-1) from root.sg1.d1; ``` -### Comparison Functions +### 10.3 Comparison Functions For details and examples, see the document [Comparison Operators and Functions](./Operator-and-Expression.md#comparison-functions). @@ -995,7 +995,7 @@ select ts, on_off(ts, 'threshold'='2') from root.test; select ts, in_range(ts, 'lower'='2', 'upper'='3.1') from root.test; ``` -### String Processing Functions +### 10.4 String Processing Functions For details and examples, see the document [String Processing](./Operator-and-Expression.md#string-processing-functions). @@ -1023,7 +1023,7 @@ select regexsplit(s1, "regex"=",", "index"="-1") from root.test.d1 select regexsplit(s1, "regex"=",", "index"="3") from root.test.d1 ``` -### Data Type Conversion Function +### 10.5 Data Type Conversion Function For details and examples, see the document [Data Type Conversion Function](./Operator-and-Expression.md#data-type-conversion-function). 
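Combining two of the string-processing functions above in a single query, a sketch that reuses the hypothetical TEXT series `root.test.d1.s1` from the examples; the parameter keys follow the calls documented in the String Processing section:

```sql
-- sketch: flag rows containing 'warn' and take the third comma-separated field in one SELECT
select string_contains(s1, 's'='warn'), regexsplit(s1, "regex"=",", "index"="3") from root.test.d1;
```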
@@ -1031,7 +1031,7 @@ For details and examples, see the document [Data Type Conversion Function](./Ope SELECT cast(s1 as INT32) from root.sg ``` -### Constant Timeseries Generating Functions +### 10.6 Constant Timeseries Generating Functions For details and examples, see the document [Constant Timeseries Generating Functions](./Operator-and-Expression.md#constant-timeseries-generating-functions). @@ -1039,7 +1039,7 @@ For details and examples, see the document [Constant Timeseries Generating Funct select s1, s2, const(s1, 'value'='1024', 'type'='INT64'), pi(s2), e(s1, s2) from root.sg1.d1; ``` -### Selector Functions +### 10.7 Selector Functions For details and examples, see the document [Selector Functions](./Operator-and-Expression.md#selector-functions). @@ -1047,7 +1047,7 @@ For details and examples, see the document [Selector Functions](./Operator-and-E select s1, top_k(s1, 'k'='2'), bottom_k(s1, 'k'='2') from root.sg1.d2 where time > 2020-12-10T20:36:15.530+08:00; ``` -### Continuous Interval Functions +### 10.8 Continuous Interval Functions For details and examples, see the document [Continuous Interval Functions](./Operator-and-Expression.md#continuous-interval-functions). @@ -1055,7 +1055,7 @@ For details and examples, see the document [Continuous Interval Functions](./Ope select s1, zero_count(s1), non_zero_count(s2), zero_duration(s3), non_zero_duration(s4) from root.sg.d2; ``` -### Variation Trend Calculation Functions +### 10.9 Variation Trend Calculation Functions For details and examples, see the document [Variation Trend Calculation Functions](./Operator-and-Expression.md#variation-trend-calculation-functions). @@ -1066,7 +1066,7 @@ SELECT DIFF(s1), DIFF(s2) from root.test; SELECT DIFF(s1, 'ignoreNull'='false'), DIFF(s2, 'ignoreNull'='false') from root.test; ``` -### Sample Functions +### 10.10 Sample Functions For details and examples, see the document [Sample Functions](./Operator-and-Expression.md#sample-functions). @@ -1080,7 +1080,7 @@ select M4(s1,'timeInterval'='25','displayWindowBegin'='0','displayWindowEnd'='10 select M4(s1,'windowSize'='10') from root.vehicle.d1 ``` -### Change Points Function +### 10.11 Change Points Function For details and examples, see the document [Time-Series](./Operator-and-Expression.md#change-points-function). @@ -1088,11 +1088,11 @@ For details and examples, see the document [Time-Series](./Operator-and-Expressi select change_points(s1), change_points(s2), change_points(s3), change_points(s4), change_points(s5), change_points(s6) from root.testChangePoints.d1 ``` -## DATA QUALITY FUNCTION LIBRARY +## 11. DATA QUALITY FUNCTION LIBRARY For more details, see document [Operator-and-Expression](../SQL-Manual/UDF-Libraries.md). -### Data Quality +### 11.1 Data Quality For details and examples, see the document [Data-Quality](../SQL-Manual/UDF-Libraries.md#data-quality). @@ -1117,7 +1117,7 @@ select Validity(s1,"window"="15") from root.test.d1 where time <= 2020-01-01 00: select Accuracy(t1,t2,t3,m1,m2,m3) from root.test ``` -### Data Profiling +### 11.2 Data Profiling For details and examples, see the document [Data-Profiling](../SQL-Manual/UDF-Libraries.md#data-profiling). @@ -1197,7 +1197,7 @@ select stddev(s1) from root.test.d1 select zscore(s1) from root.test ``` -### Anomaly Detection +### 11.3 Anomaly Detection For details and examples, see the document [Anomaly-Detection](../SQL-Manual/UDF-Libraries.md#anomaly-detection). 
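Before the `MasterDetect` examples that follow, a short sketch of two of the simpler anomaly-detection UDFs from the same library; the series path is hypothetical and the `k` value is illustrative only:

```sql
-- sketch: IQR-based and k-sigma-based outlier detection over one series
select iqr(s1) from root.test.d1;
select ksigma(s1, "k"="3.0") from root.test.d1;
```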
@@ -1232,7 +1232,7 @@ select MasterDetect(lo,la,m_lo,m_la,model,'output_type'='repair','p'='3','k'='3' select MasterDetect(lo,la,m_lo,m_la,model,'output_type'='anomaly','p'='3','k'='3','eta'='1.0') from root.test ``` -### Frequency Domain +### 11.4 Frequency Domain For details and examples, see the document [Frequency-Domain](../SQL-Manual/UDF-Libraries.md#frequency-domain-analysis). @@ -1264,7 +1264,7 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1 select envelope(s1) from root.test.d1 ``` -### Data Matching +### 11.5 Data Matching For details and examples, see the document [Data-Matching](../SQL-Manual/UDF-Libraries.md#data-matching). @@ -1285,7 +1285,7 @@ select ptnsym(s4, 'window'='5', 'threshold'='0') from root.test.d1 select xcorr(s1, s2) from root.test.d1 where time <= 2020-01-01 00:00:05 ``` -### Data Repairing +### 11.6 Data Repairing For details and examples, see the document [Data-Repairing](../SQL-Manual/UDF-Libraries.md#data-repairing). @@ -1310,7 +1310,7 @@ select seasonalrepair(s1,'period'=3,'k'=2) from root.test.d2 select seasonalrepair(s1,'method'='improved','period'=3) from root.test.d2 ``` -### Series Discovery +### 11.7 Series Discovery For details and examples, see the document [Series-Discovery](../SQL-Manual/UDF-Libraries.md#series-discovery). @@ -1323,7 +1323,7 @@ select consecutivesequences(s1,s2) from root.test.d1 select consecutivewindows(s1,s2,'length'='10m') from root.test.d1 ``` -### Machine Learning +### 11.8 Machine Learning For details and examples, see the document [Machine-Learning](../SQL-Manual/UDF-Libraries.md#machine-learning). @@ -1338,7 +1338,7 @@ select representation(s0,"tb"="3","vb"="2") from root.test.d0 select rm(s0, s1,"tb"="3","vb"="2") from root.test.d0 ``` -## LAMBDA EXPRESSION +## 12. LAMBDA EXPRESSION For details and examples, see the document [Lambda](../SQL-Manual/UDF-Libraries.md#lambda-expression). @@ -1346,7 +1346,7 @@ For details and examples, see the document [Lambda](../SQL-Manual/UDF-Libraries. select jexl(temperature, 'expr'='x -> {x + x}') as jexl1, jexl(temperature, 'expr'='x -> {x * 3}') as jexl2, jexl(temperature, 'expr'='x -> {x * x}') as jexl3, jexl(temperature, 'expr'='x -> {multiply(x, 100)}') as jexl4, jexl(temperature, st, 'expr'='(x, y) -> {x + y}') as jexl5, jexl(temperature, st, str, 'expr'='(x, y, z) -> {x + y + z}') as jexl6 from root.ln.wf01.wt01;``` ``` -## CONDITIONAL EXPRESSION +## 13. CONDITIONAL EXPRESSION For details and examples, see the document [Conditional Expressions](../SQL-Manual/UDF-Libraries.md#conditional-expressions). @@ -1384,11 +1384,11 @@ end as `result` from root.test4 ``` -## TRIGGER +## 14. TRIGGER For more details, see document [Database-Programming](../User-Manual/Database-Programming.md). -### Create Trigger +### 14.1 Create Trigger ```sql // Create Trigger @@ -1421,7 +1421,7 @@ triggerAttribute ; ``` -### Drop Trigger +### 14.2 Drop Trigger ```sql // Drop Trigger @@ -1430,13 +1430,13 @@ dropTrigger ; ``` -### Show Trigger +### 14.3 Show Trigger ```sql SHOW TRIGGERS ``` -## CONTINUOUS QUERY (CQ) +## 15. CONTINUOUS QUERY (CQ) For more details, see document [Operator-and-Expression](./Operator-and-Expression.md). 
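One way the trigger grammar above might be instantiated, purely as a sketch: the trigger name, Java class name, JAR URI, and attribute are all hypothetical and should be replaced with real ones.

```sql
-- sketch only: register a stateless trigger fired before inserts under root.sg
CREATE STATELESS TRIGGER temp_alert
BEFORE INSERT
ON root.sg.**
AS 'org.example.TemperatureAlertTrigger'
USING URI 'http://example.com/trigger-example.jar'
WITH ('threshold' = '100')
```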
@@ -1461,7 +1461,7 @@ BEGIN END ``` -### Configuring execution intervals +### 15.1 Configuring execution intervals ```sql CREATE CONTINUOUS QUERY cq1 @@ -1474,7 +1474,7 @@ SELECT max_value(temperature) END ``` -### Configuring time range for resampling +### 15.2 Configuring time range for resampling ```sql CREATE CONTINUOUS QUERY cq2 @@ -1487,7 +1487,7 @@ BEGIN END ``` -### Configuring execution intervals and CQ time ranges +### 15.3 Configuring execution intervals and CQ time ranges ```sql CREATE CONTINUOUS QUERY cq3 @@ -1501,7 +1501,7 @@ BEGIN END ``` -### Configuring end_time_offset for CQ time range +### 15.4 Configuring end_time_offset for CQ time range ```sql CREATE CONTINUOUS QUERY cq4 @@ -1515,7 +1515,7 @@ BEGIN END ``` -### CQ without group by clause +### 15.5 CQ without group by clause ```sql CREATE CONTINUOUS QUERY cq5 @@ -1528,7 +1528,7 @@ BEGIN END ``` -### CQ Management +### 15.6 CQ Management #### Listing continuous queries @@ -1546,23 +1546,23 @@ DROP (CONTINUOUS QUERY | CQ) CQs can't be altered once they're created. To change a CQ, you must `DROP` and re`CREATE` it with the updated settings. -## USER-DEFINED FUNCTION (UDF) +## 16. USER-DEFINED FUNCTION (UDF) For more details, see document [Operator-and-Expression](../SQL-Manual/UDF-Libraries.md). -### UDF Registration +### 16.1 UDF Registration ```sql CREATE FUNCTION AS (USING URI URI-STRING)? ``` -### UDF Deregistration +### 16.2 UDF Deregistration ```sql DROP FUNCTION ``` -### UDF Queries +### 16.3 UDF Queries ```sql SELECT example(*) from root.sg.d1 @@ -1578,17 +1578,17 @@ SELECT s1 * example(* / s1 + s2) FROM root.sg.d1; SELECT s1, s2, s1 + example(s1, s2), s1 - example(s1 + example(s1, s2) / s2) FROM root.sg.d1; ``` -### Show All Registered UDFs +### 16.4 Show All Registered UDFs ```sql SHOW FUNCTIONS ``` -## ADMINISTRATION MANAGEMENT +## 17. ADMINISTRATION MANAGEMENT For more details, see document [Operator-and-Expression](./Operator-and-Expression.md). -### SQL Statements +### 17.1 SQL Statements - Create user (Requires MANAGE_USER permission) @@ -1679,7 +1679,7 @@ ALTER USER SET PASSWORD ; eg: ALTER USER tempuser SET PASSWORD 'newpwd'; ``` -### Authorization and Deauthorization +### 17.2 Authorization and Deauthorization ```sql diff --git a/src/UserGuide/latest/SQL-Manual/Syntax-Rule.md b/src/UserGuide/latest/SQL-Manual/Syntax-Rule.md index 38dffc6ac..97ab27ca0 100644 --- a/src/UserGuide/latest/SQL-Manual/Syntax-Rule.md +++ b/src/UserGuide/latest/SQL-Manual/Syntax-Rule.md @@ -21,11 +21,11 @@ # Identifiers -## Literal Values +## 1. Literal Values This section describes how to write literal values in IoTDB. These include strings, numbers, timestamp values, boolean values, and NULL. -### String Literals +### 1.1 String Literals in IoTDB, **A string is a sequence of bytes or characters, enclosed within either single quote (`'`) or double quote (`"`) characters.** Examples: @@ -130,7 +130,7 @@ The following examples demonstrate how quoting and escaping work: """string" // "string ``` -### Numeric Literals +### 1.2 Numeric Literals Number literals include integer (exact-value) literals and floating-point (approximate-value) literals. @@ -144,27 +144,27 @@ The `FLOAT` and `DOUBLE` data types are floating-point types and calculations ar An integer may be used in floating-point context; it is interpreted as the equivalent floating-point number. -### Timestamp Literals +### 1.3 Timestamp Literals The timestamp is the time point at which data is produced. It includes absolute timestamps and relative timestamps in IoTDB. 
For information about timestamp support in IoTDB, see [Data Type Doc](../Background-knowledge/Data-Type.md). Specially, `NOW()` represents a constant timestamp that indicates the system time at which the statement began to execute. -### Boolean Literals +### 1.4 Boolean Literals The constants `TRUE` and `FALSE` evaluate to 1 and 0, respectively. The constant names can be written in any lettercase. -### NULL Values +### 1.5 NULL Values The `NULL` value means “no data.” `NULL` can be written in any lettercase. -## Identifier +## 2. Identifier -### Usage scenarios +### 2.1 Usage scenarios Certain objects within IoTDB, including `TRIGGER`, `FUNCTION`(UDF), `CONTINUOUS QUERY`, `SCHEMA TEMPLATE`, `USER`, `ROLE`,`Pipe`,`PipeSink`,`alias` and other object names are known as identifiers. -### Constraints +### 2.2 Constraints Below are basic constraints of identifiers, specific identifiers may have other constraints, for example, `user` should consists of more than 4 characters. @@ -172,7 +172,7 @@ Below are basic constraints of identifiers, specific identifiers may have other - [0-9 a-z A-Z _ ] (letters, digits and underscore) - ['\u2E80'..'\u9FFF'] (UNICODE Chinese characters) -### Reverse quotation marks +### 2.3 Reverse quotation marks **If the following situations occur, the identifier needs to be quoted using reverse quotes:** diff --git a/src/UserGuide/latest/SQL-Manual/UDF-Libraries_apache.md b/src/UserGuide/latest/SQL-Manual/UDF-Libraries_apache.md index c2a0dcd54..675c870b7 100644 --- a/src/UserGuide/latest/SQL-Manual/UDF-Libraries_apache.md +++ b/src/UserGuide/latest/SQL-Manual/UDF-Libraries_apache.md @@ -17,9 +17,7 @@ ​ specific language governing permissions and limitations ​ under the License. ---> - -# UDF Libraries +--> # UDF Libraries @@ -27,7 +25,7 @@ Based on the ability of user-defined functions, IoTDB provides a series of funct > Note: The functions in the current UDF library only support millisecond level timestamp accuracy. -## Installation steps +## 1. Installation steps 1. Please obtain the compressed file of the UDF library JAR package that is compatible with the IoTDB version. @@ -46,9 +44,9 @@ Based on the ability of user-defined functions, IoTDB provides a series of funct - All SQL statements - Open the SQl file in the compressed package, copy all SQL statements, and in the SQL operation interface of IoTDB's SQL command line terminal (CLI), execute all SQl statements to batch register UDFs -## Data Quality +## 2. Data Quality -### Completeness +### 2.1 Completeness #### Registration statement @@ -179,7 +177,7 @@ Output series: +-----------------------------+--------------------------------------------+ ``` -### Consistency +### 2.2 Consistency #### Registration statement @@ -309,7 +307,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### Timeliness +### 2.3 Timeliness #### Registration statement @@ -439,7 +437,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Validity +### 2.4 Validity #### Registration statement @@ -592,9 +590,9 @@ Output series: --> -## Data Profiling +## 3. 
Data Profiling -### ACF +### 3.1 ACF #### Registration statement @@ -659,7 +657,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Distinct +### 3.2 Distinct #### Registration statement @@ -718,7 +716,7 @@ Output series: +-----------------------------+-------------------------+ ``` -### Histogram +### 3.3 Histogram #### Registration statement @@ -803,7 +801,7 @@ Output series: +-----------------------------+---------------------------------------------------------------+ ``` -### Integral +### 3.4 Integral #### Registration statement @@ -900,7 +898,7 @@ Output series: Calculation expression: $$\frac{1}{2\times 60}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] = 0.958$$ -### IntegralAvg +### 3.5 IntegralAvg #### Registration statement @@ -967,7 +965,7 @@ Output series: Calculation expression: $$\frac{1}{2}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] / 10 = 5.75$$ -### Mad +### 3.6 Mad #### Registration statement @@ -1066,7 +1064,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### Median +### 3.7 Median #### Registration statement @@ -1136,7 +1134,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### MinMax +### 3.8 MinMax #### Registration statement @@ -1227,7 +1225,7 @@ Output series: ``` -### MvAvg +### 3.9 MvAvg #### Registration statement @@ -1313,7 +1311,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### PACF +### 3.10 PACF #### Registration statement @@ -1371,7 +1369,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### Percentile +### 3.11 Percentile #### Registration statement @@ -1444,7 +1442,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### Quantile +### 3.12 Quantile #### Registration statement @@ -1506,7 +1504,7 @@ Output series: +-----------------------------+------------------------------------------------+ ``` -### Period +### 3.13 Period #### Registration statement @@ -1561,7 +1559,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### QLB +### 3.14 QLB #### Registration statement @@ -1651,7 +1649,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Resample +### 3.15 Resample #### Registration statement @@ -1786,7 +1784,7 @@ Output series: +-----------------------------+-----------------------------------------------------------------------+ ``` -### Sample +### 3.16 Sample #### Registration statement @@ -1890,7 +1888,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### Segment +### 3.17 Segment #### Registration statement @@ -1988,7 +1986,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Skew +### 3.18 Skew #### Registration statement @@ -2055,7 +2053,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### Spline +### 3.19 Spline #### Registration statement @@ -2266,7 +2264,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Spread +### 3.20 Spread #### Registration statement @@ -2330,7 +2328,7 @@ Output series: -### ZScore +### 3.21 ZScore #### Registration statement @@ -2440,9 +2438,9 @@ Output series: --> -## Anomaly Detection +## 4. 
Anomaly Detection -### IQR +### 4.1 IQR #### Registration statement @@ -2515,7 +2513,7 @@ Output series: +-----------------------------+-----------------+ ``` -### KSigma +### 4.2 KSigma #### Registration statement @@ -2586,7 +2584,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### LOF +### 4.3 LOF #### Registration statement @@ -2718,7 +2716,7 @@ Output series: +-----------------------------+--------------------+ ``` -### MissDetect +### 4.4 MissDetect #### Registration statement @@ -2812,7 +2810,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Range +### 4.5 Range #### Registration statement @@ -2883,7 +2881,7 @@ Output series: +-----------------------------+------------------------------------------------------------------+ ``` -### TwoSidedFilter +### 4.6 TwoSidedFilter #### Registration statement @@ -2982,7 +2980,7 @@ Output series: +-----------------------------+------------+ ``` -### Outlier +### 4.7 Outlier #### Registration statement @@ -3057,7 +3055,7 @@ Output series: ``` -### MasterTrain +### 4.8 MasterTrain #### Usage @@ -3140,7 +3138,7 @@ Output series: +-----------------------------+---------------------------------------------------------------------------------------------+ ``` -### MasterDetect +### 4.9 MasterDetect #### Usage @@ -3309,9 +3307,9 @@ Output series: --> -## Frequency Domain Analysis +## 5. Frequency Domain Analysis -### Conv +### 5.1 Conv #### Registration statement @@ -3364,7 +3362,7 @@ Output series: +-----------------------------+--------------------------------------+ ``` -### Deconv +### 5.2 Deconv #### Registration statement @@ -3450,7 +3448,7 @@ Output series: +-----------------------------+--------------------------------------------------------------+ ``` -### DWT +### 5.3 DWT #### Registration statement @@ -3537,7 +3535,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### FFT +### 5.4 FFT #### Registration statement @@ -3667,7 +3665,7 @@ Note: Based on the conjugation of the Fourier transform result, only the first h According to the given parameter, data points are reserved from low frequency to high frequency until the reserved energy ratio exceeds it. The last data point is reserved to indicate the length of the series. -### HighPass +### 5.5 HighPass #### Registration statement @@ -3760,7 +3758,7 @@ Output series: Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, the output is $y=sin(2\pi t/4)$ after high-pass filtering. -### IFFT +### 5.6 IFFT #### Registration statement @@ -3843,7 +3841,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### LowPass +### 5.7 LowPass #### Registration statement @@ -3957,9 +3955,9 @@ Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, t --> -## Data Matching +## 6. 
Data Matching -### Cov +### 6.1 Cov #### Registration statement @@ -4026,7 +4024,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### DTW +### 6.2 DTW #### Registration statement @@ -4097,7 +4095,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### Pearson +### 6.3 Pearson #### Registration statement @@ -4164,7 +4162,7 @@ Output series: +-----------------------------+-----------------------------------------+ ``` -### PtnSym +### 6.4 PtnSym #### Registration statement @@ -4230,7 +4228,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### XCorr +### 6.5 XCorr #### Registration statement @@ -4323,9 +4321,9 @@ Output series: --> -## Data Repairing +## 7. Data Repairing -### TimestampRepair +### 7.1 TimestampRepair #### Registration statement @@ -4434,7 +4432,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### ValueFill +### 7.2 ValueFill #### Registration statement @@ -4552,7 +4550,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### ValueRepair +### 7.3 ValueRepair #### Registration statement @@ -4678,7 +4676,7 @@ Output series: +-----------------------------+-------------------------------------------------+ ``` -### MasterRepair +### 7.4 MasterRepair #### Usage @@ -4739,7 +4737,7 @@ Output series: +-----------------------------+-------------------------------------------------------------------------------------------+ ``` -### SeasonalRepair +### 7.5 SeasonalRepair #### Usage This function is used to repair the value of the seasonal time series via decomposition. Currently, two methods are supported: **Classical** - detect irregular fluctuations through residual component decomposed by classical decomposition, and repair them through moving average; **Improved** - detect irregular fluctuations through residual component decomposed by improved decomposition, and repair them through moving median. @@ -4864,9 +4862,9 @@ Output series: --> -## Series Discovery +## 8. Series Discovery -### ConsecutiveSequences +### 8.1 ConsecutiveSequences #### Registration statement @@ -4960,7 +4958,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### ConsecutiveWindows +### 8.2 ConsecutiveWindows #### Registration statement @@ -5050,9 +5048,9 @@ Output series: --> -## Machine Learning +## 9. 
Machine Learning -### AR +### 9.1 AR #### Registration statement @@ -5119,7 +5117,7 @@ Output Series: +-----------------------------+---------------------------+ ``` -### Representation +### 9.2 Representation #### Usage @@ -5183,7 +5181,7 @@ Output Series: +-----------------------------+-------------------------------------------------+ ``` -### RM +### 9.3 RM #### Usage diff --git a/src/UserGuide/latest/SQL-Manual/UDF-Libraries_timecho.md b/src/UserGuide/latest/SQL-Manual/UDF-Libraries_timecho.md index d4ee30c76..21151b7f0 100644 --- a/src/UserGuide/latest/SQL-Manual/UDF-Libraries_timecho.md +++ b/src/UserGuide/latest/SQL-Manual/UDF-Libraries_timecho.md @@ -21,13 +21,11 @@ # UDF Libraries -# UDF Libraries - Based on the ability of user-defined functions, IoTDB provides a series of functions for temporal data processing, including data quality, data profiling, anomaly detection, frequency domain analysis, data matching, data repairing, sequence discovery, machine learning, etc., which can meet the needs of industrial fields for temporal data processing. > Note: The functions in the current UDF library only support millisecond level timestamp accuracy. -## Installation steps +## 1. Installation steps 1. Please obtain the compressed file of the UDF library JAR package that is compatible with the IoTDB version. @@ -46,9 +44,9 @@ Based on the ability of user-defined functions, IoTDB provides a series of funct - All SQL statements - Open the SQl file in the compressed package, copy all SQL statements, and execute all SQl statements in the SQL command line terminal (CLI) of IoTDB or the SQL operation interface of the visualization console (Workbench) to batch register UDF -## Data Quality +## 2. Data Quality -### Completeness +### 2.1 Completeness #### Registration statement @@ -179,7 +177,7 @@ Output series: +-----------------------------+--------------------------------------------+ ``` -### Consistency +### 2.2 Consistency #### Registration statement @@ -309,7 +307,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### Timeliness +### 2.3 Timeliness #### Registration statement @@ -439,7 +437,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Validity +### 2.4 Validity #### Registration statement @@ -592,9 +590,9 @@ Output series: --> -## Data Profiling +## 3. 
Data Profiling -### ACF +### 3.1 ACF #### Registration statement @@ -659,7 +657,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Distinct +### 3.2 Distinct #### Registration statement @@ -718,7 +716,7 @@ Output series: +-----------------------------+-------------------------+ ``` -### Histogram +### 3.3 Histogram #### Registration statement @@ -803,7 +801,7 @@ Output series: +-----------------------------+---------------------------------------------------------------+ ``` -### Integral +### 3.4 Integral #### Registration statement @@ -900,7 +898,7 @@ Output series: Calculation expression: $$\frac{1}{2\times 60}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] = 0.958$$ -### IntegralAvg +### 3.5 IntegralAvg #### Registration statement @@ -967,7 +965,7 @@ Output series: Calculation expression: $$\frac{1}{2}[(1+2) \times 1 + (2+5) \times 1 + (5+6) \times 1 + (6+7) \times 1 + (7+8) \times 3 + (8+10) \times 2] / 10 = 5.75$$ -### Mad +### 3.6 Mad #### Registration statement @@ -1066,7 +1064,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### Median +### 3.7 Median #### Registration statement @@ -1136,7 +1134,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### MinMax +### 3.8 MinMax #### Registration statement @@ -1227,7 +1225,7 @@ Output series: ``` -### MvAvg +### 3.9 MvAvg #### Registration statement @@ -1313,7 +1311,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### PACF +### 3.10 PACF #### Registration statement @@ -1371,7 +1369,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### Percentile +### 3.11 Percentile #### Registration statement @@ -1444,7 +1442,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### Quantile +### 3.12 Quantile #### Registration statement @@ -1506,7 +1504,7 @@ Output series: +-----------------------------+------------------------------------------------+ ``` -### Period +### 3.13 Period #### Registration statement @@ -1561,7 +1559,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### QLB +### 3.14 QLB #### Registration statement @@ -1651,7 +1649,7 @@ Output series: +-----------------------------+--------------------+ ``` -### Resample +### 3.15 Resample #### Registration statement @@ -1786,7 +1784,7 @@ Output series: +-----------------------------+-----------------------------------------------------------------------+ ``` -### Sample +### 3.16 Sample #### Registration statement @@ -1890,7 +1888,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### Segment +### 3.17 Segment #### Registration statement @@ -1988,7 +1986,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Skew +### 3.18 Skew #### Registration statement @@ -2055,7 +2053,7 @@ Output series: +-----------------------------+-----------------------+ ``` -### Spline +### 3.19 Spline #### Registration statement @@ -2266,7 +2264,7 @@ Output series: +-----------------------------+------------------------------------+ ``` -### Spread +### 3.20 Spread #### Registration statement @@ -2330,7 +2328,7 @@ Output series: -### ZScore +### 3.21 ZScore #### Registration statement @@ -2440,9 +2438,9 @@ Output series: --> -## Anomaly Detection +## 4. 
Anomaly Detection -### IQR +### 4.1 IQR #### Registration statement @@ -2515,7 +2513,7 @@ Output series: +-----------------------------+-----------------+ ``` -### KSigma +### 4.2 KSigma #### Registration statement @@ -2586,7 +2584,7 @@ Output series: +-----------------------------+---------------------------------+ ``` -### LOF +### 4.3 LOF #### Registration statement @@ -2718,7 +2716,7 @@ Output series: +-----------------------------+--------------------+ ``` -### MissDetect +### 4.4 MissDetect #### Registration statement @@ -2812,7 +2810,7 @@ Output series: +-----------------------------+------------------------------------------+ ``` -### Range +### 4.5 Range #### Registration statement @@ -2883,7 +2881,7 @@ Output series: +-----------------------------+------------------------------------------------------------------+ ``` -### TwoSidedFilter +### 4.6 TwoSidedFilter #### Registration statement @@ -2982,7 +2980,7 @@ Output series: +-----------------------------+------------+ ``` -### Outlier +### 4.7 Outlier #### Registration statement @@ -3057,7 +3055,7 @@ Output series: ``` -### MasterTrain +### 4.8 MasterTrain #### Usage @@ -3140,7 +3138,7 @@ Output series: +-----------------------------+---------------------------------------------------------------------------------------------+ ``` -### MasterDetect +### 4.9 MasterDetect #### Usage @@ -3309,9 +3307,9 @@ Output series: --> -## Frequency Domain Analysis +## 5. Frequency Domain Analysis -### Conv +### 5.1 Conv #### Registration statement @@ -3364,7 +3362,7 @@ Output series: +-----------------------------+--------------------------------------+ ``` -### Deconv +### 5.2 Deconv #### Registration statement @@ -3450,7 +3448,7 @@ Output series: +-----------------------------+--------------------------------------------------------------+ ``` -### DWT +### 5.3 DWT #### Registration statement @@ -3537,7 +3535,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### FFT +### 5.4 FFT #### Registration statement @@ -3667,7 +3665,7 @@ Note: Based on the conjugation of the Fourier transform result, only the first h According to the given parameter, data points are reserved from low frequency to high frequency until the reserved energy ratio exceeds it. The last data point is reserved to indicate the length of the series. -### HighPass +### 5.5 HighPass #### Registration statement @@ -3760,7 +3758,7 @@ Output series: Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, the output is $y=sin(2\pi t/4)$ after high-pass filtering. -### IFFT +### 5.6 IFFT #### Registration statement @@ -3843,7 +3841,7 @@ Output series: +-----------------------------+-------------------------------------------------------+ ``` -### LowPass +### 5.7 LowPass #### Registration statement @@ -3937,7 +3935,7 @@ Output series: Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, the output is $y=2sin(2\pi t/5)$ after low-pass filtering. -### Envelope +### 5.8 Envelope #### Registration statement @@ -4017,9 +4015,9 @@ Output series: ``` -## Data Matching +## 6. 
Data Matching -### Cov +### 6.1 Cov #### Registration statement @@ -4086,7 +4084,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### DTW +### 6.2 DTW #### Registration statement @@ -4157,7 +4155,7 @@ Output series: +-----------------------------+-------------------------------------+ ``` -### Pearson +### 6.3 Pearson #### Registration statement @@ -4224,7 +4222,7 @@ Output series: +-----------------------------+-----------------------------------------+ ``` -### PtnSym +### 6.4 PtnSym #### Registration statement @@ -4290,7 +4288,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### XCorr +### 6.5 XCorr #### Registration statement @@ -4383,9 +4381,9 @@ Output series: --> -## Data Repairing +## 7. Data Repairing -### TimestampRepair +### 7.1 TimestampRepair #### Registration statement @@ -4494,7 +4492,7 @@ Output series: +-----------------------------+--------------------------------+ ``` -### ValueFill +### 7.2 ValueFill #### Registration statement @@ -4612,7 +4610,7 @@ Output series: +-----------------------------+-------------------------------------------+ ``` -### ValueRepair +### 7.3 ValueRepair #### Registration statement @@ -4738,7 +4736,7 @@ Output series: +-----------------------------+-------------------------------------------------+ ``` -### MasterRepair +### 7.4 MasterRepair #### Usage @@ -4799,7 +4797,7 @@ Output series: +-----------------------------+-------------------------------------------------------------------------------------------+ ``` -### SeasonalRepair +### 7.5 SeasonalRepair #### Usage This function is used to repair the value of the seasonal time series via decomposition. Currently, two methods are supported: **Classical** - detect irregular fluctuations through residual component decomposed by classical decomposition, and repair them through moving average; **Improved** - detect irregular fluctuations through residual component decomposed by improved decomposition, and repair them through moving median. @@ -4924,9 +4922,9 @@ Output series: --> -## Series Discovery +## 8. Series Discovery -### ConsecutiveSequences +### 8.1 ConsecutiveSequences #### Registration statement @@ -5020,7 +5018,7 @@ Output series: +-----------------------------+------------------------------------------------------+ ``` -### ConsecutiveWindows +### 8.2 ConsecutiveWindows #### Registration statement @@ -5110,9 +5108,9 @@ Output series: --> -## Machine Learning +## 9. Machine Learning -### AR +### 9.1 AR #### Registration statement @@ -5179,7 +5177,7 @@ Output Series: +-----------------------------+---------------------------+ ``` -### Representation +### 9.2 Representation #### Usage @@ -5243,7 +5241,7 @@ Output Series: +-----------------------------+-------------------------------------------------+ ``` -### RM +### 9.3 RM #### Usage diff --git a/src/UserGuide/latest/Tools-System/Benchmark.md b/src/UserGuide/latest/Tools-System/Benchmark.md index 3ffd64c78..8172be84c 100644 --- a/src/UserGuide/latest/Tools-System/Benchmark.md +++ b/src/UserGuide/latest/Tools-System/Benchmark.md @@ -52,22 +52,22 @@ Currently IoT-benchmark supports the following time series databases, versions a Table 1-1 Comparison of big data test benchmarks -## Software Installation and Environment Setup +## 1. Software Installation and Environment Setup -### Prerequisites +### 1.1 Prerequisites 1. Java 8 2. Maven 3.6+ 3. 
The corresponding appropriate version of the database, such as Apache IoTDB 1.0 -### How to Get IoT Benchmark +### 1.2 How to Get IoT Benchmark - **Get the binary package**: Enter https://github.com/thulab/iot-benchmark/releases to download the required installation package. Download it as a compressed file, select a folder to decompress and use it. - Compiled from source (can be tested with Apache IoTDB 1.0): - The first step (compile the latest IoTDB Session package): Enter the official website https://github.com/apache/iotdb/tree/rel/1.0 to download the IoTDB source code, and run the command `mvn clean package install -pl session -am -DskipTests` in the root directory to compiles the latest package for IoTDB Session. - The second step (compile the IoTDB Benchmark test package): Enter the official website https://github.com/thulab/iot-benchmark to download the source code, run `mvn clean package install -pl iotdb-1.0 -am -DskipTests` in the root directory to compile Apache IoTDB version 1.0 test package. The relative path between the test package and the root directory is `./iotdb-1.0/target/iotdb-1.0-0.0.1/iotdb-1.0-0.0.1`. -### IoT Benchmark's Test Package Structure +### 1.3 IoT Benchmark's Test Package Structure The directory structure of the test package is shown in Figure 1-3 below. The test configuration file is conf/config.properties, and the test startup scripts are benchmark\.sh (Linux & MacOS) and benchmark.bat (Windows). The detailed usage of the files is shown in Table 1-2. @@ -89,14 +89,14 @@ Figure 1-3 List of files and folders Table 1-2 Usage list of files and folders -### IoT Benchmark Execution Test +### 1.4 IoT Benchmark Execution Test 1. Modify the configuration file according to the test requirements. For the main parameters, see next chapter. The corresponding configuration file is conf/config.properties. For example, to test Apache IoTDB 1.0, you need to modify DB_SWITCH=IoTDB-100-SESSION_BY_TABLET. 2. Start the time series database under test. 3. Running. 4. Start IoT-benchmark to execute the test. Observe the status of the time series database and IoT-benchmark under test during execution, and view the results and analyze the test process after execution. -### IoT Benchmark Results Interpretation +### 1.5 IoT Benchmark Results Interpretation All the log files of the test are stored in the logs folder, and the test results are stored in the data/csvOutput folder after the test is completed. For example, after the test, we get the following result matrix: @@ -113,11 +113,11 @@ All the log files of the test are stored in the logs folder, and the test result - MIN: minimum operation time - Pn: the quantile value of the overall distribution of operations, for example, P25 is the lower quartile. -## Main Parameters +## 2. Main Parameters This chapter mainly explains the purpose and configuration method of the main parameters. -### Working Mode and Operation Proportion +### 2.1 Working Mode and Operation Proportion - The working mode parameter "BENCHMARK_WORK_MODE" can be selected as "default mode" and "server monitoring"; the "server monitoring" mode can be started directly by executing the ser-benchmark\.sh script, and the script will automatically modify this parameter. "Default mode" is a commonly used test mode, combined with the configuration of the OPERATION_PROPORTION parameter to achieve the definition of test operation proportions of "pure write", "pure query" and "read-write mix". 
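For reference, a minimal sketch of the corresponding lines in `conf/config.properties` for a pure-write test in default mode might look as follows (the mode value is taken from Table 1-3 below; the exact number and order of the `OPERATION_PROPORTION` ratio fields should be checked against Table 1-5 and Table 1-6):

```properties
# Default mode: the operation mix is defined by OPERATION_PROPORTION
BENCHMARK_WORK_MODE=testWithDefaultPath
# Pure write: only the first (write) ratio is non-zero; the remaining fields
# correspond to the query types described in Table 1-5 and Table 1-6
OPERATION_PROPORTION=1:0:0:0:0:0:0:0:0:0:0
```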
@@ -130,11 +130,11 @@ Table 1-3 Test mode | default mode | testWithDefaultPath | Supports mixed workloads with multiple read and write operations | | server mode | serverMODE | Server resource usage monitoring mode (running in this mode is started by the ser-benchmark\.sh script, no need to manually configure this parameter) | -### Server Connection Information +### 2.2 Server Connection Information After the working mode is specified, how to inform IoT-benchmark of the information of the time series database under test? Currently, the type of the time-series database under test is informed through "DB_SWITCH"; the network address of the time-series database under test is informed through "HOST"; the network port of the time-series database under test is informed through "PORT"; the login user name of the time-series database under test is informed through "USERNAME"; "PASSWORD" informs the password of the login user of the time series database under test; informs the name of the time series database under test through "DB_NAME"; informs the connection authentication token of the time series database under test through "TOKEN" (used by InfluxDB 2.0). -### Write Scene Setup Parameters +### 2.3 Write Scene Setup Parameters Table 1-4 Write scene setup parameters @@ -156,7 +156,7 @@ Table 1-4 Write scene setup parameters According to the configuration parameters in Table 1-4, the test scenario can be described as follows: write 30,000 (100 devices, 300 sensors for each device) time series sequential data for a day on October 30, 2022 to the time series database under test, in total 2.592 billion data points. The 300 sensor data types of each device are 50 Booleans, 50 integers, 50 long integers, 50 floats, 50 doubles, and 50 characters. If we change the value of IS_OUT_OF_ORDER in the table to true, then the scenario is: write 30,000 time series data on October 30, 2022 to the measured time series database, and there are 30% out of order data ( arrives in the time series database later than other data points whose generation time is later than itself). -### Query Scene Setup Parameters +### 2.4 Query Scene Setup Parameters Table 1-5 Query scene setup parameters @@ -189,13 +189,13 @@ Table 1-6 Query types and example SQL According to the configuration parameters in Table 1-5, the test scenario can be described as follows: Execute 10 reverse order time range queries with value filtering for 2 devices and 2 sensors from the time series database under test. The SQL statement is: `select s_0,s_31from data where time >2022-10-30T00:00:00+08:00 and time < 2022-10-30T00:04:10+08:00 and s_0 > -5 and device in d_21,d_46 order by time desc`. 
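Written out with normalized spacing, the example statement above reads:

```SQL
select s_0, s_31
from data
where time > 2022-10-30T00:00:00+08:00
  and time < 2022-10-30T00:04:10+08:00
  and s_0 > -5
  and device in d_21,d_46
order by time desc
```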
-### Persistence of Test Process and Test Results +### 2.5 Persistence of Test Process and Test Results IoT-benchmark currently supports persisting the test process and test results to IoTDB, MySQL, and CSV through the configuration parameter "TEST_DATA_PERSISTENCE"; writing to MySQL and CSV can define the upper limit of the number of rows in the sub-database and sub-table, such as "RECORD_SPLIT=true, RECORD_SPLIT_MAX_LINE=10000000" means that each database table or CSV file is divided and stored according to the total number of 10 million rows; if the records are recorded to MySQL or IoTDB, database link information needs to be provided, including "TEST_DATA_STORE_IP" the IP address of the database, "TEST_DATA_STORE_PORT" the port number of the database, "TEST_DATA_STORE_DB" the name of the database, "TEST_DATA_STORE_USER" the database user name, and "TEST_DATA_STORE_PW" the database user password. If we set "TEST_DATA_PERSISTENCE=CSV", we can see the newly generated data folder under the IoT-benchmark root directory during and after the test execution, which contains the csv folder to record the test process; the csvOutput folder to record the test results . If we set "TEST_DATA_PERSISTENCE=MySQL", it will create a data table named "testWithDefaultPath_tested database name_remarks_test start time" in the specified MySQL database before the test starts to record the test process; it will record the test process in the "CONFIG" data table (create the table if it does not exist), write the configuration information of this test; when the test is completed, the result of this test will be written in the data table named "FINAL_RESULT" (create the table if it does not exist). -## Use Case +## 3. Use Case We take the application of CRRC Qingdao Sifang Vehicle Research Institute Co., Ltd. as an example, and refer to the scene described in "Apache IoTDB in Intelligent Operation and Maintenance Platform Storage" for practical operation instructions. @@ -222,7 +222,7 @@ Table 2-2 Virtual machine usage | 172.21.4.4 | KaiosDB | | 172.21.4.5 | MySQL | -### Write Test +### 3.1 Write Test Scenario description: Create 100 clients to simulate 100 trains, each train has 3000 sensors, the data type is DOUBLE, the data time interval is 500ms (2Hz), and they are sent sequentially. Referring to the above requirements, we need to modify the IoT-benchmark configuration parameters as listed in Table 2-3. @@ -297,7 +297,7 @@ So what is the resource usage of each server during the test? What is the specif Figure 2-6 Visualization of testing process in Tableau -### Query Test +### 3.2 Query Test Scenario description: In the writing test scenario, 10 clients are simulated to perform all types of query tasks on the data stored in the time series database Apache-IoTDB. The configuration is as follows. @@ -322,7 +322,7 @@ Results: Figure 2-7 Query test results -### Description of Other Parameters +### 3.3 Description of Other Parameters In the previous chapters, the write performance comparison between Apache-IoTDB and KairosDB was performed, but if the user wants to perform a simulated real write rate test, how to configure it? How to control if the test time is too long? Are there any regularities in the generated simulated data? If the IoT-Benchmark server configuration is low, can multiple machines be used to simulate pressure output? 
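As a consolidated illustration of the persistence options described in section 2.5 above, a sketch of the corresponding `conf/config.properties` entries might be (the MySQL address follows Table 2-2; the port, database name and credentials are placeholders):

```properties
# Persist the test process and results to MySQL (IoTDB and CSV are also supported)
TEST_DATA_PERSISTENCE=MySQL
TEST_DATA_STORE_IP=172.21.4.5
TEST_DATA_STORE_PORT=3306
TEST_DATA_STORE_DB=benchmark_results
TEST_DATA_STORE_USER=root
TEST_DATA_STORE_PW=root
# Split each result table or CSV file at 10 million rows
RECORD_SPLIT=true
RECORD_SPLIT_MAX_LINE=10000000
```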
diff --git a/src/UserGuide/latest/Tools-System/CLI.md b/src/UserGuide/latest/Tools-System/CLI.md index 47a038952..9dd4a1248 100644 --- a/src/UserGuide/latest/Tools-System/CLI.md +++ b/src/UserGuide/latest/Tools-System/CLI.md @@ -26,7 +26,7 @@ IoTDB provides Cli/shell tools for users to interact with IoTDB server in comman > Note: In this document, \$IOTDB\_HOME represents the path of the IoTDB installation directory. -## Installation +## 1. Installation If you use the source code version of IoTDB, then under the root path of IoTDB, execute: @@ -38,9 +38,9 @@ After build, the IoTDB Cli will be in the folder "cli/target/iotdb-cli-{project. If you download the binary version, then the Cli can be used directly in sbin folder. -## Running +## 2. Running -### Running Cli +### 2.1 Running Cli After installation, there is a default user in IoTDB: `root`, and the default password is `root`. Users can use this username to try IoTDB Cli/Shell tool. The cli startup script is the `start-cli` file under the \$IOTDB\_HOME/bin folder. When starting the script, you need to specify the IP and PORT. (Make sure the IoTDB cluster is running properly when you use Cli/Shell tool to connect to it.) @@ -80,7 +80,7 @@ IoTDB> Enter ```quit``` or `exit` can exit Cli. -### Cli Parameters +### 2.2 Cli Parameters | Parameter name | Parameter type | Required | Description | Example | | :--------------------------- | :------------------------- | :------- | :----------------------------------------------------------- | :------------------ | @@ -109,7 +109,7 @@ The Windows system startup commands are as follows: Shell > sbin\start-cli.bat -h 10.129.187.21 -p 6667 -u root -pw root -disableISO8601 -maxPRC 10 ``` -### CLI Special Command +### 2.3 CLI Special Command Special commands of Cli are below. @@ -125,7 +125,7 @@ Special commands of Cli are below. | `help` | Get hints for CLI special commands | | `exit/quit` | Exit CLI | -### Note on using the CLI with OpenID Connect Auth enabled on Server side +### 2.4 Note on using the CLI with OpenID Connect Auth enabled on Server side Openid connect (oidc) uses keycloack as the authority authentication service of oidc service @@ -235,7 +235,7 @@ The response looks something like The interesting part here is the access token with the key `access_token`. This has to be passed as username (with parameter `-u`) and empty password to the CLI. -### Batch Operation of Cli +### 2.5 Batch Operation of Cli -e parameter is designed for the Cli/shell tool in the situation where you would like to manipulate IoTDB in batches through scripts. By using the -e parameter, you can operate IoTDB without entering the cli's input mode. diff --git a/src/UserGuide/latest/Tools-System/Maintenance-Tool_apache.md b/src/UserGuide/latest/Tools-System/Maintenance-Tool_apache.md index 98b69f17c..1587518da 100644 --- a/src/UserGuide/latest/Tools-System/Maintenance-Tool_apache.md +++ b/src/UserGuide/latest/Tools-System/Maintenance-Tool_apache.md @@ -20,11 +20,11 @@ --> # Cluster management tool -## IoTDB Data Directory Overview Tool +## 1. IoTDB Data Directory Overview Tool IoTDB data directory overview tool is used to print an overview of the IoTDB data directory structure. The location is tools/tsfile/print-iotdb-data-dir. -### Usage +### 1.1 Usage - For Windows: @@ -40,7 +40,7 @@ IoTDB data directory overview tool is used to print an overview of the IoTDB dat Note: if the storage path of the output overview file is not set, the default relative path "IoTDB_data_dir_overview.txt" will be used. 
-### Example +### 1.2 Example Use Windows in this example: @@ -81,11 +81,11 @@ data dir num:1 |============================================================== ````````````````````````` -## TsFile Sketch Tool +## 2. TsFile Sketch Tool TsFile sketch tool is used to print the content of a TsFile in sketch mode. The location is tools/tsfile/print-tsfile. -### Usage +### 2.1 Usage - For Windows: @@ -101,7 +101,7 @@ TsFile sketch tool is used to print the content of a TsFile in sketch mode. The Note: if the storage path of the output sketch file is not set, the default relative path "TsFile_sketch_view.txt" will be used. -### Example +### 2.2 Example Use Windows in this example: @@ -169,11 +169,11 @@ Explanations: - "||||||||||||||||||||" is the guide information added to enhance readability, not the actual data stored in TsFile. - The last printed "IndexOfTimerseriesIndex Tree" is a reorganization of the metadata index tree at the end of the TsFile, which is convenient for intuitive understanding, and again not the actual data stored in TsFile. -## TsFile Resource Sketch Tool +## 3. TsFile Resource Sketch Tool TsFile resource sketch tool is used to print the content of a TsFile resource file. The location is tools/tsfile/print-tsfile-resource-files. -### Usage +### 3.1 Usage - For Windows: @@ -187,7 +187,7 @@ TsFile resource sketch tool is used to print the content of a TsFile resource fi ./print-tsfile-resource-files.sh ``` -### Example +### 3.2 Example Use Windows in this example: diff --git a/src/UserGuide/latest/Tools-System/Maintenance-Tool_timecho.md b/src/UserGuide/latest/Tools-System/Maintenance-Tool_timecho.md index d77787934..651154352 100644 --- a/src/UserGuide/latest/Tools-System/Maintenance-Tool_timecho.md +++ b/src/UserGuide/latest/Tools-System/Maintenance-Tool_timecho.md @@ -20,14 +20,14 @@ --> # Cluster management tool -## IoTDB-OpsKit +## 1. IoTDB-OpsKit The IoTDB OpsKit is an easy-to-use operation and maintenance tool (enterprise version tool). It is designed to solve the operation and maintenance problems of multiple nodes in the IoTDB distributed system. It mainly includes cluster deployment, cluster start and stop, elastic expansion, configuration update, data export and other functions, thereby realizing one-click command issuance for complex database clusters, which greatly Reduce management difficulty. This document will explain how to remotely deploy, configure, start and stop IoTDB cluster instances with cluster management tools. -### Environment dependence +### 1.1 Environment dependence This tool is a supporting tool for TimechoDB(Enterprise Edition based on IoTDB). You can contact your sales representative to obtain the tool download method. @@ -35,7 +35,7 @@ The machine where IoTDB is to be deployed needs to rely on jdk 8 and above, lsof Tip: The IoTDB cluster management tool requires an account with root privileges -### Deployment method +### 1.2 Deployment method #### Download and install @@ -61,7 +61,7 @@ iotdbctl cluster check example /sbin/iotdbctl cluster check example ``` -### Introduction to cluster configuration files +### 1.3 Introduction to cluster configuration files * There is a cluster configuration yaml file in the `iotdbctl/config` directory. The yaml file name is the cluster name. There can be multiple yaml files. In order to facilitate users to configure yaml files, a `default_cluster.yaml` example is provided under the iotdbctl/config directory. 
* The yaml file configuration consists of five major parts: `global`, `confignode_servers`, `datanode_servers`, `grafana_server`, and `prometheus_server` @@ -152,7 +152,7 @@ If metrics are configured in `iotdb-system.properties` and `iotdb-system.propert Note: How to configure the value corresponding to the yaml key to contain special characters such as: etc. It is recommended to use double quotes for the entire value, and do not use paths containing spaces in the corresponding file paths to prevent abnormal recognition problems. -### scenes to be used +### 1.4 scenes to be used #### Clean data @@ -247,7 +247,7 @@ iotdbctl cluster start default_cluster For more detailed parameters, please refer to the cluster configuration file introduction above -### Command +### 1.5 Command The basic usage of this tool is: ```bash @@ -295,7 +295,7 @@ iotdbctl cluster deploy default_cluster -### Detailed command execution process +### 1.6 Detailed command execution process The following commands are executed using default_cluster.yaml as an example, and users can modify them to their own cluster files to execute @@ -741,7 +741,7 @@ iotdbctl cluster exportschema default_cluster -N datanode1 -param "-t ./ -pf ./p -### Introduction to Cluster Deployment Tool Samples +### 1.7 Introduction to Cluster Deployment Tool Samples In the cluster deployment tool installation directory config/example, there are three yaml examples. If necessary, you can copy them to config and modify them. @@ -752,11 +752,11 @@ In the cluster deployment tool installation directory config/example, there are | default\_3c3d\_grafa\_prome | 3 confignode and 3 datanode, Grafana, Prometheus configuration examples | -## IoTDB Data Directory Overview Tool +## 2. IoTDB Data Directory Overview Tool IoTDB data directory overview tool is used to print an overview of the IoTDB data directory structure. The location is tools/tsfile/print-iotdb-data-dir. -### Usage +### 2.1 Usage - For Windows: @@ -772,7 +772,7 @@ IoTDB data directory overview tool is used to print an overview of the IoTDB dat Note: if the storage path of the output overview file is not set, the default relative path "IoTDB_data_dir_overview.txt" will be used. -### Example +### 2.2 Example Use Windows in this example: @@ -813,11 +813,11 @@ data dir num:1 |============================================================== ````````````````````````` -## TsFile Sketch Tool +## 3. TsFile Sketch Tool TsFile sketch tool is used to print the content of a TsFile in sketch mode. The location is tools/tsfile/print-tsfile. -### Usage +### 3.1 Usage - For Windows: @@ -833,7 +833,7 @@ TsFile sketch tool is used to print the content of a TsFile in sketch mode. The Note: if the storage path of the output sketch file is not set, the default relative path "TsFile_sketch_view.txt" will be used. -### Example +### 3.2 Example Use Windows in this example: @@ -901,11 +901,11 @@ Explanations: - "||||||||||||||||||||" is the guide information added to enhance readability, not the actual data stored in TsFile. - The last printed "IndexOfTimerseriesIndex Tree" is a reorganization of the metadata index tree at the end of the TsFile, which is convenient for intuitive understanding, and again not the actual data stored in TsFile. -## TsFile Resource Sketch Tool +## 4. TsFile Resource Sketch Tool TsFile resource sketch tool is used to print the content of a TsFile resource file. The location is tools/tsfile/print-tsfile-resource-files. 
-### Usage +### 4.1 Usage - For Windows: @@ -919,7 +919,7 @@ TsFile resource sketch tool is used to print the content of a TsFile resource fi ./print-tsfile-resource-files.sh ``` -### Example +### 4.2 Example Use Windows in this example: diff --git a/src/UserGuide/latest/Tools-System/Monitor-Tool_apache.md b/src/UserGuide/latest/Tools-System/Monitor-Tool_apache.md index 9aa65e5dd..fcb72d515 100644 --- a/src/UserGuide/latest/Tools-System/Monitor-Tool_apache.md +++ b/src/UserGuide/latest/Tools-System/Monitor-Tool_apache.md @@ -23,9 +23,9 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel Deployment](../Deployment-and-Maintenance/Monitoring-panel-deployment.md) section. -## Prometheus +## 1. Prometheus -### The mapping from metric type to prometheus format +### 1.1 The mapping from metric type to prometheus format > For metrics whose Metric Name is name and Tags are K1=V1, ..., Kn=Vn, the mapping is as follows, where value is a > specific value @@ -38,7 +38,7 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel D | Rate | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m1"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m5"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m15"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="mean"} value | | Timer | name_seconds_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value | -### Config File +### 1.2 Config File 1) Taking DataNode as an example, modify the iotdb-system.properties configuration file as follows: @@ -64,7 +64,7 @@ file_count{name="seq",} 2.0 ... ``` -### Prometheus + Grafana +### 1.3 Prometheus + Grafana As shown above, IoTDB exposes monitoring metrics data in the standard Prometheus format to the outside world. Prometheus can be used to collect and store monitoring indicators, and Grafana can be used to visualize monitoring indicators. @@ -107,7 +107,7 @@ The following documents may help you have a good journey with Prometheus and Gra [Grafana query metrics from Prometheus](https://prometheus.io/docs/visualization/grafana/#grafana-support-for-prometheus) -## Apache IoTDB Dashboard +## 2. Apache IoTDB Dashboard `Apache IoTDB Dashboard` is available as a supplement to IoTDB Enterprise Edition, designed for unified centralized operations and management. With it, multiple clusters can be monitored through a single panel. You can access the Dashboard's Json file by contacting Commerce. @@ -118,7 +118,7 @@ The following documents may help you have a good journey with Prometheus and Gra -### Cluster Overview +### 2.1 Cluster Overview Including but not limited to: @@ -132,7 +132,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%A6%82%E8%A7%88.png) -### Data Writing +### 2.2 Data Writing Including but not limited to: @@ -142,7 +142,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%86%99%E5%85%A5.png) -### Data Querying +### 2.3 Data Querying Including but not limited to: @@ -156,7 +156,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%9F%A5%E8%AF%A2.png) -### Storage Engine +### 2.4 Storage Engine Including but not limited to: @@ -166,7 +166,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E.png) -### System Monitoring +### 2.5 System Monitoring Including but not limited to: diff --git a/src/UserGuide/latest/Tools-System/Monitor-Tool_timecho.md b/src/UserGuide/latest/Tools-System/Monitor-Tool_timecho.md index 5e0964932..aa3301a6c 100644 --- a/src/UserGuide/latest/Tools-System/Monitor-Tool_timecho.md +++ b/src/UserGuide/latest/Tools-System/Monitor-Tool_timecho.md @@ -23,9 +23,9 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel Deployment](../Deployment-and-Maintenance/Monitoring-panel-deployment.md) section. -## Prometheus +## 1. Prometheus -### The mapping from metric type to prometheus format +### 1.1 The mapping from metric type to prometheus format > For metrics whose Metric Name is name and Tags are K1=V1, ..., Kn=Vn, the mapping is as follows, where value is a > specific value @@ -38,7 +38,7 @@ The deployment of monitoring tools can refer to the document [Monitoring Panel D | Rate | name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m1"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m5"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="m15"} value
name_total{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", rate="mean"} value | | Timer | name_seconds_max{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_sum{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds_count{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.5"} value
name_seconds{cluster="clusterName", nodeType="nodeType", nodeId="nodeId", k1="V1", ..., Kn="Vn", quantile="0.99"} value | -### Config File +### 1.2 Config File 1) Taking DataNode as an example, modify the iotdb-system.properties configuration file as follows: @@ -64,7 +64,7 @@ file_count{name="seq",} 2.0 ... ``` -### Prometheus + Grafana +### 1.3 Prometheus + Grafana As shown above, IoTDB exposes monitoring metrics data in the standard Prometheus format to the outside world. Prometheus can be used to collect and store monitoring indicators, and Grafana can be used to visualize monitoring indicators. @@ -107,7 +107,7 @@ The following documents may help you have a good journey with Prometheus and Gra [Grafana query metrics from Prometheus](https://prometheus.io/docs/visualization/grafana/#grafana-support-for-prometheus) -## Apache IoTDB Dashboard +## 2. Apache IoTDB Dashboard We introduce the Apache IoTDB Dashboard, designed for unified centralized operations and management. With it, multiple clusters can be monitored through a single panel. @@ -118,7 +118,7 @@ We introduce the Apache IoTDB Dashboard, designed for unified centralized operat You can access the Dashboard's Json file in the enterprise edition. -### Cluster Overview +### 2.1 Cluster Overview Including but not limited to: @@ -132,7 +132,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%A6%82%E8%A7%88.png) -### Data Writing +### 2.2 Data Writing Including but not limited to: @@ -142,7 +142,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%86%99%E5%85%A5.png) -### Data Querying +### 2.3 Data Querying Including but not limited to: @@ -156,7 +156,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E6%9F%A5%E8%AF%A2.png) -### Storage Engine +### 2.4 Storage Engine Including but not limited to: @@ -166,7 +166,7 @@ Including but not limited to: ![](/img/%E7%9B%91%E6%8E%A7%20%E5%AD%98%E5%82%A8%E5%BC%95%E6%93%8E.png) -### System Monitoring +### 2.5 System Monitoring Including but not limited to: diff --git a/src/UserGuide/latest/Tools-System/Workbench_timecho.md b/src/UserGuide/latest/Tools-System/Workbench_timecho.md index 8b124a643..cb0d17f1e 100644 --- a/src/UserGuide/latest/Tools-System/Workbench_timecho.md +++ b/src/UserGuide/latest/Tools-System/Workbench_timecho.md @@ -2,10 +2,10 @@ The deployment of the visualization console can refer to the document [Workbench Deployment](../Deployment-and-Maintenance/workbench-deployment_timecho.md) chapter. -## Product Introduction +## 1. Product Introduction IoTDB Visualization Console is an extension component developed for industrial scenarios based on the IoTDB Enterprise Edition time series database. It integrates real-time data collection, storage, and analysis, aiming to provide users with efficient and reliable real-time data storage and query solutions. It features lightweight, high performance, and ease of use, seamlessly integrating with the Hadoop and Spark ecosystems. It is suitable for high-speed writing and complex analytical queries of massive time series data in industrial IoT applications. -## Instructions for Use +## 2. 
Instructions for Use | **Functional Module** | **Functional Description** | | ---------------------- | ------------------------------------------------------------ | | Instance Management | Support unified management of connected instances, support creation, editing, and deletion, while visualizing the relationships between multiple instances, helping customers manage multiple database instances more clearly | diff --git a/src/UserGuide/latest/User-Manual/AINode_apache.md b/src/UserGuide/latest/User-Manual/AINode_apache.md index 3045fa625..20f5e1192 100644 --- a/src/UserGuide/latest/User-Manual/AINode_apache.md +++ b/src/UserGuide/latest/User-Manual/AINode_apache.md @@ -33,7 +33,7 @@ The responsibilities of the three nodes are as follows: - **DataNode**: responsible for receiving and parsing SQL requests from users; responsible for storing time-series data; responsible for preprocessing computation of data. - **AINode**: responsible for model file import creation and model inference. -## Advantageous features +## 1. Advantageous features Compared with building a machine learning service alone, it has the following advantages: @@ -50,7 +50,7 @@ Compared with building a machine learning service alone, it has the following ad -## Basic Concepts +## 2. Basic Concepts - **Model**: a machine learning model that takes time-series data as input and outputs the results or decisions of an analysis task. Model is the basic management unit of AINode, which supports adding (registration), deleting, checking, and using (inference) of models. - **Create**: Load externally designed or trained model files or algorithms into MLNode for unified management and use by IoTDB. @@ -61,16 +61,16 @@ Compared with building a machine learning service alone, it has the following ad :::: -## Installation and Deployment +## 3. Installation and Deployment The deployment of AINode can be found in the document [Deployment Guidelines](../Deployment-and-Maintenance/AINode_Deployment_apache.md#ainode-deployment) . -## Usage Guidelines +## 4. Usage Guidelines AINode provides model creation and deletion process for deep learning models related to timing data. Built-in models do not need to be created and deleted, they can be used directly, and the built-in model instances created after inference is completed will be destroyed automatically. -### Registering Models +### 4.1 Registering Models A trained deep learning model can be registered by specifying the vector dimensions of the model's inputs and outputs, which can be used for model inference. @@ -156,7 +156,7 @@ After the SQL is executed, the registration process will be carried out asynchro Once the model registration is complete, you can call specific functions and perform model inference by using normal queries. -### Viewing Models +### 4.2 Viewing Models Successfully registered models can be queried for model-specific information through the show models command. The SQL definition is as follows: @@ -204,7 +204,7 @@ IoTDB> show models We have registered the corresponding model earlier, you can view the model status through the corresponding designation, active indicates that the model is successfully registered and can be used for inference. -### Delete Model +### 4.3 Delete Model For a successfully registered model, the user can delete it via SQL. In addition to deleting the meta information on the configNode, this operation also deletes all the related model files under the AINode. 
The SQL is as follows: @@ -214,7 +214,7 @@ drop model You need to specify the model model_name that has been successfully registered to delete the corresponding model. Since model deletion involves the deletion of data on multiple nodes, the operation will not be completed immediately, and the state of the model at this time is DROPPING, and the model in this state cannot be used for model inference. -### Using Built-in Model Reasoning +### 4.4 Using Built-in Model Reasoning The SQL syntax is as follows: @@ -284,7 +284,7 @@ IoTDB> call inference(_Stray, "select s0 from root.eg.airline", k=2) Total line number = 144 ``` -### Reasoning with Deep Learning Models +### 4.5 Reasoning with Deep Learning Models The SQL syntax is as follows: @@ -444,7 +444,7 @@ Total line number = 4 In the result set, each row's label corresponds to the output of the anomaly detection model after inputting each group of 24 rows of data. -## Privilege Management +## 5. Privilege Management When using AINode related functions, the authentication of IoTDB itself can be used to do a permission management, users can only use the model management related functions when they have the USE_MODEL permission. When using the inference function, the user needs to have the permission to access the source sequence corresponding to the SQL of the input model. @@ -453,9 +453,9 @@ When using AINode related functions, the authentication of IoTDB itself can be u | USE_MODEL | create model/show models/drop model | √ | √ | x | | READ_DATA| call inference | √ | √|√ | -## Practical Examples +## 6. Practical Examples -### Power Load Prediction +### 6.1 Power Load Prediction In some industrial scenarios, there is a need to predict power loads, which can be used to optimise power supply, conserve energy and resources, support planning and expansion, and enhance power system reliability. @@ -526,7 +526,7 @@ The data before 10/24 00:00 represents the past data input to the model, the blu As can be seen, we have used the relationship between the six load information and the corresponding time oil temperatures for the past 96 hours (4 days) to model the possible changes in this data for the oil temperature for the next 48 hours (2 days) based on the inter-relationships between the sequences learned previously, and it can be seen that the predicted curves maintain a high degree of consistency in trend with the actual results after visualisation. -### Power Prediction +### 6.2 Power Prediction Power monitoring of current, voltage and power data is required in substations for detecting potential grid problems, identifying faults in the power system, effectively managing grid loads and analysing power system performance and trends. @@ -592,7 +592,7 @@ The data before 02/14 20:48 represents the past data input to the model, the blu It can be seen that we used the voltage data from the past 10 minutes and, based on the previously learned inter-sequence relationships, modeled the possible changes in the phase C voltage data for the next 5 minutes. The visualized forecast curve shows a certain degree of synchronicity with the actual results in terms of trend. -### Anomaly Detection +### 6.3 Anomaly Detection In the civil aviation and transport industry, there exists a need for anomaly detection of the number of passengers travelling on an aircraft. The results of anomaly detection can be used to guide the adjustment of flight scheduling to make the organisation more efficient. 
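The built-in model call introduced in section 4.4 applies directly to such a passenger series; for example, reusing the airline series from that section:

```SQL
call inference(_Stray, "select s0 from root.eg.airline", k=2)
```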
diff --git a/src/UserGuide/latest/User-Manual/AINode_timecho.md b/src/UserGuide/latest/User-Manual/AINode_timecho.md index b797c8dfb..9c593ad07 100644 --- a/src/UserGuide/latest/User-Manual/AINode_timecho.md +++ b/src/UserGuide/latest/User-Manual/AINode_timecho.md @@ -33,7 +33,7 @@ The responsibilities of the three nodes are as follows: - **DataNode**: responsible for receiving and parsing SQL requests from users; responsible for storing time-series data; responsible for preprocessing computation of data. - **AINode**: responsible for model file import creation and model inference. -## Advantageous features +## 1. Advantageous features Compared with building a machine learning service alone, it has the following advantages: @@ -50,7 +50,7 @@ Compared with building a machine learning service alone, it has the following ad -## Basic Concepts +## 2. Basic Concepts - **Model**: a machine learning model that takes time-series data as input and outputs the results or decisions of an analysis task. Model is the basic management unit of AINode, which supports adding (registration), deleting, checking, and using (inference) of models. - **Create**: Load externally designed or trained model files or algorithms into MLNode for unified management and use by IoTDB. @@ -61,16 +61,16 @@ Compared with building a machine learning service alone, it has the following ad :::: -## Installation and Deployment +## 3. Installation and Deployment The deployment of AINode can be found in the document [Deployment Guidelines](../Deployment-and-Maintenance/AINode_Deployment_timecho.md#AINode-部署) . -## Usage Guidelines +## 4. Usage Guidelines AINode provides model creation and deletion process for deep learning models related to timing data. Built-in models do not need to be created and deleted, they can be used directly, and the built-in model instances created after inference is completed will be destroyed automatically. -### Registering Models +### 4.1 Registering Models A trained deep learning model can be registered by specifying the vector dimensions of the model's inputs and outputs, which can be used for model inference. @@ -156,7 +156,7 @@ After the SQL is executed, the registration process will be carried out asynchro Once the model registration is complete, you can call specific functions and perform model inference by using normal queries. -### Viewing Models +### 4.2 Viewing Models Successfully registered models can be queried for model-specific information through the show models command. The SQL definition is as follows: @@ -204,7 +204,7 @@ IoTDB> show models We have registered the corresponding model earlier, you can view the model status through the corresponding designation, active indicates that the model is successfully registered and can be used for inference. -### Delete Model +### 4.3 Delete Model For a successfully registered model, the user can delete it via SQL. In addition to deleting the meta information on the configNode, this operation also deletes all the related model files under the AINode. The SQL is as follows: @@ -214,7 +214,7 @@ drop model You need to specify the model model_name that has been successfully registered to delete the corresponding model. Since model deletion involves the deletion of data on multiple nodes, the operation will not be completed immediately, and the state of the model at this time is DROPPING, and the model in this state cannot be used for model inference. 
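For illustration, assuming a registered model named `dlinear_example` (a hypothetical name), the corresponding deletion statement would be:

```SQL
drop model dlinear_example
```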
-### Using Built-in Model Reasoning +### 4.4 Using Built-in Model Reasoning The SQL syntax is as follows: @@ -284,7 +284,7 @@ IoTDB> call inference(_Stray, "select s0 from root.eg.airline", k=2) Total line number = 144 ``` -### Reasoning with Deep Learning Models +### 4.5 Reasoning with Deep Learning Models The SQL syntax is as follows: @@ -444,7 +444,7 @@ Total line number = 4 In the result set, each row's label corresponds to the output of the anomaly detection model after inputting each group of 24 rows of data. -## Privilege Management +## 5. Privilege Management When using AINode related functions, the authentication of IoTDB itself can be used to do a permission management, users can only use the model management related functions when they have the USE_MODEL permission. When using the inference function, the user needs to have the permission to access the source sequence corresponding to the SQL of the input model. @@ -453,9 +453,9 @@ When using AINode related functions, the authentication of IoTDB itself can be u | USE_MODEL | create model/show models/drop model | √ | √ | x | | READ_DATA| call inference | √ | √|√ | -## Practical Examples +## 6. Practical Examples -### Power Load Prediction +### 6.1 Power Load Prediction In some industrial scenarios, there is a need to predict power loads, which can be used to optimise power supply, conserve energy and resources, support planning and expansion, and enhance power system reliability. @@ -526,7 +526,7 @@ The data before 10/24 00:00 represents the past data input to the model, the blu As can be seen, we have used the relationship between the six load information and the corresponding time oil temperatures for the past 96 hours (4 days) to model the possible changes in this data for the oil temperature for the next 48 hours (2 days) based on the inter-relationships between the sequences learned previously, and it can be seen that the predicted curves maintain a high degree of consistency in trend with the actual results after visualisation. -### Power Prediction +### 6.2 Power Prediction Power monitoring of current, voltage and power data is required in substations for detecting potential grid problems, identifying faults in the power system, effectively managing grid loads and analysing power system performance and trends. @@ -592,7 +592,7 @@ The data before 02/14 20:48 represents the past data input to the model, the blu It can be seen that we used the voltage data from the past 10 minutes and, based on the previously learned inter-sequence relationships, modeled the possible changes in the phase C voltage data for the next 5 minutes. The visualized forecast curve shows a certain degree of synchronicity with the actual results in terms of trend. -### Anomaly Detection +### 6.3 Anomaly Detection In the civil aviation and transport industry, there exists a need for anomaly detection of the number of passengers travelling on an aircraft. The results of anomaly detection can be used to guide the adjustment of flight scheduling to make the organisation more efficient. diff --git a/src/UserGuide/latest/User-Manual/Audit-Log_timecho.md b/src/UserGuide/latest/User-Manual/Audit-Log_timecho.md index ea72a56e0..78744238c 100644 --- a/src/UserGuide/latest/User-Manual/Audit-Log_timecho.md +++ b/src/UserGuide/latest/User-Manual/Audit-Log_timecho.md @@ -21,14 +21,14 @@ # Audit log -## Background of the function +## 1. 
Background of the function Audit log is the record credentials of a database, which can be queried by the audit log function to ensure information security by various operations such as user add, delete, change and check in the database. With the audit log function of IoTDB, the following scenarios can be achieved: - We can decide whether to record audit logs according to the source of the link ( human operation or not), such as: non-human operation such as hardware collector write data no need to record audit logs, human operation such as ordinary users through cli, workbench and other tools to operate the data need to record audit logs. - Filter out system-level write operations, such as those recorded by the IoTDB monitoring system itself. -### Scene Description +### 1.1 Scene Description #### Logging all operations (add, delete, change, check) of all users @@ -43,7 +43,7 @@ Client Sources: No audit logs are required for data written by the hardware collector via Session/JDBC/MQTT if it is a non-human action. -## Function Definition +## 2. Function Definition It is available through through configurations: @@ -57,7 +57,7 @@ It is available through through configurations: 2. data and metadata query operations 3. metadata class adding, modifying, and deleting operations. -### configuration item +### 2.1 configuration item In iotdb-system.properties, change the following configurations: diff --git a/src/UserGuide/latest/User-Manual/Authority-Management.md b/src/UserGuide/latest/User-Manual/Authority-Management.md index 0724d13c9..ea58dc7bd 100644 --- a/src/UserGuide/latest/User-Manual/Authority-Management.md +++ b/src/UserGuide/latest/User-Manual/Authority-Management.md @@ -25,41 +25,41 @@ IoTDB provides permission management operations, offering users the ability to m This article introduces the basic concepts of the permission module in IoTDB, including user definition, permission management, authentication logic, and use cases. In the JAVA programming environment, you can use the [JDBC API](https://chat.openai.com/API/Programming-JDBC.md) to execute permission management statements individually or in batches. -## Basic Concepts +## 1. Basic Concepts -### User +### 1.1 User A user is a legitimate user of the database. Each user corresponds to a unique username and has a password as a means of authentication. Before using the database, a person must provide a valid (i.e., stored in the database) username and password for a successful login. -### Permission +### 1.2 Permission The database provides various operations, but not all users can perform all operations. If a user can perform a certain operation, they are said to have permission to execute that operation. Permissions are typically limited in scope by a path, and [path patterns](https://chat.openai.com/Basic-Concept/Data-Model-and-Terminology.md) can be used to manage permissions flexibly. -### Role +### 1.3 Role A role is a collection of multiple permissions and has a unique role name as an identifier. Roles often correspond to real-world identities (e.g., a traffic dispatcher), and a real-world identity may correspond to multiple users. Users with the same real-world identity often have the same permissions, and roles are abstractions for unified management of such permissions. -### Default Users and Roles +### 1.4 Default Users and Roles After installation and initialization, IoTDB includes a default user: root, with the default password root. 
This user is an administrator with fixed permissions, which cannot be granted or revoked and cannot be deleted. There is only one administrator user in the database. A newly created user or role does not have any permissions initially. -## User Definition +## 2. User Definition Users with MANAGE_USER and MANAGE_ROLE permissions or administrators can create users or roles. Creating a user must meet the following constraints. -### Username Constraints +### 2.1 Username Constraints 4 to 32 characters, supports the use of uppercase and lowercase English letters, numbers, and special characters (`!@#$%^&*()_+-=`). Users cannot create users with the same name as the administrator. -### Password Constraints +### 2.2 Password Constraints 4 to 32 characters, can use uppercase and lowercase letters, numbers, and special characters (`!@#$%^&*()_+-=`). Passwords are encrypted by default using MD5. -### Role Name Constraints +### 2.3 Role Name Constraints 4 to 32 characters, supports the use of uppercase and lowercase English letters, numbers, and special characters (`!@#$%^&*()_+-=`). @@ -67,11 +67,11 @@ Users cannot create roles with the same name as the administrator. -## Permission Management +## 3. Permission Management IoTDB primarily has two types of permissions: series permissions and global permissions. -### Series Permissions +### 3.1 Series Permissions Series permissions constrain the scope and manner in which users access data. IOTDB support authorization for both absolute paths and prefix-matching paths, and can be effective at the timeseries granularity. @@ -87,7 +87,7 @@ The table below describes the types and scope of these permissions: | WRITE_SCHEMA | Allows obtaining detailed information about the metadata tree under the authorized path.
Allows creating, deleting, and modifying time series, templates, views, etc. under the authorized path. When a view is created or modified, the system checks the WRITE_SCHEMA permission for the view path and the READ_SCHEMA permission for the data source. When data is queried or written through a view, the system checks the READ_DATA and WRITE_DATA permissions for the view path.
Allows setting, unsetting, and viewing TTL under the authorized path.
Allows attaching or detaching templates under the authorized path. | -### Global Permissions +### 3.2 Global Permissions Global permissions constrain the database functions that users can use and restrict commands that change the system and task state. Once a user obtains global authorization, they can manage the database. The table below describes the types of system permissions: @@ -115,7 +115,7 @@ Regarding template permissions: -### Granting and Revoking Permissions +### 3.3 Granting and Revoking Permissions In IoTDB, users can obtain permissions through three methods: @@ -135,7 +135,7 @@ Revoking a user's permissions can be done through the following methods: -## Authentication +## 4. Authentication User permissions mainly consist of three parts: permission scope (path), permission type, and the "with grant option" flag: @@ -160,7 +160,7 @@ Please note that the following operations require checking multiple permissions: 3. View permissions and data source permissions are independent. Performing read and write operations on a view will only check the permissions of the view itself and will not perform permission validation on the source path. -## Function Syntax and Examples +## 5. Function Syntax and Examples IoTDB provides composite permissions for user authorization: @@ -174,7 +174,7 @@ Composite permissions are not specific permissions themselves but a shorthand wa The following series of specific use cases will demonstrate the usage of permission statements. Non-administrator users executing the following statements require obtaining the necessary permissions, which are indicated after the operation description. -### User and Role Related +### 5.1 User and Role Related - Create user (Requires MANAGE_USER permission) @@ -273,7 +273,7 @@ ALTER USER SET PASSWORD ; eg: ALTER USER tempuser SET PASSWORD 'newpwd'; ``` -### Authorization and Deauthorization +### 5.2 Authorization and Deauthorization Users can use authorization statements to grant permissions to other users. The syntax is as follows: @@ -337,7 +337,7 @@ eg: REVOKE ALL ON ROOT.** FROM USER user1; -## Examples +## 6. Examples Based on the described [sample data](https://github.com/thulab/iotdb/files/4438687/OtherMaterial-Sample.Data.txt), IoTDB's sample data may belong to different power generation groups such as ln, sgcc, and so on. Different power generation groups do not want other groups to access their database data, so we need to implement data isolation at the group level. @@ -439,7 +439,7 @@ IoTDB> INSERT INTO root.ln.wf01.wt01(timestamp, status) values(1509465600000, tr Msg: 803: No permissions for this operation, please add privilege WRITE_DATA on [root.ln.wf01.wt01.status] ``` -## Other Explanations +## 7. Other Explanations Roles are collections of permissions, and both permissions and roles are attributes of users. In other words, a role can have multiple permissions, and a user can have multiple roles and permissions (referred to as the user's self-permissions). @@ -451,7 +451,7 @@ At the same time, changes to roles will be immediately reflected in all users wh -## Upgrading from a previous version +## 8. Upgrading from a previous version Before version 1.3, there were many different permission types. In 1.3 version's implementation, we have streamlined the permission types. 
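For example, the streamlined permissions are granted and revoked directly on path patterns. The following is only an illustrative sketch — the user names, role names, and paths are hypothetical — using the GRANT/REVOKE syntax and the permission types described above:

```sql
-- Hypothetical users, roles, and paths, shown only to illustrate the streamlined permission types
GRANT READ_DATA ON root.ln.** TO USER analyst_user;
GRANT WRITE_DATA, WRITE_SCHEMA ON root.ln.wf01.** TO ROLE operator_role;

-- The "with grant option" flag allows the grantee to pass the permission on to others
GRANT READ_DATA ON root.sgcc.** TO USER group_admin WITH GRANT OPTION;

-- Revoking follows the same pattern
REVOKE WRITE_DATA ON root.ln.wf01.** FROM ROLE operator_role;
```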
diff --git a/src/UserGuide/latest/User-Manual/Data-Recovery.md b/src/UserGuide/latest/User-Manual/Data-Recovery.md index ab0cd6ea0..a1e374dd6 100644 --- a/src/UserGuide/latest/User-Manual/Data-Recovery.md +++ b/src/UserGuide/latest/User-Manual/Data-Recovery.md @@ -19,11 +19,11 @@ --> -## Data Recovery +# Data Recovery Used to fix issues in data, such as data in sequential space not being arranged in chronological order. -### START REPAIR DATA +## 1. START REPAIR DATA Start a repair task to scan all files created before current time. The repair task will scan all tsfiles and repair some bad files. @@ -34,7 +34,7 @@ IoTDB> START REPAIR DATA ON LOCAL IoTDB> START REPAIR DATA ON CLUSTER ``` -### STOP REPAIR DATA +## 2. STOP REPAIR DATA Stop the running repair task. To restart the stopped task. If there is a stopped repair task, it can be restart and recover the repair progress by executing SQL `START REPAIR DATA`. diff --git a/src/UserGuide/latest/User-Manual/Data-Sync_apache.md b/src/UserGuide/latest/User-Manual/Data-Sync_apache.md index a2ffa995e..1db4190cd 100644 --- a/src/UserGuide/latest/User-Manual/Data-Sync_apache.md +++ b/src/UserGuide/latest/User-Manual/Data-Sync_apache.md @@ -23,9 +23,9 @@ Data synchronization is a typical requirement in industrial Internet of Things (IoT). Through data synchronization mechanisms, it is possible to achieve data sharing between IoTDB, and to establish a complete data link to meet the needs for internal and external network data interconnectivity, edge-cloud synchronization, data migration, and data backup. -## Function Overview +## 1. Function Overview -### Data Synchronization +### 1.1 Data Synchronization A data synchronization task consists of three stages: @@ -77,7 +77,7 @@ By declaratively configuring the specific content of the three parts through SQL -### Functional limitations and instructions +### 1.2 Functional limitations and instructions The schema and auth synchronization functions have the following limitations: @@ -89,7 +89,7 @@ The schema and auth synchronization functions have the following limitations: - During data synchronization tasks, please avoid performing any deletion operations to prevent inconsistent states between the two ends. -## Usage Instructions +## 2. Usage Instructions Data synchronization tasks have three states: RUNNING, STOPPED, and DROPPED. The task state transitions are shown in the following diagram: @@ -99,7 +99,7 @@ After creation, the task will start directly, and when the task stops abnormally Provide the following SQL statements for state management of synchronization tasks. -### Create Task +### 2.1 Create Task Use the `CREATE PIPE` statement to create a data synchronization task. The `PipeId` and `sink` attributes are required, while `source` and `processor` are optional. When entering the SQL, note that the order of the `SOURCE` and `SINK` plugins cannot be swapped. @@ -123,7 +123,7 @@ WITH SINK ( **IF NOT EXISTS semantics**: Used in creation operations to ensure that the create command is executed when the specified Pipe does not exist, preventing errors caused by attempting to create an existing Pipe. 
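As a minimal, illustrative sketch of the syntax above (the pipe name and target address are examples, and `iotdb-thrift-sink` is one of the pre-installed sink plugins introduced later), a task that only declares the required sink might look like this:

```sql
-- Minimal example: only the required SINK is declared,
-- so the source and processor fall back to their defaults
CREATE PIPE IF NOT EXISTS a2b
WITH SINK (
  'sink'      = 'iotdb-thrift-sink',  -- pre-installed sink plugin
  'node-urls' = '127.0.0.1:6668'      -- data service address of a DataNode on the target IoTDB
)
```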
-### Start Task +### 2.2 Start Task Start processing data: @@ -131,7 +131,7 @@ Start processing data: START PIPE ``` -### Stop Task +### 2.3 Stop Task Stop processing data: @@ -139,7 +139,7 @@ Stop processing data: STOP PIPE ``` -### Delete Task +### 2.4 Delete Task Deletes the specified task: @@ -150,7 +150,7 @@ DROP PIPE [IF EXISTS] Deleting a task does not require stopping the synchronization task first. -### View Task +### 2.5 View Task View all tasks: @@ -186,7 +186,7 @@ The meanings of each column are as follows: - **RemainingEventCount (Statistics with Delay)**: The number of remaining events, which is the total count of all events in the current data synchronization task, including data and schema synchronization events, as well as system and user-defined events. - **EstimatedRemainingSeconds (Statistics with Delay)**: The estimated remaining time, based on the current number of events and the rate at the pipe, to complete the transfer. -### Synchronization Plugins +### 2.6 Synchronization Plugins To make the overall architecture more flexible to match different synchronization scenario requirements, we support plugin assembly within the synchronization task framework. The system comes with some pre-installed common plugins that you can use directly. At the same time, you can also customize processor plugins and Sink plugins, and load them into the IoTDB system for use. You can view the plugins in the system (including custom and built-in plugins) with the following statement: @@ -257,9 +257,9 @@ Detailed introduction of pre-installed plugins is as follows (for detailed param For importing custom plugins, please refer to the [Stream Processing](./Streaming_apache.md#custom-stream-processing-plugin-management) section. -## Use examples +## 3. Use examples -### Full data synchronisation +### 3.1 Full data synchronisation This example is used to demonstrate the synchronisation of all data from one IoTDB to another IoTDB with the data link as shown below: @@ -274,7 +274,7 @@ with sink ( 'node-urls' = '127.0.0.1:6668', -- The URL of the data service port of the DataNode node on the target IoTDB ``` -### Partial data synchronization +### 3.2 Partial data synchronization This example is used to demonstrate the synchronisation of data from a certain historical time range (8:00pm 23 August 2023 to 8:00pm 23 October 2023) to another IoTDB, the data link is shown below: @@ -298,7 +298,7 @@ with SINK ( ) ``` -### Edge-cloud data transfer +### 3.3 Edge-cloud data transfer This example is used to demonstrate the scenario where data from multiple IoTDB is transferred to the cloud, with data from clusters B, C, and D all synchronized to cluster A, as shown in the figure below: @@ -350,7 +350,7 @@ with sink ( ) ``` -### Cascading data transfer +### 3.4 Cascading data transfer This example is used to demonstrate the scenario where data is transferred in a cascading manner between multiple IoTDB, with data from cluster A synchronized to cluster B, and then to cluster C, as shown in the figure below: @@ -384,7 +384,7 @@ with sink ( ``` -### Compression Synchronization (V1.3.3+) +### 3.5 Compression Synchronization (V1.3.3+) IoTDB supports specifying data compression methods during synchronization. Real time compression and transmission of data can be achieved by configuring the `compressor` parameter. 
`Compressor` currently supports 5 optional algorithms: snappy/gzip/lz4/zstd/lzma2, and can choose multiple compression algorithm combinations to compress in the order of configuration `rate-limit-bytes-per-second`(supported in V1.3.3 and later versions) is the maximum number of bytes allowed to be transmitted per second, calculated as compressed bytes. If it is less than 0, there is no limit. @@ -398,7 +398,7 @@ with sink ( ) ``` -### Encrypted Synchronization (V1.3.1+) +### 3.6 Encrypted Synchronization (V1.3.1+) IoTDB supports the use of SSL encryption during the synchronization process, ensuring the secure transfer of data between different IoTDB instances. By configuring SSL-related parameters, such as the certificate address and password (`ssl.trust-store-path`)、(`ssl.trust-store-pwd`), data can be protected by SSL encryption during the synchronization process. @@ -414,7 +414,7 @@ with sink ( ) ``` -## Reference: Notes +## 4. Reference: Notes You can adjust the parameters for data synchronization by modifying the IoTDB configuration file (`iotdb-system.properties`), such as the directory for storing synchronized data. The complete configuration is as follows: @@ -478,9 +478,9 @@ pipe_sink_max_client_number=16 pipe_all_sinks_rate_limit_bytes_per_second=-1 ``` -## Reference: parameter description +## 5. Reference: parameter description -### source parameter(V1.3.3) +### 5.1 source parameter(V1.3.3) | key | value | value range | required or not | default value | | :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | @@ -504,7 +504,7 @@ pipe_all_sinks_rate_limit_bytes_per_second=-1 > - **batch**: In this mode, tasks process and send data in batches (according to the underlying data files). It is characterized by low timeliness and high throughput. -## sink parameter +### 5.2 sink parameter > In versions 1.3.3 and above, when only the sink is included, the additional "with sink" prefix is no longer required. diff --git a/src/UserGuide/latest/User-Manual/Data-Sync_timecho.md b/src/UserGuide/latest/User-Manual/Data-Sync_timecho.md index 5b7075004..4db269933 100644 --- a/src/UserGuide/latest/User-Manual/Data-Sync_timecho.md +++ b/src/UserGuide/latest/User-Manual/Data-Sync_timecho.md @@ -23,9 +23,9 @@ Data synchronization is a typical requirement in industrial Internet of Things (IoT). Through data synchronization mechanisms, it is possible to achieve data sharing between IoTDB, and to establish a complete data link to meet the needs for internal and external network data interconnectivity, edge-cloud synchronization, data migration, and data backup. -## Function Overview +## 1. Function Overview -### Data Synchronization +### 1.1 Data Synchronization A data synchronization task consists of three stages: @@ -77,7 +77,7 @@ By declaratively configuring the specific content of the three parts through SQL -### Functional limitations and instructions +### 1.2 Functional limitations and instructions The schema and auth synchronization functions have the following limitations: @@ -91,7 +91,7 @@ The schema and auth synchronization functions have the following limitations: - During data synchronization tasks, please avoid performing any deletion operations to prevent inconsistent states between the two ends. -## Usage Instructions +## 2. Usage Instructions Data synchronization tasks have three states: RUNNING, STOPPED, and DROPPED. 
The task state transitions are shown in the following diagram: @@ -101,7 +101,7 @@ After creation, the task will start directly, and when the task stops abnormally Provide the following SQL statements for state management of synchronization tasks. -### Create Task +### 2.1 Create Task Use the `CREATE PIPE` statement to create a data synchronization task. The `PipeId` and `sink` attributes are required, while `source` and `processor` are optional. When entering the SQL, note that the order of the `SOURCE` and `SINK` plugins cannot be swapped. @@ -125,7 +125,7 @@ WITH SINK ( **IF NOT EXISTS semantics**: Used in creation operations to ensure that the create command is executed when the specified Pipe does not exist, preventing errors caused by attempting to create an existing Pipe. -### Start Task +### 2.2 Start Task Start processing data: @@ -133,7 +133,7 @@ Start processing data: START PIPE ``` -### Stop Task +### 2.3 Stop Task Stop processing data: @@ -141,7 +141,7 @@ Stop processing data: STOP PIPE ``` -### Delete Task +### 2.4 Delete Task Deletes the specified task: @@ -152,7 +152,7 @@ DROP PIPE [IF EXISTS] Deleting a task does not require stopping the synchronization task first. -### View Task +### 2.5 View Task View all tasks: @@ -188,7 +188,7 @@ The meanings of each column are as follows: - **RemainingEventCount (Statistics with Delay)**: The number of remaining events, which is the total count of all events in the current data synchronization task, including data and schema synchronization events, as well as system and user-defined events. - **EstimatedRemainingSeconds (Statistics with Delay)**: The estimated remaining time, based on the current number of events and the rate at the pipe, to complete the transfer. -### Synchronization Plugins +### 2.6 Synchronization Plugins To make the overall architecture more flexible to match different synchronization scenario requirements, we support plugin assembly within the synchronization task framework. The system comes with some pre-installed common plugins that you can use directly. At the same time, you can also customize processor plugins and Sink plugins, and load them into the IoTDB system for use. You can view the plugins in the system (including custom and built-in plugins) with the following statement: @@ -265,9 +265,9 @@ Detailed introduction of pre-installed plugins is as follows (for detailed param For importing custom plugins, please refer to the [Stream Processing](./Streaming_timecho.md#custom-stream-processing-plugin-management) section. -## Use examples +## 3. 
Use examples -### Full data synchronisation +### 3.1 Full data synchronisation This example is used to demonstrate the synchronisation of all data from one IoTDB to another IoTDB with the data link as shown below: @@ -282,7 +282,7 @@ with sink ( 'node-urls' = '127.0.0.1:6668', -- The URL of the data service port of the DataNode node on the target IoTDB ``` -### Partial data synchronization +### 3.2 Partial data synchronization This example is used to demonstrate the synchronisation of data from a certain historical time range (8:00pm 23 August 2023 to 8:00pm 23 October 2023) to another IoTDB, the data link is shown below: @@ -306,7 +306,7 @@ with SINK ( ) ``` -### Bidirectional data transfer +### 3.3 Bidirectional data transfer This example is used to demonstrate the scenario where two IoTDB act as active-active pairs, with the data link shown in the figure below: @@ -344,7 +344,7 @@ with sink ( ) ``` -### Edge-cloud data transfer +### 3.4 Edge-cloud data transfer This example is used to demonstrate the scenario where data from multiple IoTDB is transferred to the cloud, with data from clusters B, C, and D all synchronized to cluster A, as shown in the figure below: @@ -396,7 +396,7 @@ with sink ( ) ``` -### Cascading data transfer +### 3.5 Cascading data transfer This example is used to demonstrate the scenario where data is transferred in a cascading manner between multiple IoTDB, with data from cluster A synchronized to cluster B, and then to cluster C, as shown in the figure below: @@ -429,7 +429,7 @@ with sink ( ) ``` -### Cross-gate data transfer +### 3.6 Cross-gate data transfer This example is used to demonstrate the scenario where data from one IoTDB is synchronized to another IoTDB through a unidirectional gateway, as shown in the figure below: @@ -458,7 +458,7 @@ with sink ( XL—GAP | No Limit | No Limit | -### Compression Synchronization (V1.3.3+) +### 3.7 Compression Synchronization (V1.3.3+) IoTDB supports specifying data compression methods during synchronization. Real time compression and transmission of data can be achieved by configuring the `compressor` parameter. `Compressor` currently supports 5 optional algorithms: snappy/gzip/lz4/zstd/lzma2, and can choose multiple compression algorithm combinations to compress in the order of configuration `rate-limit-bytes-per-second`(supported in V1.3.3 and later versions) is the maximum number of bytes allowed to be transmitted per second, calculated as compressed bytes. If it is less than 0, there is no limit. @@ -472,7 +472,7 @@ with sink ( ) ``` -### Encrypted Synchronization (V1.3.1+) +### 3.8 Encrypted Synchronization (V1.3.1+) IoTDB supports the use of SSL encryption during the synchronization process, ensuring the secure transfer of data between different IoTDB instances. By configuring SSL-related parameters, such as the certificate address and password (`ssl.trust-store-path`)、(`ssl.trust-store-pwd`), data can be protected by SSL encryption during the synchronization process. @@ -488,7 +488,7 @@ with sink ( ) ``` -## Reference: Notes +## 4. Reference: Notes You can adjust the parameters for data synchronization by modifying the IoTDB configuration file (`iotdb-system.properties`), such as the directory for storing synchronized data. The complete configuration is as follows: @@ -563,9 +563,9 @@ pipe_air_gap_receiver_port=9780 pipe_all_sinks_rate_limit_bytes_per_second=-1 ``` -## Reference: parameter description +## 5. 
Reference: parameter description -### source parameter(V1.3.3) +### 5.1 source parameter(V1.3.3) | key | value | value range | required or not | default value | | :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | @@ -589,7 +589,7 @@ pipe_all_sinks_rate_limit_bytes_per_second=-1 > - **batch**: In this mode, tasks process and send data in batches (according to the underlying data files). It is characterized by low timeliness and high throughput. -## sink parameter +### 5.2 sink parameter > In versions 1.3.3 and above, when only the sink is included, the additional "with sink" prefix is no longer required. diff --git a/src/UserGuide/latest/User-Manual/Database-Programming.md b/src/UserGuide/latest/User-Manual/Database-Programming.md index b524ca274..40c1426c8 100644 --- a/src/UserGuide/latest/User-Manual/Database-Programming.md +++ b/src/UserGuide/latest/User-Manual/Database-Programming.md @@ -22,13 +22,13 @@ # CONTINUOUS QUERY(CQ) -## Introduction +## 1. Introduction Continuous queries(CQ) are queries that run automatically and periodically on realtime data and store query results in other specified time series. Users can implement sliding window streaming computing through continuous query, such as calculating the hourly average temperature of a sequence and writing it into a new sequence. Users can customize the `RESAMPLE` clause to create different sliding windows, which can achieve a certain degree of tolerance for out-of-order data. -## Syntax +## 2. Syntax ```sql CREATE (CONTINUOUS QUERY | CQ) @@ -57,7 +57,7 @@ END > 2. GROUP BY TIME CLAUSE is different, it doesn't contain its original first display window parameter which is [start_time, end_time). It's still because IoTDB will automatically generate a time range for the query each time it's executed. > 3. If there is no group by time clause in query, EVERY clause is required, otherwise IoTDB will throw an error. -### Descriptions of parameters in CQ syntax +### 2.1 Descriptions of parameters in CQ syntax - `` specifies the globally unique id of CQ. - `` specifies the query execution time interval. We currently support the units of ns, us, ms, s, m, h, d, w, and its value should not be lower than the minimum threshold configured by the user, which is `continuous_query_min_every_interval`. It's an optional parameter, default value is set to `group_by_interval` in group by clause. @@ -101,7 +101,7 @@ END - `DISCARD` means that we just discard the current cq execution task and wait for the next execution time and do the next time interval cq task. If using `DISCARD` policy, some time intervals won't be executed when the execution time of one cq task is longer than the ``. However, once a cq task is executed, it will use the latest time interval, so it can catch up at the sacrifice of some time intervals being discarded. -## Examples of CQ +## 3. Examples of CQ The examples below use the following sample data. It's a real time data stream and we can assume that the data arrives on time. @@ -121,7 +121,7 @@ The examples below use the following sample data. 
It's a real time data stream a +-----------------------------+-----------------------------+-----------------------------+-----------------------------+-----------------------------+ ```` -### Configuring execution intervals +### 3.1 Configuring execution intervals Use an `EVERY` interval in the `RESAMPLE` clause to specify the CQ’s execution interval, if not specific, default value is equal to `group_by_interval`. @@ -179,7 +179,7 @@ At **2021-05-11T22:19:00.000+08:00**, `cq1` executes a query within the time ran +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### Configuring time range for resampling +### 3.2 Configuring time range for resampling Use `start_time_offset` in the `RANGE` clause to specify the start time of the CQ’s time range, if not specific, default value is equal to `EVERY` interval. @@ -254,7 +254,7 @@ At **2021-05-11T22:19:00.000+08:00**, `cq2` executes a query within the time ran +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### Configuring execution intervals and CQ time ranges +### 3.3 Configuring execution intervals and CQ time ranges Use an `EVERY` interval and `RANGE` interval in the `RESAMPLE` clause to specify the CQ’s execution interval and the length of the CQ’s time range. And use `fill()` to change the value reported for time intervals with no data. @@ -319,7 +319,7 @@ Notice that `cq3` will calculate the results for some time interval many times, +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### Configuring end_time_offset for CQ time range +### 3.4 Configuring end_time_offset for CQ time range Use an `EVERY` interval and `RANGE` interval in the RESAMPLE clause to specify the CQ’s execution interval and the length of the CQ’s time range. And use `fill()` to change the value reported for time intervals with no data. @@ -378,7 +378,7 @@ Notice that `cq4` will calculate the results for all time intervals only once af +-----------------------------+---------------------------------+---------------------------------+---------------------------------+---------------------------------+ ```` -### CQ without group by clause +### 3.5 CQ without group by clause Use an `EVERY` interval in the `RESAMPLE` clause to specify the CQ’s execution interval and the length of the CQ’s time range. @@ -484,9 +484,9 @@ At **2021-05-11T22:19:00.000+08:00**, `cq5` executes a query within the time ran +-----------------------------+-------------------------------+-----------+ ```` -## CQ Management +## 4. CQ Management -### Listing continuous queries +### 4.1 Listing continuous queries List every CQ on the IoTDB Cluster with: @@ -509,7 +509,7 @@ we will get: | s1_count_cq | CREATE CQ s1_count_cq
BEGIN
SELECT count(s1)
INTO root.sg_count.d.count_s1
FROM root.sg.d
GROUP BY(30m)
END | active | -### Dropping continuous queries +### 4.2 Dropping continuous queries Drop a CQ with a specific `cq_id`: @@ -527,24 +527,24 @@ Drop the CQ named `s1_count_cq`: DROP CONTINUOUS QUERY s1_count_cq; ``` -### Altering continuous queries +### 4.3 Altering continuous queries CQs can't be altered once they're created. To change a CQ, you must `DROP` and re`CREATE` it with the updated settings. -## CQ Use Cases +## 5. CQ Use Cases -### Downsampling and Data Retention +### 5.1 Downsampling and Data Retention Use CQs with `TTL` set on database in IoTDB to mitigate storage concerns. Combine CQs and `TTL` to automatically downsample high precision data to a lower precision and remove the dispensable, high precision data from the database. -### Recalculating expensive queries +### 5.2 Recalculating expensive queries Shorten query runtimes by pre-calculating expensive queries with CQs. Use a CQ to automatically downsample commonly-queried, high precision data to a lower precision. Queries on lower precision data require fewer resources and return faster. > Pre-calculate queries for your preferred graphing tool to accelerate the population of graphs and dashboards. -### Substituting for sub-query +### 5.3 Substituting for sub-query IoTDB does not support sub queries. We can get the same functionality by creating a CQ as a sub query and store its result into other time series and then querying from those time series again will be like doing nested sub query. @@ -583,7 +583,7 @@ SELECT avg(count_s1) from root.sg_count.d; ``` -## System Parameter Configuration +## 6. System Parameter Configuration | Name | Description | Data Type | Default Value | | :------------------------------------------ | ------------------------------------------------------------ | --------- | ------------- | diff --git a/src/UserGuide/latest/User-Manual/IoTDB-View_timecho.md b/src/UserGuide/latest/User-Manual/IoTDB-View_timecho.md index 195847395..b136941e2 100644 --- a/src/UserGuide/latest/User-Manual/IoTDB-View_timecho.md +++ b/src/UserGuide/latest/User-Manual/IoTDB-View_timecho.md @@ -21,9 +21,9 @@ # View -## Sequence View Application Background +## 1. Sequence View Application Background -## Application Scenario 1 Time Series Renaming (PI Asset Management) +## 2. Application Scenario 1 Time Series Renaming (PI Asset Management) In practice, the equipment collecting data may be named with identification numbers that are difficult to be understood by human beings, which brings difficulties in querying to the business layer. @@ -33,19 +33,19 @@ The Sequence View, on the other hand, is able to re-organise the management of t It is difficult for the user to understand. However, at this point, the user is able to rename it using the sequence view feature, map it to a sequence view, and use `root.view.device001.temperature` to access the captured data. -### Application Scenario 2 Simplifying business layer query logic +### 2.1 Application Scenario 2 Simplifying business layer query logic Sometimes users have a large number of devices that manage a large number of time series. When conducting a certain business, the user wants to deal with only some of these sequences. At this time, the focus of attention can be picked out by the sequence view function, which is convenient for repeated querying and writing. **For example**: Users manage a product assembly line with a large number of time series for each segment of the equipment. 
The temperature inspector only needs to focus on the temperature of the equipment, so he can extract the temperature-related sequences and compose the sequence view. -### Application Scenario 3 Auxiliary Rights Management +### 2.2 Application Scenario 3 Auxiliary Rights Management In the production process, different operations are generally responsible for different scopes. For security reasons, it is often necessary to restrict the access scope of the operations staff through permission management. **For example**: The safety management department now only needs to monitor the temperature of each device in a production line, but these data are stored in the same database with other confidential data. At this point, it is possible to create a number of new views that contain only temperature-related time series on the production line, and then to give the security officer access to only these sequence views, thus achieving the purpose of permission restriction. -### Motivation for designing sequence view functionality +### 2.3 Motivation for designing sequence view functionality Combining the above two types of usage scenarios, the motivations for designing sequence view functionality, are: @@ -53,13 +53,13 @@ Combining the above two types of usage scenarios, the motivations for designing 2. to simplify the query logic at the business level. 3. Auxiliary rights management, open data to specific users through the view. -## Sequence View Concepts +## 3. Sequence View Concepts -### Terminology Concepts +### 3.1 Terminology Concepts Concept: If not specified, the views specified in this document are **Sequence Views**, and new features such as device views may be introduced in the future. -### Sequence view +### 3.2 Sequence view A sequence view is a way of organising the management of time series. @@ -69,7 +69,7 @@ A sequence view is a virtual time series, and each virtual time series is like a Users can create views using complex SQL queries, where the sequence view acts as a stored query statement, and when data is read from the view, the stored query statement is used as the source of the data in the FROM clause. -### Alias Sequences +### 3.3 Alias Sequences There is a special class of beings in a sequence view that satisfy all of the following conditions: @@ -81,13 +81,13 @@ Such a sequence view is called an **alias sequence**, or alias sequence view. A ** All sequence views, including aliased sequences, do not currently support Trigger functionality. ** -### Nested Views +### 3.4 Nested Views A user may want to select a number of sequences from an existing sequence view to form a new sequence view, called a nested view. **The current version does not support the nested view feature**. -### Some constraints on sequence views in IoTDB +### 3.5 Some constraints on sequence views in IoTDB #### Constraint 1 A sequence view must depend on one or several time series @@ -130,9 +130,9 @@ However, their metadata such as tags and attributes are not shared. This is because the business query, view-oriented users are concerned about the structure of the current view, and if you use group by tag and other ways to do the query, obviously want to get the view contains the corresponding tag grouping effect, rather than the time series of the tag grouping effect (the user is not even aware of those time series). -## Sequence view functionality +## 4. 
Sequence view functionality -### Creating a view +### 4.1 Creating a view Creating a sequence view is similar to creating a time series, the difference is that you need to specify the data source, i.e., the original sequence, through the AS keyword. @@ -340,7 +340,7 @@ The SELECT clause used when creating a serial view is subject to certain restric Simply put, after `AS` you can only use `SELECT ... FROM ... ` and the results of this query must form a time series. -### View Data Queries +### 4.2 View Data Queries For the data query functions that can be supported, the sequence view and time series can be used indiscriminately with identical behaviour when performing time series data queries. @@ -361,7 +361,7 @@ However, if the user wants to query the metadata of the sequence, such as tag, a In addition, for aliased sequences, if the user wants to get information about the time series such as tags, attributes, etc., the user needs to query the mapping of the view columns to find the corresponding time series, and then query the time series for the tags, attributes, etc. The method of querying the mapping of the view columns will be explained in section 3.5. -### Modify Views +### 4.3 Modify Views The modification operations supported by the view include: modifying its calculation logic,modifying tag/attributes, and deleting. @@ -432,7 +432,7 @@ Since a view is a sequence, a view can be deleted as if it were a time series. DELETE VIEW root.view.device.avg_temperatue ``` -### View Synchronisation +### 4.4 View Synchronisation #### If the dependent original sequence is deleted @@ -452,7 +452,7 @@ Please refer to the previous section 2.1.6 Restrictions2 for more details. Please refer to the previous section 2.1.6 Restriction 5 for details. -### View Metadata Queries +### 4.5 View Metadata Queries View metadata query specifically refers to querying the metadata of the view itself (e.g., how many columns the view has), as well as information about the views in the database (e.g., what views are available). @@ -518,7 +518,7 @@ The last column, `SOURCE`, shows the data source for the sequence view, listing Both of the above queries involve the data type of the view. The data type of a view is inferred from the original time series type of the query statement or alias sequence that defines the view. This data type is computed in real time based on the current state of the system, so the data type queried at different moments may be changing. -## FAQ +## 5. FAQ #### Q1: I want the view to implement the function of type conversion. For example, a time series of type int32 was originally placed in the same view as other series of type int64. I now want all the data queried through the view to be automatically converted to int64 type. diff --git a/src/UserGuide/latest/User-Manual/Streaming_apache.md b/src/UserGuide/latest/User-Manual/Streaming_apache.md index cfb6a3433..da81e199f 100644 --- a/src/UserGuide/latest/User-Manual/Streaming_apache.md +++ b/src/UserGuide/latest/User-Manual/Streaming_apache.md @@ -43,9 +43,9 @@ Users can configure the specific attributes of these three subtasks declarativel Using the stream processing framework, it is possible to build a complete data pipeline to fulfill various requirements such as *edge-to-cloud synchronization, remote disaster recovery, and read/write load balancing across multiple databases*. -## Custom Stream Processing Plugin Development +## 1. 
Custom Stream Processing Plugin Development -### Programming development dependencies +### 1.1 Programming development dependencies It is recommended to use Maven to build the project. Add the following dependencies in the `pom.xml` file. Please make sure to choose dependencies with the same version as the IoTDB server version. @@ -58,7 +58,7 @@ It is recommended to use Maven to build the project. Add the following dependenc ``` -### Event-Driven Programming Model +### 1.2 Event-Driven Programming Model The design of user programming interfaces for stream processing plugins follows the principles of the event-driven programming model. In this model, events serve as the abstraction of data in the user programming interface. The programming interface is decoupled from the specific execution method, allowing the focus to be on describing how the system expects events (data) to be processed upon arrival. @@ -128,7 +128,7 @@ public interface TsFileInsertionEvent extends Event { } ``` -### Custom Stream Processing Plugin Programming Interface Definition +### 1.3 Custom Stream Processing Plugin Programming Interface Definition Based on the custom stream processing plugin programming interface, users can easily write data extraction plugins, data processing plugins, and data sending plugins, allowing the stream processing functionality to adapt flexibly to various industrial scenarios. #### Data Extraction Plugin Interface @@ -434,11 +434,11 @@ public interface PipeSink extends PipePlugin { } ``` -## Custom Stream Processing Plugin Management +## 2. Custom Stream Processing Plugin Management To ensure the flexibility and usability of user-defined plugins in production environments, the system needs to provide the capability to dynamically manage plugins. This section introduces the management statements for stream processing plugins, which enable the dynamic and unified management of plugins. -### Load Plugin Statement +### 2.1 Load Plugin Statement In IoTDB, to dynamically load a user-defined plugin into the system, you first need to implement a specific plugin class based on PipeSource, PipeProcessor, or PipeSink. Then, you need to compile and package the plugin class into an executable jar file. Finally, you can use the loading plugin management statement to load the plugin into IoTDB. @@ -460,7 +460,7 @@ AS 'edu.tsinghua.iotdb.pipe.ExampleProcessor' USING URI ``` -### Delete Plugin Statement +### 2.2 Delete Plugin Statement When user no longer wants to use a plugin and needs to uninstall the plugin from the system, you can use the Remove plugin statement as shown below. ```sql @@ -469,16 +469,16 @@ DROP PIPEPLUGIN [IF EXISTS] **IF EXISTS semantics**: Used in deletion operations to ensure that when a specified Pipe Plugin exists, the delete command is executed to prevent errors caused by attempting to delete a non-existent Pipe Plugin. -### Show Plugin Statement +### 2.3 Show Plugin Statement User can also view the plugin in the system on need. The statement to view plugin is as follows. ```sql SHOW PIPEPLUGINS ``` -## System Pre-installed Stream Processing Plugin +## 3. System Pre-installed Stream Processing Plugin -### Pre-built Source Plugin +### 3.1 Pre-built Source Plugin #### iotdb-source @@ -528,7 +528,7 @@ Function: Extract historical or realtime data inside IoTDB into pipe. > > The historical data transmission phase and the realtime data transmission phase are executed serially. 
Only when the historical data transmission phase is completed, the realtime data transmission phase is executed.** -### Pre-built Processor Plugin +### 3.2 Pre-built Processor Plugin #### do-nothing-processor @@ -538,7 +538,7 @@ Function: Do not do anything with the events passed in by the source. | key | value | value range | required or optional with default | |-----------|----------------------|------------------------------|-----------------------------------| | processor | do-nothing-processor | String: do-nothing-processor | required | -### Pre-built Sink Plugin +### 3.3 Pre-built Sink Plugin #### do-nothing-sink @@ -549,9 +549,9 @@ Function: Does not do anything with the events passed in by the processor. |------|-----------------|-------------------------|-----------------------------------| | sink | do-nothing-sink | String: do-nothing-sink | required | -## Stream Processing Task Management +## 4. Stream Processing Task Management -### Create Stream Processing Task +### 4.1 Create Stream Processing Task A stream processing task can be created using the `CREATE PIPE` statement, a sample SQL statement is shown below: @@ -643,7 +643,7 @@ The expressed semantics are: synchronise the full amount of historical data and - IoTDB A -> IoTDB B -> IoTDB A - IoTDB A -> IoTDB A -### Start Stream Processing Task +### 4.2 Start Stream Processing Task After the successful execution of the CREATE PIPE statement, task-related instances will be created. However, the overall task's running status will be set to STOPPED(V1.3.0), meaning the task will not immediately process data. In version 1.3.1 and later, the status of the task will be set to RUNNING after CREATE. @@ -652,7 +652,7 @@ You can use the START PIPE statement to make the stream processing task start pr START PIPE ``` -### Stop Stream Processing Task +### 4.3 Stop Stream Processing Task Use the STOP PIPE statement to stop the stream processing task from processing data: @@ -660,7 +660,7 @@ Use the STOP PIPE statement to stop the stream processing task from processing d STOP PIPE ``` -### Delete Stream Processing Task +### 4.4 Delete Stream Processing Task If a stream processing task is in the RUNNING state, you can use the DROP PIPE statement to stop it and delete the entire task: @@ -670,7 +670,7 @@ DROP PIPE Before deleting a stream processing task, there is no need to execute the STOP operation. -### Show Stream Processing Task +### 4.5 Show Stream Processing Task Use the SHOW PIPES statement to view all stream processing tasks: ```sql @@ -701,7 +701,7 @@ SHOW PIPES WHERE SINK USED BY ``` -### Stream Processing Task Running Status Migration +### 4.6 Stream Processing Task Running Status Migration A stream processing task status can transition through several states during the lifecycle of a data synchronization pipe: @@ -717,9 +717,9 @@ The following diagram illustrates the different states and their transitions: ![state migration diagram](/img/%E7%8A%B6%E6%80%81%E8%BF%81%E7%A7%BB%E5%9B%BE.png) -## Authority Management +## 5. 
Authority Management -### Stream Processing Task +### 5.1 Stream Processing Task | Authority Name | Description | |----------------|---------------------------------| @@ -728,7 +728,7 @@ The following diagram illustrates the different states and their transitions: | USE_PIPE | Stop task,path-independent | | USE_PIPE | Uninstall task,path-independent | | USE_PIPE | Query task,path-independent | -### Stream Processing Task Plugin +### 5.2 Stream Processing Task Plugin | Authority Name | Description | @@ -737,7 +737,7 @@ The following diagram illustrates the different states and their transitions: | USE_PIPE | Delete stream processing task plugin,path-independent | | USE_PIPE | Query stream processing task plugin,path-independent | -## Configure Parameters +## 6. Configure Parameters In iotdb-system.properties : diff --git a/src/UserGuide/latest/User-Manual/Streaming_timecho.md b/src/UserGuide/latest/User-Manual/Streaming_timecho.md index 8d6f50e44..3f72c1f8f 100644 --- a/src/UserGuide/latest/User-Manual/Streaming_timecho.md +++ b/src/UserGuide/latest/User-Manual/Streaming_timecho.md @@ -42,9 +42,9 @@ Users can declaratively configure the specific attributes of the three subtasks Using the stream processing framework, a complete data link can be built to meet the needs of end-side-cloud synchronization, off-site disaster recovery, and read-write load sub-library*. -## Custom stream processing plugin development +## 1. Custom stream processing plugin development -### Programming development dependencies +### 1.1 Programming development dependencies It is recommended to use maven to build the project and add the following dependencies in `pom.xml`. Please be careful to select the same dependency version as the IoTDB server version. @@ -57,7 +57,7 @@ It is recommended to use maven to build the project and add the following depend ``` -### Event-driven programming model +### 1.2 Event-driven programming model The user programming interface design of the stream processing plugin refers to the general design concept of the event-driven programming model. Events are data abstractions in the user programming interface, and the programming interface is decoupled from the specific execution method. It only needs to focus on describing the processing method expected by the system after the event (data) reaches the system. @@ -127,7 +127,7 @@ public interface TsFileInsertionEvent extends Event { } ``` -### Custom stream processing plugin programming interface definition +### 1.3 Custom stream processing plugin programming interface definition Based on the custom stream processing plugin programming interface, users can easily write data extraction plugins, data processing plugins and data sending plugins, so that the stream processing function can be flexibly adapted to various industrial scenarios. @@ -438,12 +438,12 @@ public interface PipeSink extends PipePlugin { } ``` -## Custom stream processing plugin management +## 2. Custom stream processing plugin management In order to ensure the flexibility and ease of use of user-defined plugins in actual production, the system also needs to provide the ability to dynamically and uniformly manage plugins. The stream processing plugin management statements introduced in this chapter provide an entry point for dynamic unified management of plugins. 
-### Load plugin statement +### 2.1 Load plugin statement In IoTDB, if you want to dynamically load a user-defined plugin in the system, you first need to implement a specific plugin class based on PipeSource, PipeProcessor or PipeSink. Then the plugin class needs to be compiled and packaged into a jar executable file, and finally the plugin is loaded into IoTDB using the management statement for loading the plugin. @@ -483,7 +483,7 @@ AS 'edu.tsinghua.iotdb.pipe.ExampleProcessor' USING URI ``` -### Delete plugin statement +### 2.2 Delete plugin statement When the user no longer wants to use a plugin and needs to uninstall the plugin from the system, he can use the delete plugin statement as shown in the figure. @@ -493,16 +493,16 @@ DROP PIPEPLUGIN [IF EXISTS] **IF EXISTS semantics**: Used in deletion operations to ensure that when a specified Pipe Plugin exists, the delete command is executed to prevent errors caused by attempting to delete a non-existent Pipe Plugin. -### View plugin statements +### 2.3 View plugin statements Users can also view plugins in the system on demand. View the statement of the plugin as shown in the figure. ```sql SHOW PIPEPLUGINS ``` -## System preset stream processing plugin +## 3. System preset stream processing plugin -### Pre-built Source Plugin +### 3.1 Pre-built Source Plugin #### iotdb-source @@ -567,7 +567,7 @@ Function: Extract historical or realtime data inside IoTDB into pipe. > * If you want to use pipe to build data synchronization of A -> B -> C, then the pipe of B -> C needs to set this parameter to true, so that the data written by A to B through the pipe in A -> B can be forwarded correctly. to C > * If you want to use pipe to build two-way data synchronization (dual-active) of A \<-> B, then the pipes of A -> B and B -> A need to set this parameter to false, otherwise the data will be endless. inter-cluster round-robin forwarding -### Preset processor plugin +### 3.2 Preset processor plugin #### do-nothing-processor @@ -578,7 +578,7 @@ Function: No processing is done on the events passed in by the source. |-----------|----------------------|------------------------------|-----------------------------------| | processor | do-nothing-processor | String: do-nothing-processor | required | -### Preset sink plugin +### 3.3 Preset sink plugin #### do-nothing-sink @@ -588,9 +588,9 @@ Function: No processing is done on the events passed in by the processor. |------|-----------------|-------------------------|-----------------------------------| | sink | do-nothing-sink | String: do-nothing-sink | required | -## Stream processing task management +## 4. Stream processing task management -### Create a stream processing task +### 4.1 Create a stream processing task Use the `CREATE PIPE` statement to create a stream processing task. Taking the creation of a data synchronization stream processing task as an example, the sample SQL statement is as follows: @@ -681,7 +681,7 @@ The semantics expressed are: synchronize all historical data in this database in - IoTDB A -> IoTDB B -> IoTDB A - IoTDB A -> IoTDB A -### Start the stream processing task +### 4.2 Start the stream processing task After the CREATE PIPE statement is successfully executed, the stream processing task-related instance will be created, but the running status of the entire stream processing task will be set to STOPPED(V1.3.0), that is, the stream processing task will not process data immediately. In version 1.3.1 and later, the status of the task will be set to RUNNING after CREATE. 
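For example (the pipe name `a2b` below is hypothetical), you can check the state of a newly created task with `SHOW PIPES`; if it is still STOPPED (the default on V1.3.0), start it with the `START PIPE` statement described next:

```sql
-- Inspect the current state of all stream processing tasks
SHOW PIPES
-- Explicitly start a task that was created in the STOPPED state
START PIPE a2b
```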
@@ -691,7 +691,7 @@ You can use the START PIPE statement to cause a stream processing task to start START PIPE ``` -### Stop the stream processing task +### 4.3 Stop the stream processing task Use the STOP PIPE statement to stop the stream processing task from processing data: @@ -699,7 +699,7 @@ Use the STOP PIPE statement to stop the stream processing task from processing d STOP PIPE ``` -### Delete stream processing tasks +### 4.4 Delete stream processing tasks Use the DROP PIPE statement to stop the stream processing task from processing data (when the stream processing task status is RUNNING), and then delete the entire stream processing task: @@ -709,7 +709,7 @@ DROP PIPE Users do not need to perform a STOP operation before deleting the stream processing task. -### Display stream processing tasks +### 4.5 Display stream processing tasks Use the SHOW PIPES statement to view all stream processing tasks: @@ -742,7 +742,7 @@ SHOW PIPES WHERE SINK USED BY ``` -### Stream processing task running status migration +### 4.6 Stream processing task running status migration A stream processing pipe will pass through various states during its managed life cycle: @@ -758,9 +758,9 @@ The following diagram shows all states and state transitions: ![State migration diagram](/img/%E7%8A%B6%E6%80%81%E8%BF%81%E7%A7%BB%E5%9B%BE.png) -## authority management +## 5. authority management -### Stream processing tasks +### 5.1 Stream processing tasks | Permission name | Description | @@ -771,7 +771,7 @@ The following diagram shows all states and state transitions: | USE_PIPE | Offload stream processing tasks. The path is irrelevant. | | USE_PIPE | Query stream processing tasks. The path is irrelevant. | -### Stream processing task plugin +### 5.2 Stream processing task plugin | Permission name | Description | @@ -780,7 +780,7 @@ The following diagram shows all states and state transitions: | USE_PIPE | Uninstall the stream processing task plugin. The path is irrelevant. | | USE_PIPE | Query stream processing task plugin. The path is irrelevant. | -## Configuration parameters +## 6. Configuration parameters In iotdb-system.properties: diff --git a/src/UserGuide/latest/User-Manual/Tiered-Storage_timecho.md b/src/UserGuide/latest/User-Manual/Tiered-Storage_timecho.md index 1cb50b1ee..3826db94d 100644 --- a/src/UserGuide/latest/User-Manual/Tiered-Storage_timecho.md +++ b/src/UserGuide/latest/User-Manual/Tiered-Storage_timecho.md @@ -20,11 +20,11 @@ --> # Tiered Storage -## Overview +## 1. Overview The Tiered storage functionality allows users to define multiple layers of storage, spanning across multiple types of storage media (Memory mapped directory, SSD, rotational hard discs or cloud storage). While memory and cloud storage is usually singular, the local file system storages can consist of multiple directories joined together into one tier. Meanwhile, users can classify data based on its hot or cold nature and store data of different categories in specified "tier". Currently, IoTDB supports the classification of hot and cold data through TTL (Time to live / age) of data. When the data in one tier does not meet the TTL rules defined in the current tier, the data will be automatically migrated to the next tier. -## Parameter Definition +## 2. Parameter Definition To enable tiered storage in IoTDB, you need to configure the following aspects: @@ -48,7 +48,7 @@ The specific parameter definitions and their descriptions are as follows. 
| remote_tsfile_cache_page_size_in_kb | 20480 |Block size of locally cached files stored in the cloud | If remote storage is not used, no configuration required | | remote_tsfile_cache_max_disk_usage_in_mb | 51200 | Maximum Disk Occupancy Size for Cloud Storage Local Cache | If remote storage is not used, no configuration required | -## local tiered storag configuration example +## 3. local tiered storag configuration example The following is an example of a local two-level storage configuration. @@ -66,7 +66,7 @@ In this example, two levels of storage are configured, specifically: | tier 1 | path 1:/data1/data | data for last 1 day | 20% | | tier 2 | path 2:/data2/data path 2:/data3/data | data from 1 day ago | 10% | -## remote tiered storag configuration example +## 4. remote tiered storag configuration example The following takes three-level storage as an example: diff --git a/src/UserGuide/latest/User-Manual/Trigger.md b/src/UserGuide/latest/User-Manual/Trigger.md index 7c4e163fb..5e05607da 100644 --- a/src/UserGuide/latest/User-Manual/Trigger.md +++ b/src/UserGuide/latest/User-Manual/Trigger.md @@ -21,7 +21,7 @@ # TRIGGER -## Instructions +## 1. Instructions The trigger provides a mechanism for listening to changes in time series data. With user-defined logic, tasks such as alerting and data forwarding can be conducted. @@ -29,29 +29,29 @@ The trigger is implemented based on the reflection mechanism. Users can monitor The document will help you learn to define and manage triggers. -### Pattern for Listening +### 1.1 Pattern for Listening A single trigger can be used to listen for data changes in a time series that match a specific pattern. For example, a trigger can listen for the data changes of time series `root.sg.a`, or time series that match the pattern `root.sg.*`. When you register a trigger, you can specify the path pattern that the trigger listens on through an SQL statement. -### Trigger Type +### 1.2 Trigger Type There are currently two types of triggers, and you can specify the type through an SQL statement when registering a trigger: - Stateful triggers: The execution logic of this type of trigger may depend on data from multiple insertion statement . The framework will aggregate the data written by different nodes into the same trigger instance for calculation to retain context information. This type of trigger is usually used for sampling or statistical data aggregation for a period of time. information. Only one node in the cluster holds an instance of a stateful trigger. - Stateless triggers: The execution logic of the trigger is only related to the current input data. The framework does not need to aggregate the data of different nodes into the same trigger instance. This type of trigger is usually used for calculation of single row data and abnormal detection. Each node in the cluster holds an instance of a stateless trigger. -### Trigger Event +### 1.3 Trigger Event There are currently two trigger events for the trigger, and other trigger events will be expanded in the future. When you register a trigger, you can specify the trigger event through an SQL statement: - BEFORE INSERT: Fires before the data is persisted. **Please note that currently the trigger does not support data cleaning and will not change the data to be persisted itself.** - AFTER INSERT: Fires after the data is persisted. -## How to Implement a Trigger +## 2. How to Implement a Trigger You need to implement the trigger by writing a Java class, where the dependency shown below is required. 
If you use [Maven](http://search.maven.org/), you can search for them directly from the [Maven repository](http://search.maven.org/).

-### Dependency
+### 2.1 Dependency

```xml
@@ -64,7 +64,7 @@ You need to implement the trigger by writing a Java class, where the dependency

Note that the dependency version should correspond to the target server version.

-### Interface Description
+### 2.2 Interface Description

To implement a trigger, you need to implement the `org.apache.iotdb.trigger.api.Trigger` interface.

@@ -208,7 +208,7 @@ When the trigger fails to fire, we will take corresponding actions according to
}
```

-### Example
+### 2.3 Example

If you use [Maven](http://search.maven.org/), you can refer to our sample project **trigger-example**.

@@ -318,13 +318,13 @@ public class ClusterAlertingExample implements Trigger {
}
```

-## Trigger Management
+## 3. Trigger Management

You can create and drop a trigger through an SQL statement, and you can also query all registered triggers through an SQL statement.

**We recommend that you stop insertion while creating triggers.**

-### Create Trigger
+### 3.1 Create Trigger

Triggers can be registered on arbitrary path patterns. The time series registered with the trigger will be listened to by the trigger. When there is a data change on the series, the corresponding fire method in the trigger will be called.

@@ -400,7 +400,7 @@ The above SQL statement creates a trigger named triggerTest:

- The JAR package URI is http://jar/ClusterAlertingExample.jar
- When creating the trigger instance, two parameters, name and limit, are passed in.

-### Drop Trigger
+### 3.2 Drop Trigger

The trigger can be dropped by specifying the trigger ID. During the process of dropping the trigger, the `onDrop` interface of the trigger will be called only once.

@@ -421,7 +421,7 @@ DROP TRIGGER triggerTest1

The above statement will drop the trigger with ID triggerTest1.

-### Show Trigger
+### 3.3 Show Trigger

You can query information about triggers that exist in the cluster through an SQL statement.

@@ -437,7 +437,7 @@ The result set format of this statement is as follows:

| ------------ | ---------------------------- | -------------------- | ------------------------------------------- | ----------- | --------------------------------------- | --------------------------------------- |
| triggerTest1 | BEFORE_INSERT / AFTER_INSERT | STATELESS / STATEFUL | INACTIVE / ACTIVE / DROPPING / TRANSFERRING | root.** | org.apache.iotdb.trigger.TriggerExample | ALL(STATELESS) / DATA_NODE_ID(STATEFUL) |

-### Trigger State
+### 3.4 Trigger State

During the process of creating and dropping triggers in the cluster, we maintain the states of the triggers. The following is a description of these states:

@@ -448,7 +448,7 @@ During the process of creating and dropping triggers in the cluster, we maintain

| DROPPING | Intermediate state of executing `DROP TRIGGER`; the cluster is in the process of dropping the trigger. | NO |
| TRANSFERRING | The cluster is migrating the location of this trigger instance. | NO |

-## Notes
+## 4. Notes

- The trigger takes effect from the time of registration and does not process existing historical data. **That is, only insertion requests that occur after the trigger is successfully registered will be listened to by the trigger.**
- The fire process of a trigger is currently synchronous, so you need to keep the trigger efficient, otherwise write performance may be greatly affected. **You need to guarantee concurrency safety of triggers yourself.**
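Since a single trigger instance can be fired from several writing threads at once, any state it keeps must be protected. The snippet below is a minimal sketch of that idea, assuming the `fire(Tablet)` callback of the `org.apache.iotdb.trigger.api.Trigger` interface described above; the class name `CountingTriggerExample` is hypothetical, and the `Tablet` import path may differ slightly between IoTDB versions.

```java
import java.util.concurrent.atomic.AtomicLong;

import org.apache.iotdb.trigger.api.Trigger;
import org.apache.iotdb.tsfile.write.record.Tablet;

/** Hypothetical trigger that counts its own invocations in a thread-safe way. */
public class CountingTriggerExample implements Trigger {

  // AtomicLong keeps the counter consistent when several writing threads call fire() concurrently.
  private final AtomicLong fireCount = new AtomicLong();

  @Override
  public boolean fire(Tablet tablet) throws Exception {
    long count = fireCount.incrementAndGet();
    // Keep the work done here cheap: fire() runs synchronously on the write path.
    if (count % 10_000 == 0) {
      System.out.println("trigger fired " + count + " times");
    }
    return true;
  }
}
```

An unguarded field (for example, a plain `long` counter incremented in `fire`) would be a data race under the same workload, which is exactly the pitfall the concurrency-safety note above warns about.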
@@ -458,7 +458,7 @@ During the process of creating and dropping triggers in the cluster

- The trigger JAR package has a size limit, which must be less than min(`config_node_ratis_log_appender_buffer_size_max`, 2G), where `config_node_ratis_log_appender_buffer_size_max` is a configuration item. For the specific meaning, please refer to the IoTDB configuration item description.
- **It is better not to have classes with the same full class name but different function implementations in different JAR packages.** For example, trigger1 and trigger2 correspond to resources trigger1.jar and trigger2.jar respectively. If two JAR packages contain an `org.apache.iotdb.trigger.example.AlertListener` class, when `CREATE TRIGGER` uses this class, the system will randomly load the class from one of the JAR packages, which will eventually lead to inconsistent trigger behavior and other issues.

-## Configuration Parameters
+## 5. Configuration Parameters

| Parameter | Meaning |
| ------------------------------------------------- | ------------------------------------------------------------ |

diff --git a/src/UserGuide/latest/User-Manual/UDF-development.md b/src/UserGuide/latest/User-Manual/UDF-development.md
index 0a3efb6bb..a866f4df6 100644
--- a/src/UserGuide/latest/User-Manual/UDF-development.md
+++ b/src/UserGuide/latest/User-Manual/UDF-development.md
@@ -15,7 +15,7 @@ If you use [Maven](http://search.maven.org/), you can search for the development

```

-## 1.2 UDTF(User Defined Timeseries Generating Function)
+### 1.2 UDTF (User Defined Timeseries Generating Function)

To write a UDTF, you need to inherit the `org.apache.iotdb.udf.api.UDTF` class, and at least implement the `beforeStart` method and a `transform` method.

@@ -698,7 +698,7 @@ If you use Maven, you can build your own UDF project referring to our **udf-exam

This part mainly introduces how external users can contribute their own UDFs to the IoTDB community.

-#### 2.1 Prerequisites
+### 2.1 Prerequisites

1. UDFs must be universal.

2. The UDF you are going to contribute has been well tested and can run normally in the production environment.

-#### 2.2 What you need to prepare
+### 2.2 What you need to prepare

1. UDF source code
2. Test cases

diff --git a/src/UserGuide/latest/User-Manual/White-List_timecho.md b/src/UserGuide/latest/User-Manual/White-List_timecho.md
index 5194f7051..ae49c1648 100644
--- a/src/UserGuide/latest/User-Manual/White-List_timecho.md
+++ b/src/UserGuide/latest/User-Manual/White-List_timecho.md
@@ -21,17 +21,17 @@

# White List

-**function description**
+## 1. **Function Description**

Specifies which client addresses are allowed to connect to IoTDB.

-**configuration file**
+## 2. **Configuration File**

conf/iotdb-system.properties

conf/white.list

-**configuration item**
+## 3. **Configuration Item**

iotdb-system.properties:

@@ -57,7 +57,7 @@ Decide which IP addresses can connect to IoTDB

10.100.0.*
```

-**note**
+## 4. **Note**

1. If the white list itself is cancelled via the session client, the current connection is not disconnected immediately; it is rejected the next time a connection is created.
2. If white.list is modified directly, it takes effect within one minute. If it is modified via the session client, it takes effect immediately, updating both the values in memory and the white.list file on disk.
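To tie the white-list pieces above together, a minimal setup could look like the sketch below. The property name `enable_white_list` is an assumption and should be verified against the comments in your `iotdb-system.properties`; the address pattern reuses the `10.100.0.*` example shown earlier.

```properties
# conf/iotdb-system.properties -- switch assumed to be named enable_white_list; confirm in your installation
enable_white_list=true

# conf/white.list -- one allowed client address or wildcard pattern per line
10.100.0.*
```

As the notes above state, editing white.list directly takes effect within about one minute, while changes made through the session client take effect immediately.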