diff --git a/src/UserGuide/latest/API/Programming-CSharp-Native-API.md b/src/UserGuide/latest/API/Programming-CSharp-Native-API.md index 12d431a3a..06f403f42 100644 --- a/src/UserGuide/latest/API/Programming-CSharp-Native-API.md +++ b/src/UserGuide/latest/API/Programming-CSharp-Native-API.md @@ -1,22 +1,19 @@ # C# Native API @@ -35,33 +32,31 @@ Note that the `Apache.IoTDB` package only supports versions greater than `.net f ## Prerequisites - .NET SDK Version >= 5.0 - .NET Framework >= 4.6.1 +- .NET SDK Version >= 5.0 +- .NET Framework >= 4.6.1 ## How to Use the Client (Quick Start) Users can quickly get started by referring to the use cases under the Apache-IoTDB-Client-CSharp-UserCase directory. These use cases serve as a useful resource for getting familiar with the client's functionality and capabilities. -For those who wish to delve deeper into the client's usage and explore more advanced features, the samples directory contains additional code samples. +For those who wish to delve deeper into the client's usage and explore more advanced features, the samples directory contains additional code samples. ## Developer environment requirements for iotdb-client-csharp -``` -.NET SDK Version >= 5.0 -.NET Framework >= 4.6.1 -ApacheThrift >= 0.14.1 -NLog >= 4.7.9 -``` +- .NET SDK Version >= 5.0 +- .NET Framework >= 4.6.1 +- ApacheThrift >= 0.14.1 +- NLog >= 4.7.9 ### OS -* Linux, Macos or other unix-like OS -* Windows+bash(WSL, cygwin, Git Bash) +- Linux, Macos or other unix-like OS +- Windows+bash(WSL, cygwin, Git Bash) ### Command Line Tools -* dotnet CLI -* Thrift +- dotnet CLI +- Thrift ## Basic interface description @@ -79,7 +74,7 @@ var session_pool = new SessionPool(host, port, pool_size); // Open Session await session_pool.Open(false); -// Create TimeSeries +// Create TimeSeries await session_pool.CreateTimeSeries("root.test_group.test_device.ts1", TSDataType.TEXT, TSEncoding.PLAIN, Compressor.UNCOMPRESSED); await session_pool.CreateTimeSeries("root.test_group.test_device.ts2", TSDataType.BOOLEAN, TSEncoding.PLAIN, Compressor.UNCOMPRESSED); await session_pool.CreateTimeSeries("root.test_group.test_device.ts3", TSDataType.INT32, TSEncoding.PLAIN, Compressor.UNCOMPRESSED); @@ -113,7 +108,7 @@ await session_pool.Close(); - Construction: ```csharp -var rowRecord = +var rowRecord = new RowRecord(long timestamps, List values, List measurements); ``` @@ -131,12 +126,10 @@ var rowRecord = - Construction: ```csharp -var tablet = +var tablet = Tablet(string deviceId, List measurements, List> values, List timestamps); ``` - - ## **API** ### **Basic API** @@ -153,43 +146,43 @@ var tablet = ### **Record API** -| api name | parameters | notes | use example | -| ----------------------------------- | ----------------------------- | ----------------------------------- | ------------------------------------------------------------ | -| InsertRecordAsync | string, RowRecord | insert single record | session_pool.InsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", new RowRecord(1, values, measures)); | -| InsertRecordsAsync | List\, List\ | insert records | session_pool.InsertRecordsAsync(device_id, rowRecords) | -| InsertRecordsOfOneDeviceAsync | string, List\ | insert records of one device | session_pool.InsertRecordsOfOneDeviceAsync(device_id, rowRecords) | -| InsertRecordsOfOneDeviceSortedAsync | string, List\ | insert sorted records of one device | InsertRecordsOfOneDeviceSortedAsync(deviceId, sortedRowRecords); | -| TestInsertRecordAsync | string, RowRecord | test 
insert record | session_pool.TestInsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", rowRecord) | -| TestInsertRecordsAsync | List\, List\ | test insert record | session_pool.TestInsertRecordsAsync(device_id, rowRecords) | +| api name | parameters | notes | use example | +| ----------------------------------- | --------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| InsertRecordAsync | string, RowRecord | insert single record | session_pool.InsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", new RowRecord(1, values, measures)); | +| InsertRecordsAsync | List\, List\ | insert records | session_pool.InsertRecordsAsync(device_id, rowRecords) | +| InsertRecordsOfOneDeviceAsync | string, List\ | insert records of one device | session_pool.InsertRecordsOfOneDeviceAsync(device_id, rowRecords) | +| InsertRecordsOfOneDeviceSortedAsync | string, List\ | insert sorted records of one device | InsertRecordsOfOneDeviceSortedAsync(deviceId, sortedRowRecords); | +| TestInsertRecordAsync | string, RowRecord | test insert record | session_pool.TestInsertRecordAsync("root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE", rowRecord) | +| TestInsertRecordsAsync | List\, List\ | test insert record | session_pool.TestInsertRecordsAsync(device_id, rowRecords) | ### **Tablet API** -| api name | parameters | notes | use example | -| ---------------------- | ------------ | -------------------- | -------------------------------------------- | -| InsertTabletAsync | Tablet | insert single tablet | session_pool.InsertTabletAsync(tablet) | +| api name | parameters | notes | use example | +| ---------------------- | -------------- | -------------------- | -------------------------------------------- | +| InsertTabletAsync | Tablet | insert single tablet | session_pool.InsertTabletAsync(tablet) | | InsertTabletsAsync | List\ | insert tablets | session_pool.InsertTabletsAsync(tablets) | -| TestInsertTabletAsync | Tablet | test insert tablet | session_pool.TestInsertTabletAsync(tablet) | +| TestInsertTabletAsync | Tablet | test insert tablet | session_pool.TestInsertTabletAsync(tablet) | | TestInsertTabletsAsync | List\ | test insert tablets | session_pool.TestInsertTabletsAsync(tablets) | ### **SQL API** -| api name | parameters | notes | use example | -| ----------------------------- | ---------- | ------------------------------ | ------------------------------------------------------------ | -| ExecuteQueryStatementAsync | string | execute sql query statement | session_pool.ExecuteQueryStatementAsync("select * from root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE where time<15"); | +| api name | parameters | notes | use example | +| ----------------------------- | ---------- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ExecuteQueryStatementAsync | string | execute sql query statement | session_pool.ExecuteQueryStatementAsync("select \* from root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE where time<15"); | | ExecuteNonQueryStatementAsync | string | execute sql nonquery statement | session_pool.ExecuteNonQueryStatementAsync( "create timeseries 
root.97209_TEST_CSHARP_CLIENT_GROUP.TEST_CSHARP_CLIENT_DEVICE.status with datatype=BOOLEAN,encoding=PLAIN") | ### **Scheam API** -| api name | parameters | notes | use example | -| -------------------------- | ------------------------------------------------------------ | --------------------------- | ------------------------------------------------------------ | -| SetStorageGroup | string | set storage group | session_pool.SetStorageGroup("root.97209_TEST_CSHARP_CLIENT_GROUP_01") | -| CreateTimeSeries | string, TSDataType, TSEncoding, Compressor | create time series | session_pool.InsertTabletsAsync(tablets) | -| DeleteStorageGroupAsync | string | delete single storage group | session_pool.DeleteStorageGroupAsync("root.97209_TEST_CSHARP_CLIENT_GROUP_01") | -| DeleteStorageGroupsAsync | List\ | delete storage group | session_pool.DeleteStorageGroupAsync("root.97209_TEST_CSHARP_CLIENT_GROUP") | +| api name | parameters | notes | use example | +| -------------------------- | ---------------------------------------------------------------------------- | --------------------------- | -------------------------------------------------------------------------------------------------- | +| SetStorageGroup | string | set storage group | session_pool.SetStorageGroup("root.97209_TEST_CSHARP_CLIENT_GROUP_01") | +| CreateTimeSeries | string, TSDataType, TSEncoding, Compressor | create time series | session_pool.InsertTabletsAsync(tablets) | +| DeleteStorageGroupAsync | string | delete single storage group | session_pool.DeleteStorageGroupAsync("root.97209_TEST_CSHARP_CLIENT_GROUP_01") | +| DeleteStorageGroupsAsync | List\ | delete storage group | session_pool.DeleteStorageGroupAsync("root.97209_TEST_CSHARP_CLIENT_GROUP") | | CreateMultiTimeSeriesAsync | List\, List\ , List\ , List\ | create multi time series | session_pool.CreateMultiTimeSeriesAsync(ts_path_lst, data_type_lst, encoding_lst, compressor_lst); | -| DeleteTimeSeriesAsync | List\ | delete time series | | -| DeleteTimeSeriesAsync | string | delete time series | | -| DeleteDataAsync | List\, long, long | delete data | session_pool.DeleteDataAsync(ts_path_lst, 2, 3) | +| DeleteTimeSeriesAsync | List\ | delete time series | | +| DeleteTimeSeriesAsync | string | delete time series | | +| DeleteDataAsync | List\, long, long | delete data | session_pool.DeleteDataAsync(ts_path_lst, 2, 3) | ### **Other API** @@ -197,8 +190,6 @@ var tablet = | -------------------------- | ---------- | --------------------------- | ---------------------------------------------------- | | CheckTimeSeriesExistsAsync | string | check if time series exists | session_pool.CheckTimeSeriesExistsAsync(time series) | - - [e.g.](https://github.com/apache/iotdb-client-csharp/tree/main/samples/Apache.IoTDB.Samples) ## SessionPool @@ -210,4 +201,3 @@ We use the `ConcurrentQueue` data structure to encapsulate a client queue to mai When a request occurs, it will try to find an idle client connection from the Connection pool. 
If there is no idle connection, the program will need to wait until there is an idle connection When a connection is used up, it will automatically return to the pool and wait for the next time it is used up - diff --git a/src/UserGuide/latest/API/Programming-Cpp-Native-API.md b/src/UserGuide/latest/API/Programming-Cpp-Native-API.md index b462983d2..0d2267ff1 100644 --- a/src/UserGuide/latest/API/Programming-Cpp-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Cpp-Native-API.md @@ -1,22 +1,19 @@ # C++ Native API @@ -35,68 +32,75 @@ ### Install Required Dependencies - **MAC** - 1. Install Bison: - Use the following brew command to install the Bison version: - ```shell - brew install bison - ``` + 1. Install Bison: + + Use the following brew command to install the Bison version: - 2. Install Boost: Make sure to install the latest version of Boost. + ```shell + brew install bison + ``` - ```shell - brew install boost - ``` + 2. Install Boost: Make sure to install the latest version of Boost. - 3. Check OpenSSL: Make sure the OpenSSL library is installed. The default OpenSSL header file path is "/usr/local/opt/openssl/include". + ```shell + brew install boost + ``` - If you encounter errors related to OpenSSL not being found during compilation, try adding `-Dopenssl.include.dir=""`. + 3. Check OpenSSL: Make sure the OpenSSL library is installed. The default OpenSSL header file path is "/usr/local/opt/openssl/include". + + If you encounter errors related to OpenSSL not being found during compilation, try adding `-Dopenssl.include.dir=""`. - **Ubuntu 16.04+ or Other Debian-based Systems** Use the following commands to install dependencies: - ```shell - sudo apt-get update - sudo apt-get install gcc g++ bison flex libboost-all-dev libssl-dev - ``` + ```shell + sudo apt-get update + sudo apt-get install gcc g++ bison flex libboost-all-dev libssl-dev + ``` - **CentOS 7.7+/Fedora/Rocky Linux or Other Red Hat-based Systems** Use the yum command to install dependencies: - ```shell - sudo yum update - sudo yum install gcc gcc-c++ boost-devel bison flex openssl-devel - ``` + ```shell + sudo yum update + sudo yum install gcc gcc-c++ boost-devel bison flex openssl-devel + ``` - **Windows** - 1. Set Up the Build Environment - - Install MS Visual Studio (version 2019+ recommended): Make sure to select Visual Studio C/C++ IDE and compiler (supporting CMake, Clang, MinGW) during installation. - - Download and install [CMake](https://cmake.org/download/). + 1. Set Up the Build Environment + + - Install MS Visual Studio (version 2019+ recommended): Make sure to select Visual Studio C/C++ IDE and compiler (supporting CMake, Clang, MinGW) during installation. + - Download and install [CMake](https://cmake.org/download/). + + 2. Download and Install Flex, Bison - 2. Download and Install Flex, Bison - - Download [Win_Flex_Bison](https://sourceforge.net/projects/winflexbison/). - - After downloading, rename the executables to flex.exe and bison.exe to ensure they can be found during compilation, and add the directory of these executables to the PATH environment variable. + - Download [Win_Flex_Bison](https://sourceforge.net/projects/winflexbison/). + - After downloading, rename the executables to flex.exe and bison.exe to ensure they can be found during compilation, and add the directory of these executables to the PATH environment variable. - 3. Install Boost Library - - Download [Boost](https://www.boost.org/users/download/). - - Compile Boost locally: Run `bootstrap.bat` and `b2.exe` in sequence. 
- - Add the Boost installation directory to the PATH environment variable, e.g., `C:\Program Files (x86)\boost_1_78_0`. + 3. Install Boost Library - 4. Install OpenSSL - - Download and install [OpenSSL](http://slproweb.com/products/Win32OpenSSL.html). - - Add the include directory under the installation directory to the PATH environment variable. + - Download [Boost](https://www.boost.org/users/download/). + - Compile Boost locally: Run `bootstrap.bat` and `b2.exe` in sequence. + - Add the Boost installation directory to the PATH environment variable, e.g., `C:\Program Files (x86)\boost_1_78_0`. + + 4. Install OpenSSL + - Download and install [OpenSSL](http://slproweb.com/products/Win32OpenSSL.html). + - Add the include directory under the installation directory to the PATH environment variable. ### Compilation Clone the source code from git: + ```shell git clone https://github.com/apache/iotdb.git ``` The default main branch is the master branch. If you want to use a specific release version, switch to that branch (e.g., version 1.3.2): + ```shell git checkout rc/1.3.2 ``` @@ -104,30 +108,36 @@ git checkout rc/1.3.2 Run Maven to compile in the IoTDB root directory: - Mac or Linux with glibc version >= 2.32 - ```shell - ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp - ``` + + ```shell + ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp + ``` - Linux with glibc version >= 2.31 - ```shell - ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Diotdb-tools-thrift.version=0.14.1.1-old-glibc-SNAPSHOT - ``` + + ```shell + ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Diotdb-tools-thrift.version=0.14.1.1-old-glibc-SNAPSHOT + ``` - Linux with glibc version >= 2.17 - ```shell - ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Diotdb-tools-thrift.version=0.14.1.1-glibc223-SNAPSHOT - ``` + + ```shell + ./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Diotdb-tools-thrift.version=0.14.1.1-glibc223-SNAPSHOT + ``` - Windows using Visual Studio 2022 - ```batch - .\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp - ``` + + ```batch + .\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp + ``` - Windows using Visual Studio 2019 - ```batch - .\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 16 2019" -Diotdb-tools-thrift.version=0.14.1.1-msvc142-SNAPSHOT - ``` - - If you haven't added the Boost library path to the PATH environment variable, you need to add the relevant parameters to the compile command, e.g., `-DboostIncludeDir="C:\Program Files (x86)\boost_1_78_0" -DboostLibraryDir="C:\Program Files (x86)\boost_1_78_0\stage\lib"`. + + ```batch + .\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 16 2019" -Diotdb-tools-thrift.version=0.14.1.1-msvc142-SNAPSHOT + ``` + + - If you haven't added the Boost library path to the PATH environment variable, you need to add the relevant parameters to the compile command, e.g., `-DboostIncludeDir="C:\Program Files (x86)\boost_1_78_0" -DboostLibraryDir="C:\Program Files (x86)\boost_1_78_0\stage\lib"`. 
After successful compilation, the packaged library files will be located in `iotdb-client/client-cpp/target`, and you can find the compiled example program under `example/client-cpp-example/target`. @@ -136,27 +146,29 @@ After successful compilation, the packaged library files will be located in `iot Q: What are the requirements for the environment on Linux? A: + - The known minimum version requirement for glibc (x86_64 version) is 2.17, and the minimum version for GCC is 5.5. - The known minimum version requirement for glibc (ARM version) is 2.31, and the minimum version for GCC is 10.2. - If the above requirements are not met, you can try compiling Thrift locally: - - Download the code from https://github.com/apache/iotdb-bin-resources/tree/iotdb-tools-thrift-v0.14.1.0/iotdb-tools-thrift. - - Run `./mvnw clean install`. - - Go back to the IoTDB code directory and run `./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp`. + - Download the code from . + - Run `./mvnw clean install`. + - Go back to the IoTDB code directory and run `./mvnw clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp`. Q: How to resolve the `undefined reference to '_libc_single_thread'` error during Linux compilation? A: + - This issue is caused by the precompiled Thrift dependencies requiring a higher version of glibc. - You can try adding `-Diotdb-tools-thrift.version=0.14.1.1-glibc223-SNAPSHOT` or `-Diotdb-tools-thrift.version=0.14.1.1-old-glibc-SNAPSHOT` to the Maven compile command. Q: What if I need to compile using Visual Studio 2017 or earlier on Windows? A: -- You can try compiling Thrift locally before compiling the client: - - Download the code from https://github.com/apache/iotdb-bin-resources/tree/iotdb-tools-thrift-v0.14.1.0/iotdb-tools-thrift. - - Run `.\mvnw.cmd clean install`. - - Go back to the IoTDB code directory and run `.\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 15 2017"`. +- You can try compiling Thrift locally before compiling the client: + - Download the code from . + - Run `.\mvnw.cmd clean install`. + - Go back to the IoTDB code directory and run `.\mvnw.cmd clean package -pl example/client-cpp-example -am -DskipTests -P with-cpp -Dcmake.generator="Visual Studio 15 2017"`. 
## Native APIs @@ -165,19 +177,23 @@ Here we show the commonly used interfaces and their parameters in the Native API ### Initialization - Open a Session + ```cpp -void open(); +void open(); ``` - Open a session, with a parameter to specify whether to enable RPC compression + ```cpp -void open(bool enableRPCCompression); +void open(bool enableRPCCompression); ``` + Notice: this RPC compression status of client must comply with that of IoTDB server - Close a Session + ```cpp -void close(); +void close(); ``` ### Data Definition Interface (DDL) @@ -185,11 +201,13 @@ void close(); #### Database Management - CREATE DATABASE + ```cpp void setStorageGroup(const std::string &storageGroupId); ``` - Delete one or several databases + ```cpp void deleteStorageGroup(const std::string &storageGroup); void deleteStorageGroups(const std::vector &storageGroups); @@ -198,10 +216,11 @@ void deleteStorageGroups(const std::vector &storageGroups); #### Timeseries Management - Create one or multiple timeseries + ```cpp void createTimeseries(const std::string &path, TSDataType::TSDataType dataType, TSEncoding::TSEncoding encoding, CompressionType::CompressionType compressor); - + void createMultiTimeseries(const std::vector &paths, const std::vector &dataTypes, const std::vector &encodings, @@ -213,6 +232,7 @@ void createMultiTimeseries(const std::vector &paths, ``` - Create aligned timeseries + ```cpp void createAlignedTimeseries(const std::string &deviceId, const std::vector &measurements, @@ -222,12 +242,14 @@ void createAlignedTimeseries(const std::string &deviceId, ``` - Delete one or several timeseries + ```cpp void deleteTimeseries(const std::string &path); void deleteTimeseries(const std::vector &paths); ``` - Check whether the specific timeseries exists. + ```cpp bool checkTimeseriesExists(const std::string &path); ``` @@ -235,21 +257,25 @@ bool checkTimeseriesExists(const std::string &path); #### Schema Template - Create a schema template + ```cpp void createSchemaTemplate(const Template &templ); ``` - Set the schema template named `templateName` at path `prefixPath`. + ```cpp void setSchemaTemplate(const std::string &template_name, const std::string &prefix_path); ``` - Unset the schema template + ```cpp void unsetSchemaTemplate(const std::string &prefix_path, const std::string &template_name); ``` - After measurement template created, you can edit the template with belowed APIs. + ```cpp // Add aligned measurements to a template void addAlignedMeasurementsInTemplate(const std::string &template_name, @@ -284,6 +310,7 @@ void deleteNodeInTemplate(const std::string &template_name, const std::string &p ``` - You can query measurement templates with these APIS: + ```cpp // Return the amount of measurements inside a template int countMeasurementsInTemplate(const std::string &template_name); @@ -301,7 +328,6 @@ std::vector showMeasurementsInTemplate(const std::string &template_ std::vector showMeasurementsInTemplate(const std::string &template_name, const std::string &pattern); ``` - ### Data Manipulation Interface (DML) #### Insert @@ -309,24 +335,28 @@ std::vector showMeasurementsInTemplate(const std::string &template_ > It is recommended to use insertTablet to help improve write efficiency. 
- Insert a Tablet,which is multiple rows of a device, each row has the same measurements - - Better Write Performance - - Support null values: fill the null value with any value, and then mark the null value via BitMap + - Better Write Performance + - Support null values: fill the null value with any value, and then mark the null value via BitMap + ```cpp void insertTablet(Tablet &tablet); ``` - Insert multiple Tablets + ```cpp void insertTablets(std::unordered_map &tablets); ``` - Insert a Record, which contains multiple measurement value of a device at a timestamp + ```cpp void insertRecord(const std::string &deviceId, int64_t time, const std::vector &measurements, const std::vector &types, const std::vector &values); ``` - Insert multiple Records + ```cpp void insertRecords(const std::vector &deviceIds, const std::vector ×, @@ -336,6 +366,7 @@ void insertRecords(const std::vector &deviceIds, ``` - Insert multiple Records that belong to the same device. With type info the server has no need to do type inference, which leads a better performance + ```cpp void insertRecordsOfOneDevice(const std::string &deviceId, std::vector ×, @@ -378,6 +409,7 @@ The Insert of aligned timeseries uses interfaces like `insertAlignedXXX`, and ot #### Delete - Delete data in a time range of one or several timeseries + ```cpp void deleteData(const std::string &path, int64_t endTime); void deleteData(const std::vector &paths, int64_t endTime); @@ -387,16 +419,17 @@ void deleteData(const std::vector &paths, int64_t startTime, int64_ ### IoTDB-SQL Interface - Execute query statement + ```cpp unique_ptr executeQueryStatement(const std::string &sql); ``` - Execute non query statement + ```cpp void executeNonQueryStatement(const std::string &sql); ``` - ## Examples The sample code of using these interfaces is in: @@ -412,17 +445,18 @@ If the compilation finishes successfully, the example project will be placed und If errors occur when compiling thrift source code, try to downgrade your xcode-commandline from 12 to 11.5 -see https://stackoverflow.com/questions/63592445/ld-unsupported-tapi-file-type-tapi-tbd-in-yaml-file/65518087#65518087 - +see ### on Windows When Building Thrift and downloading packages via "wget", a possible annoying issue may occur with error message looks like: + ```shell Failed to delete cached file C:\Users\Administrator\.m2\repository\.cache\download-maven-plugin\index.ser ``` + Possible fixes: -- Try to delete the ".m2\repository\\.cache\" directory and try again. -- Add "\true\" configuration to the download-maven-plugin maven phase that complains this error. +- Try to delete the `.m2\repository\.cache`" directory and try again. +- Add `true` configuration to the download-maven-plugin maven phase that complains this error. 
diff --git a/src/UserGuide/latest/API/Programming-Go-Native-API.md b/src/UserGuide/latest/API/Programming-Go-Native-API.md index b227ed672..ca8ce541a 100644 --- a/src/UserGuide/latest/API/Programming-Go-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Go-Native-API.md @@ -1,22 +1,19 @@ # Go Native API @@ -25,40 +22,39 @@ The Git repository for the Go Native API client is located [here](https://github ## Dependencies - * golang >= 1.13 - * make >= 3.0 - * curl >= 7.1.1 - * thrift 0.15.0 - * Linux、Macos or other unix-like systems - * Windows+bash (WSL、cygwin、Git Bash) +- golang >= 1.13 +- make >= 3.0 +- curl >= 7.1.1 +- thrift 0.15.0 +- Linux、Macos or other unix-like systems +- Windows+bash (WSL、cygwin、Git Bash) ## Installation - * go mod - -```sh -export GO111MODULE=on -export GOPROXY=https://goproxy.io +- go mod -mkdir session_example && cd session_example + ```sh + export GO111MODULE=on + export GOPROXY=https://goproxy.io -curl -o session_example.go -L https://github.com/apache/iotdb-client-go/raw/main/example/session_example.go + mkdir session_example && cd session_example -go mod init session_example -go run session_example.go -``` + curl -o session_example.go -L https://github.com/apache/iotdb-client-go/raw/main/example/session_example.go -* GOPATH + go mod init session_example + go run session_example.go + ``` -```sh -# get thrift 0.15.0 -go get github.com/apache/thrift -cd $GOPATH/src/github.com/apache/thrift -git checkout 0.15.0 +- GOPATH -mkdir -p $GOPATH/src/iotdb-client-go-example/session_example -cd $GOPATH/src/iotdb-client-go-example/session_example -curl -o session_example.go -L https://github.com/apache/iotdb-client-go/raw/main/example/session_example.go -go run session_example.go -``` + ```sh + # get thrift 0.15.0 + go get github.com/apache/thrift + cd $GOPATH/src/github.com/apache/thrift + git checkout 0.15.0 + mkdir -p $GOPATH/src/iotdb-client-go-example/session_example + cd $GOPATH/src/iotdb-client-go-example/session_example + curl -o session_example.go -L https://github.com/apache/iotdb-client-go/raw/main/example/session_example.go + go run session_example.go + ``` diff --git a/src/UserGuide/latest/API/Programming-JDBC.md b/src/UserGuide/latest/API/Programming-JDBC.md index 0251e469c..fa9fc3cc0 100644 --- a/src/UserGuide/latest/API/Programming-JDBC.md +++ b/src/UserGuide/latest/API/Programming-JDBC.md @@ -1,34 +1,35 @@ # JDBC (Not Recommend) -*NOTICE: CURRENTLY, JDBC IS USED FOR CONNECTING SOME THIRD-PART TOOLS. -IT CAN NOT PROVIDE HIGH THROUGHPUT FOR WRITE OPERATIONS. -PLEASE USE [Java Native API](./Programming-Java-Native-API.md) INSTEAD* +::: warning + +NOTICE: CURRENTLY, JDBC IS USED FOR CONNECTING SOME THIRD-PART TOOLS. +IT CAN NOT PROVIDE HIGH THROUGHPUT FOR WRITE OPERATIONS. 
+PLEASE USE [Java Native API](./Programming-Java-Native-API.md) INSTEAD + +::: ## Dependencies -* JDK >= 1.8+ -* Maven >= 3.9+ +- JDK >= 1.8+ +- Maven >= 3.9+ ## Installation @@ -110,7 +111,7 @@ public class JDBCExample { //Count timeseries group by each node at the given level statement.execute("COUNT TIMESERIES root GROUP BY LEVEL=3"); outputResult(statement.getResultSet()); - + //Execute insert statements in batch statement.addBatch("INSERT INTO root.demo(timestamp,s0) VALUES(1,1);"); @@ -206,27 +207,37 @@ public class JDBCExample { ``` The parameter `version` can be used in the url: -````java + +```java String url = "jdbc:iotdb://127.0.0.1:6667?version=V_1_0"; -```` -The parameter `version` represents the SQL semantic version used by the client, which is used in order to be compatible with the SQL semantics of `0.12` when upgrading to `0.13`. +``` + +The parameter `version` represents the SQL semantic version used by the client, which is used in order to be compatible with the SQL semantics of `0.12` when upgrading to `0.13`. The possible values are: `V_0_12`, `V_0_13`, `V_1_0`. In addition, IoTDB provides additional interfaces in JDBC for users to read and write the database using different character sets (e.g., GB18030) in the connection. The default character set for IoTDB is UTF-8. When users want to use a character set other than UTF-8, they need to specify the charset property in the JDBC connection. For example: + 1. Create a connection using the GB18030 charset: + ```java DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667?charset=GB18030", "root", "root"); ``` + 2. When executing SQL with the `IoTDBStatement` interface, the SQL can be provided as a `byte[]` array, and it will be parsed into a string according to the specified charset. + ```java public boolean execute(byte[] sql) throws SQLException; ``` + 3. When outputting query results, the `getBytes` method of `ResultSet` can be used to get `byte[]`, which will be encoded using the charset specified in the connection. + ```java System.out.print(resultSet.getString(i) + " (" + new String(resultSet.getBytes(i), charset) + ")"); ``` + Here is a complete example: + ```java public class JDBCCharsetExample { @@ -293,4 +304,4 @@ public class JDBCCharsetExample { } } } -``` \ No newline at end of file +``` diff --git a/src/UserGuide/latest/API/Programming-Java-Native-API.md b/src/UserGuide/latest/API/Programming-Java-Native-API.md index 387a9e075..6061d473c 100644 --- a/src/UserGuide/latest/API/Programming-Java-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Java-Native-API.md @@ -1,22 +1,19 @@ # Java Native API @@ -25,9 +22,8 @@ ### Dependencies -* JDK >= 1.8 -* Maven >= 3.6 - +- JDK >= 1.8 +- Maven >= 3.6 ### Using IoTDB Java Native API with Maven @@ -45,7 +41,7 @@ - **IoTDB-SQL interface:** The input SQL parameter needs to conform to the [syntax conventions](../User-Manual/Syntax-Rule.md#Literal-Values) and be escaped for JAVA strings. For example, you need to add a backslash before the double-quotes. (That is: after JAVA escaping, it is consistent with the SQL statement executed on the command line.) - **Other interfaces:** - - The node names in path or path prefix as parameter: The node names which should be escaped by backticks (`) in the SQL statement, escaping is required here. + - The node names in path or path prefix as parameter: The node names which should be escaped by backticks (`) in the SQL statement, escaping is required here. 
- Identifiers (such as template names) as parameters: The identifiers which should be escaped by backticks (`) in the SQL statement, and escaping is not required here. - **Code example for syntax convention could be found at:** `example/session/src/main/java/org/apache/iotdb/SyntaxConventionRelatedExample.java` @@ -55,27 +51,27 @@ Here we show the commonly used interfaces and their parameters in the Native API ### Session Management -* Initialize a Session +- Initialize a Session -``` java -// use default configuration +```java +// use default configuration session = new Session.Builder.build(); // initialize with a single node -session = +session = new Session.Builder() .host(String host) .port(int port) .build(); // initialize with multiple nodes -session = +session = new Session.Builder() .nodeUrls(List nodeUrls) .build(); // other configurations -session = +session = new Session.Builder() .fetchSize(int fetchSize) .username(String username) @@ -89,28 +85,27 @@ session = Version represents the SQL semantic version used by the client, which is used to be compatible with the SQL semantics of 0.12 when upgrading 0.13. The possible values are: `V_0_12`, `V_0_13`, `V_1_0`, etc. +- Open a Session -* Open a Session - -``` java +```java void open() ``` -* Open a session, with a parameter to specify whether to enable RPC compression - -``` java +- Open a session, with a parameter to specify whether to enable RPC compression + +```java void open(boolean enableRPCCompression) ``` Notice: this RPC compression status of client must comply with that of IoTDB server -* Close a Session +- Close a Session -``` java +```java void close() ``` -* SessionPool +- SessionPool We provide a connection pool (`SessionPool) for Native API. Using the interface, you need to define the pool size. @@ -118,7 +113,7 @@ Using the interface, you need to define the pool size. If you can not get a session connection in 60 seconds, there is a warning log but the program will hang. If a session has finished an operation, it will be put back to the pool automatically. -If a session connection is broken, the session will be removed automatically and the pool will try +If a session connection is broken, the session will be removed automatically and the pool will try to create a new session and redo the operation. You can also specify an url list of multiple reachable nodes when creating a SessionPool, just as you would when creating a Session. To ensure high availability of clients in distributed cluster. @@ -126,50 +121,50 @@ For query operations: 1. When using SessionPool to query data, the result set is `SessionDataSetWrapper`; 2. Given a `SessionDataSetWrapper`, if you have not scanned all the data in it and stop to use it, -you have to call `SessionPool.closeResultSet(wrapper)` manually; + you have to call `SessionPool.closeResultSet(wrapper)` manually; 3. When you call `hasNext()` and `next()` of a `SessionDataSetWrapper` and there is an exception, then -you have to call `SessionPool.closeResultSet(wrapper)` manually; + you have to call `SessionPool.closeResultSet(wrapper)` manually; 4. 
You can call `getColumnNames()` of `SessionDataSetWrapper` to get the column names of query result; -Examples: ```session/src/test/java/org/apache/iotdb/session/pool/SessionPoolTest.java``` +Examples: `session/src/test/java/org/apache/iotdb/session/pool/SessionPoolTest.java` Or `example/session/src/main/java/org/apache/iotdb/SessionPoolExample.java` - ### Database & Timeseries Management API #### Database Management -* CREATE DATABASE +- CREATE DATABASE -``` java -void setStorageGroup(String storageGroupId) +```java +void setStorageGroup(String storageGroupId) ``` -* Delete one or several databases +- Delete one or several databases -``` java +```java void deleteStorageGroup(String storageGroup) void deleteStorageGroups(List storageGroups) ``` #### Timeseries Management -* Create one or multiple timeseries +- Create one or multiple timeseries -``` java +```java void createTimeseries(String path, TSDataType dataType, TSEncoding encoding, CompressionType compressor, Map props, Map tags, Map attributes, String measurementAlias) - + void createMultiTimeseries(List paths, List dataTypes, List encodings, List compressors, List> propsList, List> tagsList, List> attributesList, List measurementAliasList) ``` -* Create aligned timeseries -``` +- Create aligned timeseries + +```java void createAlignedTimeseries(String prefixPath, List measurements, List dataTypes, List encodings, List compressors, List measurementAliasList); @@ -177,25 +172,24 @@ void createAlignedTimeseries(String prefixPath, List measurements, Attention: Alias of measurements are **not supported** currently. -* Delete one or several timeseries +- Delete one or several timeseries -``` java +```java void deleteTimeseries(String path) void deleteTimeseries(List paths) ``` -* Check whether the specific timeseries exists. +- Check whether the specific timeseries exists. -``` java +```java boolean checkTimeseriesExists(String path) ``` #### Schema Template - Create a schema template for massive identical devices will help to improve memory performance. You can use Template, InternalNode and MeasurementNode to depict the structure of the template, and use belowed interface to create it inside session. 
-``` java +```java public void createSchemaTemplate(Template template); Class Template { @@ -203,7 +197,7 @@ Class Template { private boolean directShareTime; Map children; public Template(String name, boolean isShareTime); - + public void addToTemplate(Node node); public void deleteFromTemplate(String name); public void setShareTime(boolean shareTime); @@ -219,8 +213,8 @@ Class MeasurementNode extends Node { TSDataType dataType; TSEncoding encoding; CompressionType compressor; - public MeasurementNode(String name, - TSDataType dataType, + public MeasurementNode(String name, + TSDataType dataType, TSEncoding encoding, CompressionType compressor); } @@ -230,7 +224,7 @@ We strongly suggest you implement templates only with flat-measurement (like obj A snippet of using above Method and Class: -``` java +```java MeasurementNode nodeX = new MeasurementNode("x", TSDataType.FLOAT, TSEncoding.RLE, CompressionType.SNAPPY); MeasurementNode nodeY = new MeasurementNode("y", TSDataType.FLOAT, TSEncoding.RLE, CompressionType.SNAPPY); MeasurementNode nodeSpeed = new MeasurementNode("speed", TSDataType.DOUBLE, TSEncoding.GORILLA, CompressionType.SNAPPY); @@ -267,25 +261,25 @@ To implement schema template, you can set the measurement template named 'templa **Please notice that, we strongly recommend not setting templates on the nodes above the database to accommodate future updates and collaboration between modules.** -``` java +```java void setSchemaTemplate(String templateName, String prefixPath) ``` Before setting template, you should firstly create the template using -``` java +```java void createSchemaTemplate(Template template) ``` -After setting template to a certain path, you can use the template to create timeseries on given device paths through the following interface, or you can write data directly to trigger timeseries auto creation using schema template under target devices. +After setting template to a certain path, you can use the template to create timeseries on given device paths through the following interface, or you can write data directly to trigger timeseries auto creation using schema template under target devices. -``` java +```java void createTimeseriesUsingSchemaTemplate(List devicePathList) ``` After setting template to a certain path, you can query for info about template using belowed interface in session: -``` java +```java /** @return All template names. */ public List showAllTemplates(); @@ -298,7 +292,7 @@ public List showPathsTemplateUsingOn(String templateName) If you are ready to get rid of schema template, you can drop it with belowed interface. Make sure the template to drop has been unset from MTree. -``` java +```java void unsetSchemaTemplate(String prefixPath, String templateName); public void dropSchemaTemplate(String templateName); ``` @@ -307,19 +301,18 @@ Unset the measurement template named 'templateName' from path 'prefixPath'. When Attention: Unsetting the template named 'templateName' from node at path 'prefixPath' or descendant nodes which have already inserted records using template is **not supported**. - ### Data Manipulation Interface (DML Interface) ### Data Insert API It is recommended to use insertTablet to help improve write efficiency. 
-* Insert a Tablet,which is multiple rows of a device, each row has the same measurements - * **Better Write Performance** - * **Support batch write** - * **Support null values**: fill the null value with any value, and then mark the null value via BitMap +- Insert a Tablet,which is multiple rows of a device, each row has the same measurements + - **Better Write Performance** + - **Support batch write** + - **Support null values**: fill the null value with any value, and then mark the null value via BitMap -``` java +```java void insertTablet(Tablet tablet) public class Tablet { @@ -342,44 +335,46 @@ public class Tablet { } ``` -* Insert multiple Tablets +- Insert multiple Tablets -``` java +```java void insertTablets(Map tablet) ``` -* Insert a Record, which contains multiple measurement value of a device at a timestamp. This method is equivalent to providing a common interface for multiple data types of values. Later, the value can be cast to the original type through TSDataType. +- Insert a Record, which contains multiple measurement value of a device at a timestamp. This method is equivalent to providing a common interface for multiple data types of values. Later, the value can be cast to the original type through TSDataType. The correspondence between the Object type and the TSDataType type is shown in the following table. - | TSDataType | Object | - |------------|--------------| - | BOOLEAN | Boolean | - | INT32 | Integer | - | DATE | LocalDate | - | INT64 | Long | - | TIMESTAMP | Long | - | FLOAT | Float | - | DOUBLE | Double | + | TSDataType | Object | + | ---------- | -------------- | + | BOOLEAN | Boolean | + | INT32 | Integer | + | DATE | LocalDate | + | INT64 | Long | + | TIMESTAMP | Long | + | FLOAT | Float | + | DOUBLE | Double | | TEXT | String, Binary | | STRING | String, Binary | - | BLOB | Binary | -``` java + | BLOB | Binary | + +```java void insertRecord(String deviceId, long time, List measurements, List types, List values) ``` -* Insert multiple Records +- Insert multiple Records -``` java +```java void insertRecords(List deviceIds, List times, List> measurementsList, List> typesList, List> valuesList) ``` -* Insert multiple Records that belong to the same device. + +- Insert multiple Records that belong to the same device. With type info the server has no need to do type inference, which leads a better performance -``` java +```java void insertRecordsOfOneDevice(String deviceId, List times, List> measurementsList, List> typesList, List> valuesList) @@ -389,22 +384,22 @@ void insertRecordsOfOneDevice(String deviceId, List times, When the data is of String type, we can use the following interface to perform type inference based on the value of the value itself. For example, if value is "true" , it can be automatically inferred to be a boolean type. If value is "3.2" , it can be automatically inferred as a flout type. Without type information, server has to do type inference, which may cost some time. -* Insert a Record, which contains multiple measurement value of a device at a timestamp +- Insert a Record, which contains multiple measurement value of a device at a timestamp -``` java +```java void insertRecord(String prefixPath, long time, List measurements, List values) ``` -* Insert multiple Records +- Insert multiple Records -``` java -void insertRecords(List deviceIds, List times, +```java +void insertRecords(List deviceIds, List times, List> measurementsList, List> valuesList) ``` -* Insert multiple Records that belong to the same device. 
+- Insert multiple Records that belong to the same device. -``` java +```java void insertStringRecordsOfOneDevice(String deviceId, List times, List> measurementsList, List> valuesList) ``` @@ -413,48 +408,52 @@ void insertStringRecordsOfOneDevice(String deviceId, List times, The Insert of aligned timeseries uses interfaces like insertAlignedXXX, and others are similar to the above interfaces: -* insertAlignedRecord -* insertAlignedRecords -* insertAlignedRecordsOfOneDevice -* insertAlignedStringRecordsOfOneDevice -* insertAlignedTablet -* insertAlignedTablets +- insertAlignedRecord +- insertAlignedRecords +- insertAlignedRecordsOfOneDevice +- insertAlignedStringRecordsOfOneDevice +- insertAlignedTablet +- insertAlignedTablets ### Data Delete API -* Delete data before or equal to a timestamp of one or several timeseries +- Delete data before or equal to a timestamp of one or several timeseries -``` java +```java void deleteData(String path, long time) void deleteData(List paths, long time) ``` ### Data Query API -* Time-series raw data query with time range: +- Time-series raw data query with time range: - The specified query time range is a left-closed right-open interval, including the start time but excluding the end time. -``` java +```java SessionDataSet executeRawDataQuery(List paths, long startTime, long endTime); ``` -* Last query: +- Last query: + - Query the last data, whose timestamp is greater than or equal LastTime. - ``` java + + ```java SessionDataSet executeLastDataQuery(List paths, long LastTime); ``` + - Query the latest point of the specified series of single device quickly, and support redirection; If you are sure that the query path is valid, set 'isLegalPathNodes' to true to avoid performance penalties from path verification. - ``` java + + ```java SessionDataSet executeLastDataQueryForOneDevice( String db, String device, List sensors, boolean isLegalPathNodes); ``` -* Aggregation query: +- Aggregation query: - Support specified query time range: The specified query time range is a left-closed right-open interval, including the start time but not the end time. - Support GROUP BY TIME. -``` java +```java SessionDataSet executeAggregationQuery(List paths, List aggregations); SessionDataSet executeAggregationQuery( @@ -476,9 +475,9 @@ SessionDataSet executeAggregationQuery( long slidingStep); ``` -* Execute query statement +- Execute query statement -``` java +```java SessionDataSet executeQueryStatement(String sql) ``` @@ -488,9 +487,11 @@ SessionDataSet executeQueryStatement(String sql) The `SubscriptionSession` class in the IoTDB subscription client provides interfaces for topic management. The status changes of topics are illustrated in the diagram below: -
+::: center + + + +::: ##### 1.1 Create Topic @@ -526,6 +527,7 @@ Optional getTopic(String topicName) throws Exception; ``` #### 2 Check Subscription Status + The `SubscriptionSession` class in the IoTDB subscription client provides interfaces to check the subscription status: ```Java @@ -539,37 +541,33 @@ When creating a consumer using the JAVA native interface, you need to specify th For both `SubscriptionPullConsumer` and `SubscriptionPushConsumer`, the following common configurations are available: - -| key | **required or optional with default** | description | -| :---------------------- | :----------------------------------------------------------- | :----------------------------------------------------------- | -| host | optional: 127.0.0.1 | `String`: The RPC host of a certain DataNode in IoTDB | -| port | optional: 6667 | Integer: The RPC port of a certain DataNode in IoTDB | -| node-urls | optional: 127.0.0.1:6667 | `List`: The RPC addresses of all DataNodes in IoTDB, can be multiple; either host:port or node-urls can be filled in. If both host:port and node-urls are filled in, the union of host:port and node-urls will be used to form a new node-urls application | -| username | optional: root | `String`: The username of a DataNode in IoTDB | -| password | optional: root | `String`: The password of a DataNode in IoTDB | -| groupId | optional | `String`: consumer group id, if not specified, a new consumer group will be randomly assigned, ensuring that different consumer groups have different consumer group ids | -| consumerId | optional | `String`: consumer client id, if not specified, it will be randomly assigned, ensuring that each consumer client id in the same consumer group is unique | -| heartbeatIntervalMs | optional: 30000 (min: 1000) | `Long`: The interval at which the consumer sends heartbeat requests to the IoTDB DataNode | -| endpointsSyncIntervalMs | optional: 120000 (min: 5000) | `Long`: The interval at which the consumer detects the expansion and contraction of IoTDB cluster nodes and adjusts the subscription connection | -| fileSaveDir | optional: Paths.get(System.getProperty("user.dir"), "iotdb-subscription").toString() | `String`: The temporary directory path where the TsFile files subscribed by the consumer are stored | -| fileSaveFsync | optional: false | `Boolean`: Whether the consumer actively calls fsync during the subscription of TsFile | - +| key | **required or optional with default** | description | +| :---------------------- | :----------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| host | optional: 127.0.0.1 | `String`: The RPC host of a certain DataNode in IoTDB | +| port | optional: 6667 | Integer: The RPC port of a certain DataNode in IoTDB | +| node-urls | optional: 127.0.0.1:6667 | `List`: The RPC addresses of all DataNodes in IoTDB, can be multiple; either host:port or node-urls can be filled in. 
If both host:port and node-urls are filled in, the union of host:port and node-urls will be used to form a new node-urls application | +| username | optional: root | `String`: The username of a DataNode in IoTDB | +| password | optional: root | `String`: The password of a DataNode in IoTDB | +| groupId | optional | `String`: consumer group id, if not specified, a new consumer group will be randomly assigned, ensuring that different consumer groups have different consumer group ids | +| consumerId | optional | `String`: consumer client id, if not specified, it will be randomly assigned, ensuring that each consumer client id in the same consumer group is unique | +| heartbeatIntervalMs | optional: 30000 (min: 1000) | `Long`: The interval at which the consumer sends heartbeat requests to the IoTDB DataNode | +| endpointsSyncIntervalMs | optional: 120000 (min: 5000) | `Long`: The interval at which the consumer detects the expansion and contraction of IoTDB cluster nodes and adjusts the subscription connection | +| fileSaveDir | optional: Paths.get(System.getProperty("user.dir"), "iotdb-subscription").toString() | `String`: The temporary directory path where the TsFile files subscribed by the consumer are stored | +| fileSaveFsync | optional: false | `Boolean`: Whether the consumer actively calls fsync during the subscription of TsFile | ##### 3.1 SubscriptionPushConsumer The following are special configurations for `SubscriptionPushConsumer`: - -| key | **required or optional with default** | description | -| :----------------- | :------------------------------------ | :----------------------------------------------------------- | +| key | **required or optional with default** | description | +| :----------------- | :------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ackStrategy | optional: `ACKStrategy.AFTER_CONSUME` | Consumption progress confirmation mechanism includes the following options: `ACKStrategy.BEFORE_CONSUME` (submit consumption progress immediately when the consumer receives data, before `onReceive`) `ACKStrategy.AFTER_CONSUME` (submit consumption progress after the consumer has consumed the data, after `onReceive`) | -| consumeListener | optional | Consumption data callback function, need to implement the `ConsumeListener` interface, define the consumption logic of `SessionDataSetsHandler` and `TsFileHandler` form data| -| autoPollIntervalMs | optional: 5000 (min: 500) | Long: The interval at which the consumer automatically pulls data, in ms | -| autoPollTimeoutMs | optional: 10000 (min: 1000) | Long: The timeout time for the consumer to pull data each time, in ms | +| consumeListener | optional | Consumption data callback function, need to implement the `ConsumeListener` interface, define the consumption logic of `SessionDataSetsHandler` and `TsFileHandler` form data | +| autoPollIntervalMs | optional: 5000 (min: 500) | Long: The interval at which the consumer automatically pulls data, in ms | +| autoPollTimeoutMs | optional: 10000 (min: 1000) | Long: The timeout time for the consumer to pull data each time, in ms | Among them, the ConsumerListener interface is defined as follows: - ```Java @FunctionInterface interface ConsumeListener { @@ -588,15 +586,14 @@ enum 
ConsumeResult { The following are special configurations for `SubscriptionPullConsumer` : -| key | **required or optional with default** | description | -| :----------------- | :------------------------------------ | :----------------------------------------------------------- | +| key | **required or optional with default** | description | +| :----------------- | :------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | autoCommit | optional: true | Boolean: Whether to automatically commit consumption progress. If this parameter is set to false, the commit method must be called to manually `commit` consumption progress. | -| autoCommitInterval | optional: 5000 (min: 500) | Long: The interval at which consumption progress is automatically committed, in milliseconds. This only takes effect when the autoCommit parameter is true. - | +| autoCommitInterval | optional: 5000 (min: 500) | Long: The interval at which consumption progress is automatically committed, in milliseconds. This only takes effect when the autoCommit parameter is true. | +| | After creating a consumer, you need to manually call the consumer's open method: - ```Java void open() throws Exception; ``` @@ -605,7 +602,7 @@ At this point, the IoTDB subscription client will verify the correctness of the #### 4 Subscribe to Topics -Both `SubscriptionPushConsumer` and `SubscriptionPullConsumer` provide the following JAVA native interfaces for subscribing to topics: +Both `SubscriptionPushConsumer` and `SubscriptionPullConsumer` provide the following JAVA native interfaces for subscribing to topics: ```Java // Subscribe to topics @@ -619,19 +616,17 @@ void subscribe(List topics) throws Exception; - If there are other consumers in the same consumer group that have subscribed to the same topics, the consumer will reuse the corresponding consumption progress. - #### 5 Consume Data For both push and pull mode consumers: - - Only after explicitly subscribing to a topic will the consumer receive data for that topic. - If no topics are subscribed to after creation, the consumer will not be able to consume any data, even if other consumers in the same consumer group have subscribed to some topics. ##### 5.1 SubscriptionPushConsumer -After `SubscriptionPushConsumer` subscribes to topics, there is no need to manually pull data. +After `SubscriptionPushConsumer` subscribes to topics, there is no need to manually pull data. The data consumption logic is within the `consumeListener` configuration specified when creating `SubscriptionPushConsumer`. @@ -648,10 +643,8 @@ List poll(final Set topicNames, final long timeoutM In the poll method, you can specify the topic names to be pulled (if not specified, it defaults to pulling all topics that the consumer has subscribed to) and the timeout period. - When the SubscriptionPullConsumer is configured with the autoCommit parameter set to false, it is necessary to manually call the commitSync and commitAsync methods to synchronously or asynchronously commit the consumption progress of a batch of data: - ```Java void commitSync(final SubscriptionMessage message) throws Exception; void commitSync(final Iterable messages) throws Exception; @@ -693,7 +686,6 @@ void close(); - When a consumer is closed, it will exit the corresponding consumer group and automatically unsubscribe from all topics it is currently subscribed to. 
- Once a consumer is closed, its lifecycle ends, and it cannot be reopened to subscribe to and consume data again. - #### 7 Code Examples ##### 7.1 Single Pull Consumer Consuming SessionDataSetsHandler Format Data @@ -790,53 +782,52 @@ for (final Thread thread : threads) { ### Other Modules (Execute SQL Directly) -* Execute non query statement +- Execute non query statement -``` java +```java void executeNonQueryStatement(String sql) ``` - ### Write Test Interface (to profile network cost) These methods **don't** insert data into database and server just return after accept the request. -* Test the network and client cost of insertRecord +- Test the network and client cost of insertRecord -``` java +```java void testInsertRecord(String deviceId, long time, List measurements, List values) void testInsertRecord(String deviceId, long time, List measurements, List types, List values) ``` -* Test the network and client cost of insertRecords +- Test the network and client cost of insertRecords -``` java +```java void testInsertRecords(List deviceIds, List times, List> measurementsList, List> valuesList) - + void testInsertRecords(List deviceIds, List times, List> measurementsList, List> typesList List> valuesList) ``` -* Test the network and client cost of insertTablet +- Test the network and client cost of insertTablet -``` java +```java void testInsertTablet(Tablet tablet) ``` -* Test the network and client cost of insertTablets +- Test the network and client cost of insertTablets -``` java +```java void testInsertTablets(Map tablets) ``` ### Coding Examples -To get more information of the following interfaces, please view session/src/main/java/org/apache/iotdb/session/Session.java +To get more information of the following interfaces, please view `session/src/main/java/org/apache/iotdb/session/Session.java` -The sample code of using these interfaces is in example/session/src/main/java/org/apache/iotdb/SessionExample.java,which provides an example of how to open an IoTDB session, execute a batch insertion. +The sample code of using these interfaces is in `example/session/src/main/java/org/apache/iotdb/SessionExample.java`,which provides an example of how to open an IoTDB session, execute a batch insertion. 
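As a compact illustration of the workflow those examples walk through, the following minimal sketch opens a session, writes one record using server-side type inference, and queries it back. The device path, measurements, and values are made up, and import package locations may differ slightly between IoTDB versions, so treat it as an orientation aid rather than a drop-in program:

```java
import java.util.Arrays;

import org.apache.iotdb.isession.SessionDataSet; // package location varies by version
import org.apache.iotdb.session.Session;

public class SessionQuickStart {

  public static void main(String[] args) throws Exception {
    // Build and open a session against a single node (illustrative address and credentials).
    Session session =
        new Session.Builder()
            .host("127.0.0.1")
            .port(6667)
            .username("root")
            .password("root")
            .build();
    session.open(false); // false = no RPC compression

    // Insert one record; when values are given as Strings, the server infers the data types.
    session.insertRecord(
        "root.demo.d1",                 // hypothetical device path
        1L,                             // timestamp
        Arrays.asList("s1", "s2"),      // measurements
        Arrays.asList("36.5", "true")); // values as strings; types are inferred server-side

    // Query the data back and print each row.
    SessionDataSet dataSet = session.executeQueryStatement("select s1, s2 from root.demo.d1");
    while (dataSet.hasNext()) {
      System.out.println(dataSet.next());
    }
    dataSet.closeOperationHandle();

    session.close();
  }
}
```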
-For examples of aligned timeseries and measurement template, you can refer to `example/session/src/main/java/org/apache/iotdb/AlignedTimeseriesSessionExample.java` \ No newline at end of file +For examples of aligned timeseries and measurement template, you can refer to `example/session/src/main/java/org/apache/iotdb/AlignedTimeseriesSessionExample.java` diff --git a/src/UserGuide/latest/API/Programming-Kafka.md b/src/UserGuide/latest/API/Programming-Kafka.md index 0a041448f..22ad13100 100644 --- a/src/UserGuide/latest/API/Programming-Kafka.md +++ b/src/UserGuide/latest/API/Programming-Kafka.md @@ -1,22 +1,19 @@ # Kafka @@ -28,91 +25,90 @@ ### kafka Producer Producing Data Java Code Example ```java - Properties props = new Properties(); - props.put("bootstrap.servers", "127.0.0.1:9092"); - props.put("key.serializer", StringSerializer.class); - props.put("value.serializer", StringSerializer.class); - KafkaProducer producer = new KafkaProducer<>(props); - producer.send( - new ProducerRecord<>( - "Kafka-Test", "key", "root.kafka," + System.currentTimeMillis() + ",value,INT32,100")); - producer.close(); +Properties props = new Properties(); +props.put("bootstrap.servers", "127.0.0.1:9092"); +props.put("key.serializer", StringSerializer.class); +props.put("value.serializer", StringSerializer.class); +KafkaProducer producer = new KafkaProducer<>(props); +producer.send( + new ProducerRecord<>( + "Kafka-Test", "key", "root.kafka," + System.currentTimeMillis() + ",value,INT32,100")); +producer.close(); ``` ### kafka Consumer Receiving Data Java Code Example ```java - Properties props = new Properties(); - props.put("bootstrap.servers", "127.0.0.1:9092"); - props.put("key.deserializer", StringDeserializer.class); - props.put("value.deserializer", StringDeserializer.class); - props.put("auto.offset.reset", "earliest"); - props.put("group.id", "Kafka-Test"); - KafkaConsumer kafkaConsumer = new KafkaConsumer<>(props); - kafkaConsumer.subscribe(Collections.singleton("Kafka-Test")); - ConsumerRecords records = kafkaConsumer.poll(Duration.ofSeconds(1)); - ``` +Properties props = new Properties(); +props.put("bootstrap.servers", "127.0.0.1:9092"); +props.put("key.deserializer", StringDeserializer.class); +props.put("value.deserializer", StringDeserializer.class); +props.put("auto.offset.reset", "earliest"); +props.put("group.id", "Kafka-Test"); +KafkaConsumer kafkaConsumer = new KafkaConsumer<>(props); +kafkaConsumer.subscribe(Collections.singleton("Kafka-Test")); +ConsumerRecords records = kafkaConsumer.poll(Duration.ofSeconds(1)); +``` ### Example of Java Code Stored in IoTDB Server ```java - SessionPool pool = - new SessionPool.Builder() - .host("127.0.0.1") - .port(6667) - .user("root") - .password("root") - .maxSize(3) - .build(); - List datas = new ArrayList<>(records.count()); - for (ConsumerRecord record : records) { - datas.add(record.value()); +SessionPool pool = + new SessionPool.Builder() + .host("127.0.0.1") + .port(6667) + .user("root") + .password("root") + .maxSize(3) + .build(); +List datas = new ArrayList<>(records.count()); +for (ConsumerRecord record : records) { + datas.add(record.value()); +} +int size = datas.size(); +List deviceIds = new ArrayList<>(size); +List times = new ArrayList<>(size); +List> measurementsList = new ArrayList<>(size); +List> typesList = new ArrayList<>(size); +List> valuesList = new ArrayList<>(size); +for (String data : datas) { + String[] dataArray = data.split(","); + String device = dataArray[0]; + long time = Long.parseLong(dataArray[1]); + List 
measurements = Arrays.asList(dataArray[2].split(":")); + List types = new ArrayList<>(); + for (String type : dataArray[3].split(":")) { + types.add(TSDataType.valueOf(type)); + } + List values = new ArrayList<>(); + String[] valuesStr = dataArray[4].split(":"); + for (int i = 0; i < valuesStr.length; i++) { + switch (types.get(i)) { + case INT64: + values.add(Long.parseLong(valuesStr[i])); + break; + case DOUBLE: + values.add(Double.parseDouble(valuesStr[i])); + break; + case INT32: + values.add(Integer.parseInt(valuesStr[i])); + break; + case TEXT: + values.add(valuesStr[i]); + break; + case FLOAT: + values.add(Float.parseFloat(valuesStr[i])); + break; + case BOOLEAN: + values.add(Boolean.parseBoolean(valuesStr[i])); + break; } - int size = datas.size(); - List deviceIds = new ArrayList<>(size); - List times = new ArrayList<>(size); - List> measurementsList = new ArrayList<>(size); - List> typesList = new ArrayList<>(size); - List> valuesList = new ArrayList<>(size); - for (String data : datas) { - String[] dataArray = data.split(","); - String device = dataArray[0]; - long time = Long.parseLong(dataArray[1]); - List measurements = Arrays.asList(dataArray[2].split(":")); - List types = new ArrayList<>(); - for (String type : dataArray[3].split(":")) { - types.add(TSDataType.valueOf(type)); - } - List values = new ArrayList<>(); - String[] valuesStr = dataArray[4].split(":"); - for (int i = 0; i < valuesStr.length; i++) { - switch (types.get(i)) { - case INT64: - values.add(Long.parseLong(valuesStr[i])); - break; - case DOUBLE: - values.add(Double.parseDouble(valuesStr[i])); - break; - case INT32: - values.add(Integer.parseInt(valuesStr[i])); - break; - case TEXT: - values.add(valuesStr[i]); - break; - case FLOAT: - values.add(Float.parseFloat(valuesStr[i])); - break; - case BOOLEAN: - values.add(Boolean.parseBoolean(valuesStr[i])); - break; - } - } - deviceIds.add(device); - times.add(time); - measurementsList.add(measurements); - typesList.add(types); - valuesList.add(values); - } - pool.insertRecords(deviceIds, times, measurementsList, typesList, valuesList); - ``` - + } + deviceIds.add(device); + times.add(time); + measurementsList.add(measurements); + typesList.add(types); + valuesList.add(values); +} +pool.insertRecords(deviceIds, times, measurementsList, typesList, valuesList); +``` diff --git a/src/UserGuide/latest/API/Programming-MQTT.md b/src/UserGuide/latest/API/Programming-MQTT.md index 5bbb610cf..d33270105 100644 --- a/src/UserGuide/latest/API/Programming-MQTT.md +++ b/src/UserGuide/latest/API/Programming-MQTT.md @@ -1,23 +1,21 @@ + # MQTT Protocol [MQTT](http://mqtt.org/) is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol. @@ -27,53 +25,60 @@ It is useful for connections with remote locations where a small code footprint IoTDB supports the MQTT v3.1(an OASIS Standard) protocol. IoTDB server includes a built-in MQTT service that allows remote devices send messages into IoTDB server directly. - - +![](https://alioss.timecho.com/docs/img/github/78357432-0c71cf80-75e4-11ea-98aa-c43a54d469ce.png) ## Built-in MQTT Service + The Built-in MQTT Service provide the ability of direct connection to IoTDB through MQTT. It listen the publish messages from MQTT clients - and then write the data into storage immediately. -The MQTT topic corresponds to IoTDB timeseries. +and then write the data into storage immediately. +The MQTT topic corresponds to IoTDB timeseries. 
The message payload can be formatted into events by a `PayloadFormatter`, which is loaded via Java SPI; the default implementation is `JSONPayloadFormatter`.
The default `json` formatter supports the following two JSON formats, as well as JSON arrays of them. The following is an MQTT message payload example:

```json
- {
-   "device":"root.sg.d1",
-   "timestamp":1586076045524,
-   "measurements":["s1","s2"],
-   "values":[0.530635,0.530635]
- }
+{
+  "device": "root.sg.d1",
+  "timestamp": 1586076045524,
+  "measurements": ["s1", "s2"],
+  "values": [0.530635, 0.530635]
+}
```
+
or
+
```json
- {
-   "device":"root.sg.d1",
-   "timestamps":[1586076045524,1586076065526],
-   "measurements":["s1","s2"],
-   "values":[[0.530635,0.530635], [0.530655,0.530695]]
- }
+{
+  "device": "root.sg.d1",
+  "timestamps": [1586076045524, 1586076065526],
+  "measurements": ["s1", "s2"],
+  "values": [
+    [0.530635, 0.530635],
+    [0.530655, 0.530695]
+  ]
+}
```
+
or a JSON array of the above two.

## MQTT Configurations
+
The IoTDB MQTT service loads configurations from `${IOTDB_HOME}/${IOTDB_CONF}/iotdb-system.properties` by default.

Configurations are as follows:

-| NAME | DESCRIPTION | DEFAULT |
-| ------------- |:-------------:|:------:|
-| enable_mqtt_service | whether to enable the mqtt service | false |
-| mqtt_host | the mqtt service binding host | 127.0.0.1 |
-| mqtt_port | the mqtt service binding port | 1883 |
-| mqtt_handler_pool_size | the handler pool size for handing the mqtt messages | 1 |
-| mqtt_payload_formatter | the mqtt message payload formatter | json |
-| mqtt_max_message_size | the max mqtt message size in byte| 1048576 |
-
+| NAME | DESCRIPTION | DEFAULT |
+| ---------------------- | :---------------------------------------------------: | :-------: |
+| enable_mqtt_service | whether to enable the MQTT service | false |
+| mqtt_host | the MQTT service binding host | 127.0.0.1 |
+| mqtt_port | the MQTT service binding port | 1883 |
+| mqtt_handler_pool_size | the handler pool size for handling the MQTT messages | 1 |
+| mqtt_payload_formatter | the MQTT message payload formatter | json |
+| mqtt_max_message_size | the max MQTT message size in bytes | 1048576 |

## Coding Examples
+
The following is an example in which an MQTT client sends messages to the IoTDB server.

```java
@@ -103,81 +108,82 @@ connection.disconnect();
```

## Customize your MQTT Message Format

-If you do not like the above Json format, you can customize your MQTT Message format by just writing several lines
+If you do not like the above JSON format, you can customize your MQTT message format by just writing several lines
of code. An example can be found in the `example/mqtt-customize` project.

Steps:
+
1. Create a Java project, and add the dependency:

-```xml
-<dependency>
-    <groupId>org.apache.iotdb</groupId>
-    <artifactId>iotdb-server</artifactId>
-    <version>1.1.0-SNAPSHOT</version>
-</dependency>
-```
+
+   ```xml
+   <dependency>
+       <groupId>org.apache.iotdb</groupId>
+       <artifactId>iotdb-server</artifactId>
+       <version>1.1.0-SNAPSHOT</version>
+   </dependency>
+   ```
+
2. 
Define your implementation which implements `org.apache.iotdb.db.protocol.mqtt.PayloadFormatter` -e.g., + e.g., + + ```java + package org.apache.iotdb.mqtt.server; + + import io.netty.buffer.ByteBuf; + import org.apache.iotdb.db.protocol.mqtt.Message; + import org.apache.iotdb.db.protocol.mqtt.PayloadFormatter; + + import java.nio.charset.StandardCharsets; + import java.util.ArrayList; + import java.util.Arrays; + import java.util.List; + + public class CustomizedJsonPayloadFormatter implements PayloadFormatter { + + @Override + public List format(ByteBuf payload) { + // Suppose the payload is a json format + if (payload == null) { + return null; + } + + String json = payload.toString(StandardCharsets.UTF_8); + // parse data from the json and generate Messages and put them into List ret + List ret = new ArrayList<>(); + // this is just an example, so we just generate some Messages directly + for (int i = 0; i < 2; i++) { + long ts = i; + Message message = new Message(); + message.setDevice("d" + i); + message.setTimestamp(ts); + message.setMeasurements(Arrays.asList("s1", "s2")); + message.setValues(Arrays.asList("4.0" + i, "5.0" + i)); + ret.add(message); + } + return ret; + } + + @Override + public String getName() { + // set the value of mqtt_payload_formatter in iotdb-system.properties as the following string: + return "CustomizedJson"; + } + } + ``` -```java -package org.apache.iotdb.mqtt.server; - -import io.netty.buffer.ByteBuf; -import org.apache.iotdb.db.protocol.mqtt.Message; -import org.apache.iotdb.db.protocol.mqtt.PayloadFormatter; - -import java.nio.charset.StandardCharsets; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.List; - -public class CustomizedJsonPayloadFormatter implements PayloadFormatter { - - @Override - public List format(ByteBuf payload) { - // Suppose the payload is a json format - if (payload == null) { - return null; - } - - String json = payload.toString(StandardCharsets.UTF_8); - // parse data from the json and generate Messages and put them into List ret - List ret = new ArrayList<>(); - // this is just an example, so we just generate some Messages directly - for (int i = 0; i < 2; i++) { - long ts = i; - Message message = new Message(); - message.setDevice("d" + i); - message.setTimestamp(ts); - message.setMeasurements(Arrays.asList("s1", "s2")); - message.setValues(Arrays.asList("4.0" + i, "5.0" + i)); - ret.add(message); - } - return ret; - } - - @Override - public String getName() { - // set the value of mqtt_payload_formatter in iotdb-system.properties as the following string: - return "CustomizedJson"; - } -} -``` 3. modify the file in `src/main/resources/META-INF/services/org.apache.iotdb.db.protocol.mqtt.PayloadFormatter`: - clean the file and put your implementation class name into the file. - In this example, the content is: `org.apache.iotdb.mqtt.server.CustomizedJsonPayloadFormatter` + clean the file and put your implementation class name into the file. + In this example, the content is: `org.apache.iotdb.mqtt.server.CustomizedJsonPayloadFormatter` 4. compile your implementation as a jar file: `mvn package -DskipTests` - Then, in your server: + 1. Create ${IOTDB_HOME}/ext/mqtt/ folder, and put the jar into this folder. 2. Update configuration to enable MQTT service. (`enable_mqtt_service=true` in `conf/iotdb-system.properties`) 3. 
Set the value of `mqtt_payload_formatter` in `conf/iotdb-system.properties` as the value of getName() in your implementation - , in this example, the value is `CustomizedJson` + , in this example, the value is `CustomizedJson` 4. Launch the IoTDB server. 5. Now IoTDB will use your implementation to parse the MQTT message. -More: the message format can be anything you want. For example, if it is a binary format, -just use `payload.forEachByte()` or `payload.array` to get bytes content. - - - +More: the message format can be anything you want. For example, if it is a binary format, +just use `payload.forEachByte()` or `payload.array` to get bytes content. diff --git a/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md b/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md index 35c7964cd..e67f1f0d8 100644 --- a/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md +++ b/src/UserGuide/latest/API/Programming-NodeJS-Native-API.md @@ -1,71 +1,72 @@ # Node.js Native API -Apache IoTDB uses Thrift as a cross-language RPC-framework so access to IoTDB can be achieved through the interfaces provided by Thrift. +Apache IoTDB uses Thrift as a cross-language RPC-framework so access to IoTDB can be achieved through the interfaces provided by Thrift. This document will introduce how to generate a native Node.js interface that can be used to access IoTDB. ## Dependents - * JDK >= 1.8 - * Node.js >= 16.0.0 - * Linux、Macos or like unix - * Windows+bash +- JDK >= 1.8 +- Node.js >= 16.0.0 +- Linux、Macos or like unix +- Windows+bash ## Generate the Node.js native interface 1. Find the `pom.xml` file in the root directory of the IoTDB source code folder. 2. Open the `pom.xml` file and find the following content: + ```xml - - generate-thrift-sources-python - generate-sources - - compile - - - py - ${project.build.directory}/generated-sources-python/ - - + + generate-thrift-sources-python + generate-sources + + compile + + + py + ${project.build.directory}/generated-sources-python/ + + ``` + 3. Duplicate this block and change the `id`, `generator` and `outputDirectory` to this: + ```xml - - generate-thrift-sources-nodejs - generate-sources - - compile - - - js:node - ${project.build.directory}/generated-sources-nodejs/ - - + + generate-thrift-sources-nodejs + generate-sources + + compile + + + js:node + ${project.build.directory}/generated-sources-nodejs/ + + ``` + 4. In the root directory of the IoTDB source code folder,run `mvn clean generate-sources`. -This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. -The newly generated JavaScript sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs` in the various modules of the `iotdb-protocol` module. + This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. + The newly generated JavaScript sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-nodejs` in the various modules of the `iotdb-protocol` module. 
## Using the Node.js native interface @@ -73,7 +74,7 @@ Simply copy the files in `iotdb/iotdb-protocol/thrift/target/generated-sources-n ## rpc interface -``` +```cpp // open a session TSOpenSessionResp openSession(1:TSOpenSessionReq req); @@ -89,7 +90,7 @@ TSStatus executeBatchStatement(1:TSExecuteBatchStatementReq req); // execute query SQL statement TSExecuteStatementResp executeQueryStatement(1:TSExecuteStatementReq req); -// execute insert, delete and update SQL statement +// execute insert, delete and update SQL statement TSExecuteStatementResp executeUpdateStatement(1:TSExecuteStatementReq req); // fetch next query result @@ -98,7 +99,7 @@ TSFetchResultsResp fetchResults(1:TSFetchResultsReq req) // fetch meta data TSFetchMetadataResp fetchMetadata(1:TSFetchMetadataReq req) -// cancel a query +// cancel a query TSStatus cancelOperation(1:TSCancelOperationReq req); // close a query dataset @@ -178,4 +179,4 @@ TSExecuteStatementResp executeRawDataQuery(1:TSRawDataQueryReq req); // request a statement id from server i64 requestStatementId(1:i64 sessionId); -``` \ No newline at end of file +``` diff --git a/src/UserGuide/latest/API/Programming-ODBC.md b/src/UserGuide/latest/API/Programming-ODBC.md index 8e0d74852..51ac098ba 100644 --- a/src/UserGuide/latest/API/Programming-ODBC.md +++ b/src/UserGuide/latest/API/Programming-ODBC.md @@ -1,146 +1,155 @@ # ODBC + With IoTDB JDBC, IoTDB can be accessed using the ODBC-JDBC bridge. ## Dependencies -* IoTDB-JDBC's jar-with-dependency package -* ODBC-JDBC bridge (e.g. ZappySys JDBC Bridge) + +- IoTDB-JDBC's jar-with-dependency package +- ODBC-JDBC bridge (e.g. ZappySys JDBC Bridge) ## Deployment + ### Preparing JDBC package + Download the source code of IoTDB, and execute the following command in root directory: + ```shell mvn clean package -pl iotdb-client/jdbc -am -DskipTests -P get-jar-with-dependencies ``` + Then, you can see the output `iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar` under `iotdb-client/jdbc/target` directory. ### Preparing ODBC-JDBC Bridge -*Note: Here we only provide one kind of ODBC-JDBC bridge as the instance. Readers can use other ODBC-JDBC bridges to access IoTDB with the IOTDB-JDBC.* -1. **Download Zappy-Sys ODBC-JDBC Bridge**: - Enter the https://zappysys.com/products/odbc-powerpack/odbc-jdbc-bridge-driver/ website, and click "download". - ![ZappySys_website.jpg](https://alioss.timecho.com/upload/ZappySys_website.jpg) +_Note: Here we only provide one kind of ODBC-JDBC bridge as the instance. Readers can use other ODBC-JDBC bridges to access IoTDB with the IOTDB-JDBC._ + +1. **Download Zappy-Sys ODBC-JDBC Bridge**: + Enter the website, and click "download". + + ![ZappySys_website.jpg](https://alioss.timecho.com/upload/ZappySys_website.jpg) 2. **Prepare IoTDB**: Set up IoTDB cluster, and write a row of data arbitrarily. - ```sql - IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) - ``` + + ```sql + IoTDB > insert into root.ln.wf02.wt02(timestamp,status) values(1,true) + ``` 3. **Deploy and Test the Bridge**: - 1. Open ODBC Data Sources(32/64 bit), depending on the bits of Windows. One possible position is `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools`. - ![ODBC_ADD_EN.jpg](https://alioss.timecho.com/upload/ODBC_ADD_EN.jpg) + 1. Open ODBC Data Sources(32/64 bit), depending on the bits of Windows. One possible position is `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools`. - 2. Click on "add" and select ZappySys JDBC Bridge. 
+ ![ODBC_ADD_EN.jpg](https://alioss.timecho.com/upload/ODBC_ADD_EN.jpg)

- 2. Click on "add" and select ZappySys JDBC Bridge. 

- ![ODBC_CREATE_EN.jpg](https://alioss.timecho.com/upload/ODBC_CREATE_EN.jpg)

+ 2. Click on "add" and select ZappySys JDBC Bridge.

- 3. Fill in the following settings:

+ ![ODBC_CREATE_EN.jpg](https://alioss.timecho.com/upload/ODBC_CREATE_EN.jpg)

- | Property | Content | Example |
- |---------------------|-----------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
- | Connection String | jdbc:iotdb://\:\/ | jdbc:iotdb://127.0.0.1:6667/ |
- | Driver Class | org.apache.iotdb.jdbc.IoTDBDriver | org.apache.iotdb.jdbc.IoTDBDriver |
- | JDBC driver file(s) | The path of IoTDB JDBC jar-with-dependencies | C:\Users\13361\Documents\GitHub\iotdb\iotdb-client\jdbc\target\iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar |
- | User name | IoTDB's user name | root |
- | User password | IoTDB's password | root |

+ 3. Fill in the following settings:

- ![ODBC_CONNECTION.png](https://alioss.timecho.com/upload/ODBC_CONNECTION.png)

+ | Property | Content | Example |
+ | ------------------- | ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
+ | Connection String | jdbc:iotdb://\<host\>:\<port\>/ | jdbc:iotdb://127.0.0.1:6667/ |
+ | Driver Class | org.apache.iotdb.jdbc.IoTDBDriver | org.apache.iotdb.jdbc.IoTDBDriver |
+ | JDBC driver file(s) | The path of the IoTDB JDBC jar-with-dependencies | C:\Users\13361\Documents\GitHub\iotdb\iotdb-client\jdbc\target\iotdb-jdbc-1.3.2-SNAPSHOT-jar-with-dependencies.jar |
+ | User name | IoTDB's user name | root |
+ | User password | IoTDB's password | root |

- 4. Click on "Test Connection" button, and a "Test Connection: SUCCESSFUL" should appear.

+ ![ODBC_CONNECTION.png](https://alioss.timecho.com/upload/ODBC_CONNECTION.png)

- ![ODBC_CONFIG_EN.jpg](https://alioss.timecho.com/upload/ODBC_CONFIG_EN.jpg)

+ 4. Click the "Test Connection" button, and a "Test Connection: SUCCESSFUL" message should appear.

- 5. Click the "Preview" button above, and replace the original query text with `select * from root.**`, then click "Preview Data", and the query result should correctly.

+ ![ODBC_CONFIG_EN.jpg](https://alioss.timecho.com/upload/ODBC_CONFIG_EN.jpg)

- ![ODBC_TEST.jpg](https://alioss.timecho.com/upload/ODBC_TEST.jpg)

+ 5. Click the "Preview" button above, and replace the original query text with `select * from root.**`, then click "Preview Data"; the query result should be displayed correctly.
+
+ ![ODBC_TEST.jpg](https://alioss.timecho.com/upload/ODBC_TEST.jpg)

4. **Operate IoTDB's data with ODBC**: After correct deployment, you can use Microsoft's ODBC library to operate IoTDB's data.
Here's an example written in C#: - ```C# - using System.Data.Odbc; - - // Get a connection - var dbConnection = new OdbcConnection("DSN=ZappySys JDBC Bridge"); - dbConnection.Open(); - - // Execute the write commands to prepare data - var dbCommand = dbConnection.CreateCommand(); - dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s1) values(1715670861634, 1)"; - dbCommand.ExecuteNonQuery(); - dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s2) values(1715670861634, true)"; - dbCommand.ExecuteNonQuery(); - dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s3) values(1715670861634, 3.1)"; - dbCommand.ExecuteNonQuery(); - - // Execute the read command - dbCommand.CommandText = "SELECT * FROM root.Keller.Flur.Energieversorgung"; - var dbReader = dbCommand.ExecuteReader(); - - // Write the output header - var fCount = dbReader.FieldCount; - Console.Write(":"); - for(var i = 0; i < fCount; i++) - { - var fName = dbReader.GetName(i); - Console.Write(fName + ":"); - } - Console.WriteLine(); - - // Output the content - while (dbReader.Read()) - { - Console.Write(":"); - for(var i = 0; i < fCount; i++) - { - var fieldType = dbReader.GetFieldType(i); - switch (fieldType.Name) - { - case "DateTime": - var dateTime = dbReader.GetInt64(i); - Console.Write(dateTime + ":"); - break; - case "Double": - if (dbReader.IsDBNull(i)) - { - Console.Write("null:"); - } - else - { - var fValue = dbReader.GetDouble(i); - Console.Write(fValue + ":"); - } - break; - default: - Console.Write(fieldType.Name + ":"); - break; - } - } - Console.WriteLine(); - } - - // Shut down gracefully - dbReader.Close(); - dbCommand.Dispose(); - dbConnection.Close(); - ``` + + ```C# + using System.Data.Odbc; + + // Get a connection + var dbConnection = new OdbcConnection("DSN=ZappySys JDBC Bridge"); + dbConnection.Open(); + + // Execute the write commands to prepare data + var dbCommand = dbConnection.CreateCommand(); + dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s1) values(1715670861634, 1)"; + dbCommand.ExecuteNonQuery(); + dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s2) values(1715670861634, true)"; + dbCommand.ExecuteNonQuery(); + dbCommand.CommandText = "insert into root.Keller.Flur.Energieversorgung(time, s3) values(1715670861634, 3.1)"; + dbCommand.ExecuteNonQuery(); + + // Execute the read command + dbCommand.CommandText = "SELECT * FROM root.Keller.Flur.Energieversorgung"; + var dbReader = dbCommand.ExecuteReader(); + + // Write the output header + var fCount = dbReader.FieldCount; + Console.Write(":"); + for(var i = 0; i < fCount; i++) + { + var fName = dbReader.GetName(i); + Console.Write(fName + ":"); + } + Console.WriteLine(); + + // Output the content + while (dbReader.Read()) + { + Console.Write(":"); + for(var i = 0; i < fCount; i++) + { + var fieldType = dbReader.GetFieldType(i); + switch (fieldType.Name) + { + case "DateTime": + var dateTime = dbReader.GetInt64(i); + Console.Write(dateTime + ":"); + break; + case "Double": + if (dbReader.IsDBNull(i)) + { + Console.Write("null:"); + } + else + { + var fValue = dbReader.GetDouble(i); + Console.Write(fValue + ":"); + } + break; + default: + Console.Write(fieldType.Name + ":"); + break; + } + } + Console.WriteLine(); + } + + // Shut down gracefully + dbReader.Close(); + dbCommand.Dispose(); + dbConnection.Close(); + ``` + This program can write data into IoTDB, and query the data we have just written. 
diff --git a/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md b/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md
index 703b47c68..7459c19b7 100644
--- a/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md
+++ b/src/UserGuide/latest/API/Programming-OPC-UA_timecho.md
@@ -1,22 +1,19 @@
 # OPC UA Protocol
@@ -29,81 +26,81 @@ OPC UA is a technical specification used in the automation field for communicati

- **Client/Server Mode**: In this mode, IoTDB's stream processing engine establishes a connection with the OPC UA Server via an OPC UA Sink. The OPC UA Server maintains data within its Address Space, from which IoTDB can request and retrieve data. Additionally, other OPC UA Clients can access the data on the server.
+::: center
+
+:::

- Features:

-  - OPC UA will organize the device information received from Sink into folders under the Objects folder according to a tree model. 
+  - OPC UA will organize the device information received from the Sink into folders under the Objects folder, following a tree model.

-  - Each measurement point is recorded as a variable node and the latest value in the current database is recorded. 
+  - Each measurement point is recorded as a variable node, which holds the latest value currently in the database.

### OPC UA Pub/Sub Mode

- **Pub/Sub Mode**: In this mode, IoTDB's stream processing engine sends data change events to the OPC UA Server through an OPC UA Sink. These events are published to the server's message queue and managed through Event Nodes. Other OPC UA Clients can subscribe to these Event Nodes to receive notifications upon data changes.
+::: center + + + +::: - Features: - + - Each measurement point is wrapped as an Event Node in OPC UA. - - The relevant fields and their meanings are as follows: - | Field | Meaning | Type (Milo) | Example | - | :--------- | :--------------- | :------------ | :-------------------- | - | Time | Timestamp | DateTime | 1698907326198 | - | SourceName | Full path of the measurement point | String | root.test.opc.sensor0 | - | SourceNode | Data type of the measurement point | NodeId | Int32 | - | Message | Data | LocalizedText | 3.0 | + | Field | Meaning | Type (Milo) | Example | + | :--------- | :--------------------------------- | :------------ | :-------------------- | + | Time | Timestamp | DateTime | 1698907326198 | + | SourceName | Full path of the measurement point | String | root.test.opc.sensor0 | + | SourceNode | Data type of the measurement point | NodeId | Int32 | + | Message | Data | LocalizedText | 3.0 | - Events are only sent to clients that are already listening; if a client is not connected, the Event will be ignored. - ## IoTDB OPC Server Startup method ### Syntax The syntax for creating the Sink is as follows: - -```SQL -create pipe p1 - with source (...) - with processor (...) - with sink ('sink' = 'opc-ua-sink', - 'sink.opcua.tcp.port' = '12686', - 'sink.opcua.https.port' = '8443', - 'sink.user' = 'root', - 'sink.password' = 'root', +```sql +create pipe p1 + with source (...) + with processor (...) + with sink ('sink' = 'opc-ua-sink', + 'sink.opcua.tcp.port' = '12686', + 'sink.opcua.https.port' = '8443', + 'sink.user' = 'root', + 'sink.password' = 'root', 'sink.opcua.security.dir' = '...' ) ``` ### Parameters -| key | value | value range | required or not | default value | -| :------------------------------ | :----------------------------------------------------------- | :------------------------------------- | :------- | :------------- | -| sink | OPC UA SINK | String: opc-ua-sink | Required | | -| sink.opcua.model | OPC UA model used | String: client-server / pub-sub | Optional | client-server | -| sink.opcua.tcp.port | OPC UA's TCP port | Integer: [0, 65536] | Optional | 12686 | -| sink.opcua.https.port | OPC UA's HTTPS port | Integer: [0, 65536] | Optional | 8443 | -| sink.opcua.security.dir | Directory for OPC UA's keys and certificates | String: Path, supports absolute and relative directories | Optional | Opc_security folder/in the conf directory of the DataNode related to iotdb
If there is no conf directory for iotdb (such as launching DataNode in IDEA), it will be the iotdb_opc_Security folder/in the user's home directory |
-| sink.opcua.enable-anonymous-access | Whether OPC UA allows anonymous access | Boolean | Optional | true |
-| sink.user | User for OPC UA, specified in the configuration | String | Optional | root |
-| sink.password | Password for OPC UA, specified in the configuration | String | Optional | root |
+| key | value | value range | required or not | default value |
+| :--------------------------------- | :-------------------------------------------------- | :-------------------------------------------------------- | :-------------- | :------------ |
+| sink | OPC UA SINK | String: opc-ua-sink | Required | |
+| sink.opcua.model | OPC UA model used | String: client-server / pub-sub | Optional | client-server |
+| sink.opcua.tcp.port | OPC UA's TCP port | Integer: \[0, 65536] | Optional | 12686 |
+| sink.opcua.https.port | OPC UA's HTTPS port | Integer: \[0, 65536] | Optional | 8443 |
+| sink.opcua.security.dir | Directory for OPC UA's keys and certificates | String: Path, supports absolute and relative directories | Optional | The `opc_security` folder under the conf directory of the related DataNode; if IoTDB has no conf directory (e.g., when a DataNode is launched from IDEA), the `iotdb_opc_security` folder under the user's home directory |
+| sink.opcua.enable-anonymous-access | Whether OPC UA allows anonymous access | Boolean | Optional | true |
+| sink.user | User for OPC UA, specified in the configuration | String | Optional | root |
+| sink.password | Password for OPC UA, specified in the configuration | String | Optional | root |

### Example

```Bash
-create pipe p1 
+create pipe p1
 with sink ('sink' = 'opc-ua-sink',
- 'sink.user' = 'root', 
+ 'sink.user' = 'root',
 'sink.password' = 'root');
start pipe p1;
```

@@ -116,9 +113,9 @@ start pipe p1;

3. **Multiple DataNodes may have scattered sending/conflict issues**:

-   - For IoTDB clusters with multiple dataRegions and scattered across different DataNode IPs, data will be sent in a dispersed manner on the leaders of the dataRegions. The client needs to listen to the configuration ports of the DataNode IP separately.。
+   - For IoTDB clusters whose dataRegions are scattered across different DataNode IPs, data will be sent in a dispersed manner by the leaders of the dataRegions, so the client needs to listen on the configured port of each DataNode IP separately.

-   - Suggest using this OPC UA server under 1C1D.
+   - It is suggested to use this OPC UA server with a 1C1D deployment.

4. **Does not support deleting data and modifying measurement point types:** In Client Server mode, OPC UA cannot delete data or change data type settings. In Pub Sub mode, if data is deleted, information cannot be pushed to the client.

@@ -128,7 +125,7 @@ start pipe p1;

#### Preparation Work

-1. Take UAExpert client as an example, download the UAExpert client: https://www.unified-automation.com/downloads/opc-ua-clients.html
+1. Taking the UAExpert client as an example, download UAExpert from <https://www.unified-automation.com/downloads/opc-ua-clients.html>.

2. Install UAExpert and fill in your own certificate information.

@@ -136,43 +133,53 @@

1. Use the following SQL to create and start the OPC UA Sink in client-server mode. For detailed syntax, please refer to: [IoTDB OPC Server Syntax](#syntax)

-```SQL
-create pipe p1 with sink ('sink'='opc-ua-sink');
-```
+   ```sql
+   create pipe p1 with sink ('sink'='opc-ua-sink');
+   ```

2. Write some data.

-```SQL
-insert into root.test.db(time, s2) values(now(), 2)
-```
+   ```sql
+   insert into root.test.db(time, s2) values(now(), 2)
+   ```

-​ The metadata is automatically created and enabled here.
+   The metadata is automatically created and enabled here.

3. Configure the connection to IoTDB in UAExpert, where the password should be set to the one defined in the sink.password parameter (using the default password "root" as an example):
+ ::: center + + + + ::: + + ::: center + + -
+ ::: 4. After trusting the server's certificate, you can see the written data in the Objects folder on the left. -
+ ::: center + + + + ::: + + ::: center + + -
+ ::: 5. You can drag the node on the left to the center and display the latest value of that node: -
+ ::: center + + + + ::: ### Pub / Sub Mode @@ -193,64 +200,77 @@ The steps are as follows: 1. Start IoTDB and write some data. -```SQL -insert into root.a.b(time, c, d) values(now(), 1, 2); -``` + ```sql + insert into root.a.b(time, c, d) values(now(), 1, 2); + ``` -​ The metadata is automatically created and enabled here. + ​The metadata is automatically created and enabled here. 2. Use the following SQL to create and start the OPC UA Sink in Pub-Sub mode. For detailed syntax, please refer to: [IoTDB OPC Server Syntax](#syntax) -```SQL -create pipe p1 with sink ('sink'='opc-ua-sink', - 'sink.opcua.model'='pub-sub'); -start pipe p1; -``` + ```sql + create pipe p1 with sink ('sink'='opc-ua-sink', + 'sink.opcua.model'='pub-sub'); + start pipe p1; + ``` -​ At this point, you can see that the opc certificate-related directory has been created under the server's conf directory. + ​ At this point, you can see that the opc certificate-related directory has been created under the server's conf directory. -
+ ::: center + + + + ::: 3. Run the Client connection directly; the Client's certificate will be rejected by the server. -
+ ::: center + + + + ::: 4. Go to the server's sink.opcua.security.dir directory, then to the pki's rejected directory, where the Client's certificate should have been generated. -
+ ::: center + + + + ::: 5. Move (not copy) the client's certificate into (not into a subdirectory of) the trusted directory's certs folder in the same directory. -
+ ::: center + + + + ::: 6. Open the Client connection again; the server's certificate should now be rejected by the Client. -
+ ::: center + + + + ::: 7. Go to the client's /client/security directory, then to the pki's rejected directory, and move the server's certificate into (not into a subdirectory of) the trusted directory. -
+ ::: center + + + + ::: 8. Open the Client, and now the two-way trust is successful, and the Client can connect to the server. 9. Write data to the server, and the Client will print out the received data. -
+ ::: center + + + ::: ### Notes @@ -259,4 +279,4 @@ start pipe p1; 2. **No Need to Operate Root Directory Certificates:** During the certificate operation process, there is no need to operate the `iotdb-server.pfx` certificate under the IoTDB security root directory and the `example-client.pfx` directory under the client security directory. When the Client and Server connect bidirectionally, they will send the root directory certificate to each other. If it is the first time the other party sees this certificate, it will be placed in the reject dir. If the certificate is in the trusted/certs, then the other party can trust it. 3. **It is Recommended to Use Java 17+:** -In JVM 8 versions, there may be a key length restriction, resulting in an "Illegal key size" error. For specific versions (such as jdk.1.8u151+), you can add `Security.`*`setProperty`*`("crypto.policy", "unlimited");`; in the create client of ClientExampleRunner to solve this, or you can download the unlimited package `local_policy.jar` and `US_export_policy ` to replace the packages in the `JDK/jre/lib/security `. Download link:https://www.oracle.com/java/technologies/javase-jce8-downloads.html。 + In JVM 8 versions, there may be a key length restriction, resulting in an "Illegal key size" error. For specific versions (such as jdk.1.8u151+), you can add `Security.`_`setProperty`_`("crypto.policy", "unlimited");`; in the create client of ClientExampleRunner to solve this, or you can download the unlimited package `local_policy.jar` and `US_export_policy` to replace the packages in the `JDK/jre/lib/security`. Download link: . diff --git a/src/UserGuide/latest/API/Programming-Python-Native-API.md b/src/UserGuide/latest/API/Programming-Python-Native-API.md index 7e48badac..370522c1c 100644 --- a/src/UserGuide/latest/API/Programming-Python-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Python-Native-API.md @@ -1,22 +1,19 @@ # Python Native API @@ -25,8 +22,6 @@ You have to install thrift (>=0.13) before using the package. - - ## How to use (Example) First, download the package: `pip3 install apache-iotdb` @@ -54,7 +49,7 @@ session.close() ## Initialization -* Initialize a Session +- Initialize a Session ```python session = Session( @@ -68,7 +63,7 @@ session = Session( ) ``` -* Initialize a Session to connect multiple nodes +- Initialize a Session to connect multiple nodes ```python session = Session.init_from_node_urls( @@ -81,7 +76,7 @@ session = Session.init_from_node_urls( ) ``` -* Open a session, with a parameter to specify whether to enable RPC compression +- Open a session, with a parameter to specify whether to enable RPC compression ```python session.open(enable_rpc_compression=False) @@ -89,11 +84,12 @@ session.open(enable_rpc_compression=False) Notice: this RPC compression status of client must comply with that of IoTDB server -* Close a Session +- Close a Session ```python session.close() ``` + ## Managing Session through SessionPool Utilizing SessionPool to manage sessions eliminates the need to worry about session reuse. When the number of session connections reaches the maximum capacity of the pool, requests for acquiring a session will be blocked, and you can set the blocking wait time through parameters. After using a session, it should be returned to the SessionPool using the `putBack` method for proper management. 
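Before the detailed examples below, here is a minimal end-to-end sketch of the pattern just described. The `PoolConfig` arguments mirror the ones shown below; the import path and the `get_session`/`put_back` method names are assumptions based on the client-py package and may need adjusting to your client version.

```python
from iotdb.SessionPool import PoolConfig, SessionPool  # import path assumed

# Single-node pool with default credentials (adjust to your deployment).
pool_config = PoolConfig(node_urls=["127.0.0.1:6667"], user_name="root",
                         password="root", fetch_size=1024,
                         time_zone="UTC+8", max_retry=3)
session_pool = SessionPool(pool_config, 5, 3000)  # max_pool_size, wait_timeout_in_ms

session = session_pool.get_session()  # blocks (up to the timeout) if the pool is exhausted
try:
    session.execute_non_query_statement("CREATE DATABASE root.pool_demo")
finally:
    session_pool.put_back(session)  # always return the session so it can be reused

session_pool.close()
```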
@@ -110,7 +106,9 @@ wait_timeout_in_ms = 3000 # # Create the connection pool session_pool = SessionPool(pool_config, max_pool_size, wait_timeout_in_ms) ``` -### Create a SessionPool using distributed nodes. + +### Create a SessionPool using distributed nodes + ```python pool_config = PoolConfig(node_urls=node_urls=["127.0.0.1:6667", "127.0.0.1:6668", "127.0.0.1:6669"], user_name=username, password=password, fetch_size=1024, @@ -118,6 +116,7 @@ pool_config = PoolConfig(node_urls=node_urls=["127.0.0.1:6667", "127.0.0.1:6668" max_pool_size = 5 wait_timeout_in_ms = 3000 ``` + ### Acquiring a session through SessionPool and manually calling PutBack after use ```python @@ -136,33 +135,34 @@ session_pool.close() ### Database Management -* CREATE DATABASE +- CREATE DATABASE ```python session.set_storage_group(group_name) ``` -* Delete one or several databases +- Delete one or several databases ```python session.delete_storage_group(group_name) session.delete_storage_groups(group_name_lst) ``` + ### Timeseries Management -* Create one or multiple timeseries +- Create one or multiple timeseries ```python session.create_time_series(ts_path, data_type, encoding, compressor, props=None, tags=None, attributes=None, alias=None) - + session.create_multi_time_series( ts_path_lst, data_type_lst, encoding_lst, compressor_lst, props_lst=None, tags_lst=None, attributes_lst=None, alias_lst=None ) ``` -* Create aligned timeseries +- Create aligned timeseries ```python session.create_aligned_time_series( @@ -172,13 +172,13 @@ session.create_aligned_time_series( Attention: Alias of measurements are **not supported** currently. -* Delete one or several timeseries +- Delete one or several timeseries ```python session.delete_time_series(paths_list) ``` -* Check whether the specific timeseries exists +- Check whether the specific timeseries exists ```python session.check_time_series_exists(path) @@ -190,14 +190,13 @@ session.check_time_series_exists(path) It is recommended to use insertTablet to help improve write efficiency. -* Insert a Tablet,which is multiple rows of a device, each row has the same measurements - * **Better Write Performance** - * **Support null values**: fill the null value with any value, and then mark the null value via BitMap (from v0.13) - +- Insert a Tablet,which is multiple rows of a device, each row has the same measurements + - **Better Write Performance** + - **Support null values**: fill the null value with any value, and then mark the null value via BitMap (from v0.13) We have two implementations of Tablet in Python API. -* Normal Tablet +- Normal Tablet ```python values_ = [ @@ -224,12 +223,14 @@ tablet_ = Tablet( ) session.insert_tablet(tablet_) ``` -* Numpy Tablet + +- Numpy Tablet Comparing with Tablet, Numpy Tablet is using [numpy.ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html) to record data. With less memory footprint and time cost of serialization, the insert performance will be better. **Notice** + 1. time and numerical value columns in Tablet is ndarray 2. recommended to use the specific dtypes to each ndarray, see the example below (if not, the default dtypes are also ok). 
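For instance, a compact Numpy Tablet insert might look like the sketch below; it follows the notice above (ndarray time and value columns with explicit dtypes). The import paths and the big-endian dtype strings are assumptions based on the client-py examples, and `session` is a session opened as shown earlier.

```python
import numpy as np

from iotdb.utils.IoTDBConstants import TSDataType  # import paths assumed
from iotdb.utils.NumpyTablet import NumpyTablet

measurements = ["s1", "s2"]
data_types = [TSDataType.DOUBLE, TSDataType.INT64]

# One ndarray per value column, plus one for the time column,
# each with an explicit (big-endian) dtype.
np_values = [
    np.array([1.1, 2.2, 3.3], dtype=np.dtype(">f8")),  # DOUBLE column
    np.array([10, 20, 30], dtype=np.dtype(">i8")),     # INT64 column
]
np_timestamps = np.array([1, 2, 3], dtype=np.dtype(">i8"))

np_tablet = NumpyTablet("root.sg.d1", measurements, data_types,
                        np_values, np_timestamps)
session.insert_tablet(np_tablet)
```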
@@ -282,19 +283,19 @@ np_tablet_with_none = NumpyTablet( session.insert_tablet(np_tablet_with_none) ``` -* Insert multiple Tablets +- Insert multiple Tablets ```python session.insert_tablets(tablet_lst) ``` -* Insert a Record +- Insert a Record ```python session.insert_record(device_id, timestamp, measurements_, data_types_, values_) ``` -* Insert multiple Records +- Insert multiple Records ```python session.insert_records( @@ -302,10 +303,9 @@ session.insert_records( ) ``` -* Insert multiple Records that belong to the same device. +- Insert multiple Records that belong to the same device. With type info the server has no need to do type inference, which leads a better performance - ```python session.insert_records_of_one_device(device_id, time_list, measurements_list, data_types_list, values_list) ``` @@ -314,7 +314,7 @@ session.insert_records_of_one_device(device_id, time_list, measurements_list, da When the data is of String type, we can use the following interface to perform type inference based on the value of the value itself. For example, if value is "true" , it can be automatically inferred to be a boolean type. If value is "3.2" , it can be automatically inferred as a flout type. Without type information, server has to do type inference, which may cost some time. -* Insert a Record, which contains multiple measurement value of a device at a timestamp +- Insert a Record, which contains multiple measurement value of a device at a timestamp ```python session.insert_str_record(device_id, timestamp, measurements, string_values) @@ -324,36 +324,38 @@ session.insert_str_record(device_id, timestamp, measurements, string_values) The Insert of aligned timeseries uses interfaces like insert_aligned_XXX, and others are similar to the above interfaces: -* insert_aligned_record -* insert_aligned_records -* insert_aligned_records_of_one_device -* insert_aligned_tablet -* insert_aligned_tablets - +- insert_aligned_record +- insert_aligned_records +- insert_aligned_records_of_one_device +- insert_aligned_tablet +- insert_aligned_tablets ## IoTDB-SQL Interface -* Execute query statement +- Execute query statement ```python session.execute_query_statement(sql) ``` -* Execute non query statement +- Execute non query statement ```python session.execute_non_query_statement(sql) ``` -* Execute statement +- Execute statement ```python session.execute_statement(sql) ``` ## Schema Template + ### Create Schema Template + The step for creating a metadata template is as follows + 1. Create the template class 2. Adding MeasurementNode 3. Execute create schema template function @@ -371,70 +373,87 @@ template.add_template(m_node_z) session.create_schema_template(template) ``` + ### Modify Schema Template measurements + Modify measurements in a template, the template must be already created. These are functions that add or delete some measurement nodes. 
-* add node in template + +- add node in template + ```python session.add_measurements_in_template(template_name, measurements_path, data_types, encodings, compressors, is_aligned) ``` -* delete node in template +- delete node in template + ```python session.delete_node_in_template(template_name, path) ``` ### Set Schema Template + ```python session.set_schema_template(template_name, prefix_path) ``` ### Uset Schema Template + ```python session.unset_schema_template(template_name, prefix_path) ``` ### Show Schema Template -* Show all schema templates + +- Show all schema templates + ```python session.show_all_templates() ``` -* Count all measurements in templates + +- Count all measurements in templates + ```python session.count_measurements_in_template(template_name) ``` -* Judge whether the path is measurement or not in templates, This measurement must be in the template +- Judge whether the path is measurement or not in templates, This measurement must be in the template + ```python session.count_measurements_in_template(template_name, path) ``` -* Judge whether the path is exist or not in templates, This path may not belong to the template +- Judge whether the path is exist or not in templates, This path may not belong to the template + ```python session.is_path_exist_in_template(template_name, path) ``` -* Show nodes under in schema template +- Show nodes under in schema template + ```python session.show_measurements_in_template(template_name) ``` -* Show the path prefix where a schema template is set +- Show the path prefix where a schema template is set + ```python session.show_paths_template_set_on(template_name) ``` -* Show the path prefix where a schema template is used (i.e. the time series has been created) +- Show the path prefix where a schema template is used (i.e. the time series has been created) + ```python session.show_paths_template_using_on(template_name) ``` ### Drop Schema Template + Delete an existing metadata template,dropping an already set template is not supported + ```python session.drop_schema_template("template_python") ``` - ## Pandas Support To easily transform a query result to a [Pandas Dataframe](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) @@ -462,12 +481,12 @@ session.close() df = ... ``` - ## IoTDB Testcontainer -The Test Support is based on the lib `testcontainers` (https://testcontainers-python.readthedocs.io/en/latest/index.html) which you need to install in your project if you want to use the feature. +The Test Support is based on the lib `testcontainers` () which you need to install in your project if you want to use the feature. To start (and stop) an IoTDB Database in a Docker container simply do: + ```python class MyTestCase(unittest.TestCase): @@ -484,13 +503,15 @@ by default it will load the image `apache/iotdb:latest`, if you want a specific ## IoTDB DBAPI -IoTDB DBAPI implements the Python DB API 2.0 specification (https://peps.python.org/pep-0249/), which defines a common +IoTDB DBAPI implements the Python DB API 2.0 specification (), which defines a common interface for accessing databases in Python. ### Examples -+ Initialization + +- Initialization The initialized parameters are consistent with the session part (except for the sqlalchemy_mode). 
+ ```python from iotdb.dbapi import connect @@ -501,23 +522,27 @@ password_ = "root" conn = connect(ip, port_, username_, password_,fetch_size=1024,zone_id="UTC+8",sqlalchemy_mode=False) cursor = conn.cursor() ``` -+ simple SQL statement execution + +- simple SQL statement execution + ```python cursor.execute("SELECT ** FROM root") for row in cursor.fetchall(): print(row) ``` -+ execute SQL with parameter +- execute SQL with parameter IoTDB DBAPI supports pyformat style parameters + ```python cursor.execute("SELECT ** FROM root WHERE time < %(time)s",{"time":"2017-11-01T00:08:00.000"}) for row in cursor.fetchall(): print(row) ``` -+ execute SQL with parameter sequences +- execute SQL with parameter sequences + ```python seq_of_parameters = [ {"timestamp": 1, "temperature": 1}, @@ -530,17 +555,21 @@ sql = "insert into root.cursor(timestamp,temperature) values(%(timestamp)s,%(tem cursor.executemany(sql,seq_of_parameters) ``` -+ close the connection and cursor +- close the connection and cursor + ```python cursor.close() conn.close() ``` ## IoTDB SQLAlchemy Dialect (Experimental) + The SQLAlchemy dialect of IoTDB is written to adapt to Apache Superset. This part is still being improved. Please do not use it in the production environment! + ### Mapping of the metadata + The data model used by SQLAlchemy is a relational data model, which describes the relationships between different entities through tables. While the data model of IoTDB is a hierarchical data model, which organizes the data through a tree structure. In order to adapt IoTDB to the dialect of SQLAlchemy, the original data model in IoTDB needs to be reorganized. @@ -554,25 +583,27 @@ The metadata in the IoTDB are: 4. Measurement The metadata in the SQLAlchemy are: + 1. Schema 2. Table 3. Column The mapping relationship between them is: -| The metadata in the SQLAlchemy | The metadata in the IoTDB | -| -------------------- | -------------------------------------------- | -| Schema | Database | -| Table | Path ( from database to entity ) + Entity | -| Column | Measurement | +| The metadata in the SQLAlchemy | The metadata in the IoTDB | +| ------------------------------ | ----------------------------------------- | +| Schema | Database | +| Table | Path ( from database to entity ) + Entity | +| Column | Measurement | The following figure shows the relationship between the two more intuitively: ![sqlalchemy-to-iotdb](https://alioss.timecho.com/docs/img/UserGuide/API/IoTDB-SQLAlchemy/sqlalchemy-to-iotdb.png?raw=true) ### Data type mapping + | data type in IoTDB | data type in SQLAlchemy | -|--------------------|-------------------------| +| ------------------ | ----------------------- | | BOOLEAN | Boolean | | INT32 | Integer | | INT64 | BigInteger | @@ -583,7 +614,7 @@ The following figure shows the relationship between the two more intuitively: ### Example -+ execute statement +- execute statement ```python from sqlalchemy import create_engine @@ -595,7 +626,7 @@ for row in result.fetchall(): print(row) ``` -+ ORM (now only simple queries are supported) +- ORM (now only simple queries are supported) ```python from sqlalchemy import create_engine, Column, Float, BigInteger, MetaData @@ -626,49 +657,39 @@ for row in res: print(row) ``` - ## Developers ### Introduction This is an example of how to connect to IoTDB with python, using the thrift rpc interfaces. Things are almost the same on Windows or Linux, but pay attention to the difference like path separator. - - ### Prerequisites Python3.7 or later is preferred. 
You have to install Thrift (0.11.0 or later) to compile our thrift file into python code. Below is the official tutorial of installation, eventually, you should have a thrift executable. -``` -http://thrift.apache.org/docs/install/ -``` + Before starting you need to install `requirements_dev.txt` in your python environment, e.g. by calling + ```shell pip install -r requirements_dev.txt ``` - - ### Compile the thrift library and Debug -In the root of IoTDB's source code folder, run `mvn clean generate-sources -pl iotdb-client/client-py -am`. +In the root of IoTDB's source code folder, run `mvn clean generate-sources -pl iotdb-client/client-py -am`. This will automatically delete and repopulate the folder `iotdb/thrift` with the generated thrift files. This folder is ignored from git and should **never be pushed to git!** **Notice** Do not upload `iotdb/thrift` to the git repo. - - - ### Session Client & Example We packed up the Thrift interface in `client-py/src/iotdb/Session.py` (similar with its Java counterpart), also provided an example file `client-py/src/SessionExample.py` of how to use the session module. please read it carefully. - Or, another simple example: ```python @@ -684,8 +705,6 @@ zone = session.get_time_zone() session.close() ``` - - ### Tests Please add your custom tests in `tests` folder. @@ -694,15 +713,11 @@ To run all defined tests just type `pytest .` in the root folder. **Notice** Some tests need docker to be started on your system as a test instance is started in a docker container using [testcontainers](https://testcontainers-python.readthedocs.io/en/latest/index.html). - - ### Futher Tools [black](https://pypi.org/project/black/) and [flake8](https://pypi.org/project/flake8/) are installed for autoformatting and linting. Both can be run by `black .` or `flake8 .` respectively. - - ## Releasing To do a release just ensure that you have the right set of generated thrift files. @@ -710,23 +725,18 @@ Then run linting and auto-formatting. Then, ensure that all tests work (via `pytest .`). Then you are good to go to do a release! - - ### Preparing your environment First, install all necessary dev dependencies via `pip install -r requirements_dev.txt`. - - ### Doing the Release There is a convenient script `release.sh` to do all steps for a release. Namely, these are -* Remove all transient directories from last release (if exists) -* (Re-)generate all generated sources via mvn -* Run Linting (flake8) -* Run Tests via pytest -* Build -* Release to pypi - +- Remove all transient directories from last release (if exists) +- (Re-)generate all generated sources via mvn +- Run Linting (flake8) +- Run Tests via pytest +- Build +- Release to pypi diff --git a/src/UserGuide/latest/API/Programming-Rust-Native-API.md b/src/UserGuide/latest/API/Programming-Rust-Native-API.md index f58df68fc..4ec73a52b 100644 --- a/src/UserGuide/latest/API/Programming-Rust-Native-API.md +++ b/src/UserGuide/latest/API/Programming-Rust-Native-API.md @@ -1,78 +1,77 @@ # Rust Native API Native API -IoTDB uses Thrift as a cross language RPC framework, so access to IoTDB can be achieved through the interface provided by Thrift. +IoTDB uses Thrift as a cross language RPC framework, so access to IoTDB can be achieved through the interface provided by Thrift. This document will introduce how to generate a native Rust interface that can access IoTDB. 
## Dependents - * JDK >= 1.8 - * Rust >= 1.0.0 - * thrift 0.14.1 - * Linux、Macos or like unix - * Windows+bash +- JDK >= 1.8 +- Rust >= 1.0.0 +- thrift 0.14.1 +- Linux、Macos or like unix +- Windows+bash Thrift (0.14.1 or higher) must be installed to compile Thrift files into Rust code. The following is the official installation tutorial, and in the end, you should receive a Thrift executable file. -``` -http://thrift.apache.org/docs/install/ -``` + ## Compile the Thrift library and generate the Rust native interface 1. Find the `pom.xml` file in the root directory of the IoTDB source code folder. 2. Open the `pom.xml` file and find the following content: + ```xml - - generate-thrift-sources-python - generate-sources - - compile - - - py - ${project.build.directory}/generated-sources-python/ - - + + generate-thrift-sources-python + generate-sources + + compile + + + py + ${project.build.directory}/generated-sources-python/ + + ``` + 3. Duplicate this block and change the `id`, `generator` and `outputDirectory` to this: + ```xml - - generate-thrift-sources-rust - generate-sources - - compile - - - rs - ${project.build.directory}/generated-sources-rust/ - - + + generate-thrift-sources-rust + generate-sources + + compile + + + rs + ${project.build.directory}/generated-sources-rust/ + + ``` + 4. In the root directory of the IoTDB source code folder,run `mvn clean generate-sources`. -This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. -The newly generated Rust sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-rust` in the various modules of the `iotdb-protocol` module. + This command will automatically delete the files in `iotdb/iotdb-protocol/thrift/target` and `iotdb/iotdb-protocol/thrift-commons/target`, and repopulate the folder with the newly generated files. + The newly generated Rust sources will be located in `iotdb/iotdb-protocol/thrift/target/generated-sources-rust` in the various modules of the `iotdb-protocol` module. ## Using the Rust native interface @@ -80,7 +79,7 @@ Copy `iotdb/iotdb-protocol/thrift/target/generated-sources-rust/` and `iotdb/iot ## RPC interface -``` +```cpp // open a session TSOpenSessionResp openSession(1:TSOpenSessionReq req); @@ -96,7 +95,7 @@ TSStatus executeBatchStatement(1:TSExecuteBatchStatementReq req); // execute query SQL statement TSExecuteStatementResp executeQueryStatement(1:TSExecuteStatementReq req); -// execute insert, delete and update SQL statement +// execute insert, delete and update SQL statement TSExecuteStatementResp executeUpdateStatement(1:TSExecuteStatementReq req); // fetch next query result @@ -105,7 +104,7 @@ TSFetchResultsResp fetchResults(1:TSFetchResultsReq req) // fetch meta data TSFetchMetadataResp fetchMetadata(1:TSFetchMetadataReq req) -// cancel a query +// cancel a query TSStatus cancelOperation(1:TSCancelOperationReq req); // close a query dataset diff --git a/src/UserGuide/latest/API/RestServiceV1.md b/src/UserGuide/latest/API/RestServiceV1.md index 738448e87..60840e69d 100644 --- a/src/UserGuide/latest/API/RestServiceV1.md +++ b/src/UserGuide/latest/API/RestServiceV1.md @@ -1,36 +1,34 @@ -# RESTful API V1(Not Recommend) +# RESTful API V1(Not Recommend) + IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks. 
## Enable RESTful Services

RESTful services are disabled by default.

-* Developer
+- Developer

  Find the `IoTDBrestServiceConfig` class under `org.apache.iotdb.db.conf.rest` in the server module, and modify `enableRestService=true`.

-* User
+- User

  Find the `conf/iotdb-system.properties` file under the IoTDB installation directory and set `enable_rest_service` to `true` to enable the module.

@@ -39,6 +37,7 @@ RESTful services are disabled by default.
 ```

## Authentication
+
Except the liveness probe API `/ping`, RESTful services use basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`.

The username used in the following examples is: `root`, and password is: `root`.

And the authorization header is

```
Authorization: Basic cm9vdDpyb290
```

@@ -54,24 +53,26 @@ Authorization: Basic cm9vdDpyb290

  HTTP Status Code:`401`

  HTTP response body:
-  ```json
-  {
-    "code": 600,
-    "message": "WRONG_LOGIN_PASSWORD_ERROR"
-  }
-  ```
+
+  ```json
+  {
+    "code": 600,
+    "message": "WRONG_LOGIN_PASSWORD_ERROR"
+  }
+  ```

- If the `Authorization` header is missing, the following error is returned:

  HTTP Status Code:`401`

  HTTP response body:
-  ```json
-  {
-    "code": 603,
-    "message": "UNINITIALIZED_AUTH_ERROR"
-  }
-  ```
+
+  ```json
+  {
+    "code": 603,
+    "message": "UNINITIALIZED_AUTH_ERROR"
+  }
+  ```

## Interface

@@ -85,7 +86,7 @@ Request path: `http://ip:port/ping`

The user name used in the example is: root, password: root

-Example request: 
+Example request:

```shell
$ curl http://127.0.0.1:18080/ping
```

@@ -98,10 +99,10 @@ Response status codes:

Response parameters:

-|parameter name |parameter type |parameter describe|
-|:--- | :--- | :---|
-|code | integer | status code |
-| message | string | message |
+| parameter name | parameter type | parameter describe |
+| :------------- | :------------- | :----------------- |
+| code | integer | status code |
+| message | string | message |

Sample response:

@@ -137,18 +138,18 @@ Request path: `http://ip:port/rest/v1/query`

Parameter Description:

-| parameter name | parameter type | required | parameter description |
-|----------------| -------------- | -------- | ------------------------------------------------------------ |
-| sql | string | yes | |
-| rowLimit | integer | no | The maximum number of rows in the result set that can be returned by a query.<br>
If this parameter is not set, the `rest_query_default_row_size_limit` of the configuration file will be used as the default value.
When the number of rows in the returned result set exceeds the limit, the status code `411` will be returned. | +| parameter name | parameter type | required | parameter description | +| -------------- | -------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| sql | string | yes | | +| rowLimit | integer | no | The maximum number of rows in the result set that can be returned by a query.
If this parameter is not set, the `rest_query_default_row_size_limit` of the configuration file will be used as the default value.
When the number of rows in the returned result set exceeds the limit, the status code `411` will be returned. | Response parameters: -| parameter name | parameter type | parameter description | -|----------------| -------------- | ------------------------------------------------------------ | -| expressions | array | Array of result set column names for data query, `null` for metadata query | -| columnNames | array | Array of column names for metadata query result set, `null` for data query | -| timestamps | array | Timestamp column, `null` for metadata query | +| parameter name | parameter type | parameter description | +| -------------- | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| expressions | array | Array of result set column names for data query, `null` for metadata query | +| columnNames | array | Array of column names for metadata query result set, `null` for data query | +| timestamps | array | Timestamp column, `null` for metadata query | | values | array | A two-dimensional array, the first dimension has the same length as the result set column name array, and the second dimension array represents a column of the result set | **Examples:** @@ -157,38 +158,24 @@ Tip: Statements like `select * from root.xx.**` are not recommended because thos **Expression query** - ```shell - curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4, s3 + 1 from root.sg27 limit 2"}' http://127.0.0.1:18080/rest/v1/query - ```` +```shell +curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4, s3 + 1 from root.sg27 limit 2"}' http://127.0.0.1:18080/rest/v1/query +``` + Response instance - ```json - { - "expressions": [ - "root.sg27.s3", - "root.sg27.s4", - "root.sg27.s3 + 1" - ], - "columnNames": null, - "timestamps": [ - 1635232143960, - 1635232153960 - ], - "values": [ - [ - 11, - null - ], - [ - false, - true - ], - [ - 12.0, - null - ] - ] - } - ``` + +```json +{ + "expressions": ["root.sg27.s3", "root.sg27.s4", "root.sg27.s3 + 1"], + "columnNames": null, + "timestamps": [1635232143960, 1635232153960], + "values": [ + [11, null], + [false, true], + [12.0, null] + ] +} +``` **Show child paths** @@ -199,16 +186,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "child paths" - ], + "columnNames": ["child paths"], "timestamps": null, - "values": [ - [ - "root.sg27", - "root.sg28" - ] - ] + "values": [["root.sg27", "root.sg28"]] } ``` @@ -221,16 +201,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "child nodes" - ], + "columnNames": ["child nodes"], "timestamps": null, - "values": [ - [ - "sg27", - "sg28" - ] - ] + "values": [["sg27", "sg28"]] } ``` @@ -243,20 +216,11 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "database", - "ttl" - ], + "columnNames": ["database", "ttl"], "timestamps": null, "values": [ - [ - "root.sg27", - "root.sg28" - ], - [ - null, - null - ] + ["root.sg27", "root.sg28"], + [null, null] ] } ``` @@ -270,19 +234,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - 
"database", - "ttl" - ], + "columnNames": ["database", "ttl"], "timestamps": null, - "values": [ - [ - "root.sg27" - ], - [ - null - ] - ] + "values": [["root.sg27"], [null]] } ``` @@ -345,54 +299,14 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ], "timestamps": null, "values": [ - [ - "root.sg27.s3", - "root.sg27.s4", - "root.sg28.s3", - "root.sg28.s4" - ], - [ - null, - null, - null, - null - ], - [ - "root.sg27", - "root.sg27", - "root.sg28", - "root.sg28" - ], - [ - "INT32", - "BOOLEAN", - "INT32", - "BOOLEAN" - ], - [ - "RLE", - "RLE", - "RLE", - "RLE" - ], - [ - "SNAPPY", - "SNAPPY", - "SNAPPY", - "SNAPPY" - ], - [ - null, - null, - null, - null - ], - [ - null, - null, - null, - null - ] + ["root.sg27.s3", "root.sg27.s4", "root.sg28.s3", "root.sg28.s4"], + [null, null, null, null], + ["root.sg27", "root.sg27", "root.sg28", "root.sg28"], + ["INT32", "BOOLEAN", "INT32", "BOOLEAN"], + ["RLE", "RLE", "RLE", "RLE"], + ["SNAPPY", "SNAPPY", "SNAPPY", "SNAPPY"], + [null, null, null, null], + [null, null, null, null] ] } ``` @@ -418,54 +332,14 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ], "timestamps": null, "values": [ - [ - "root.sg28.s4", - "root.sg27.s4", - "root.sg28.s3", - "root.sg27.s3" - ], - [ - null, - null, - null, - null - ], - [ - "root.sg28", - "root.sg27", - "root.sg28", - "root.sg27" - ], - [ - "BOOLEAN", - "BOOLEAN", - "INT32", - "INT32" - ], - [ - "RLE", - "RLE", - "RLE", - "RLE" - ], - [ - "SNAPPY", - "SNAPPY", - "SNAPPY", - "SNAPPY" - ], - [ - null, - null, - null, - null - ], - [ - null, - null, - null, - null - ] + ["root.sg28.s4", "root.sg27.s4", "root.sg28.s3", "root.sg27.s3"], + [null, null, null, null], + ["root.sg28", "root.sg27", "root.sg28", "root.sg27"], + ["BOOLEAN", "BOOLEAN", "INT32", "INT32"], + ["RLE", "RLE", "RLE", "RLE"], + ["SNAPPY", "SNAPPY", "SNAPPY", "SNAPPY"], + [null, null, null, null], + [null, null, null, null] ] } ``` @@ -479,15 +353,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "count" - ], + "columnNames": ["count"], "timestamps": null, - "values": [ - [ - 4 - ] - ] + "values": [[4]] } ``` @@ -500,15 +368,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "count" - ], + "columnNames": ["count"], "timestamps": null, - "values": [ - [ - 4 - ] - ] + "values": [[4]] } ``` @@ -521,20 +383,11 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "devices", - "isAligned" - ], + "columnNames": ["devices", "isAligned"], "timestamps": null, "values": [ - [ - "root.sg27", - "root.sg28" - ], - [ - "false", - "false" - ] + ["root.sg27", "root.sg28"], + ["false", "false"] ] } ``` @@ -548,25 +401,12 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "devices", - "database", - "isAligned" - ], + "columnNames": ["devices", "database", "isAligned"], "timestamps": null, "values": [ - [ - "root.sg27", - "root.sg28" - ], - [ - "root.sg27", - "root.sg28" - ], - [ - "false", - "false" - ] + ["root.sg27", "root.sg28"], + ["root.sg27", "root.sg28"], + ["false", "false"] ] } ``` @@ -580,15 +420,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "user" - ], + 
"columnNames": ["user"], "timestamps": null, - "values": [ - [ - "root" - ] - ] + "values": [["root"]] } ``` @@ -600,22 +434,10 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { - "expressions": [ - "count(root.sg27.s3)", - "count(root.sg27.s4)" - ], + "expressions": ["count(root.sg27.s3)", "count(root.sg27.s4)"], "columnNames": null, - "timestamps": [ - 0 - ], - "values": [ - [ - 1 - ], - [ - 2 - ] - ] + "timestamps": [0], + "values": [[1], [2]] } ``` @@ -628,19 +450,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "columnNames": [ - "count(root.sg27.*)", - "count(root.sg28.*)" - ], + "columnNames": ["count(root.sg27.*)", "count(root.sg28.*)"], "timestamps": null, - "values": [ - [ - 3 - ], - [ - 3 - ] - ] + "values": [[3], [3]] } ``` @@ -652,48 +464,15 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { - "expressions": [ - "count(root.sg27.s3)", - "count(root.sg27.s4)" - ], + "expressions": ["count(root.sg27.s3)", "count(root.sg27.s4)"], "columnNames": null, "timestamps": [ - 1635232143960, - 1635232144960, - 1635232145960, - 1635232146960, - 1635232147960, - 1635232148960, - 1635232149960, - 1635232150960, - 1635232151960, - 1635232152960 + 1635232143960, 1635232144960, 1635232145960, 1635232146960, 1635232147960, + 1635232148960, 1635232149960, 1635232150960, 1635232151960, 1635232152960 ], "values": [ - [ - 1, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0 - ], - [ - 1, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0 - ] + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] ] } ``` @@ -707,25 +486,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" - ```json { "expressions": null, - "columnNames": [ - "timeseries", - "value", - "dataType" - ], - "timestamps": [ - 1635232143960 - ], - "values": [ - [ - "root.sg27.s3" - ], - [ - "11" - ], - [ - "INT32" - ] - ] + "columnNames": ["timeseries", "value", "dataType"], + "timestamps": [1635232143960], + "values": [["root.sg27.s3"], ["11"], ["INT32"]] } ``` @@ -778,23 +541,25 @@ Request path: `http://ip:port/rest/v1/nonQuery` Parameter Description: -|parameter name |parameter type |parameter describe| -|:--- | :--- | :---| -| sql | string | query content | +| parameter name | parameter type | parameter describe | +| :------------- | :------------- | :----------------- | +| sql | string | query content | Example request: + ```shell curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"CREATE DATABASE root.ln"}' http://127.0.0.1:18080/rest/v1/nonQuery ``` Response parameters: -|parameter name |parameter type |parameter describe| -|:--- | :--- | :---| -| code | integer | status code | -| message | string | message | +| parameter name | parameter type | parameter describe | +| :------------- | :------------- | :----------------- | +| code | integer | status code | +| message | string | message | Sample response: + ```json { "code": 200, @@ -802,8 +567,6 @@ Sample response: } ``` - - ### insertTablet Request method: `POST` @@ -814,28 +577,30 @@ Request path: `http://ip:port/rest/v1/insertTablet` Parameter Description: -| parameter name |parameter type |is required|parameter describe| -|:---------------| :--- | :---| :---| -| timestamps | array | yes | Time column | -| measurements | array | yes | The name of the measuring point | -| dataTypes | array | yes | The data type | -| values | array | yes | Value 
columns, the values in each column can be `null` |
-| isAligned | boolean | yes | Whether to align the timeseries |
-| deviceId | string | yes | Device name |
+| parameter name | parameter type | is required | parameter describe                                      |
+| :------------- | :------------- | :---------- | :------------------------------------------------------ |
+| timestamps | array | yes | Time column |
+| measurements | array | yes | The name of the measuring point |
+| dataTypes | array | yes | The data type |
+| values | array | yes | Value columns, the values in each column can be `null` |
+| isAligned | boolean | yes | Whether to align the timeseries |
+| deviceId | string | yes | Device name |

Example request:
+
```shell
curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232143960,1635232153960],"measurements":["s3","s4"],"dataTypes":["INT32","BOOLEAN"],"values":[[11,null],[false,true]],"isAligned":false,"deviceId":"root.sg27"}' http://127.0.0.1:18080/rest/v1/insertTablet
```

Response parameters:

-|parameter name |parameter type |parameter describe|
-|:--- | :--- | :---|
-| code | integer | status code |
-| message | string | message |
+| parameter name | parameter type | parameter describe |
+| :------------- | :------------- | :----------------- |
+| code | integer | status code |
+| message | string | message |

Sample response:
+
```json
{
  "code": 200,
@@ -847,83 +612,79 @@ Sample response:

The configuration is located in 'iotdb-system.properties'.

-* Set 'enable_rest_service' to 'true' to enable the module, and 'false' to disable the module. By default, this value is' false '.
+- Set 'enable_rest_service' to 'true' to enable the module, and 'false' to disable the module. By default, this value is 'false'.

```properties
enable_rest_service=true
```

-* This parameter is valid only when 'enable_REST_service =true'. Set 'rest_service_port' to a number (1025 to 65535) to customize the REST service socket port. By default, the value is 18080.
+- This parameter is valid only when 'enable_rest_service=true'. Set 'rest_service_port' to a number (1025 to 65535) to customize the REST service socket port. By default, the value is 18080.

```properties
rest_service_port=18080
```

-* Set 'enable_swagger' to 'true' to display rest service interface information through swagger, and 'false' to do not display the rest service interface information through the swagger. By default, this value is' false '.
+- Set 'enable_swagger' to 'true' to expose the REST service interface information through Swagger, and 'false' to hide it. By default, this value is 'false'.

```properties
enable_swagger=false
```

-* The maximum number of rows in the result set that can be returned by a query. When the number of rows in the returned result set exceeds the limit, the status code `411` is returned.
+- The maximum number of rows in the result set that can be returned by a query. When the number of rows in the returned result set exceeds the limit, the status code `411` is returned.

-````properties
+```properties
rest_query_default_row_size_limit=10000
-````
+```

-* Expiration time for caching customer login information (used to speed up user authentication, in seconds, 8 hours by default)
+- Expiration time for caching user login information (used to speed up user authentication; in seconds, 8 hours by default)

```properties
cache_expire=28800
```

-
-* Maximum number of users stored in the cache (default: 100)
+- Maximum number of users stored in the cache (default: 100)

```properties
cache_max_num=100
```

-* Initial cache size (default: 10)
+- Initial cache size (default: 10)

```properties
cache_init_num=10
```

-* REST Service whether to enable SSL configuration, set 'enable_https' to' true 'to enable the module, and set' false 'to disable the module. By default, this value is' false '.
+- Whether to enable SSL for the REST service: set 'enable_https' to 'true' to enable it, and 'false' to disable it. By default, this value is 'false'.

```properties
enable_https=false
```

-* keyStore location path (optional)
+- keyStore location path (optional)

```properties
key_store_path=
```

-
-* keyStore password (optional)
+- keyStore password (optional)

```properties
key_store_pwd=
```

-
-* trustStore location path (optional)
+- trustStore location path (optional)

```properties
trust_store_path=
```

-* trustStore password (optional)
+- trustStore password (optional)

```properties
trust_store_pwd=
```

-
-* SSL timeout period, in seconds
+- SSL timeout period, in seconds

```properties
idle_timeout=5000
diff --git a/src/UserGuide/latest/API/RestServiceV2.md b/src/UserGuide/latest/API/RestServiceV2.md
index b4c733fb6..2dbd9129c 100644
--- a/src/UserGuide/latest/API/RestServiceV2.md
+++ b/src/UserGuide/latest/API/RestServiceV2.md
@@ -1,36 +1,34 @@
-# RESTful API V2 
+# RESTful API V2
+
IoTDB's RESTful services can be used for query, write, and management operations, using the OpenAPI standard to define interfaces and generate frameworks.

## Enable RESTful Services

RESTful services are disabled by default.

-* Developer
+- Developer

  Find the `IoTDBrestServiceConfig` class under `org.apache.iotdb.db.conf.rest` in the server module, and modify `enableRestService=true`.

-* User
+- User

  Find the `conf/iotdb-system.properties` file under the IoTDB installation directory and set `enable_rest_service` to `true` to enable the module.

@@ -39,6 +37,7 @@ RESTful services are disabled by default.
 ```

## Authentication
+
Except the liveness probe API `/ping`, RESTful services use basic authentication. Each URL request needs to carry `'Authorization': 'Basic ' + base64.encode(username + ':' + password)`.

The username used in the following examples is: `root`, and password is: `root`.
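For reference, the `Basic` token shown in these examples is simply the Base64 encoding of `username:password`. A minimal sketch of producing such a token yourself, assuming a Unix-like shell with the `base64` utility available:

```shell
# encode "username:password" for the HTTP Basic Authorization header
$ printf 'root:root' | base64
cm9vdDpyb290
```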
@@ -54,24 +53,26 @@ Authorization: Basic cm9vdDpyb290

  HTTP Status Code:`401`

  HTTP response body:
-  ```json
-  {
-    "code": 600,
-    "message": "WRONG_LOGIN_PASSWORD_ERROR"
-  }
-  ```
+
+  ```json
+  {
+    "code": 600,
+    "message": "WRONG_LOGIN_PASSWORD_ERROR"
+  }
+  ```

- If the `Authorization` header is missing, the following error is returned:

  HTTP Status Code:`401`

  HTTP response body:
-  ```json
-  {
-    "code": 603,
-    "message": "UNINITIALIZED_AUTH_ERROR"
-  }
-  ```
+
+  ```json
+  {
+    "code": 603,
+    "message": "UNINITIALIZED_AUTH_ERROR"
+  }
+  ```

## Interface

@@ -85,7 +86,7 @@ Request path: `http://ip:port/ping`

The user name used in the example is: root, password: root

-Example request: 
+Example request:

```shell
$ curl http://127.0.0.1:18080/ping
```

@@ -98,10 +99,10 @@ Response status codes:

Response parameters:

-|parameter name |parameter type |parameter describe|
-|:--- | :--- | :---|
-|code | integer | status code |
-| message | string | message |
+| parameter name | parameter type | parameter describe |
+| :------------- | :------------- | :----------------- |
+| code | integer | status code |
+| message | string | message |

Sample response:

@@ -137,18 +138,18 @@ Request path: `http://ip:port/rest/v2/query`

Parameter Description:

-| parameter name | parameter type | required | parameter description |
-|----------------| -------------- | -------- | ------------------------------------------------------------ |
-| sql | string | yes | |
+| parameter name | parameter type | required | parameter description |
+| -------------- | -------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| sql | string | yes | |
| row_limit | integer | no | The maximum number of rows in the result set that can be returned by a query.<br>
If this parameter is not set, the `rest_query_default_row_size_limit` of the configuration file will be used as the default value.
When the number of rows in the returned result set exceeds the limit, the status code `411` will be returned. | Response parameters: -| parameter name | parameter type | parameter description | -|----------------| -------------- | ------------------------------------------------------------ | -| expressions | array | Array of result set column names for data query, `null` for metadata query | -| column_names | array | Array of column names for metadata query result set, `null` for data query | -| timestamps | array | Timestamp column, `null` for metadata query | +| parameter name | parameter type | parameter description | +| -------------- | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| expressions | array | Array of result set column names for data query, `null` for metadata query | +| column_names | array | Array of column names for metadata query result set, `null` for data query | +| timestamps | array | Timestamp column, `null` for metadata query | | values | array | A two-dimensional array, the first dimension has the same length as the result set column name array, and the second dimension array represents a column of the result set | **Examples:** @@ -159,33 +160,17 @@ Tip: Statements like `select * from root.xx.**` are not recommended because thos ```shell curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"select s3, s4, s3 + 1 from root.sg27 limit 2"}' http://127.0.0.1:18080/rest/v2/query -```` +``` ```json { - "expressions": [ - "root.sg27.s3", - "root.sg27.s4", - "root.sg27.s3 + 1" - ], + "expressions": ["root.sg27.s3", "root.sg27.s4", "root.sg27.s3 + 1"], "column_names": null, - "timestamps": [ - 1635232143960, - 1635232153960 - ], + "timestamps": [1635232143960, 1635232153960], "values": [ - [ - 11, - null - ], - [ - false, - true - ], - [ - 12.0, - null - ] + [11, null], + [false, true], + [12.0, null] ] } ``` @@ -199,16 +184,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "child paths" - ], + "column_names": ["child paths"], "timestamps": null, - "values": [ - [ - "root.sg27", - "root.sg28" - ] - ] + "values": [["root.sg27", "root.sg28"]] } ``` @@ -221,16 +199,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "child nodes" - ], + "column_names": ["child nodes"], "timestamps": null, - "values": [ - [ - "sg27", - "sg28" - ] - ] + "values": [["sg27", "sg28"]] } ``` @@ -243,20 +214,11 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "database", - "ttl" - ], + "column_names": ["database", "ttl"], "timestamps": null, "values": [ - [ - "root.sg27", - "root.sg28" - ], - [ - null, - null - ] + ["root.sg27", "root.sg28"], + [null, null] ] } ``` @@ -270,19 +232,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "database", - "ttl" - ], + "column_names": ["database", "ttl"], "timestamps": null, - "values": [ - [ - "root.sg27" - ], - [ - null - ] - ] + "values": [["root.sg27"], [null]] } ``` @@ -345,54 +297,14 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ], "timestamps": null, "values": [ - [ - 
"root.sg27.s3", - "root.sg27.s4", - "root.sg28.s3", - "root.sg28.s4" - ], - [ - null, - null, - null, - null - ], - [ - "root.sg27", - "root.sg27", - "root.sg28", - "root.sg28" - ], - [ - "INT32", - "BOOLEAN", - "INT32", - "BOOLEAN" - ], - [ - "RLE", - "RLE", - "RLE", - "RLE" - ], - [ - "SNAPPY", - "SNAPPY", - "SNAPPY", - "SNAPPY" - ], - [ - null, - null, - null, - null - ], - [ - null, - null, - null, - null - ] + ["root.sg27.s3", "root.sg27.s4", "root.sg28.s3", "root.sg28.s4"], + [null, null, null, null], + ["root.sg27", "root.sg27", "root.sg28", "root.sg28"], + ["INT32", "BOOLEAN", "INT32", "BOOLEAN"], + ["RLE", "RLE", "RLE", "RLE"], + ["SNAPPY", "SNAPPY", "SNAPPY", "SNAPPY"], + [null, null, null, null], + [null, null, null, null] ] } ``` @@ -418,54 +330,14 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ], "timestamps": null, "values": [ - [ - "root.sg28.s4", - "root.sg27.s4", - "root.sg28.s3", - "root.sg27.s3" - ], - [ - null, - null, - null, - null - ], - [ - "root.sg28", - "root.sg27", - "root.sg28", - "root.sg27" - ], - [ - "BOOLEAN", - "BOOLEAN", - "INT32", - "INT32" - ], - [ - "RLE", - "RLE", - "RLE", - "RLE" - ], - [ - "SNAPPY", - "SNAPPY", - "SNAPPY", - "SNAPPY" - ], - [ - null, - null, - null, - null - ], - [ - null, - null, - null, - null - ] + ["root.sg28.s4", "root.sg27.s4", "root.sg28.s3", "root.sg27.s3"], + [null, null, null, null], + ["root.sg28", "root.sg27", "root.sg28", "root.sg27"], + ["BOOLEAN", "BOOLEAN", "INT32", "INT32"], + ["RLE", "RLE", "RLE", "RLE"], + ["SNAPPY", "SNAPPY", "SNAPPY", "SNAPPY"], + [null, null, null, null], + [null, null, null, null] ] } ``` @@ -479,15 +351,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "count" - ], + "column_names": ["count"], "timestamps": null, - "values": [ - [ - 4 - ] - ] + "values": [[4]] } ``` @@ -500,15 +366,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "count" - ], + "column_names": ["count"], "timestamps": null, - "values": [ - [ - 4 - ] - ] + "values": [[4]] } ``` @@ -521,20 +381,11 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "devices", - "isAligned" - ], + "column_names": ["devices", "isAligned"], "timestamps": null, "values": [ - [ - "root.sg27", - "root.sg28" - ], - [ - "false", - "false" - ] + ["root.sg27", "root.sg28"], + ["false", "false"] ] } ``` @@ -548,25 +399,12 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "devices", - "database", - "isAligned" - ], + "column_names": ["devices", "database", "isAligned"], "timestamps": null, "values": [ - [ - "root.sg27", - "root.sg28" - ], - [ - "root.sg27", - "root.sg28" - ], - [ - "false", - "false" - ] + ["root.sg27", "root.sg28"], + ["root.sg27", "root.sg28"], + ["false", "false"] ] } ``` @@ -580,15 +418,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "user" - ], + "column_names": ["user"], "timestamps": null, - "values": [ - [ - "root" - ] - ] + "values": [["root"]] } ``` @@ -600,22 +432,10 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { - "expressions": [ - "count(root.sg27.s3)", - "count(root.sg27.s4)" - ], + "expressions": 
["count(root.sg27.s3)", "count(root.sg27.s4)"], "column_names": null, - "timestamps": [ - 0 - ], - "values": [ - [ - 1 - ], - [ - 2 - ] - ] + "timestamps": [0], + "values": [[1], [2]] } ``` @@ -628,19 +448,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { "expressions": null, - "column_names": [ - "count(root.sg27.*)", - "count(root.sg28.*)" - ], + "column_names": ["count(root.sg27.*)", "count(root.sg28.*)"], "timestamps": null, - "values": [ - [ - 3 - ], - [ - 3 - ] - ] + "values": [[3], [3]] } ``` @@ -652,48 +462,15 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X ```json { - "expressions": [ - "count(root.sg27.s3)", - "count(root.sg27.s4)" - ], + "expressions": ["count(root.sg27.s3)", "count(root.sg27.s4)"], "column_names": null, "timestamps": [ - 1635232143960, - 1635232144960, - 1635232145960, - 1635232146960, - 1635232147960, - 1635232148960, - 1635232149960, - 1635232150960, - 1635232151960, - 1635232152960 + 1635232143960, 1635232144960, 1635232145960, 1635232146960, 1635232147960, + 1635232148960, 1635232149960, 1635232150960, 1635232151960, 1635232152960 ], "values": [ - [ - 1, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0 - ], - [ - 1, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0, - 0 - ] + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], + [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] ] } ``` @@ -707,25 +484,9 @@ curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" - ```json { "expressions": null, - "column_names": [ - "timeseries", - "value", - "dataType" - ], - "timestamps": [ - 1635232143960 - ], - "values": [ - [ - "root.sg27.s3" - ], - [ - "11" - ], - [ - "INT32" - ] - ] + "column_names": ["timeseries", "value", "dataType"], + "timestamps": [1635232143960], + "values": [["root.sg27.s3"], ["11"], ["INT32"]] } ``` @@ -778,23 +539,25 @@ Request path: `http://ip:port/rest/v2/nonQuery` Parameter Description: -|parameter name |parameter type |parameter describe| -|:--- | :--- | :---| -| sql | string | query content | +| parameter name | parameter type | parameter describe | +| :------------- | :------------- | :----------------- | +| sql | string | query content | Example request: + ```shell curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"sql":"CREATE DATABASE root.ln"}' http://127.0.0.1:18080/rest/v2/nonQuery ``` Response parameters: -|parameter name |parameter type |parameter describe| -|:--- | :--- | :---| -| code | integer | status code | -| message | string | message | +| parameter name | parameter type | parameter describe | +| :------------- | :------------- | :----------------- | +| code | integer | status code | +| message | string | message | Sample response: + ```json { "code": 200, @@ -802,8 +565,6 @@ Sample response: } ``` - - ### insertTablet Request method: `POST` @@ -814,28 +575,30 @@ Request path: `http://ip:port/rest/v2/insertTablet` Parameter Description: -| parameter name |parameter type |is required|parameter describe| -|:---------------| :--- | :---| :---| -| timestamps | array | yes | Time column | -| measurements | array | yes | The name of the measuring point | -| data_types | array | yes | The data type | -| values | array | yes | Value columns, the values in each column can be `null` | -| is_aligned | boolean | yes | Whether to align the timeseries | -| device | string | yes | Device name | +| parameter name | parameter type | is required | parameter describe | +| :------------- | :------------- | :---------- | 
:----------------------------------------------------- |
+| timestamps | array | yes | Time column |
+| measurements | array | yes | The name of the measuring point |
+| data_types | array | yes | The data type |
+| values | array | yes | Value columns, the values in each column can be `null` |
+| is_aligned | boolean | yes | Whether to align the timeseries |
+| device | string | yes | Device name |

Example request:
+
```shell
curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232143960,1635232153960],"measurements":["s3","s4"],"data_types":["INT32","BOOLEAN"],"values":[[11,null],[false,true]],"is_aligned":false,"device":"root.sg27"}' http://127.0.0.1:18080/rest/v2/insertTablet
```

Response parameters:

-|parameter name |parameter type |parameter describe|
-|:--- | :--- | :---|
-| code | integer | status code |
-| message | string | message |
+| parameter name | parameter type | parameter describe |
+| :------------- | :------------- | :----------------- |
+| code | integer | status code |
+| message | string | message |

Sample response:
+
```json
{
  "code": 200,
@@ -853,28 +616,30 @@ Request path: `http://ip:port/rest/v2/insertRecords`

Parameter Description:

-| parameter name |parameter type |is required|parameter describe|
-|:------------------| :--- | :---| :---|
-| timestamps | array | yes | Time column |
-| measurements_list | array | yes | The name of the measuring point |
-| data_types_list | array | yes | The data type |
-| values_list | array | yes | Value columns, the values in each column can be `null` |
-| devices | string | yes | Device name |
-| is_aligned | boolean | yes | Whether to align the timeseries |
+| parameter name    | parameter type | is required | parameter describe                                      |
+| :---------------- | :------------- | :---------- | :------------------------------------------------------ |
+| timestamps | array | yes | Time column |
+| measurements_list | array | yes | The name of the measuring point |
+| data_types_list | array | yes | The data type |
+| values_list | array | yes | Value columns, the values in each column can be `null` |
+| devices | string | yes | Device name |
+| is_aligned | boolean | yes | Whether to align the timeseries |

Example request:
+
```shell
curl -H "Content-Type:application/json" -H "Authorization:Basic cm9vdDpyb290" -X POST --data '{"timestamps":[1635232113960,1635232151960,1635232143960,1635232143960],"measurements_list":[["s33","s44"],["s55","s66"],["s77","s88"],["s771","s881"]],"data_types_list":[["INT32","INT64"],["FLOAT","DOUBLE"],["FLOAT","DOUBLE"],["BOOLEAN","TEXT"]],"values_list":[[1,11],[2.1,2],[4,6],[false,"cccccc"]],"is_aligned":false,"devices":["root.s1","root.s1","root.s1","root.s3"]}' http://127.0.0.1:18080/rest/v2/insertRecords
```

Response parameters:

-|parameter name |parameter type |parameter describe|
-|:--- | :--- | :---|
-| code | integer | status code |
-| message | string | message |
+| parameter name | parameter type | parameter describe |
+| :------------- | :------------- | :----------------- |
+| code | integer | status code |
+| message | string | message |

Sample response:
+
```json
{
  "code": 200,
@@ -882,88 +647,83 @@ Sample response:

## Configuration

The configuration is located in 'iotdb-system.properties'.

-* Set 'enable_rest_service' to 'true' to enable the module, and 'false' to disable the module. 
+- Set 'enable_rest_service' to 'true' to enable the module, and 'false' to disable the module. 
By default, this value is 'false'.

```properties
enable_rest_service=true
```

-* This parameter is valid only when 'enable_REST_service =true'. Set 'rest_service_port' to a number (1025 to 65535) to customize the REST service socket port. By default, the value is 18080.
+- This parameter is valid only when 'enable_rest_service=true'. Set 'rest_service_port' to a number (1025 to 65535) to customize the REST service socket port. By default, the value is 18080.

```properties
rest_service_port=18080
```

-* Set 'enable_swagger' to 'true' to display rest service interface information through swagger, and 'false' to do not display the rest service interface information through the swagger. By default, this value is' false '.
+- Set 'enable_swagger' to 'true' to expose the REST service interface information through Swagger, and 'false' to hide it. By default, this value is 'false'.

```properties
enable_swagger=false
```

-* The maximum number of rows in the result set that can be returned by a query. When the number of rows in the returned result set exceeds the limit, the status code `411` is returned.
+- The maximum number of rows in the result set that can be returned by a query. When the number of rows in the returned result set exceeds the limit, the status code `411` is returned.

-````properties
+```properties
rest_query_default_row_size_limit=10000
-````
+```

-* Expiration time for caching customer login information (used to speed up user authentication, in seconds, 8 hours by default)
+- Expiration time for caching user login information (used to speed up user authentication; in seconds, 8 hours by default)

```properties
cache_expire=28800
```

-
-* Maximum number of users stored in the cache (default: 100)
+- Maximum number of users stored in the cache (default: 100)

```properties
cache_max_num=100
```

-* Initial cache size (default: 10)
+- Initial cache size (default: 10)

```properties
cache_init_num=10
```

-* REST Service whether to enable SSL configuration, set 'enable_https' to' true 'to enable the module, and set' false 'to disable the module. By default, this value is' false '.
+- Whether to enable SSL for the REST service: set 'enable_https' to 'true' to enable it, and 'false' to disable it. By default, this value is 'false'.

```properties
enable_https=false
```

-* keyStore location path (optional)
+- keyStore location path (optional)

```properties
key_store_path=
```

-
-* keyStore password (optional)
+- keyStore password (optional)

```properties
key_store_pwd=
```

-
-* trustStore location path (optional)
+- trustStore location path (optional)

```properties
trust_store_path=
```

-* trustStore password (optional)
+- trustStore password (optional)

```properties
trust_store_pwd=
```

-
-* SSL timeout period, in seconds
+- SSL timeout period, in seconds

```properties
idle_timeout=5000