
Commit 87b18fa

Feature k8s (apache#283)

* [optimize] streamx-packer rename to streamx-flink-packer (apache#249)
* [Feature] streamx-console/streamx-console-webapp provide additional GUI adaptations for flink k8s mode (apache#259)

1 parent aecafa6, commit 87b18fa

File tree

60 files changed (+587, -516 lines)


.github/ISSUE_TEMPLATE/bug-report.md (+8 -5)

```diff
@@ -1,21 +1,23 @@
 ---
 name: "\U0001F41B Bug Report"
-about: As a User, I want to report a Bug.
-title: "[Bug] Bug title "
+about: As a User, I want to report a Bug. title: "[Bug] Bug title "
 labels: type/bug
 ---
 
 *Please answer these questions before submitting your issue. Thanks!*
 
 ### Environment Description
-* **StreamX Version**:
+
+* **StreamX Version**:
 * **JVM version** (`java -version`):
 * **OS version** (`uname -a` if on a Unix-like system):
 
 ### Bug Description
-A clear and concise description of what the bug is.
 
-### How to Reproduce
+A clear and concise description of what the bug is.
+
+### How to Reproduce
+
 Steps to reproduce the behavior, for example:
 
 1. Go to '...'
@@ -24,6 +26,7 @@ Steps to reproduce the behavior, for example:
 4. See error
 
 ### Additional context
+
 Add any other context about the problem here such as JVM track stack log.
 
 ### Requirement or improvement
```
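Aside: GitHub parses this template header as YAML front matter, in which `about:` and `title:` are distinct keys; the pre-change form of the header (the two lines removed above) is the conventional shape:

```yaml
---
name: "\U0001F41B Bug Report"
about: As a User, I want to report a Bug.
title: "[Bug] Bug title "
labels: type/bug
---
```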

.github/ISSUE_TEMPLATE/enhancement.md (+1 -2)

```diff
@@ -1,7 +1,6 @@
 ---
 name: "\U0001F680 Enhancement"
-about: As a StreamX developer, I want to make an enhancement.
-title: "[Enhancement] Enhancement title "
+about: As a StreamX developer, I want to make an enhancement. title: "[Enhancement] Enhancement title "
 labels: type/enhancement
 ---
 
```

.github/ISSUE_TEMPLATE/feature-request.md (+1 -2)

```diff
@@ -1,7 +1,6 @@
 ---
 name: "\U0001F680 Feature Request"
-about: As a user, I want to request a New Feature on the product.
-title: "[Feature] Feature request title "
+about: As a user, I want to request a New Feature on the product. title: "[Feature] Feature request title "
 labels: type/feature-request
 ---
 
```

.github/ISSUE_TEMPLATE/general-question.md (+1 -2)

```diff
@@ -1,7 +1,6 @@
 ---
 name: "\U0001F914 Ask a Question"
-about: I want to ask a question.
-title: "[Question] Question title "
+about: I want to ask a question. title: "[Question] Question title "
 labels: type/question
 ---
 
```

(+9 -4)

```diff
@@ -1,29 +1,34 @@
 ---
 name: "\U0001F41B 报告 Bug(Bug Report)"
-about: 我想要提交一个 StreamX 的 Bug 报告
-title: "[Bug] 标题 "
+about: 我想要提交一个 StreamX 的 Bug 报告 title: "[Bug] 标题 "
 labels: type/bug
 ---
 
 *请在提交你的 issue 前,请回答以下问题,这有助于社区快速定位问题,谢谢! 🙏*
 
 ### Environment Description(运行环境描述)
-* **StreamX Version**:
+
+* **StreamX Version**:
 * **JVM version** (`java -version`):
 * **OS version** (`uname -a` if on a Unix-like system):
 
 ### Bug Description(Bug 描述)
+
 请简要、清晰地对您遇到的 Bug 进行描述。
 
-### How to Reproduce(如何重现这个 Bug)
+### How to Reproduce(如何重现这个 Bug)
+
 重现 Bug 现场的步骤,比如:
+
 1. Go to '...'
 2. Click on '....'
 3. Scroll down to '....'
 4. See error
 
 ### Additional context(额外的上下文信息)
+
 添加额外有助于描述该问题的上下文信息,如 JVM 堆栈日志等
 
 ### Requirement or improvement(诉求 & 改进建议)
+
 - 请描述您的诉求,或者对此的改进建议
```

.github/ISSUE_TEMPLATE/zh-enhancement.md (+1 -2)

```diff
@@ -1,7 +1,6 @@
 ---
 name: "\U0001F680 优化建议(Enhancement)"
-about: 我想提供一些对于 StreamX 的优化/增强建议
-title: "[Enhancement] 标题 "
+about: 我想提供一些对于 StreamX 的优化/增强建议 title: "[Enhancement] 标题 "
 labels: type/enhancement
 ---
 
```

.github/ISSUE_TEMPLATE/zh-feature-request.md (+1 -2)

```diff
@@ -1,7 +1,6 @@
 ---
 name: "\U0001F680 新功能诉求(Feature Request)"
-about: 我有一个绝妙的 idea,希望 StreamX 可以支持它!
-title: "[feature] 标题 "
+about: 我有一个绝妙的 idea,希望 StreamX 可以支持它! title: "[feature] 标题 "
 labels: type/feature-request
 ---
 
```

.github/ISSUE_TEMPLATE/zh-general-question.md (+1 -2)

```diff
@@ -1,7 +1,6 @@
 ---
 name: "\U0001F914 问题提问(Ask a Question)"
-about: 我想提问一个问题
-title: "[Question] 标题 "
+about: 我想提问一个问题 title: "[Question] 标题 "
 labels: type/question
 ---
 
```

README.md (+29 -32)

```diff
@@ -38,9 +38,9 @@ Make Flink|Spark easier
 
 ## 🚀 Introduction
 
-The original intention of `StreamX` is to make the development of `Flink` easier. `StreamX` focuses on the management of
-development phases and tasks. Our ultimate goal is to build a one-stop big data solution integrating stream processing,
-batch processing, data warehouse and data laker.
+The original intention of `StreamX` is to make the development of `Flink` easier. `StreamX` focuses on the management of development phases
+and tasks. Our ultimate goal is to build a one-stop big data solution integrating stream processing, batch processing, data warehouse and
+data laker.
 
 [![StreamX video](http://assets.streamxhub.com/streamx_player.png)](http://assets.streamxhub.com/streamx.mp4)
 
@@ -75,26 +75,24 @@ batch processing, data warehouse and data laker.
 
 ### 1️⃣ streamx-core
 
-`streamx-core` is a framework that focuses on coding, standardizes configuration, and develops in a way that is better
-than configuration by convention. Also it provides a development-time `RunTime Content` and a series of `Connector` out
-of the box. At the same time, it extends `DataStream` some methods, and integrates `DataStream` and `Flink sql` api to
-simplify tedious operations, focus on the business itself, and improve development efficiency and development
-experience.
+`streamx-core` is a framework that focuses on coding, standardizes configuration, and develops in a way that is better than configuration by
+convention. Also it provides a development-time `RunTime Content` and a series of `Connector` out of the box. At the same time, it
+extends `DataStream` some methods, and integrates `DataStream` and `Flink sql` api to simplify tedious operations, focus on the business
+itself, and improve development efficiency and development experience.
 
 ### 2️⃣ streamx-pump
 
-`streamx-pump` is a planned data extraction component, similar to `flinkx`. Based on the various `connector` provided
-in `streamx-core`, the purpose is to create a convenient, fast, out-of-the-box real-time data extraction and migration
-component for big data, and it will be integrated into the `streamx-console`.
+`streamx-pump` is a planned data extraction component, similar to `flinkx`. Based on the various `connector` provided in `streamx-core`, the
+purpose is to create a convenient, fast, out-of-the-box real-time data extraction and migration component for big data, and it will be
+integrated into the `streamx-console`.
 
 ### 3️⃣ streamx-console
 
-`streamx-console` is a stream processing and `Low Code` platform, capable of managing `Flink` tasks, integrating project
-compilation, deploy, configuration, startup, `savepoint`, `flame graph`, `Flink SQL`, monitoring and many other
-features. Simplify the daily operation and maintenance of the `Flink` task.
+`streamx-console` is a stream processing and `Low Code` platform, capable of managing `Flink` tasks, integrating project compilation,
+deploy, configuration, startup, `savepoint`, `flame graph`, `Flink SQL`, monitoring and many other features. Simplify the daily operation
+and maintenance of the `Flink` task.
 
-Our ultimate goal is to build a one-stop big data solution integrating stream processing, batch processing, data
-warehouse and data laker.
+Our ultimate goal is to build a one-stop big data solution integrating stream processing, batch processing, data warehouse and data laker.
 
 * [Apache Flink](http://flink.apache.org)
 * [Apache YARN](http://hadoop.apache.org)
@@ -111,10 +109,10 @@ warehouse and data laker.
 * [Monaco Editor](https://microsoft.github.io/monaco-editor/)
 * ...
 
-Thanks to the above excellent open source projects and many outstanding open source projects that are not mentioned, for
-giving the greatest respect, special thanks to [Apache Zeppelin](http://zeppelin.apache.org)
-, [IntelliJ IDEA](https://www.jetbrains.com/idea/), Thanks to
-the [fire-spark](https://github.com/GuoNingNing/fire-spark) project for the early inspiration and help.
+Thanks to the above excellent open source projects and many outstanding open source projects that are not mentioned, for giving the greatest
+respect, special thanks to [Apache Zeppelin](http://zeppelin.apache.org)
+, [IntelliJ IDEA](https://www.jetbrains.com/idea/), Thanks to the [fire-spark](https://github.com/GuoNingNing/fire-spark) project for the
+early inspiration and help.
 
 ### 🚀 Quick Start
 
@@ -130,26 +128,26 @@ click [Document](http://www.streamxhub.com/zh/doc/) for more information
 
 ### Apache Zeppelin
 
-[Apache Zeppelin](https://zeppelin.apache.org) is a Web-based notebook that enables data-driven, interactive data
-analytics and collaborative documents with SQL, Java, Scala and more.
+[Apache Zeppelin](https://zeppelin.apache.org) is a Web-based notebook that enables data-driven, interactive data analytics and
+collaborative documents with SQL, Java, Scala and more.
 
 At the same time we also need a one-stop tool that can cover `development`, `test`, `package`, `deploy`, and `start`.
-`streamx-console` solves these pain points very well, positioning is a one-stop stream processing platform, and has
-developed more exciting features (such as `Flink SQL WebIDE`, `dependency isolation`, `task rollback `, `flame diagram`
+`streamx-console` solves these pain points very well, positioning is a one-stop stream processing platform, and has developed more exciting
+features (such as `Flink SQL WebIDE`, `dependency isolation`, `task rollback `, `flame diagram`
 etc.)
 
 ### FlinkX
 
-[FlinkX](http://github.com/DTStack/flinkx) is a distributed offline and real-time data synchronization framework based
-on flink widely used in DTStack, which realizes efficient data migration between multiple heterogeneous data sources.
+[FlinkX](http://github.com/DTStack/flinkx) is a distributed offline and real-time data synchronization framework based on flink widely used
+in DTStack, which realizes efficient data migration between multiple heterogeneous data sources.
 
-`StreamX` focuses on the management of development phases and tasks. The `streamx-pump` module is also under planning,
-dedicated to solving data source migration, and will eventually be integrated into the `streamx-console`.
+`StreamX` focuses on the management of development phases and tasks. The `streamx-pump` module is also under planning, dedicated to solving
+data source migration, and will eventually be integrated into the `streamx-console`.
 
 ## 🍼 Feedback
 
-You can quickly submit an issue. Before submitting, please check the problem and try to use the following contact
-information! Maybe your question has already been asked by others, or it has already been answered. Thank you!
+You can quickly submit an issue. Before submitting, please check the problem and try to use the following contact information! Maybe your
+question has already been asked by others, or it has already been answered. Thank you!
 
 You can contact us or ask questions via:
 
@@ -160,8 +158,7 @@ You can contact us or ask questions via:
 
 Are you **enjoying this project** ? 👋
 
-If you like this framework, and appreciate the work done for it to exist, you can still support the developers by
-donating ☀️ 👊
+If you like this framework, and appreciate the work done for it to exist, you can still support the developers by donating ☀️ 👊
 
 | WeChat Pay | Alipay |
 |:----------|:----------|
```

README_CN.md (+2 -2)

```diff
@@ -39,8 +39,8 @@ Make Flink|Spark easier!!!
 ## 🚀 什么是StreamX
 
     大数据技术如今发展的如火如荼,已经呈现百花齐放欣欣向荣的景象,实时处理流域 `Apache Spark``Apache Flink`
-更是一个伟大的进步,尤其是 `Apache Flink` 被普遍认为是下一代大数据流计算引擎, 我们在使用 `Flink` 时发现从编程模型, 启动配置到运维管理都有很多可以抽象共用的地方, 我们将一些好的经验固化下来并结合业内的最佳实践,
-通过不断努力终于诞生了今天的框架 —— `StreamX`, 项目的初衷是 —— 让 `Flink` 开发更简单, 使用 `StreamX` 开发,可以极大降低学习成本和开发门槛, 让开发者只用关心最核心的业务, `StreamX`
+更是一个伟大的进步,尤其是 `Apache Flink` 被普遍认为是下一代大数据流计算引擎, 我们在使用 `Flink` 时发现从编程模型, 启动配置到运维管理都有很多可以抽象共用的地方, 我们将一些好的经验固化下来并结合业内的最佳实践, 通过不断努力终于诞生了今天的框架
+—— `StreamX`, 项目的初衷是 —— 让 `Flink` 开发更简单, 使用 `StreamX` 开发,可以极大降低学习成本和开发门槛, 让开发者只用关心最核心的业务, `StreamX`
 规范了项目的配置,鼓励函数式编程,定义了最佳的编程方式,提供了一系列开箱即用的 `Connectors` ,标准化了配置、开发、测试、部署、监控、运维的整个过程, 提供 `Scala``Java` 两套api,
 其最终目的是打造一个一站式大数据平台,流批一体,湖仓一体的解决方案
 
```

SECURITY.md (+3 -4)

```diff
@@ -2,8 +2,7 @@
 
 ## Supported Versions
 
-Use this section to tell people about which versions of your project are currently being supported with security
-updates.
+Use this section to tell people about which versions of your project are currently being supported with security updates.
 
 | Version | Supported |
 | ------- | ------------------ |
@@ -16,6 +15,6 @@ updates.
 
 Use this section to tell people how to report a vulnerability.
 
-Tell them where to go, how often they can expect to get an update on a reported vulnerability, what to expect if the
-vulnerability is accepted or declined, etc.
+Tell them where to go, how often they can expect to get an update on a reported vulnerability, what to expect if the vulnerability is
+accepted or declined, etc.
 
```
streamx-common/src/main/scala/com/streamxhub/streamx/common/enums/ExecutionMode.java (+1 -1)

```diff
@@ -56,7 +56,7 @@ public enum ExecutionMode implements Serializable {
     /**
      * kubernetes application
      */
-    KUBERNETES_NATIVE_APPLICATION(6,"kubernetes-application");
+    KUBERNETES_NATIVE_APPLICATION(6, "kubernetes-application");
 
     private Integer mode;
     private String name;
```
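The enum touched above pairs an integer mode with a string name. A minimal, self-contained sketch of that pattern (a hypothetical `Mode` enum with a name-based lookup, not the project's class):

```java
// Sketch of an (id, name) enum with lookup, modeled on the ExecutionMode
// shape shown in the diff. The constants and lookup method are hypothetical.
public class ModeDemo {
    enum Mode {
        YARN_APPLICATION(4, "yarn-application"),
        KUBERNETES_NATIVE_APPLICATION(6, "kubernetes-application");

        private final int mode;
        private final String name;

        Mode(int mode, String name) {
            this.mode = mode;
            this.name = name;
        }

        // Resolve an enum constant from its external name, or null if unknown.
        static Mode of(String name) {
            for (Mode m : values()) {
                if (m.name.equals(name)) return m;
            }
            return null;
        }

        int getMode() { return mode; }
    }

    public static void main(String[] args) {
        assert Mode.of("kubernetes-application").getMode() == 6;
        assert Mode.of("unknown") == null;
        System.out.println("ok");
    }
}
```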

streamx-common/src/main/scala/com/streamxhub/streamx/common/enums/SqlErrorType.java (-2)

```diff
@@ -21,9 +21,7 @@
 package com.streamxhub.streamx.common.enums;
 
 /**
- *
  * @author benjobs
- *
  */
 public enum SqlErrorType {
     /**
```

streamx-common/src/main/scala/com/streamxhub/streamx/common/fs/FsOperator.scala (+2 -1)

```diff
@@ -25,6 +25,7 @@ import com.streamxhub.streamx.common.enums.StorageType
 
 /**
  * just for java.
+ *
  * @author benjobs
 */
 object FsOperatorGetter {
@@ -42,7 +43,7 @@ object FsOperator {
   def of(storageType: StorageType): FsOperator = {
     storageType match {
       case StorageType.HDFS => HdfsOperator
-      case StorageType.LFS => LfsOperator
+      case StorageType.LFS => LFsOperator
       case _ => throw new UnsupportedOperationException(s"Unsupported storageType:$storageType")
     }
   }
```
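The `of(...)` method above is a small factory that dispatches on a storage-type enum and hands back a singleton operator. A standalone Java sketch of the same dispatch shape (all names here are hypothetical stand-ins, not the project's classes):

```java
// Sketch of the FsOperator.of(...) dispatch pattern from the diff: a factory
// that picks a singleton operator per storage type.
public class FsFactoryDemo {
    enum StorageType { HDFS, LFS }

    interface FsOperator { String name(); }

    static final FsOperator HDFS_OPERATOR = () -> "hdfs";
    static final FsOperator LFS_OPERATOR = () -> "lfs";

    static FsOperator of(StorageType storageType) {
        switch (storageType) {
            case HDFS: return HDFS_OPERATOR;
            case LFS:  return LFS_OPERATOR;
            default:
                // Mirrors the UnsupportedOperationException branch in the diff.
                throw new UnsupportedOperationException(
                    "Unsupported storageType:" + storageType);
        }
    }

    public static void main(String[] args) {
        assert of(StorageType.LFS).name().equals("lfs");
        assert of(StorageType.HDFS).name().equals("hdfs");
        System.out.println("ok");
    }
}
```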

streamx-common/src/main/scala/com/streamxhub/streamx/common/fs/HdfsOperator.scala (+1 -1)

```diff
@@ -37,7 +37,7 @@ object HdfsOperator extends FsOperator with Logger {
   override def move(srcPath: String, dstPath: String): Unit = HdfsUtils.move(toHdfsPath(srcPath), toHdfsPath(dstPath))
 
   override def upload(srcPath: String, dstPath: String, delSrc: Boolean, overwrite: Boolean): Unit =
-    HdfsUtils.upload(toHdfsPath(srcPath), toHdfsPath(dstPath), delSrc = delSrc, overwrite = overwrite)
+    HdfsUtils.upload(srcPath, toHdfsPath(dstPath), delSrc = delSrc, overwrite = overwrite)
 
   override def copy(srcPath: String, dstPath: String, delSrc: Boolean, overwrite: Boolean): Unit =
     HdfsUtils.copyHdfs(toHdfsPath(srcPath), toHdfsPath(dstPath), delSrc = delSrc, overwrite = overwrite)
```

streamx-common/src/main/scala/com/streamxhub/streamx/common/fs/LfsOperator.scala -> streamx-common/src/main/scala/com/streamxhub/streamx/common/fs/LFsOperator.scala (+1 -1)

```diff
@@ -33,7 +33,7 @@ import java.io.{File, FileInputStream}
  * Local File System (aka LFS) Operator
  */
 //noinspection DuplicatedCode
-object LfsOperator extends FsOperator with Logger {
+object LFsOperator extends FsOperator with Logger {
 
   override def exists(path: String): Boolean = {
     StringUtils.isNotBlank(path) && new File(path).exists()
```
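The renamed object's `exists` guards against blank input before touching the file system. A plain-JDK sketch of that check (no commons-lang dependency; the `exists` helper here is hypothetical):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the blank-guarded existence check shown for LFsOperator.exists,
// using JDK calls in place of commons-lang's StringUtils.isNotBlank.
public class LocalFsDemo {
    static boolean exists(String path) {
        // Guard against null/blank input before touching the file system.
        return path != null && !path.trim().isEmpty() && new File(path).exists();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("lfs-demo", ".txt");
        assert exists(tmp.toString());
        assert !exists("   ");
        assert !exists(null);
        Files.deleteIfExists(tmp);
        System.out.println("ok");
    }
}
```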

streamx-common/src/main/scala/com/streamxhub/streamx/common/util/DateUtils.scala (+1 -1)

```diff
@@ -24,7 +24,7 @@ import java.text.{ParseException, SimpleDateFormat}
 import java.time.LocalDateTime
 import java.time.format.DateTimeFormatter
 import java.util.concurrent.TimeUnit
-import java.util.{Calendar, TimeZone, _}
+import java.util._
 import scala.util._
 
 
```
streamx-common/src/main/scala/com/streamxhub/streamx/common/util/HadoopUtils.scala

+1-1
Original file line numberDiff line numberDiff line change
@@ -348,7 +348,7 @@ object HadoopUtils extends Logger {
348348
val tmpDir = FileUtils.createTempDir()
349349
val fs = FileSystem.get(new Configuration)
350350
val sourcePath = fs.makeQualified(new Path(jarOnHdfs))
351-
if (!fs.exists(sourcePath)) throw new IOException("jar file: " + jarOnHdfs + " doesn't exist.")
351+
if (!fs.exists(sourcePath)) throw new IOException(s"jar file: $jarOnHdfs doesn't exist.")
352352
val destPath = new Path(tmpDir.getAbsolutePath + "/" + sourcePath.getName)
353353
fs.copyToLocalFile(sourcePath, destPath)
354354
new File(destPath.toString).getAbsolutePath

streamx-common/src/main/scala/com/streamxhub/streamx/common/util/HdfsUtils.scala (+2 -2)

```diff
@@ -50,8 +50,8 @@ object HdfsUtils extends Logger {
   def upload(src: String, dst: String, delSrc: Boolean = false, overwrite: Boolean = true): Unit =
     HadoopUtils.hdfs.copyFromLocalFile(delSrc, overwrite, getPath(src), getPath(dst))
 
-  def upload2(srcs: Array[String], dst: String, delSrc: Boolean = false, overwrite: Boolean = true): Unit =
-    HadoopUtils.hdfs.copyFromLocalFile(delSrc, overwrite, srcs.map(getPath), getPath(dst))
+  def uploadMulti(src: Array[String], dst: String, delSrc: Boolean = false, overwrite: Boolean = true): Unit =
+    HadoopUtils.hdfs.copyFromLocalFile(delSrc, overwrite, src.map(getPath), getPath(dst))
 
   def download(src: String, dst: String, delSrc: Boolean = false, useRawLocalFileSystem: Boolean = false): Unit =
     HadoopUtils.hdfs.copyToLocalFile(delSrc, getPath(src), getPath(dst), useRawLocalFileSystem)
```
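The renamed `uploadMulti` maps each source path and copies the batch under one destination. A Hadoop-free sketch of the same idea using `java.nio` (class and method names here are hypothetical, purely for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Sketch of the uploadMulti idea from the diff: map several source paths and
// copy them under one destination directory, keeping each file name.
public class MultiUploadDemo {
    static void uploadMulti(String[] src, String dst) throws IOException {
        Path dstDir = Paths.get(dst);
        Files.createDirectories(dstDir);
        for (String s : src) {
            Path from = Paths.get(s);
            // Keep the source file name under the destination directory,
            // as copyFromLocalFile does for each source in the Hadoop API.
            Files.copy(from, dstDir.resolve(from.getFileName()),
                StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("up-src");
        Path a = Files.write(dir.resolve("a.txt"), "a".getBytes());
        Path b = Files.write(dir.resolve("b.txt"), "b".getBytes());
        Path dst = Files.createTempDirectory("up-dst");
        uploadMulti(new String[]{a.toString(), b.toString()}, dst.toString());
        assert Files.exists(dst.resolve("a.txt"));
        assert Files.exists(dst.resolve("b.txt"));
        System.out.println("ok");
    }
}
```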

streamx-common/src/main/scala/com/streamxhub/streamx/common/util/PropertiesUtils.scala (-1)

```diff
@@ -26,7 +26,6 @@ import java.io._
 import java.util.concurrent.atomic.AtomicInteger
 import java.util.{Properties, Scanner, HashMap => JavaMap, LinkedHashMap => JavaLinkedMap}
 import scala.collection.JavaConversions._
-import scala.collection.JavaConverters._
 import scala.collection.mutable.{Map => MutableMap}
 
 /**
```

streamx-common/src/main/scala/com/streamxhub/streamx/common/util/RedisClient.scala (+1 -1)

```diff
@@ -22,7 +22,7 @@
 package com.streamxhub.streamx.common.util
 
 import redis.clients.jedis.exceptions.JedisConnectionException
-import redis.clients.jedis.{Jedis, _}
+import redis.clients.jedis._
 
 import java.util.concurrent.ConcurrentHashMap
 import scala.annotation.meta.getter
```

streamx-common/src/main/scala/com/streamxhub/streamx/common/util/RedisUtils.scala (+1 -1)

```diff
@@ -24,8 +24,8 @@ import redis.clients.jedis.{Jedis, JedisCluster, Pipeline, ScanParams}
 
 import java.lang.{Integer => JInt}
 import java.util.Set
-import scala.collection.JavaConverters._
 import scala.collection.JavaConversions._
+import scala.collection.JavaConverters._
 import scala.collection.immutable
 import scala.util.{Failure, Success, Try}
 
```
streamx-common/src/main/scala/com/streamxhub/streamx/common/util/SqlConvertUtils.scala (+1 -1)

```diff
@@ -143,7 +143,7 @@ object SqlConvertUtils extends Logger {
     }
   }
 
-    val body = sql.substring(sql.indexOf("("),sql.lastIndexOf(")") + 1)
+    val body = sql.substring(sql.indexOf("("), sql.lastIndexOf(")") + 1)
       .replaceAll("\r\n", "")
       .replaceFirst("\\(", "(\n")
       .replaceFirst("\\)$", "\n)")
```
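The `body` expression above slices the parenthesized column list out of a DDL statement: substring from the first `(` through the last `)`, then re-break it onto its own lines. The same chain ported to Java for a runnable illustration (the class and method are hypothetical):

```java
// Sketch of the column-body extraction shown in SqlConvertUtils: take the
// substring from the first '(' through the last ')', strip CRLFs, then put
// the opening and closing parentheses on their own lines.
public class SqlBodyDemo {
    static String body(String sql) {
        return sql.substring(sql.indexOf("("), sql.lastIndexOf(")") + 1)
            .replaceAll("\r\n", "")
            .replaceFirst("\\(", "(\n")
            .replaceFirst("\\)$", "\n)");
    }

    public static void main(String[] args) {
        String out = body("create table t (id int, name string)");
        assert out.equals("(\nid int, name string\n)");
        System.out.println("ok");
    }
}
```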

0 commit comments