
Commit 2a3b92d

Merge 9c182ee into dde45db (2 parents: dde45db + 9c182ee)

18 files changed: +1173 / -7 lines


docs/docs/en/guide/resource/configuration.md

Lines changed: 11 additions & 2 deletions

```diff
@@ -1,7 +1,7 @@
 # Resource Center Configuration
 
 - You could use `Resource Center` to upload text files, UDFs and other task-related files.
-- You could configure `Resource Center` to use a distributed file system like [Hadoop](https://hadoop.apache.org/docs/r2.7.0/) (2.6+), a [MinIO](https://github.com/minio/minio) cluster, or remote storage products like [AWS S3](https://aws.amazon.com/s3/), [Alibaba Cloud OSS](https://www.aliyun.com/product/oss), etc.
+- You could configure `Resource Center` to use a distributed file system like [Hadoop](https://hadoop.apache.org/docs/r2.7.0/) (2.6+), a [MinIO](https://github.com/minio/minio) cluster, or remote storage products like [AWS S3](https://aws.amazon.com/s3/), [Alibaba Cloud OSS](https://www.aliyun.com/product/oss), [Huawei Cloud OBS](https://support.huaweicloud.com/obs/index.html), etc.
 - You could configure `Resource Center` to use the local file system. If you deploy `DolphinScheduler` in `Standalone` mode, you could configure it to use the local file system for `Resource Center` without the need of an external `HDFS` system or `S3`.
 - Furthermore, if you deploy `DolphinScheduler` in `Cluster` mode, you could use [S3FS-FUSE](https://github.com/s3fs-fuse/s3fs-fuse) to mount `S3` or [JINDO-FUSE](https://help.aliyun.com/document_detail/187410.html) to mount `OSS` to your machines and use the local file system for `Resource Center`. In this way, you could operate remote files as if on your local machines.
 
@@ -80,7 +80,7 @@ data.basedir.path=/tmp/dolphinscheduler
 # resource view suffixs
 #resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
 
-# resource storage type: HDFS, S3, OSS, GCS, ABS, NONE
+# resource storage type: LOCAL, HDFS, S3, OSS, GCS, ABS, OBS, NONE
 resource.storage.type=NONE
 # resource store on HDFS/S3/OSS path, resource file will store to this base path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
 resource.storage.upload.base.path=/tmp/dolphinscheduler
@@ -107,6 +107,15 @@ resource.alibaba.cloud.oss.bucket.name=dolphinscheduler
 # oss bucket endpoint, required if you set resource.storage.type=OSS
 resource.alibaba.cloud.oss.endpoint=https://oss-cn-hangzhou.aliyuncs.com
 
+# huawei cloud access key id, required if you set resource.storage.type=OBS
+resource.huawei.cloud.access.key.id=<your-access-key-id>
+# huawei cloud access key secret, required if you set resource.storage.type=OBS
+resource.huawei.cloud.access.key.secret=<your-access-key-secret>
+# obs bucket name, required if you set resource.storage.type=OBS
+resource.huawei.cloud.obs.bucket.name=dolphinscheduler
+# obs bucket endpoint, required if you set resource.storage.type=OBS
+resource.huawei.cloud.obs.endpoint=obs.cn-southwest-2.huaweicloud.com
+
 # if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
 resource.hdfs.root.user=hdfs
 # if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
```
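The four keys added above form a small configuration contract: all of them must be set once `resource.storage.type=OBS`. A minimal, hypothetical sketch of validating such a properties file with the JDK alone (class and method names are invented, not DolphinScheduler code):

```java
import java.io.StringReader;
import java.util.List;
import java.util.Properties;

// Hypothetical helper: reports which OBS keys documented above are still
// missing when OBS storage is selected.
public class ObsConfigCheck {

    static final List<String> REQUIRED_OBS_KEYS = List.of(
            "resource.huawei.cloud.access.key.id",
            "resource.huawei.cloud.access.key.secret",
            "resource.huawei.cloud.obs.bucket.name",
            "resource.huawei.cloud.obs.endpoint");

    // Returns the missing keys; an empty list means the OBS section is complete.
    // Non-OBS storage types need none of these keys, so they always pass.
    static List<String> missingObsKeys(Properties props) {
        if (!"OBS".equals(props.getProperty("resource.storage.type"))) {
            return List.of();
        }
        return REQUIRED_OBS_KEYS.stream()
                .filter(k -> props.getProperty(k) == null || props.getProperty(k).isBlank())
                .toList();
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same keys as in the diff above, with the endpoint deliberately omitted.
        props.load(new StringReader(String.join("\n",
                "resource.storage.type=OBS",
                "resource.huawei.cloud.access.key.id=<your-access-key-id>",
                "resource.huawei.cloud.access.key.secret=<your-access-key-secret>",
                "resource.huawei.cloud.obs.bucket.name=dolphinscheduler")));
        System.out.println(missingObsKeys(props)); // prints [resource.huawei.cloud.obs.endpoint]
    }
}
```

DolphinScheduler itself reads these keys through its own property utilities; the sketch only illustrates the all-or-nothing nature of the OBS section.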

docs/docs/zh/guide/resource/configuration.md

Lines changed: 11 additions & 2 deletions

```diff
@@ -1,7 +1,7 @@
 # Resource Center Configuration Details
 
 - The Resource Center is typically used for uploading files, UDF functions, and task group management.
-- The Resource Center can connect to a distributed file storage system such as a [Hadoop](https://hadoop.apache.org/docs/r2.7.0/) (2.6+) or [MinIO](https://github.com/minio/minio) cluster, or to remote object storage such as [AWS S3](https://aws.amazon.com/s3/) or [Alibaba Cloud OSS](https://www.aliyun.com/product/oss).
+- The Resource Center can connect to a distributed file storage system such as a [Hadoop](https://hadoop.apache.org/docs/r2.7.0/) (2.6+) or [MinIO](https://github.com/minio/minio) cluster, or to remote object storage such as [AWS S3](https://aws.amazon.com/s3/), [Alibaba Cloud OSS](https://www.aliyun.com/product/oss), or [Huawei Cloud OBS](https://support.huaweicloud.com/obs/index.html).
 - The Resource Center can also work directly against the local file system. In standalone mode you can use the local file system without depending on an external storage system such as `Hadoop` or `S3`.
 - In addition, in cluster deployments you can mount `S3` locally with [S3FS-FUSE](https://github.com/s3fs-fuse/s3fs-fuse), or mount `OSS` locally with [JINDO-FUSE](https://help.aliyun.com/document_detail/187410.html), and then point the Resource Center at the local file system to operate on files in remote object storage.
 
@@ -79,7 +79,7 @@ resource.aws.s3.endpoint=
 # user data local directory path, please make sure the directory exists and have read write permissions
 data.basedir.path=/tmp/dolphinscheduler
 
-# resource storage type: LOCAL, HDFS, S3, OSS, GCS, ABS
+# resource storage type: LOCAL, HDFS, S3, OSS, GCS, ABS, OBS, NONE
 resource.storage.type=LOCAL
 
 # resource store on HDFS/S3/OSS path, resource file will store to this hadoop hdfs path, self configuration,
@@ -108,6 +108,15 @@ resource.alibaba.cloud.oss.bucket.name=dolphinscheduler
 # oss bucket endpoint, required if you set resource.storage.type=OSS
 resource.alibaba.cloud.oss.endpoint=https://oss-cn-hangzhou.aliyuncs.com
 
+# huawei cloud access key id, required if you set resource.storage.type=OBS
+resource.huawei.cloud.access.key.id=<your-access-key-id>
+# huawei cloud access key secret, required if you set resource.storage.type=OBS
+resource.huawei.cloud.access.key.secret=<your-access-key-secret>
+# obs bucket name, required if you set resource.storage.type=OBS
+resource.huawei.cloud.obs.bucket.name=dolphinscheduler
+# obs bucket endpoint, required if you set resource.storage.type=OBS
+resource.huawei.cloud.obs.endpoint=obs.cn-southwest-2.huaweicloud.com
+
 # if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
 resource.hdfs.root.user=root
 # if resource.storage.type=S3, the value like: s3a://dolphinscheduler;
```

dolphinscheduler-bom/pom.xml

Lines changed: 18 additions & 0 deletions

```diff
@@ -115,6 +115,7 @@
         <casdoor.version>1.6.0</casdoor.version>
         <azure-sdk-bom.version>1.2.10</azure-sdk-bom.version>
         <protobuf.version>3.17.2</protobuf.version>
+        <esdk-obs.version>3.23.3</esdk-obs.version>
         <system-lambda.version>1.2.1</system-lambda.version>
     </properties>
@@ -896,6 +897,23 @@
                 <artifactId>protobuf-java</artifactId>
                 <version>${protobuf.version}</version>
             </dependency>
+
+            <dependency>
+                <groupId>com.huaweicloud</groupId>
+                <artifactId>esdk-obs-java-bundle</artifactId>
+                <version>${esdk-obs.version}</version>
+                <exclusions>
+                    <exclusion>
+                        <groupId>org.apache.logging.log4j</groupId>
+                        <artifactId>log4j-core</artifactId>
+                    </exclusion>
+                    <exclusion>
+                        <groupId>org.apache.logging.log4j</groupId>
+                        <artifactId>log4j-api</artifactId>
+                    </exclusion>
+                </exclusions>
+            </dependency>
+
             <dependency>
                 <groupId>com.github.stefanbirkner</groupId>
                 <artifactId>system-lambda</artifactId>
```

dolphinscheduler-common/pom.xml

Lines changed: 5 additions & 0 deletions

```diff
@@ -93,6 +93,11 @@
             <artifactId>aws-java-sdk-s3</artifactId>
         </dependency>
 
+        <dependency>
+            <groupId>com.huaweicloud</groupId>
+            <artifactId>esdk-obs-java-bundle</artifactId>
+        </dependency>
+
         <dependency>
             <groupId>com.github.oshi</groupId>
             <artifactId>oshi-core</artifactId>
```

dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/constants/Constants.java

Lines changed: 3 additions & 0 deletions

```diff
@@ -147,6 +147,9 @@ private Constants() {
 
     public static final String AZURE_BLOB_STORAGE_ACCOUNT_NAME = "resource.azure.blob.storage.account.name";
 
+    public static final String HUAWEI_CLOUD_OBS_BUCKET_NAME = "resource.huawei.cloud.obs.bucket.name";
+    public static final String HUAWEI_CLOUD_OBS_END_POINT = "resource.huawei.cloud.obs.endpoint";
+
     /**
      * fetch applicationId way
      */
```

dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/ResUploadType.java

Lines changed: 1 addition & 1 deletion

```diff
@@ -21,5 +21,5 @@
  * data base types
  */
 public enum ResUploadType {
-    LOCAL, HDFS, S3, OSS, GCS, ABS, NONE
+    LOCAL, HDFS, S3, OSS, GCS, ABS, OBS, NONE
 }
```
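The value of `resource.storage.type` from the properties file is matched against this enum. A self-contained sketch of that lookup (the enum constants mirror the diff above; the `parse` helper with its `NONE` fallback is illustrative, not the project's actual logic):

```java
// Illustrative sketch: map the configured storage-type string to an enum
// constant, falling back to NONE for unknown or missing values.
public class ResUploadTypeDemo {

    enum ResUploadType {
        LOCAL, HDFS, S3, OSS, GCS, ABS, OBS, NONE
    }

    // Case-insensitive parse; the real project may instead fail fast on
    // an unrecognized value.
    static ResUploadType parse(String value) {
        if (value == null) {
            return ResUploadType.NONE;
        }
        try {
            return ResUploadType.valueOf(value.trim().toUpperCase());
        } catch (IllegalArgumentException e) {
            return ResUploadType.NONE;
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("OBS")); // OBS
        System.out.println(parse("obs")); // OBS
        System.out.println(parse("FTP")); // NONE (not a supported storage type)
    }
}
```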

dolphinscheduler-common/src/main/resources/common.properties

Lines changed: 12 additions & 1 deletion

```diff
@@ -21,7 +21,7 @@ data.basedir.path=/tmp/dolphinscheduler
 # resource view suffixs
 #resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
 
-# resource storage type: LOCAL, HDFS, S3, OSS, GCS, ABS, NONE. LOCAL type is a specific type of HDFS with "resource.hdfs.fs.defaultFS = file:///" configuration
+# resource storage type: LOCAL, HDFS, S3, OSS, GCS, ABS, OBS, NONE. LOCAL type is a specific type of HDFS with "resource.hdfs.fs.defaultFS = file:///" configuration
 # please notice that LOCAL mode does not support reading and writing in distributed mode, which mean you can only use your resource in one machine, unless
 # use shared file mount point
 resource.storage.type=LOCAL
@@ -73,6 +73,17 @@ resource.azure.blob.storage.account.name=<your-account-name>
 # abs connection string, required if you set resource.storage.type=ABS
 resource.azure.blob.storage.connection.string=<your-connection-string>
 
+
+# huawei cloud access key id, required if you set resource.storage.type=OBS
+resource.huawei.cloud.access.key.id=<your-access-key-id>
+# huawei cloud access key secret, required if you set resource.storage.type=OBS
+resource.huawei.cloud.access.key.secret=<your-access-key-secret>
+# obs bucket name, required if you set resource.storage.type=OBS
+resource.huawei.cloud.obs.bucket.name=dolphinscheduler
+# obs bucket endpoint, required if you set resource.storage.type=OBS
+resource.huawei.cloud.obs.endpoint=obs.cn-southwest-2.huaweicloud.com
+
+
 # if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
 resource.hdfs.root.user=hdfs
 # if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
```
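One detail worth noting in the defaults above: the OBS endpoint is written without a scheme (`obs.cn-southwest-2.huaweicloud.com`), while the OSS endpoint includes `https://`. Storage SDKs generally accept both forms; if you needed to normalize such values yourself, a hypothetical helper (not part of DolphinScheduler) might look like this:

```java
// Hypothetical helper: bring endpoints with or without a scheme, and with a
// trailing slash, into one canonical form so they compare equal.
public class EndpointNormalizer {

    static String normalize(String endpoint) {
        String e = endpoint.trim();
        if (!e.startsWith("https://") && !e.startsWith("http://")) {
            e = "https://" + e; // assume TLS when no scheme is given
        }
        if (e.endsWith("/")) {
            e = e.substring(0, e.length() - 1); // drop trailing slash
        }
        return e;
    }

    public static void main(String[] args) {
        System.out.println(normalize("obs.cn-southwest-2.huaweicloud.com"));
        // https://obs.cn-southwest-2.huaweicloud.com
        System.out.println(normalize("https://oss-cn-hangzhou.aliyuncs.com/"));
        // https://oss-cn-hangzhou.aliyuncs.com
    }
}
```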

dolphinscheduler-dist/release-docs/LICENSE

Lines changed: 1 addition & 0 deletions

```diff
@@ -567,6 +567,7 @@ The text of each license is also included at licenses/LICENSE-[project].txt.
 casdoor-spring-boot-starter 1.6.0 https://mvnrepository.com/artifact/org.casbin/casdoor-spring-boot-starter/1.6.0 Apache 2.0
 org.apache.oltu.oauth2.client 1.0.2 https://mvnrepository.com/artifact/org.apache.oltu.oauth2/org.apache.oltu.oauth2.client/1.0.2 Apache 2.0
 org.apache.oltu.oauth2.common 1.0.2 https://mvnrepository.com/artifact/org.apache.oltu.oauth2/org.apache.oltu.oauth2.common/1.0.2 Apache 2.0
+esdk-obs-java-bundle 3.23.3 https://mvnrepository.com/artifact/com.huaweicloud/esdk-obs-java-bundle/3.23.3 Apache 2.0
 
```