
Preface
Preparation
2.1. Basic Components
- JDK: Download JDK (1.8+, https://www.oracle.com/technetwork/java/javase/downloads/index.html), install it, configure the JAVA_HOME environment variable, and append its bin directory to the PATH environment variable. Skip this step if a JDK is already present in your environment.
- Binary package: Download the DolphinScheduler binary package from the download page (https://dolphinscheduler.apache.org/zh-cn/download).
- Database: PostgreSQL (8.2.15+, https://www.postgresql.org/download/) or MySQL (5.7+); either one works. For MySQL you need JDBC Driver 8, which can be downloaded from the Maven central repository.
- Registry: ZooKeeper (3.4.6+), download from https://zookeeper.apache.org/releases.html.
- Process-tree analysis: install pstree on macOS; install psmisc on Fedora/Red Hat/CentOS/Ubuntu/Debian.
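A quick sanity check of these prerequisites on each machine might look like the sketch below (package-manager commands vary by distribution):

java -version                 # expect 1.8 or later
echo "$JAVA_HOME"             # must be set and non-empty
command -v pstree || sudo yum install -y psmisc   # CentOS; use apt-get on Debian/Ubuntu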
Upload
tar -xvf apache-dolphinscheduler-3.1.7-bin.tar.gz
mv apache-dolphinscheduler-3.1.7-bin dolphinscheduler-3.1.7-origin
User
4.1. Configure the User with Passwordless sudo and Permissions
# Create the deploy user (must be logged in as root)
useradd dolphinscheduler
# Set its password
echo "dolphinscheduler" | passwd --stdin dolphinscheduler
# Configure passwordless sudo
sed -i '$adolphinscheduler ALL=(ALL) NOPASSWD: ALL' /etc/sudoers
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
# Change directory ownership so the deploy user can operate on the extracted
# apache-dolphinscheduler-*-bin directory
chown -R dolphinscheduler:dolphinscheduler apache-dolphinscheduler-*-bin
Because the task execution service runs jobs as different Linux users via sudo -u {linux-user} to implement multi-tenancy, the deploy user needs sudo privileges, and passwordless ones at that. Beginners who don't yet understand this can safely ignore it for now. If the /etc/sudoers file contains a "Defaults requiretty" line, comment it out as well.
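A quick check (not from the original text) that passwordless sudo took effect for the deploy user:

su - dolphinscheduler
# -n makes sudo fail instead of prompting, so success proves NOPASSWD works
sudo -n true && echo "passwordless sudo OK"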
4.2. Configure Passwordless SSH Login Between Machines
su dolphinscheduler
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Be sure to run the following command, otherwise passwordless login will fail
chmod 600 ~/.ssh/authorized_keys
Run ssh localhost to verify: if you can log in without entering a password, the setup succeeded.
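The step above only covers localhost. For a multi-node deployment, the deploy user on the machine that runs install.sh must also be able to reach every host later listed in install_env.sh's `ips` without a password; a hedged sketch, assuming the deploy user has already been created on each node:

for h in ds01 ds02 ds03 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06 hadoop07 hadoop08; do
  ssh-copy-id "dolphinscheduler@$h"
done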
Start ZooKeeper
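The article assumes an existing ZooKeeper (3.4.6+) installation; a minimal sketch, assuming ZooKeeper lives under /opt/zookeeper on each registry host (hadoop01-hadoop03 per the connect string configured later):

# Run on each ZooKeeper node
/opt/zookeeper/bin/zkServer.sh start
# Verify the ensemble (expect Mode: leader or follower)
/opt/zookeeper/bin/zkServer.sh status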
Modify Configuration
Two configuration files need to be modified: install_env.sh and dolphinscheduler_env.sh.
6.1. install_env.sh
The install_env.sh file configures which machines DolphinScheduler will be installed on, and which services each machine runs. It can be found under bin/env/; modify the settings as described below.
# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma-separated list of hostnames or IPs of the machines DolphinScheduler will be
# installed on, covering the master, worker, api, and alert roles. To deploy in
# pseudo-distributed mode, just write a single hostname.
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
# Configure which machines DolphinScheduler will be installed on
ips=${ips:-"ds01,ds02,ds03,hadoop02,hadoop03,hadoop04,hadoop05,hadoop06,hadoop07,hadoop08"}
# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
sshPort=${sshPort:-"22"}
# A comma-separated list of hostnames or IPs of the machines the Master server will be
# installed on; it must be a subset of `ips`.
# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
# Configure which machines run the master role
masters=${masters:-"ds01,ds02,ds03,hadoop04,hadoop05,hadoop06,hadoop07,hadoop08"}
# A comma-separated list of <hostname>:<workerGroup> or <IP>:<workerGroup> entries. All
# hostnames or IPs must be a subset of `ips`. workerGroup defaults to `default`, but we
# recommend declaring it explicitly after each host.
# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
# Configure which machines run the worker role; all are placed in the `default` worker
# group here. Other groups can be configured separately in the DolphinScheduler web UI.
workers=${workers:-"ds01:default,ds02:default,ds03:default,hadoop02:default,hadoop03:default,hadoop04:default,hadoop05:default,hadoop06:default,hadoop07:default,hadoop08:default"}
# The hostname or IP of the machine the Alert server will be installed on; it must be a
# member of `ips`.
# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
# Configure which machine runs the alert role; a single machine is enough
alertServer=${alertServer:-"hadoop03"}
# The hostname or IP of the machine the API server will be installed on; it must be a
# member of `ips`.
# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
# Configure which machine runs the api role; a single machine is enough
apiServers=${apiServers:-"hadoop04"}
# The directory DolphinScheduler will be installed into on every machine configured above.
# It will be created automatically by the `install.sh` script if it does not exist.
# Do not set it to the current path (pwd), and do not quote it if you use a relative path.
# Configure the installation path; services will be installed there on every machine in
# the cluster. Keep it distinct from the directory the binary package was extracted into,
# and preferably include the version number to simplify later upgrades.
installPath=${installPath:-"/opt/dolphinscheduler-3.1.7"}
# The user that deploys DolphinScheduler on all of the machines configured above. For now
# the user must be created manually before running the `install.sh` script. The user needs
# sudo privileges and permission to operate HDFS; if HDFS is enabled, the root directory
# must be created by this user.
# The deploy user; use the user created above
deployUser=${deployUser:-"dolphinscheduler"}
# The ZooKeeper root node; for now DolphinScheduler's default registry is ZooKeeper.
# The znode name registered in ZooKeeper; if you run multiple DolphinScheduler clusters,
# each one needs a different name
zkRoot=${zkRoot:-"/dolphinscheduler"}
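Before running the installer, it is worth verifying that the deploy user can reach every machine in `ips` without a password; a quick sketch using the hostnames configured above:

# Run as the deploy user on the machine that will execute install.sh
for h in ds01 ds02 ds03 hadoop02 hadoop03 hadoop04 hadoop05 hadoop06 hadoop07 hadoop08; do
  ssh -o BatchMode=yes "$h" hostname || echo "passwordless SSH to $h FAILED"
done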
6.2. dolphinscheduler_env.sh
This file can also be found under bin/env/; it configures the environment used at runtime. Modify the settings as described below:
# JDK path; be sure to change this
export JAVA_HOME=${JAVA_HOME:-/usr/java/jdk1.8.0_202}
# Database type; mysql and postgresql are supported
export DATABASE=${DATABASE:-mysql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
# JDBC connection URL; mainly change the hostname below. The serverTimezone at the end is set to UTC+8 (Asia/Shanghai).
export SPRING_DATASOURCE_URL="jdbc:mysql://hostname:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai"
export SPRING_DATASOURCE_USERNAME=dolphinscheduler
# If the password is complex, wrap it in ASCII single quotes
export SPRING_DATASOURCE_PASSWORD='xxxxxxxxxxxxx'
export SPRING_CACHE_TYPE=${SPRING_CACHE_TYPE:-none}
# Timezone used by each role's JVM at startup; the default is UTC. For full UTC+8 support, set it to GMT+8
export SPRING_JACKSON_TIME_ZONE=${SPRING_JACKSON_TIME_ZONE:-GMT+8}
export MASTER_FETCH_COMMAND_NUM=${MASTER_FETCH_COMMAND_NUM:-10}
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
# ZooKeeper connection addresses to use
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-hadoop01:2181,hadoop02:2181,hadoop03:2181}
# Environment variables for the components used by tasks; configure them as needed. All required components must be installed by yourself.
export HADOOP_HOME=${HADOOP_HOME:-/opt/cloudera/parcels/CDH/lib/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
export SPARK_HOME1=${SPARK_HOME1:-/opt/soft/spark1}
export SPARK_HOME2=${SPARK_HOME2:-/opt/spark-3.3.2}
export PYTHON_HOME=${PYTHON_HOME:-/opt/python-3.9.16}
export HIVE_HOME=${HIVE_HOME:-/opt/cloudera/parcels/CDH/lib/hive}
export FLINK_HOME=${FLINK_HOME:-/opt/flink-1.15.3}
export DATAX_HOME=${DATAX_HOME:-/opt/datax}
export SEATUNNEL_HOME=${SEATUNNEL_HOME:-/opt/seatunnel-2.1.3}
export CHUNJUN_HOME=${CHUNJUN_HOME:-/opt/soft/chunjun}
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$SEATUNNEL_HOME/bin:$CHUNJUN_HOME/bin:$PATH
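Before moving on, a small sketch to sanity-check the paths configured above (trim the variable list to the components you actually use):

# Verify that every configured *_HOME points at an existing directory
source bin/env/dolphinscheduler_env.sh
for v in JAVA_HOME HADOOP_HOME SPARK_HOME2 PYTHON_HOME HIVE_HOME FLINK_HOME DATAX_HOME; do
  p=${!v}
  [ -d "$p" ] && echo "OK      $v=$p" || echo "MISSING $v=$p"
done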
6.3. common.properties
Obtain the cluster's hdfs-site.xml and core-site.xml files and place them in the api-server/conf/ and worker-server/conf/ directories (a small copy sketch follows this paragraph). For a self-built vanilla Apache cluster, take them from each component's conf directory; with CDH they can be downloaded directly from the CDH web UI.
The common.properties file in the same api-server/conf/ and worker-server/conf/ directories mainly configures resource-upload parameters, for example uploading DolphinScheduler resources to HDFS. Modify it as annotated below:
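A hedged copy sketch, assuming the Hadoop client configuration lives under /etc/hadoop/conf (adjust to your cluster):

# Copy the Hadoop client configs into both services' conf directories
for d in api-server/conf worker-server/conf; do
  cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml "$d"/
done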
# Local path used to store temporary files generated by running tasks. Make sure the user has read/write access to it; the default is usually fine. If a task later fails with a permission error on files under this directory, simply chmod the directory to 777.
data.basedir.path=/tmp/dolphinscheduler
# resource view suffixs
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# Where resources are stored; available values: HDFS, S3, OSS, NONE
resource.storage.type=HDFS
# Base path for uploaded resources; it must start with dolphinscheduler, and the user must have read/write permission on the directory
resource.storage.upload.base.path=/dolphinscheduler
# The AWS access key. if resource.storage.type=S3 or use EMR-Task, This configuration is required
resource.aws.access.key.id=minioadmin
# The AWS secret access key. if resource.storage.type=S3 or use EMR-Task, This configuration is required
resource.aws.secret.access.key=minioadmin
# The AWS Region to use. if resource.storage.type=S3 or use EMR-Task, This configuration is required
resource.aws.region=cn-north-1
# The name of the bucket. You need to create them by yourself. Otherwise, the system cannot start. All buckets in Amazon S3 share a single namespace; ensure the bucket is given a unique name.
resource.aws.s3.bucket.name=dolphinscheduler
# Required when using a private-cloud S3. For public-cloud S3 you only need to set resource.aws.region, or set this to the public-cloud endpoint, e.g. s3.cn-north-1.amazonaws.com.cn
resource.aws.s3.endpoint=http://localhost:9000
# alibaba cloud access key id, required if you set resource.storage.type=OSS
resource.alibaba.cloud.access.key.id=<your-access-key-id>
# alibaba cloud access key secret, required if you set resource.storage.type=OSS
resource.alibaba.cloud.access.key.secret=<your-access-key-secret>
# alibaba cloud region, required if you set resource.storage.type=OSS
resource.alibaba.cloud.region=cn-hangzhou
# oss bucket name, required if you set resource.storage.type=OSS
resource.alibaba.cloud.oss.bucket.name=dolphinscheduler
# oss bucket endpoint, required if you set resource.storage.type=OSS
resource.alibaba.cloud.oss.endpoint=https://oss-cn-hangzhou.aliyuncs.com
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
resource.hdfs.root.user=hdfs
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
#
resource.hdfs.fs.defaultFS=hdfs://bigdata:8020
# whether to startup kerberos
hadoop.security.authentication.startup.state=false
# java.security.krb5.conf path
java.security.krb5.conf.path=/opt/krb5.conf
# login user from keytab username
login.user.keytab.username=hdfs-mycluster@ESZ.COM
# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab
# kerberos expire time, the unit is hour
kerberos.expire.time=2
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=hadoop02,hadoop03
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://ds1:%s/ws/v1/cluster/apps/%s
# job history status url, used when the application number threshold is reached (default 10000; it may have been set to 1000)
yarn.job.history.status.address=http://hadoop02:19888/ws/v1/history/mapreduce/jobs/%s
# datasource encryption enable
datasource.encryption.enable=false
# datasource encryption salt
datasource.encryption.salt=!@#$%^&*
# data quality option
data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
#data-quality.error.output.path=/tmp/data-quality-error-data
# Whether hive SQL is executed in the same session
support.hive.oneSession=false
# use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true
setTaskDirToTenant.enable=false
# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=
# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default
# system env path
#dolphinscheduler.env.path=dolphinscheduler_env.sh
# development state
development.state=false
# rpc port
alert.rpc.port=50052
# set path of conda.sh
conda.path=/opt/anaconda3/etc/profile.d/conda.sh
# Task resource limit state
task.resource.limit.state=false
# mlflow task plugin preset repository
ml.mlflow.preset_repository=https://github.com/apache/dolphinscheduler-mlflow
# mlflow task plugin preset repository version
ml.mlflow.preset_repository_version="main"
6.4. application.yaml
Each service ships its own application.yaml under <service>/conf/. To fully support UTC+8, adjust the spring.jackson settings as shown below (a batch-edit sketch follows the snippet):
spring:
banner:
charset: UTF-8
jackson:
# Set the timezone to UTC+8; this is the only place that needs changing
time-zone: GMT+8
date-format: "yyyy-MM-dd HH:mm:ss"
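A hedged batch edit across all services, assuming the shipped default value is time-zone: UTC:

# Patch the jackson time-zone in every service's application.yaml
for svc in api-server alert-server master-server worker-server; do
  sed -i 's/time-zone: UTC/time-zone: GMT+8/' "$svc/conf/application.yaml"
done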
6.5. service.57a50399.js and service.57a50399.js.gz
These two files can be found under the api-server/ui/assets/ and ui/assets/ directories (the hash in the file name may differ between builds).

Initialize the Database
Driver configuration
Copy the MySQL JDBC driver (JDBC Driver 8) into the libs directory of each service: api-server/libs, alert-server/libs, master-server/libs, worker-server/libs, and tools/libs (see the sketch below).
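A minimal sketch, assuming the driver jar downloaded earlier sits in the current directory (the exact jar name depends on the version you fetched):

# Copy the JDBC driver into every service's libs directory
for d in api-server alert-server master-server worker-server tools; do
  cp mysql-connector-java-8.0.*.jar "$d/libs/"
done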
Database user
Log in to MySQL as an administrator and create the database and user:
create database `dolphinscheduler` character set utf8mb4 collate utf8mb4_general_ci;
create user 'dolphinscheduler'@'%' IDENTIFIED WITH mysql_native_password by 'your_password';
grant ALL PRIVILEGES ON dolphinscheduler.* to 'dolphinscheduler'@'%';
flush privileges;
Run the database upgrade script:
bash tools/bin/upgrade-schema.sh
Install
Run bash ./bin/install.sh. Following the configuration in bin/env/install_env.sh, the script distributes DolphinScheduler to every machine and starts all services.
Start and Stop Services
# Stop all services in the cluster with one command
bash ./bin/stop-all.sh
# Start all services in the cluster with one command
bash ./bin/start-all.sh
# Start/stop Master
bash ./bin/dolphinscheduler-daemon.sh stop master-server
bash ./bin/dolphinscheduler-daemon.sh start master-server
# Start/stop Worker
bash ./bin/dolphinscheduler-daemon.sh start worker-server
bash ./bin/dolphinscheduler-daemon.sh stop worker-server
# Start/stop Api
bash ./bin/dolphinscheduler-daemon.sh start api-server
bash ./bin/dolphinscheduler-daemon.sh stop api-server
# Start/stop Alert
bash ./bin/dolphinscheduler-daemon.sh start alert-server
bash ./bin/dolphinscheduler-daemon.sh stop alert-server
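To confirm which roles are actually running on a node, you can check the Java processes; a quick sketch (the process names below are the ones used by the 3.1.x services):

# Expect MasterServer / WorkerServer / ApiApplicationServer / AlertServer
# on nodes where the corresponding role was started
jps | grep -E 'MasterServer|WorkerServer|ApiApplicationServer|AlertServer'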
Every service keeps its own dolphinscheduler_env.sh file at <service>/conf/dolphinscheduler_env.sh, which caters to per-service (micro-service) needs. This means you can configure <service>/conf/dolphinscheduler_env.sh for a given service and then start it with <service>/bin/start.sh, so each service can run with different environment variables. However, if you start a service with /bin/dolphinscheduler-daemon.sh start <service>, the daemon script first overwrites <service>/conf/dolphinscheduler_env.sh with bin/env/dolphinscheduler_env.sh and then starts the service; this is done to reduce the cost of maintaining configuration. The two startup paths are contrasted below.
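A small illustration of the two paths, using api-server as the example:

# Start api-server with its own per-service environment file:
api-server/bin/start.sh
# Or start it via the daemon script, which first copies
# bin/env/dolphinscheduler_env.sh over api-server/conf/dolphinscheduler_env.sh:
bash ./bin/dolphinscheduler-daemon.sh start api-server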
Scale Out
10.1. Standard Method
1. On the new node, install and configure the JDK, create the DolphinScheduler (Linux) user, and configure passwordless SSH login, permissions, and so on.
2. On the machine where the binary package was originally extracted, log in as the deploy user and edit the configuration file bin/env/install_env.sh, adding the roles to be deployed on the new node.
3. Run ./bin/install.sh. Following the configuration in bin/env/install_env.sh, the script scp's the whole directory to every machine again, stops the roles on all machines, and then starts all roles.
10.2. Simple Method
1. On the new node, install and configure the JDK, create the DolphinScheduler (Linux) user, and configure passwordless SSH login, permissions, and so on.
2. On the machine where the binary package was originally extracted, log in as the deploy user, compress the whole already-configured directory, and transfer it to the new node.
3. On the new node, extract the archive and rename it to the installation path configured in bin/env/install_env.sh. Then, logged in as the deploy user, start exactly the roles this node should run via /bin/dolphinscheduler-daemon.sh:
./dolphinscheduler-daemon.sh start master-server
./dolphinscheduler-daemon.sh start worker-server
4. Log in to the DolphinScheduler web UI and check in the Monitor center whether the corresponding roles have started on the new node.
Scale In
1. On the machine to be decommissioned, stop all roles running on it with the /bin/dolphinscheduler-daemon.sh script, for example:
./dolphinscheduler-daemon.sh stop worker-server
2. Log in to the DolphinScheduler web UI and confirm in the Monitor center that the roles just stopped on that machine have disappeared.
3. On the machine where the binary package was originally extracted, log in as the deploy user and edit bin/env/install_env.sh, removing the decommissioned machine from the corresponding roles.
Upgrade
1. Upload the new binary package.
2. Extract it into a directory different from the old installation directory, or rename it.
3. Modify the configuration files. The simplest way is to copy every configuration file touched in the steps above from the old installation directory into the new one, replacing the defaults.
4. Package the components deployed on other nodes and extract them to the corresponding locations on the new node; check the dolphinscheduler_env.sh configuration to see which components are needed.
5. Configure the JDBC driver, following the steps in "Initialize the Database".
6. Stop the old cluster.
7. Back up the entire database.
8. Run the database upgrade script, following the steps in "Initialize the Database".
9. Run the installation script, as described in "Install".
10. Once the upgrade completes, log in to the web UI and check the Monitor center to confirm that all roles started successfully.
A hedged sketch of steps 6-9 follows.
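Assuming MySQL, the paths used earlier, and a hypothetical new-version directory name:

cd /opt/dolphinscheduler-3.1.7              # old installation (path from install_env.sh)
bash ./bin/stop-all.sh                      # 6. stop the old cluster
mysqldump -u dolphinscheduler -p dolphinscheduler \
  > dolphinscheduler-backup-$(date +%F).sql # 7. back up the database
cd /path/to/new/apache-dolphinscheduler-bin # hypothetical new-version directory
bash tools/bin/upgrade-schema.sh            # 8. upgrade the schema
bash ./bin/install.sh                       # 9. install and start the new cluster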
