Recommended Hardware and Software Configuration
As an open-source distributed workflow task scheduling system, DolphinScheduler deploys and runs well on Intel-architecture servers and mainstream virtualization environments, and supports the mainstream Linux operating systems.
1. Linux operating system version requirements
Operating system  Version
Red Hat Enterprise Linux  7.0 and above
CentOS  7.0 and above
Oracle Enterprise Linux  7.0 and above
Ubuntu LTS  16.04 and above
Note: the Linux distributions above can run on physical servers as well as on mainstream virtualization platforms such as VMware, KVM, and XEN.
2. 服务器建议配置
DolphinScheduler 支持运行在 Intel x86-64 架构的 64 位通用硬件服务器平台。对生产环境的服务器硬件配置有以下建议:
生产环境
CPU 内存 硬盘类型 网络 实例数量
4核+ 8 GB+ SAS 千兆网卡 1+
Note:
- The configuration above is the minimum for deploying DolphinScheduler; much higher specifications are strongly recommended for production.
- A disk of 50 GB+ is recommended, with the system disk and data disk kept separate.
3. Network requirements
DolphinScheduler needs the following network port configuration to run normally:
Component  Default port  Description
MasterServer  5566  not a communication port; only needs to be conflict-free on the local host
WorkerServer  7788  not a communication port; only needs to be conflict-free on the local host
ApiApplicationServer  12345  backend communication port
nginx  8888  UI communication port
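Before opening these ports on a host, you can quickly check whether the defaults are already taken; a sketch using bash's built-in /dev/tcp pseudo-device (the port list comes from the table above; no extra tools required):

```shell
#!/usr/bin/env bash
# Report whether a local TCP port is already in use, via bash's /dev/tcp.
check_port() {
  local port="$1"
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: IN USE"
  else
    echo "port ${port}: free"
  fi
}

# Default ports from the table above: MasterServer, WorkerServer, api, nginx
for p in 5566 7788 12345 8888; do
  check_port "$p"
done
```

Any port reported IN USE needs either the conflicting service moved or the DolphinScheduler default changed before deployment.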
Note:
- MasterServer and WorkerServer do not need to communicate with each other across the network; their ports only need to be conflict-free on the local host
- Administrators can open the relevant ports on the network and host side according to the actual DolphinScheduler component deployment plan
4. Client web browser requirements
DolphinScheduler recommends Chrome, or a recent browser based on the Chrome engine, for accessing the front-end UI.
Cluster Deployment
DolphinScheduler cluster deployment consists of two parts: backend deployment and front-end deployment.
1. Backend deployment
1.1 : Install base software (install the required items yourself)
- PostgreSQL (8.2.15+) or MySQL (5.7 series): either one
- JDK (1.8+): required; after installing, configure the JAVA_HOME and PATH variables in /etc/profile
- ZooKeeper (3.4.6+): required
- Hadoop (2.6+) or MinIO: optional; needed only for the resource upload feature, which can upload to Hadoop or MinIO
Note: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only invokes their clients to submit the corresponding tasks.
1. Install MySQL (5.7 series)
cd /workspace/
wget https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
yum -y localinstall mysql57-community-release-el7-11.noarch.rpm
yum -y install mysql-community-server
systemctl start mysqld
systemctl enable mysqld
systemctl daemon-reload
#grep the temporary root password from the log
egrep "root@localhost" /var/log/mysqld.log
2020-03-19T11:07:17.431952Z 1 [Note] A temporary password is generated for root@localhost: D+Kp_9nuVge:
mysql -u root -pD+Kp_9nuVge:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'Test2020@';
#allow remote login
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'Test2020@' WITH GRANT OPTION;
mysql> exit
Bye
##edit /etc/my.cnf and add the encoding settings under [mysqld]
character_set_server=utf8
init_connect='SET NAMES utf8'
##log in again and verify
mysql> show variables like '%character%';
+--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8 |
| character_set_connection | utf8 |
| character_set_database | utf8 |
| character_set_filesystem | binary |
| character_set_results | utf8 |
| character_set_server | utf8 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.01 sec)
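The /etc/my.cnf edit above can also be scripted; a sketch that appends the two settings under [mysqld] (it writes to a scratch copy at /tmp/my.cnf so it is safe to run anywhere; on a real host, edit /etc/my.cnf itself and then restart mysqld):

```shell
# Append the UTF-8 settings under [mysqld]; /tmp/my.cnf stands in for
# /etc/my.cnf here so the sketch has no side effects.
cfg=/tmp/my.cnf
printf '[mysqld]\n' > "$cfg"
cat >> "$cfg" <<'EOF'
character_set_server=utf8
init_connect='SET NAMES utf8'
EOF
grep -q '^character_set_server=utf8$' "$cfg" && echo "charset configured"
```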
2. Install JDK (1.8+)
#download jdk-8u241-linux-x64.tar.gz
tar xf jdk-8u241-linux-x64.tar.gz
ln -s /workspace/jdk1.8.0_241 /workspace/jdk
sed -i.ori '$a export JAVA_HOME=/workspace/jdk\nexport PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH\nexport CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar' /etc/profile
[root@172-20-100-107 workspace]# source /etc/profile
[root@172-20-100-107 workspace]# java -version
java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)
3. Install ZooKeeper (3.4.6+)
Download: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/stable/
#download ZooKeeper 3.5.7 with wget and extract it; these steps run on the 172.20.100.107 machine
cd /workspace/
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/stable/apache-zookeeper-3.5.7-bin.tar.gz
tar xf apache-zookeeper-3.5.7-bin.tar.gz
mv apache-zookeeper-3.5.7-bin zookeeper-3.5.7
cd zookeeper-3.5.7/
#configure ZooKeeper 3.5.7
mv conf/zoo_sample.cfg conf/zoo.cfg
vim conf/zoo.cfg
#comment out the default dataDir=/tmp/zookeeper first, then append the following at the end of the file:
dataDir=/workspace/zookeeper-3.5.7/data
dataLogDir=/workspace/zookeeper-3.5.7/log
server.1=172.20.100.107:2888:3888
server.2=172.20.100.106:2888:3888
server.3=172.20.100.105:2888:3888
Configuration notes:
tickTime: the heartbeat interval between ZooKeeper servers, and between clients and servers; a heartbeat is sent every tickTime.
initLimit: the maximum number of heartbeat intervals ZooKeeper tolerates while a client initializes its connection. "Client" here does not mean a user client connecting to a ZooKeeper server, but a Follower server within the ensemble connecting to the Leader.
If the server has still received no response after 5 heartbeats (tickTime), the connection is considered failed; the total time is 5*2000 ms = 10 seconds.
syncLimit: the maximum number of tickTime intervals allowed for message exchange, requests, and replies between Leader and Follower; the total here is likewise 5*2000 ms = 10 seconds.
dataDir: where snapshot logs are stored.
dataLogDir: where transaction logs are stored. If unset, transaction logs default to the directory given by dataDir, which seriously hurts ZooKeeper performance: under high throughput, too many transaction logs and snapshots accumulate in one place.
clientPort: the port clients use to connect to the ZooKeeper server; ZooKeeper listens on it and accepts client requests. It can be changed if it conflicts with something else.
autopurge.purgeInterval: the log-purge interval in hours; set an integer of 1 or greater. The default 0 disables automatic purging.
autopurge.snapRetainCount: used together with the previous parameter; the number of files to retain, 3 by default.
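Putting the parameters above together, a minimal zoo.cfg for the three-node example cluster might look like the following (written to /tmp here for illustration; on a real node the content belongs in /workspace/zookeeper-3.5.7/conf/zoo.cfg, and initLimit/syncLimit use the values from the explanation above):

```shell
# Generate a minimal zoo.cfg combining the parameters explained above.
cat > /tmp/zoo.cfg <<'EOF'
tickTime=2000
initLimit=5
syncLimit=5
dataDir=/workspace/zookeeper-3.5.7/data
dataLogDir=/workspace/zookeeper-3.5.7/log
clientPort=2181
autopurge.purgeInterval=1
autopurge.snapRetainCount=3
server.1=172.20.100.107:2888:3888
server.2=172.20.100.106:2888:3888
server.3=172.20.100.105:2888:3888
EOF
grep -c '^server\.' /tmp/zoo.cfg   # expect 3 ensemble members
```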
Create the myid file
mkdir -p /workspace/zookeeper-3.5.7/data #create the data directory configured in zoo.cfg
cd /workspace/zookeeper-3.5.7/data
touch myid #create the myid file
echo "1">>myid #write 1 into myid, matching the X in server.X={IP}:2888:3888
##copy the ZooKeeper configured on 172.20.100.107 above to the 106 and 105 machines
scp -r /workspace/zookeeper-3.5.7 172.20.100.106:/workspace/ #copy the configured zookeeper to 106
scp -r /workspace/zookeeper-3.5.7 172.20.100.105:/workspace/ #copy the configured zookeeper to 105
##change /workspace/zookeeper-3.5.7/data/myid on 106 and 105 to the matching values
ssh 172.20.100.106
cd /workspace/zookeeper-3.5.7/data/
rm -f ./myid
echo "2">>myid #write 2 into myid, matching the X in server.X={IP}:2888:3888; here it is 2
ssh 172.20.100.105
cd /workspace/zookeeper-3.5.7/data/
rm -f ./myid
echo "3">>myid #write 3 into myid, matching the X in server.X={IP}:2888:3888; here it is 3
Open the ZooKeeper ports
##if the firewall is enabled, these ports need to be opened
iptables -A INPUT -p tcp --dport 2888 -j ACCEPT
iptables -A INPUT -p tcp --dport 3888 -j ACCEPT
iptables -A INPUT -p tcp --dport 2181 -j ACCEPT
Add environment variables
vim /etc/profile
# zookeeper
export ZK_HOME=/workspace/zookeeper-3.5.7
export PATH=$ZK_HOME/bin:$PATH
##apply the changes
source /etc/profile
Start ZooKeeper
zkServer.sh start #run this on all three machines; otherwise zkServer.sh status may report
#the error "Error contacting service. It is probably not running."
#detailed logs can be found in $ZK_HOME/zookeeper.out
[root@node02 ~]# zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /workspace/zookeeper-3.5.7/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Check the status
#172.20.100.107 node
[root@172-20-100-107 ~]# zkServer.sh status #check this machine's zookeeper status
ZooKeeper JMX enabled by default
Using config: /workspace/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
#172.20.100.106 node
[root@172-20-100-106 ~]# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /workspace/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
#172.20.100.105 node
[root@node02 ~]# zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /workspace/zookeeper-3.5.7/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
Connect to ZooKeeper with the client
[root@172-20-100-107 ~]# zkCli.sh -server 172.20.100.107:2181 ##output like the following means ZooKeeper was installed successfully
Connecting to 172.20.100.107:2181
2020-03-20 12:06:45,378 [myid:] - INFO [main:Environment@109] - Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
2020-03-20 12:06:45,383 [myid:] - INFO [main:Environment@109] - Client environment:host.name=<NA>
2020-03-20 12:06:45,383 [myid:] - INFO [main:Environment@109] - Client environment:java.version=1.8.0_241
2020-03-20 12:06:45,387 [myid:] - INFO [main:Environment@109] - Client environment:java.vendor=Oracle Corporation
2020-03-20 12:06:45,388 [myid:] - INFO [main:Environment@109] - Client environment:java.home=/workspace/jdk1.8.0_241/jre
2020-03-20 12:06:45,388 [myid:] - INFO [main:Environment@109] - Client environment:java.class.path=/workspace/zookeeper-3.5.7/bin/../zookeeper-server/target/classes:/workspace/zookeeper-3.5.7/bin/../build/classes:/workspace/zookeeper-3.5.7/bin/../zookeeper-server/target/lib/*.jar:/workspace/zookeeper-3.5.7/bin/../build/lib/*.jar:/workspace/zookeeper-3.5.7/bin/../lib/zookeeper-jute-3.5.7.jar:/workspace/zookeeper-3.5.7/bin/../lib/zookeeper-3.5.7.jar:/workspace/zookeeper-3.5.7/bin/../lib/slf4j-log4j12-1.7.25.jar:/workspace/zookeeper-3.5.7/bin/../lib/slf4j-api-1.7.25.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-transport-native-unix-common-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-transport-native-epoll-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-transport-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-resolver-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-handler-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-common-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-codec-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/netty-buffer-4.1.45.Final.jar:/workspace/zookeeper-3.5.7/bin/../lib/log4j-1.2.17.jar:/workspace/zookeeper-3.5.7/bin/../lib/json-simple-1.1.1.jar:/workspace/zookeeper-3.5.7/bin/../lib/jline-2.11.jar:/workspace/zookeeper-3.5.7/bin/../lib/jetty-util-9.4.24.v20191120.jar:/workspace/zookeeper-3.5.7/bin/../lib/jetty-servlet-9.4.24.v20191120.jar:/workspace/zookeeper-3.5.7/bin/../lib/jetty-server-9.4.24.v20191120.jar:/workspace/zookeeper-3.5.7/bin/../lib/jetty-security-9.4.24.v20191120.jar:/workspace/zookeeper-3.5.7/bin/../lib/jetty-io-9.4.24.v20191120.jar:/workspace/zookeeper-3.5.7/bin/../lib/jetty-http-9.4.24.v20191120.jar:/workspace/zookeeper-3.5.7/bin/../lib/javax.servlet-api-3.1.0.jar:/workspace/zookeeper-3.5.7/bin/../lib/jackson-databind-2.9.10.2.jar:/workspace/zookeeper-3.5.7/bin/../lib/jackson-core-2.9.10.jar:/workspace/zookeeper-3.5.7/bin/../lib/jackson-an
notations-2.9.10.jar:/workspace/zookeeper-3.5.7/bin/../lib/commons-cli-1.2.jar:/workspace/zookeeper-3.5.7/bin/../lib/audience-annotations-0.5.0.jar:/workspace/zookeeper-3.5.7/bin/../zookeeper-*.jar:/workspace/zookeeper-3.5.7/bin/../zookeeper-server/src/main/resources/lib/*.jar:/workspace/zookeeper-3.5.7/bin/../conf:.:/workspace/jdk/lib:/workspace/jdk/jre/lib:/workspace/jdk/lib/tools.jar
2020-03-20 12:06:45,388 [myid:] - INFO [main:Environment@109] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2020-03-20 12:06:45,388 [myid:] - INFO [main:Environment@109] - Client environment:java.io.tmpdir=/tmp
2020-03-20 12:06:45,389 [myid:] - INFO [main:Environment@109] - Client environment:java.compiler=<NA>
2020-03-20 12:06:45,389 [myid:] - INFO [main:Environment@109] - Client environment:os.name=Linux
2020-03-20 12:06:45,389 [myid:] - INFO [main:Environment@109] - Client environment:os.arch=amd64
2020-03-20 12:06:45,389 [myid:] - INFO [main:Environment@109] - Client environment:os.version=3.10.0-957.1.3.el7.x86_64
2020-03-20 12:06:45,390 [myid:] - INFO [main:Environment@109] - Client environment:user.name=root
2020-03-20 12:06:45,390 [myid:] - INFO [main:Environment@109] - Client environment:user.home=/root
2020-03-20 12:06:45,390 [myid:] - INFO [main:Environment@109] - Client environment:user.dir=/root
2020-03-20 12:06:45,390 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.free=113MB
2020-03-20 12:06:45,393 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.max=228MB
2020-03-20 12:06:45,393 [myid:] - INFO [main:Environment@109] - Client environment:os.memory.total=119MB
2020-03-20 12:06:45,398 [myid:] - INFO [main:ZooKeeper@868] - Initiating client connection, connectString=172.20.100.107:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@759ebb3d
2020-03-20 12:06:45,407 [myid:] - INFO [main:X509Util@79] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2020-03-20 12:06:45,418 [myid:] - INFO [main:ClientCnxnSocket@237] - jute.maxbuffer value is 4194304 Bytes
2020-03-20 12:06:45,430 [myid:] - INFO [main:ClientCnxn@1653] - zookeeper.request.timeout value is 0. feature enabled=
Welcome to ZooKeeper!
2020-03-20 12:06:45,444 [myid:172.20.100.107:2181] - INFO [main-SendThread(172.20.100.107:2181):ClientCnxn$SendThread@1112] - Opening socket connection to server 172.20.100.107/172.20.100.107:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2020-03-20 12:06:45,553 [myid:172.20.100.107:2181] - INFO [main-SendThread(172.20.100.107:2181):ClientCnxn$SendThread@959] - Socket connection established, initiating session, client: /172.20.100.107:24774, server: 172.20.100.107/172.20.100.107:2181
[zk: 172.20.100.107:2181(CONNECTING) 0] 2020-03-20 12:06:45,628 [myid:172.20.100.107:2181] - INFO [main-SendThread(172.20.100.107:2181):ClientCnxn$SendThread@1394] - Session establishment complete on server 172.20.100.107/172.20.100.107:2181, sessionid = 0x1024dcdb7f50000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
4. Set up Hadoop (2.6+)
Preface: system requirements
1: CentOS 7
2: Hadoop 2.7.3
3: JDK 1.8
Download: http://hadoop.apache.org/releases.html
Installation
##download
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
##extract
tar zxvf hadoop-2.7.3.tar.gz
##configure hadoop
Configure the Hadoop environment variables
vim /etc/profile
#hadoop
export HADOOP_HOME=/workspace/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile
1.2 : Download the backend tar.gz package
- Download the latest backend package to the deployment directory on the server, e.g. create /opt/dolphinscheduler as the install directory (download link below; version 1.2.0 is used as the example), upload the tarball there, and extract it
# create the deployment directory; do not create it under high-privilege directories such as /root or /home
mkdir -p /opt/dolphinscheduler;
cd /opt/dolphinscheduler;
# download and extract
wget http://apache.communilink.net/incubator/dolphinscheduler/1.2.0/apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin.tar.gz
tar -zxvf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin.tar.gz -C /opt/dolphinscheduler;
mv apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-backend-bin dolphinscheduler-backend
1.3 : Create the deployment user and hosts mapping
Create the deployment user on every machine that will run the scheduler, and be sure to configure passwordless sudo. Suppose we plan to deploy the scheduler on the four machines ds1, ds2, ds3, ds4; first create the deployment user on each of them
# creating the user requires logging in as root; pick your own username, dolphinscheduler is used below
useradd dolphinscheduler;
# set the user's password; pick your own, dolphinscheduler123 is used below
echo "dolphinscheduler123" | passwd --stdin dolphinscheduler
# configure passwordless sudo
echo 'dolphinscheduler ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
- Multi-tenant jobs run by switching Linux users with sudo -u {linux-user}, so the deployment user must have sudo privileges, and passwordless ones.
- If /etc/sudoers contains a "Defaults requiretty" line, comment that out as well
- If resource upload is used, the deployment user also needs read/write permission on `HDFS or MinIO`
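A quick sketch to verify that the passwordless sudo set up above actually works, run as the deployment user (the -n flag makes sudo fail immediately instead of prompting, so the check is safe to run non-interactively):

```shell
# Verify passwordless sudo: with -n, sudo fails rather than asks for a password.
if sudo -n true 2>/dev/null; then
  msg="passwordless sudo: OK"
else
  msg="passwordless sudo: NOT configured (check /etc/sudoers)"
fi
echo "$msg"
```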
1.4 : Configure the hosts mapping, set up ssh trust, and fix directory permissions
- Use the first machine (hostname ds1) as the deployment machine; on ds1, logged in as root, add hosts entries for all machines to be deployed
vi /etc/hosts
#add ip hostname
172.20.xxx.xxx ds1
172.20.xxx.xxx ds2
172.20.xxx.xxx ds3
172.20.xxx.xxx ds4
Note: delete or comment out the 127.0.0.1 line
- Sync /etc/hosts from ds1 to all deployment machines
for ip in ds2 ds3;  #replace ds2 ds3 with the hostnames of your machines
do
  sudo scp -r /etc/hosts $ip:/etc/  #you will be asked for the root password while this runs
done
Note: of course, using
sshpass -p xxx sudo scp -r /etc/hosts $ip:/etc/
saves you from typing the password. Installing sshpass on CentOS:
- first install EPEL
yum install -y epel-release
yum repolist
- once EPEL is installed, sshpass can be installed
yum install -y sshpass
- On ds1, switch to the deployment user and configure passwordless ssh login to localhost
su dolphinscheduler;
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Note: once this is set up correctly, the dolphinscheduler user can run the command ssh localhost without being asked for a password
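That check can be scripted too; a sketch (BatchMode makes ssh fail immediately instead of prompting for a password, so it is safe to run non-interactively):

```shell
# Verify passwordless ssh to localhost for the current user.
if ssh -o BatchMode=yes -o StrictHostKeyChecking=no localhost true 2>/dev/null; then
  msg="ssh localhost: passwordless OK"
else
  msg="ssh localhost: password still required (check ~/.ssh permissions)"
fi
echo "$msg"
```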
- On ds1, push the deployment user dolphinscheduler's ssh key to the other machines to be deployed
su dolphinscheduler;
for ip in ds2 ds3;  #replace ds2 ds3 with the hostnames of your machines
do
  ssh-copy-id $ip  #you will be asked for the dolphinscheduler user's password during this step
done
# again, sshpass -p xxx ssh-copy-id $ip saves typing the password
- On ds1, change directory ownership so the deployment user can operate on the dolphinscheduler-backend directory
sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler-backend
1.5 : Initialize the database
- Connect to the database. The default database is PostgreSQL; if you choose MySQL, you will later need to add the mysql-connector-java driver jar to DolphinScheduler's lib directory. MySQL is used as the example here
mysql -uroot -p
- In the database command-line window, run the initialization commands and set the access account and password.
mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinuser'@'%' IDENTIFIED BY 'Dolphinuser@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinuser'@'localhost' IDENTIFIED BY 'Dolphinuser@123';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
- Create the tables and import the seed data
- Edit the following settings in application-dao.properties in the conf directory
vi conf/application-dao.properties
- If you choose MySQL, comment out the PostgreSQL settings (and vice versa). You also need to manually add the [mysql-connector-java driver jar] to the lib directory; mysql-connector-java-5.1.47.jar is downloaded here. Then fill in the database connection details correctly
- wget https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.47.tar.gz
#postgre
#spring.datasource.driver-class-name=org.postgresql.Driver
#spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
# mysql
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8  #change the ip
spring.datasource.username=xxx  #change to the {user} value set above
spring.datasource.password=xxx  #change to the {password} value set above
- After editing and saving, run the table-creation and seed-data script in the script directory
sh script/create-dolphinscheduler.sh
Note: if the script above fails with "/bin/java: No such file or directory", configure the JAVA_HOME and PATH variables in /etc/profile
1.6 : Adjust the runtime parameters
- Edit the environment variables in .dolphinscheduler_env.sh under the conf/env directory (assuming all the relevant software is installed under /opt/soft)
export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
#export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive
export FLINK_HOME=/opt/soft/flink
export PATH=$HADOOP_HOME/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$PATH
`Note: this step is very important. JAVA_HOME and PATH, for example, must be configured; entries you do not use can be skipped or commented out. If you cannot find .dolphinscheduler_env.sh, run ls -a`
- Symlink the jdk to /usr/bin/java (still assuming JAVA_HOME=/opt/soft/java)
sudo ln -s /opt/soft/java/bin/java /usr/bin/java
- Edit the parameters in the one-click deployment script install.sh, paying particular attention to the following
# mysql or postgresql
dbtype="mysql"
# database connection address
dbhost="192.168.xx.xx:3306"
# database name
dbname="dolphinscheduler"
# database username; change to the {user} value set above
username="xxx"
# database password; escape special characters with \; change to the {password} value set above
passowrd="xxx"
# directory DS will be installed into, e.g. /opt/soft/dolphinscheduler; must differ from the current directory
installPath="/opt/soft/dolphinscheduler"
# user that performs the deployment; use the user created in section 1.3
deployUser="dolphinscheduler"
# zookeeper address
zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
# machines DS services are deployed on
ips="ds1,ds2,ds3,ds4"
# machines the master service runs on
masters="ds1,ds2"
# machines the worker service runs on
workers="ds3,ds4"
# machine the alert service runs on
alertServer="ds2"
# machine the backend api service runs on
apiServers="ds1"
# mail settings, using a qq mailbox as the example
# mail protocol
mailProtocol="SMTP"
# mail server address
mailServerHost="smtp.exmail.qq.com"
# mail server port
mailServerPort="25"
# set mailSender and mailUser to the same value
# sender
mailSender="xxx@qq.com"
# sending user
mailUser="xxx@qq.com"
# mailbox password
mailPassword="xxx"
# true for mailboxes using the TLS protocol, otherwise false
starttlsEnable="true"
# same value as mailServerHost above
sslTrust="smtp.exmail.qq.com"
# true for mailboxes using the SSL protocol, otherwise false. Note: starttlsEnable and sslEnable cannot both be true
sslEnable="false"
# excel download path
xlsFilePath="/tmp/xls"
# where resource files such as sql scripts are uploaded: HDFS, S3, or NONE. Use HDFS to upload to HDFS; use NONE if the resource upload feature is not needed.
resUploadStartupType="HDFS"
# if uploaded resources are stored on Hadoop and the cluster's NameNode runs in HA, copy core-site.xml and hdfs-site.xml into the conf directory (here /opt/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, just replace mycluster with the actual ip or hostname
defaultFS="hdfs://mycluster:8020"
# if the ResourceManager is HA, set the active/standby ips or hostnames, e.g. "192.168.xx.xx,192.168.xx.xx"; for a single ResourceManager, or if yarn is not used at all, set yarnHaIps="". yarn is not used here, so it is ""
yarnHaIps=""
# for a single ResourceManager, set its ip or hostname; otherwise keep the default value. yarn is not used here, so the default is kept
singleYarnIp="ark1"
Special note:
- If you need the resource-upload-to-Hadoop feature and the Hadoop cluster's NameNode is configured for HA, enable the HDFS resource upload type and also copy core-site.xml and hdfs-site.xml from the Hadoop cluster into /opt/dolphinscheduler/conf; skip this step if the NameNode is not HA
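A guarded sketch of that copy step (the paths assume the /opt/soft/hadoop layout used earlier in this guide; the loop reports a missing file instead of failing, so it degrades gracefully):

```shell
# Copy the Hadoop client configs into DolphinScheduler's conf directory so
# defaultFS="hdfs://mycluster:8020" can resolve the HA nameservice.
HADOOP_CONF="${HADOOP_CONF_DIR:-/opt/soft/hadoop/etc/hadoop}"
DS_CONF="${DS_CONF:-/opt/dolphinscheduler/conf}"
checked=0
for f in core-site.xml hdfs-site.xml; do
  if [ -f "$HADOOP_CONF/$f" ]; then
    cp "$HADOOP_CONF/$f" "$DS_CONF/" && echo "copied $f"
  else
    echo "missing $HADOOP_CONF/$f (skipping)"
  fi
  checked=$((checked + 1))
done
```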
1.7 : Install kazoo, the python ZooKeeper tool
- Install the python ZooKeeper tool; this step is only needed for one-click deployment
#install pip
sudo yum -y install python-pip; #on ubuntu use sudo apt-get install python-pip
sudo pip install kazoo;
Note: if yum cannot find python-pip, it can also be installed as follows
sudo curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python get-pip.py # for python3, use sudo python3 get-pip.py
#then
sudo pip install kazoo;
- Switch to the deployment user dolphinscheduler, then run the one-click deployment script
sh install.sh
Note: on a first deployment, the message "sh: bin/dolphinscheduler-daemon.sh: No such file or directory" appears 5 times during step 3 (`3,stop server`); it can be ignored
- After the script finishes, the following 5 services are started; use the jps command (shipped with the java JDK) to check whether the services are up
MasterServer ----- master service
WorkerServer ----- worker service
LoggerServer ----- logger service
ApiApplicationServer ----- api service
AlertServer ----- alert service
If all of the above services start normally, the automated deployment succeeded
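That check can be scripted; a sketch that greps jps output for each of the five services (jps must be on the PATH; its stderr is silenced so the loop degrades gracefully if it is not):

```shell
# Report the status of the five DolphinScheduler services via jps.
up=0
for svc in MasterServer WorkerServer LoggerServer ApiApplicationServer AlertServer; do
  if jps 2>/dev/null | grep -q "$svc"; then
    echo "$svc: running"
    up=$((up + 1))
  else
    echo "$svc: NOT running"
  fi
done
echo "$up of 5 services running"
```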
After a successful deployment, the logs can be inspected; they are collected under the logs folder
logs/
├── dolphinscheduler-alert-server.log
├── dolphinscheduler-master-server.log
├── dolphinscheduler-worker-server.log
├── dolphinscheduler-api-server.log
└── dolphinscheduler-logger-server.log
2. Front-end deployment
Download the latest front-end package to the deployment directory on the server (download link below; version 1.2.0 is used as the example), upload the tarball there, and extract it
cd /opt/dolphinscheduler;
wget http://apache.communilink.net/incubator/dolphinscheduler/1.2.0/apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin.tar.gz
tar -zxvf apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin.tar.gz -C /opt/dolphinscheduler;
mv apache-dolphinscheduler-incubating-1.2.0-dolphinscheduler-front-bin dolphinscheduler-ui
Choose either of the two deployment methods below; automated deployment is recommended
2.1 Automated deployment
- In the dolphinscheduler-ui directory, run the following (note: automated deployment downloads nginx automatically)
cd dolphinscheduler-ui;
sh ./install-dolphinscheduler-ui.sh;
- While running, the script asks for the front-end port; the default is 8888. Press enter to accept it, or type another port
- Next it asks for the ip of the api-server the front-end UI talks to
- Then the port of that api-server
- Then the operating system choice
- Wait for the deployment to finish
- Afterwards, to keep large resources from failing to upload to the resource center, it is recommended to raise nginx's upload size limit, as follows
- Add the nginx setting client_max_body_size 1024m inside the http block
vi /etc/nginx/nginx.conf
# add the parameter
client_max_body_size 1024m;
- Then restart the nginx service
systemctl restart nginx
- Open the front-end page at http://localhost:8888; if the login page appears, the front-end web install is complete
2.2 Manual deployment
- Install nginx yourself; download it from the official site http://nginx.org/en/download.html, or run
yum install nginx -y
- Edit the nginx configuration file (note the places marked "change as needed")
vi /etc/nginx/nginx.conf
server {
listen 8888;# access port (change as needed)
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /opt/soft/dolphinscheduler-ui/dist; # dist directory of the extracted front end (change as needed)
index index.html index.htm;
}
location /dolphinscheduler {
proxy_pass http://localhost:12345; # api address (change as needed)
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header x_real_ip $remote_addr;
proxy_set_header remote_addr $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_connect_timeout 4s;
proxy_read_timeout 30s;
proxy_send_timeout 12s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
- Then restart the nginx service
systemctl restart nginx
- Open the front-end page at http://localhost:8888; if the login page appears, the front-end web install is complete
3. Starting and stopping services
- Stop all cluster services with one command
sh ./bin/stop-all.sh
- Start all cluster services with one command
sh ./bin/start-all.sh
- Start and stop Master
sh ./bin/dolphinscheduler-daemon.sh start master-server
sh ./bin/dolphinscheduler-daemon.sh stop master-server
- Start and stop Worker
sh ./bin/dolphinscheduler-daemon.sh start worker-server
sh ./bin/dolphinscheduler-daemon.sh stop worker-server
- Start and stop Api
sh ./bin/dolphinscheduler-daemon.sh start api-server
sh ./bin/dolphinscheduler-daemon.sh stop api-server
- Start and stop Logger
sh ./bin/dolphinscheduler-daemon.sh start logger-server
sh ./bin/dolphinscheduler-daemon.sh stop logger-server
- Start and stop Alert
sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server
References:
https://gitee.com/dolphinscheduler/DolphinScheduler
https://dolphinscheduler.apache.org/zh-cn/docs/1.2.0/user_doc/cluster-deployment.html
https://blog.csdn.net/WYA1993/article/details/88890883
https://www.jianshu.com/p/757842c62e28
https://blog.csdn.net/cafebar123/article/details/73500014
- When republishing, please credit the source: Dolphin Scheduler cluster deployment
- Permanent link to this article: https://www.xionghaier.cn/archives/1203.html