Spark 2.2.0 High Availability Setup


I. Overview

1. The environment builds on the Hadoop HA cluster set up previously.

2. The Zookeeper environment required for Spark HA was configured in an earlier article and is not repeated here.

3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz

4. Host layout

bd1, bd2, bd3: Worker
bd4, bd5: Master, Worker

II. Configure Scala

1. Extract and copy

[root@bd1 ~]# tar -zxf scala-2.12.3.tgz
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Verify

[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.

III. Configure Spark

1. Extract and copy

[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Modify spark-env.sh    # the file does not exist by default; copy it from the template
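The file ships as a template in the Spark distribution, so it can be created first (paths as used elsewhere in this article):

[root@bd1 ~]# cd /usr/local/spark/conf
[root@bd1 conf]# cp spark-env.sh.template spark-env.sh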

[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1

4. Modify spark-defaults.conf    # the file does not exist by default; copy it from the template
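As with spark-env.sh, the file can be created from the bundled template first:

[root@bd1 conf]# cp spark-defaults.conf.template spark-defaults.conf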

[root@bd1 conf]# vim spark-defaults.conf
spark.master                     spark://master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:/user/spark/history
spark.serializer                 org.apache.spark.serializer.KryoSerializer

5. Create the log directory in HDFS

hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history
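Optionally, the Spark history server can be pointed at this directory to browse completed applications through a web UI. This is not part of the original walkthrough; spark.history.fs.logDirectory is the standard read-side counterpart of spark.eventLog.dir:

# append to spark-defaults.conf
spark.history.fs.logDirectory    hdfs://master:/user/spark/history
# then start the history server (listens on port 18080 by default)
[root@bd1 conf]# ../sbin/start-history-server.sh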

6. Modify slaves
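If slaves does not exist yet either, it also has a template in the same conf directory:

[root@bd1 conf]# cp slaves.template slaves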

[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5

IV. Sync to the other hosts

1. Use scp to sync Scala to bd2-bd5

scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/

2. Sync Spark to bd2-bd5

scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
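The /etc/profile changes made on bd1 also need to exist on bd2-bd5, otherwise scala and the Spark scripts will not be on the PATH there. A minimal sketch, assuming all five hosts can share an identical /etc/profile:

for host in bd2 bd3 bd4 bd5; do
    scp /etc/profile root@$host:/etc/profile    # takes effect on the next login shell
done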

V. Start the cluster and test HA

1. Start order: Zookeeper --> Hadoop --> Spark
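A sketch of that order, assuming zkServer.sh and the Hadoop scripts are on the PATH of the nodes that run them:

# on each Zookeeper node
zkServer.sh start
# on the Hadoop HA cluster
start-dfs.sh
start-yarn.sh
# then Spark, as shown in the next step
/usr/local/spark/sbin/start-all.sh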

2. Start Spark

bd4:

[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out
  
[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain

bd5:

[root@bd5 sbin]# ./start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out
  
[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode


3. Kill the Master process on bd4

[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
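After the bd4 Master dies, the bd5 Master should be promoted from STANDBY to ALIVE within roughly a minute. Two quick ways to confirm, assuming the default Master web UI port of 8080:

curl -s http://bd5:8080 | grep -i "status"
# or look for the election message in the bd5 Master log
grep "elected leader" /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out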


VI. Summary

I originally intended to put the Masters on bd1 and bd2, but after starting Spark both of them stayed in Standby. Only after moving the configuration to bd4 and bd5 did the cluster run properly. In other words, in this setup the Spark HA Masters would only run normally when placed on the Zookeeper cluster nodes, i.e. the nodes that also run the JournalNode process.
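As a final end-to-end check, a client can be started against both Masters at once; the comma-separated master list is how standalone applications are told about an HA Master pair, so the driver keeps working across a failover:

[root@bd1 ~]# /usr/local/spark/bin/spark-shell --master spark://bd4:7077,bd5:7077
scala> sc.parallelize(1 to 1000).sum()    // expected result: 500500.0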

