ZooKeeper cluster setup:
Like most clustered systems, a ZooKeeper ensemble has a leader/follower structure. An ensemble needs at least three servers: ZooKeeper only operates while a strict majority (a quorum) of servers is alive, so three is the smallest size that can survive the loss of one node. When the leader goes down, the remaining two servers hold an election and one of them becomes the new leader. When the failed node recovers, it rejoins the ensemble, but as a follower rather than as the leader.
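The majority rule above can be made concrete with a bit of arithmetic (a sketch for illustration, not part of the setup): an ensemble of n servers needs floor(n/2)+1 of them alive, so it tolerates n minus that many failures.

```shell
# Quorum size for an ensemble of n servers: a strict majority,
# i.e. n/2 + 1 with integer division.
quorum() { echo $(( $1 / 2 + 1 )); }

# Failures the ensemble can survive while still holding a quorum.
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

quorum 3      # a 3-node ensemble needs 2 servers up
tolerated 3   # ...and therefore survives 1 failure
tolerated 2   # a 2-node ensemble survives none, which is why 3 is the minimum
```

This is also why ensembles are usually sized with an odd number of servers: 4 nodes tolerate the same single failure as 3, but add more machines that must agree.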
This section covers a single-machine, pseudo-distributed ZooKeeper installation. The official download page is:
I am using version 3.4.11 here, so locate that release, copy the .tar.gz download link, and fetch it on the Linux machine:
[root@study-01 ~]# cd /usr/local/src/
[root@study-01 /usr/local/src]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
Once the download finishes, extract it into /usr/local/:
[root@study-01 /usr/local/src]# tar -zxvf zookeeper-3.4.11.tar.gz -C /usr/local/
[root@study-01 /usr/local/src]# cd ../zookeeper-3.4.11/
[root@study-01 /usr/local/zookeeper-3.4.11]# ls
bin        dist-maven       lib          README_packaging.txt  zookeeper-3.4.11.jar.asc
build.xml  docs             LICENSE.txt  recipes               zookeeper-3.4.11.jar.md5
conf       ivysettings.xml  NOTICE.txt   src                   zookeeper-3.4.11.jar.sha1
contrib    ivy.xml          README.md    zookeeper-3.4.11.jar
[root@study-01 /usr/local/zookeeper-3.4.11]#
Then rename the directory:
[root@study-01 ~]# cd /usr/local/
[root@study-01 /usr/local]# mv zookeeper-3.4.11/ zookeeper00
Next comes the configuration:
[root@study-01 /usr/local]# cd zookeeper00/
[root@study-01 /usr/local/zookeeper00]# cd conf/
[root@study-01 /usr/local/zookeeper00/conf]# cp zoo_sample.cfg zoo.cfg  # copy the sample config shipped with ZooKeeper
[root@study-01 /usr/local/zookeeper00/conf]# vim zoo.cfg  # add or change the following
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper00/dataDir
dataLogDir=/usr/local/zookeeper00/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888  # the two ports after the IP are for quorum traffic and leader election
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
[root@study-01 /usr/local/zookeeper00/conf]# cd ../
[root@study-01 /usr/local/zookeeper00]# mkdir {dataDir,dataLogDir}
[root@study-01 /usr/local/zookeeper00]# cd dataDir/
[root@study-01 /usr/local/zookeeper00/dataDir]# vim myid  # this node's id
1
[root@study-01 /usr/local/zookeeper00/dataDir]#
With the first node configured, copy the directory twice; since this is a single-machine pseudo-distributed setup, we need several ZooKeeper instances on one box:
[root@study-01 /usr/local]# cp zookeeper00 zookeeper01 -rf
[root@study-01 /usr/local]# cp zookeeper00 zookeeper02 -rf
Configure zookeeper01:
[root@study-01 /usr/local]# cd zookeeper01/conf/
[root@study-01 /usr/local/zookeeper01/conf]# vim zoo.cfg  # change as follows
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper01/dataDir
dataLogDir=/usr/local/zookeeper01/dataLogDir
clientPort=2182  # the client port must differ from the other instances
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
[root@study-01 /usr/local/zookeeper01/conf]# cd ../dataDir/
[root@study-01 /usr/local/zookeeper01/dataDir]# vim myid
2
[root@study-01 /usr/local/zookeeper01/dataDir]#
Configure zookeeper02:
[root@study-01 /usr/local]# cd zookeeper02/conf/
[root@study-01 /usr/local/zookeeper02/conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper02/dataDir
dataLogDir=/usr/local/zookeeper02/dataLogDir
clientPort=2183  # the client port must differ from the other instances
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
[root@study-01 /usr/local/zookeeper02/conf]# cd ../dataDir/
[root@study-01 /usr/local/zookeeper02/dataDir]# vim myid
3
[root@study-01 /usr/local/zookeeper02/dataDir]#
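The three near-identical configurations can also be generated in a loop instead of editing each file by hand. The script below is a sketch of that idea; to stay safe to dry-run it writes into a scratch directory by default, and the BASE variable is an assumption you would point at /usr/local to reproduce the layout used in this section:

```shell
#!/bin/bash
# Generate zoo.cfg and myid for the three pseudo-cluster copies in one pass.
# BASE defaults to a throwaway temp directory; set BASE=/usr/local to
# produce the real layout described above.
BASE=${BASE:-$(mktemp -d)}

for i in 0 1 2; do
  dir="$BASE/zookeeper0$i"
  mkdir -p "$dir/conf" "$dir/dataDir" "$dir/dataLogDir"
  cat > "$dir/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$dir/dataDir
dataLogDir=$dir/dataLogDir
clientPort=$((2181 + i))
4lw.commands.whitelist=*
server.1=192.168.190.129:2888:3888
server.2=192.168.190.129:2889:3889
server.3=192.168.190.129:2890:3890
EOF
  echo $((i + 1)) > "$dir/dataDir/myid"   # server.N must match this id
done
echo "configs written under $BASE"
```

The only per-instance differences are the data directories, the client port, and the myid value, which is exactly what the loop varies.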
That gives us three ZooKeeper nodes configured on a single machine. Now let's verify that this pseudo-distributed ensemble actually works:
[root@study-01 ~]# cd /usr/local/zookeeper00/bin/
[root@study-01 /usr/local/zookeeper00/bin]# ./zkServer.sh start  # start the first node
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper00/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@study-01 /usr/local/zookeeper00/bin]# netstat -lntp |grep java  # check the listening ports
tcp6       0      0 192.168.190.129:3888    :::*    LISTEN      3191/java  # quorum/election port
tcp6       0      0 :::44793                :::*    LISTEN      3191/java
tcp6       0      0 :::2181                 :::*    LISTEN      3191/java
[root@study-01 /usr/local/zookeeper00/bin]# cd ../../zookeeper01/bin/
[root@study-01 /usr/local/zookeeper01/bin]# ./zkServer.sh start  # start the second node
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper01/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@study-01 /usr/local/zookeeper01/bin]# cd ../../zookeeper02/bin/
[root@study-01 /usr/local/zookeeper02/bin]# ./zkServer.sh start  # start the third node
[root@study-01 /usr/local/zookeeper02/bin]# netstat -lntp |grep java  # check the listening ports again
tcp6       0      0 192.168.190.129:2889    :::*    LISTEN      3232/java
tcp6       0      0 :::48463                :::*    LISTEN      3232/java
tcp6       0      0 192.168.190.129:3888    :::*    LISTEN      3191/java
tcp6       0      0 192.168.190.129:3889    :::*    LISTEN      3232/java
tcp6       0      0 192.168.190.129:3890    :::*    LISTEN      3286/java
tcp6       0      0 :::44793                :::*    LISTEN      3191/java
tcp6       0      0 :::60356                :::*    LISTEN      3286/java
tcp6       0      0 :::2181                 :::*    LISTEN      3191/java
tcp6       0      0 :::2182                 :::*    LISTEN      3232/java
tcp6       0      0 :::2183                 :::*    LISTEN      3286/java
[root@study-01 /usr/local/zookeeper02/bin]# jps  # check the Java processes
3232 QuorumPeerMain
3286 QuorumPeerMain
3191 QuorumPeerMain
3497 Jps
[root@study-01 /usr/local/zookeeper02/bin]#
As shown above, all three nodes started successfully. Next, open a client, create a few znodes, and check whether they are replicated to the other nodes in the ensemble:
[root@study-01 /usr/local/zookeeper02/bin]# ./zkCli.sh -server localhost:2181  # connect to the first node
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1] create /data test-data
Created /data
[zk: localhost:2181(CONNECTED) 2] ls /
[zookeeper, data]
[zk: localhost:2181(CONNECTED) 3] quit
[root@study-01 /usr/local/zookeeper02/bin]# ./zkCli.sh -server localhost:2182  # connect to the second node
[zk: localhost:2182(CONNECTED) 0] ls /  # the znode created on the first node is visible, so replication works
[zookeeper, data]
[zk: localhost:2182(CONNECTED) 1] get /data  # the data matches too
test-data
cZxid = 0x100000002
ctime = Tue Apr 24 18:35:56 CST 2018
mZxid = 0x100000002
mtime = Tue Apr 24 18:35:56 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: localhost:2182(CONNECTED) 2] quit
[root@study-01 /usr/local/zookeeper02/bin]# ./zkCli.sh -server localhost:2183  # connect to the third node
[zk: localhost:2183(CONNECTED) 0] ls /  # the znode is visible here as well
[zookeeper, data]
[zk: localhost:2183(CONNECTED) 1] get /data  # the data matches too
test-data
cZxid = 0x100000002
ctime = Tue Apr 24 18:35:56 CST 2018
mZxid = 0x100000002
mtime = Tue Apr 24 18:35:56 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: localhost:2183(CONNECTED) 2] quit
[root@study-01 /usr/local/zookeeper02/bin]#
To check each node's status and role, run ./zkServer.sh status. With several nodes, running it one by one is tedious, so here is a small shell script that queries them all at once:
[root@study-01 ~]# vim checked.sh  # script contents:
#!/bin/bash
/usr/local/zookeeper00/bin/zkServer.sh status
/usr/local/zookeeper01/bin/zkServer.sh status
/usr/local/zookeeper02/bin/zkServer.sh status
[root@study-01 ~]# sh ./checked.sh  # run it
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper00/bin/../conf/zoo.cfg
Mode: follower
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper01/bin/../conf/zoo.cfg
Mode: leader
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper02/bin/../conf/zoo.cfg
Mode: follower
[root@study-01 ~]#
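Because the configs whitelist all four-letter-word commands (4lw.commands.whitelist=*), the same role check can also be done over each node's client port with the srvr command and nc. The helper below is a sketch along those lines; parse_mode and mode_of are names introduced here for illustration:

```shell
# Extract the role from 'srvr' output ("Mode: leader" / "Mode: follower").
parse_mode() { awk '/^Mode:/ {print $2}'; }

# Ask one node for its role via the srvr four-letter command.
mode_of() { echo srvr | nc "$1" "$2" | parse_mode; }

# Uncomment to query the three pseudo-cluster nodes:
# for port in 2181 2182 2183; do
#   echo "localhost:$port -> $(mode_of localhost "$port")"
# done
```

Unlike zkServer.sh status, this works against remote nodes too, as long as the client port is reachable.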
That completes the single-machine pseudo-distributed ZooKeeper cluster, built and verified.
Next, we use three virtual machines to build a real distributed ZooKeeper cluster, with the machines' IP addresses as follows:
Note: all three machines need a working Java runtime, and the firewall should be disabled or its rules flushed; if you prefer to keep the firewall on, add rules allowing the ZooKeeper ports instead.
First, configure the system hosts file:
[root@localhost ~]# vim /etc/hosts
192.168.190.128 zk000
192.168.190.129 zk001
192.168.190.130 zk002
On the machine used for the pseudo-distributed experiment, delete the extra ZooKeeper directories, then use rsync to copy the remaining one to the other machines:
[root@zk001 ~]# cd /usr/local/
[root@zk001 /usr/local]# rm -rf zookeeper01
[root@zk001 /usr/local]# rm -rf zookeeper02
[root@zk001 /usr/local]# mv zookeeper00/ zookeeper
[root@zk001 /usr/local]# rsync -av /usr/local/zookeeper/ 192.168.190.128:/usr/local/zookeeper/
[root@zk001 /usr/local]# rsync -av /usr/local/zookeeper/ 192.168.190.130:/usr/local/zookeeper/
Then set the environment variables on each of the three machines:
[root@zk001 ~]# vim .bash_profile  # add the following
export ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin
export PATH
[root@zk001 ~]# source .bash_profile
Edit the configuration file on each machine in turn. zk000:
[root@zk000 ~]# cd /usr/local/zookeeper/conf/
[root@zk000 /usr/local/zookeeper/conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/dataDir
dataLogDir=/usr/local/zookeeper/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.128:2888:3888  # the number after server. must match each node's myid
server.2=192.168.190.129:2888:3888
server.3=192.168.190.130:2888:3888
[root@zk000 /usr/local/zookeeper/conf]# cd ../dataDir/
[root@zk000 /usr/local/zookeeper/dataDir]# vim myid
1
[root@zk000 /usr/local/zookeeper/dataDir]#
zk001:
[root@zk001 ~]# cd /usr/local/zookeeper/conf/
[root@zk001 /usr/local/zookeeper/conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/dataDir
dataLogDir=/usr/local/zookeeper/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.128:2888:3888
server.2=192.168.190.129:2888:3888
server.3=192.168.190.130:2888:3888
[root@zk001 /usr/local/zookeeper/conf]# cd ../dataDir/
[root@zk001 /usr/local/zookeeper/dataDir]# vim myid
2
[root@zk001 /usr/local/zookeeper/dataDir]#
zk002:
[root@zk002 ~]# cd /usr/local/zookeeper/conf/
[root@zk002 /usr/local/zookeeper/conf]# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/dataDir
dataLogDir=/usr/local/zookeeper/dataLogDir
clientPort=2181
4lw.commands.whitelist=*
server.1=192.168.190.128:2888:3888
server.2=192.168.190.129:2888:3888
server.3=192.168.190.130:2888:3888
[root@zk002 /usr/local/zookeeper/conf]# cd ../dataDir/
[root@zk002 /usr/local/zookeeper/dataDir]# vim myid
3
[root@zk002 /usr/local/zookeeper/dataDir]#
Once configuration is done, start the ZooKeeper service on all three machines:
[root@zk000 ~]# zkServer.sh start
[root@zk001 ~]# zkServer.sh start
[root@zk002 ~]# zkServer.sh start
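With zkServer.sh in PATH on every machine, the whole cluster can also be started from one host. The helper below is a sketch that assumes passwordless SSH as root to the hostnames from /etc/hosts (that assumption is not part of the original setup); it prints the commands by default as a dry run, and only executes them when RUN is set to empty:

```shell
# Start zkServer.sh on every node of the cluster from a single host.
# RUN=echo (the default) makes this a dry run that just prints each
# command; set RUN='' to actually execute them over SSH.
RUN=${RUN:-echo}

start_all() {
  for host in zk000 zk001 zk002; do
    $RUN ssh "root@$host" /usr/local/zookeeper/bin/zkServer.sh start
  done
}

start_all
```

The same loop with `zkServer.sh status` or `zkServer.sh stop` covers the other management commands used later in this section.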
After they start, check the cluster status on each machine:
[root@zk000 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@zk000 ~]#
[root@zk001 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@zk001 ~]#
[root@zk002 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@zk002 ~]#
Now let's test whether znode creation is replicated. Connect a client to the 192.168.190.128 machine:
[root@zk000 ~]# zkCli.sh -server 192.168.190.128:2181
[zk: 192.168.190.128:2181(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 192.168.190.128:2181(CONNECTED) 1] create /real-culster real-data
Created /real-culster
[zk: 192.168.190.128:2181(CONNECTED) 2] ls /
[zookeeper, data, real-culster]
[zk: 192.168.190.128:2181(CONNECTED) 3] get /real-culster
real-data
cZxid = 0x300000002
ctime = Tue Apr 24 20:48:32 CST 2018
mZxid = 0x300000002
mtime = Tue Apr 24 20:48:32 CST 2018
pZxid = 0x300000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.190.128:2181(CONNECTED) 4] quit
Connect a client to the 192.168.190.129 machine:
[root@zk000 ~]# zkCli.sh -server 192.168.190.129:2181
[zk: 192.168.190.129:2181(CONNECTED) 0] ls /
[zookeeper, data, real-culster]
[zk: 192.168.190.129:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x300000002
ctime = Tue Apr 24 20:48:32 CST 2018
mZxid = 0x300000002
mtime = Tue Apr 24 20:48:32 CST 2018
pZxid = 0x300000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.190.129:2181(CONNECTED) 2] quit
Connect a client to the 192.168.190.130 machine:
[root@zk000 ~]# zkCli.sh -server 192.168.190.130:2181
[zk: 192.168.190.130:2181(CONNECTED) 0] ls /
[zookeeper, data, real-culster]
[zk: 192.168.190.130:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x300000002
ctime = Tue Apr 24 20:48:32 CST 2018
mZxid = 0x300000002
mtime = Tue Apr 24 20:48:32 CST 2018
pZxid = 0x300000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.190.130:2181(CONNECTED) 2] quit
As the test shows, a znode created on any node of the ZooKeeper ensemble is replicated to every other node, data included. With that, the distributed ZooKeeper cluster is up and working.
So far we have only tested znode replication, not leader election, so let's verify that when the leader goes down, one of the followers is elected to take its place. First, stop the ZooKeeper service on the current leader:
[root@zk001 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@zk001 ~]# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[root@zk001 ~]#
Now check the other two machines: one of them has become the new leader:
[root@zk002 ~]# zkServer.sh status  # zk002 has been elected leader
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@zk002 ~]#
[root@zk000 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@zk000 ~]#
Then start the stopped node again. It rejoins the ensemble, but as a follower, not as the leader:
[root@zk001 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zk001 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@zk001 ~]#
As you can see, zk002 keeps the leader role and is not displaced; nodes only change roles during an election:
[root@zk002 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@zk002 ~]#
Reposted from: https://blog.51cto.com/zero01/2107174