This article is the installation and deployment guide for a Patroni high-availability cluster. For an overview, see the companion article introducing Patroni high-availability clusters.
1. Environment
The versions of Patroni and of the middleware and database software required to run the Patroni high-availability cluster in this example are as follows:
| Software | Version |
|---|---|
| Patroni | Patroni-1.2.5 |
| etcd | etcd-3.1.6 |
| HAProxy | HAProxy-1.5.18-1 |
| Keepalived | Keepalived-1.3.4 |
| PostgreSQL | PostgreSQL-9.6.2 |
Python and pip versions:
| Software | Version |
|---|---|
| Python | Python 2.7.1 |
| pip | pip 9.0.1 (Python 2.7) |
Host information (the OS is a default installation, not a minimal one):
| IP address | Hostname | Operating system |
|---|---|---|
| 192.168.191.137 | node1 | CentOS release 6.8 (Final) |
| 192.168.191.138 | node2 | CentOS release 6.8 (Final) |
| 192.168.191.139 | node3 | CentOS release 6.8 (Final) |
Note: all packages used in this article are bundled in a prepared archive ("patroni安装部署需要的包.zip"). Download and extract it, and use the individual packages as the deployment steps call for them. Baidu Pan download: http://pan.baidu.com/s/1c1WlEhU (extraction code: y1dt)
2. Setting up the runtime environment
2.1 Install prerequisite packages
readline, zlib, openssl-devel, bison, flex, libnl, epel, jq
Required on all three nodes.
Extract and install:

```shell
tar -xvf package.tar.gz
cd ./package
yum install ./* --disablerepo=*
```
2.2 Install the gcc compiler (root)
Required on all three nodes.
Extract and install:

```shell
tar -xzvf gcc.tar.gz
cd ./gcc
yum install ./* --disablerepo=*
```
2.3 Build Python 2.7 (root)
On node1 and node2.
Patroni requires Python 2.7 or later.
Check the installed Python version with: `python --version`
CentOS 6.8 ships with Python 2.6.6, which needs to be replaced.
We use Python 2.7.1 here, downloadable from:
python-2.7.1 download: https://www.python.org/ftp/python/2.7.1/python-2.7.1.tar.bz2
Install 2.7.1 to replace 2.6.6:
```shell
# Extract and build
tar -jxvf python-2.7.1.tar.bz2
cd python-2.7.1
./configure
make all
make install
make clean
make distclean

# Verify the new interpreter
/usr/local/bin/python2.7 -V

# Point the system default python at python2.7
mv /usr/bin/python /usr/bin/python2.6.6
ln -s /usr/local/bin/python2.7 /usr/bin/python

# Check the version again
python -V
```

After the system `python` link points at Python 2.7, yum stops working, because yum is not compatible with Python 2.7. Pin yum to the old interpreter: edit `/usr/bin/yum` and change the shebang at the top of the file from `#!/usr/bin/python` to `#!/usr/bin/python2.6.6`, then save and exit.
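As a quick sanity check before replacing the interpreter, version strings can be compared with `sort -V`. This is only a sketch: the `current` value below is hard-coded to the CentOS 6.8 stock version for illustration; on a real host you would capture it from `python -V` as shown in the comment.

```shell
required="2.7"
current="2.6.6"   # hard-coded example; on a real host: current=$(python -V 2>&1 | awk '{print $2}')

# sort -V orders version strings numerically; if "current" sorts below
# "required", the system interpreter is too old for Patroni.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$current" ] \
   && [ "$current" != "$required" ]; then
    echo "python $current is too old, build 2.7.x"   # prints this on CentOS 6.8
else
    echo "python $current is new enough"
fi
```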
2.4 Install pip
Install on node1 and node2:

```shell
# Install setuptools
unzip setuptools-36.0.1.zip
cd setuptools-36.0.1
python ./setup.py install

# Install pip
tar -xzvf pip-9.0.1.tar.gz
cd pip-9.0.1
chmod +x setup.py
python ./setup.py install
pip --version
```
2.5 Create a user (every mention of the "regular user" below refers to this user)
Required on all three nodes.
Create a regular system user and set its password:

```shell
useradd yd
passwd yd
```
2.6 Create the log directory (regular user)
Required on all three nodes:

```shell
mkdir -p /home/yd/logfile/
```
3. Installing the database
3.1 Download the source package:
PostgreSQL 9.6.2 download: https://ftp.postgresql.org/pub/source/v9.6.2/postgresql-9.6.2.tar.gz
3.2 Install PostgreSQL 9.6.2 from source (regular user)
On node1 and node2:

```shell
tar -zxvf postgresql-9.6.2.tar.gz
cd postgresql-9.6.2
./configure --prefix=/home/yd/pg96
make
make install
```
3.3 Configure environment variables (regular user)
On node1 and node2.
After the source build, edit the profile, append the following, then save and exit:

```shell
cd ~
vi ./.bash_profile
```

```shell
export PGHOME=/home/yd/pg96
export PATH=/home/yd/pg96/bin:$PATH
export LANG=en_US.utf8
export PGUSER=postgres
export LD_LIBRARY_PATH=$PGHOME/lib
export MANPATH=$PGHOME/share/man
```

Reload the environment variables:

```shell
source ./.bash_profile
```
4. Installing Patroni
4.1 Download Patroni
Patroni download: https://github.com/zalando/Patroni/tree/v1.2.5
4.2 Install the required Python module packages (root)
On node1 and node2:

```shell
tar -xzvf pgk.tar.gz
cd pgk
source /home/yd/.bash_profile
pip install ./*
```
4.3 Install Patroni (regular user)
On node1 and node2; only extraction is needed here:

```shell
unzip patroni-1.2.5.zip
```
4.4 Configure the Patroni parameter files (regular user)
On node1 and node2. The meaning of each setting in the /home/yd/patroni-1.2.5/postgres0.yml (and postgres1.yml, postgres2.yml) parameter files:
```yaml
scope: batman
#namespace: /service/
name: postgresql0                         ## name of this Patroni node
restapi:                                  ## REST API endpoint HAProxy health-checks: 8008, 8009, 8010, ...
  listen: 192.168.191.137:8008
  connect_address: 192.168.191.137:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password
etcd:                                     ## the etcd node this database node points at
  host: 192.168.191.137:2379
bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30                               ## max time for a standby to take the leader lock and complete one automatic
                                          ## switchover; in general loop_wait + 2*retry_timeout <= ttl
    loop_wait: 10                         ## interval between two iterations of the HA loop
    retry_timeout: 10                     ## retry delay when this node cannot reach its configured etcd node
    maximum_lag_on_failover: 1048576      ## max primary/standby divergence (bytes) allowed in asynchronous mode
#    master_start_timeout: 300            ## primary startup timeout
#    synchronous_mode: false              ## synchronous streaming replication; when enabled, disable maximum_lag_on_failover
    postgresql:
      use_pg_rewind: true                 ## enable pg_rewind
#      use_slots: true
      parameters:                         ## postgresql.conf settings
#        wal_level: hot_standby
#        hot_standby: "on"
#        wal_keep_segments: 8
#        max_wal_senders: 5
#        max_replication_slots: 5
#        wal_log_hints: "on"
#        archive_mode: "on"               ## uncomment to enable archiving; archive settings are not written into the data directory's parameter file by default
#        archive_timeout: 1800s           ## uncomment to enable archiving
#        archive_command: mkdir -p ../wal_archive && test ! -f ../wal_archive/%f && cp %p ../wal_archive/%f   ## uncomment to enable archiving
#  recovery_conf:
#    restore_command: cp ../wal_archive/%f %p   ## recovery settings are not written into recovery.conf by default; uncomment to use them
  # some desired options for 'initdb'
  initdb:                                 ## initdb command options
  # Note: It needs to be a list (some options need values, others are switches)
  - encoding: UTF8
  - data-checksums
  pg_hba:                                 ## pg_hba.conf entries
  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication replicator 0.0.0.0/0 md5
  - host all all 0.0.0.0/0 md5
#  - hostssl all all 0.0.0.0/0 md5
  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh   ## extra script launched after the initial cluster is created
  # Some additional users which need to be created after initializing new cluster
  users:                                  ## extra users created after the initial cluster is created
    admin:
      password: admin
      options:
        - createrole
        - createdb
postgresql:
  listen: 0.0.0.0:5432                    ## database listen address
  connect_address: 192.168.191.137:5432   ## database ip:port
  data_dir: data/postgresql0              ## data directory location
#  bin_dir:
  pgpass: /tmp/pgpass0                    ## pgpass file location
  authentication:                         ## streaming replication user and superuser
    replication:
      username: replicator
      password: rep-pass
    superuser:
      username: postgres
      password: zalando
  parameters:
    unix_socket_directories: '.'          ## directory for the Unix-domain socket(s)
tags:
  nofailover: false                       ## exclude this node from failover; has no effect on a single node
  noloadbalance: false
  clonefrom: false
  nosync: false
  replicatefrom: postgres0                ## used for cascading replication; non-cascading by default
```
Configure /home/yd/patroni-1.2.5/postgres0.yml on node1 and /home/yd/patroni-1.2.5/postgres1.yml on node2. The parameters below must be set explicitly. (The values shown are adjusted only for this test setup; do not paste the fragments in verbatim. Adapt each item to your own environment according to the parameter meanings explained above.)
Node1, postgres0.yml:

```shell
vi /home/yd/patroni-1.2.5/postgres0.yml
```

Adjust the corresponding entries in postgres0.yml; do not copy this fragment in verbatim:

```yaml
restapi:
  listen: 192.168.191.137:8008
  connect_address: 192.168.191.137:8008

etcd:
  host: 192.168.191.137:2379

bootstrap:
  pg_hba:
  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication replicator 0.0.0.0/0 md5
  - host all all 0.0.0.0/0 trust

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.191.137:5432
  data_dir: data/postgresql0
  authentication:
    replication:
      username: replicator
      password: '123456'
    superuser:
      username: postgres
      password: '123456'
  parameters:
    unix_socket_directories: '/tmp'
```

Node2, postgres1.yml:

```shell
vi /home/yd/patroni-1.2.5/postgres1.yml
```

Adjust the corresponding entries in postgres1.yml; do not copy this fragment in verbatim:

```yaml
restapi:
  listen: 192.168.191.138:8009
  connect_address: 192.168.191.138:8009

etcd:
  host: 192.168.191.138:2379

bootstrap:
  pg_hba:
  # Add following lines to pg_hba.conf after running 'initdb'
  - host replication replicator 0.0.0.0/0 md5
  - host all all 0.0.0.0/0 trust

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 192.168.191.138:5432
  data_dir: data/postgresql1
  authentication:
    replication:
      username: replicator
      password: '123456'
    superuser:
      username: postgres
      password: '123456'
  parameters:
    unix_socket_directories: '/tmp'
```
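The rule of thumb quoted in the `ttl` comment of the dcs section above (loop_wait + 2*retry_timeout <= ttl) can be sanity-checked with a few lines of shell arithmetic. This is only a sketch using the sample file's default values:

```shell
# Defaults from the sample postgres0.yml above
ttl=30
loop_wait=10
retry_timeout=10

# The leader refreshes its key once per HA loop and may need up to two
# etcd retries when the DCS is slow, so ttl should cover that budget.
budget=$((loop_wait + 2 * retry_timeout))
if [ "$ttl" -ge "$budget" ]; then
    echo "ok: ttl=$ttl covers loop_wait + 2*retry_timeout = $budget"
else
    echo "warning: raise ttl to at least $budget"
fi
# prints: ok: ttl=30 covers loop_wait + 2*retry_timeout = 30
```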
5. Installing etcd
5.1 Download the etcd 3.1.6 precompiled binary package
etcd download: https://github-production-release-asset-2e65be.s3.amazonaws.com/11225014/cf4b42c0-24f2-11e7-88bb-46b42cbf1ada?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20170801%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20170801T070812Z&X-Amz-Expires=300&X-Amz-Signature=b311239b0323c7d5ec3be6d7835e354ddc275821285ef02a82cd10c424978cac&X-Amz-SignedHeaders=host&actor_id=10413412&response-content-disposition=attachment%3B%20filename%3Detcd-v3.1.6-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
5.2 Extract (root)
Required on all three nodes.
Extract the archive:

```shell
tar -xzvf etcd-v3.1.6-linux-amd64.tar.gz
```
5.3 Install (root)
Required on all three nodes:

```shell
cd etcd-v3.1.6-linux-amd64
cp ./etcd* /usr/bin
```
5.4 Register etcd as a service and enable it at boot (root)
Required on all three nodes. Add an etcd service script (start/stop/restart/status) under /etc/init.d on each node. The following settings in the script must be adapted (see the notes at the end of section 5.4 for what each flag means):

- `export ETCD_LOG=/home/yd/logfile/etcd_log`: change to your log directory
- `export ETCD_OWNER="yd"`: change to your user name
- `--data-dir=/home/yd/patroni-1.2.5/data/etcd`: change to your etcd data directory
- `--initial-advertise-peer-urls http://192.168.191.137:2380`: change to the node's IP
- `--listen-peer-urls http://192.168.191.137:2380`: change to the node's IP
- `--listen-client-urls http://192.168.191.137:2379,http://127.0.0.1:2379`: change to the node's IP
- `--advertise-client-urls http://192.168.191.137:2379`: change to the node's IP
- `--initial-cluster infra0=http://192.168.191.137:2380,infra1=http://192.168.191.138:2380,infra2=http://192.168.191.139:2380`: change to the cluster's IPs

Run the following on each of the three nodes. node1:
```shell
vi /etc/init.d/etcd
```

```shell
#!/bin/bash
# chkconfig: 23456 70 35
# description: etcd start

export ETCD_LOG=/home/yd/logfile/etcd_log
export ETCD_OWNER="yd"

case "$1" in
start)
    echo -n "starting etcd"
    su - $ETCD_OWNER -c "nohup /usr/bin/etcd --name infra0 --data-dir=/home/yd/patroni-1.2.5/data/etcd \
        --initial-advertise-peer-urls http://192.168.191.137:2380 \
        --listen-peer-urls http://192.168.191.137:2380 \
        --listen-client-urls http://192.168.191.137:2379,http://127.0.0.1:2379 \
        --advertise-client-urls http://192.168.191.137:2379 \
        --initial-cluster-token etcd-cluster-9 \
        --initial-cluster infra0=http://192.168.191.137:2380,infra1=http://192.168.191.138:2380,infra2=http://192.168.191.139:2380 \
        --initial-cluster-state new > $ETCD_LOG 2>&1 &"
    echo " OK "
    ;;
stop)
    echo -n "shutdown etcd"
    pid=`ps -ef | grep etcd | grep -v grep | awk '{print $2}'`
    kill -9 $pid
    echo " OK "
    ;;
restart)
    $0 stop
    $0 start
    ;;
status)
    ProcNumber=`ps -ef | grep -w /usr/bin/etcd | grep -v grep | wc -l`
    if [ $ProcNumber -eq 0 ]; then
        echo "etcd is not running"
    else
        echo "etcd is running"
    fi
    ;;
*)
    echo "Usage: `basename $0` start|stop|restart|status"
    exit 1
esac
exit 0
```
node2:
```shell
vi /etc/init.d/etcd
```

```shell
#!/bin/bash
# chkconfig: 23456 70 35
# description: etcd start

export ETCD_LOG=/home/yd/logfile/etcd_log
export ETCD_OWNER="yd"

case "$1" in
start)
    echo -n "starting etcd"
    su - $ETCD_OWNER -c "nohup /usr/bin/etcd --name infra1 --data-dir=/home/yd/patroni-1.2.5/data/etcd \
        --initial-advertise-peer-urls http://192.168.191.138:2380 \
        --listen-peer-urls http://192.168.191.138:2380 \
        --listen-client-urls http://192.168.191.138:2379,http://127.0.0.1:2379 \
        --advertise-client-urls http://192.168.191.138:2379 \
        --initial-cluster-token etcd-cluster-9 \
        --initial-cluster infra0=http://192.168.191.137:2380,infra1=http://192.168.191.138:2380,infra2=http://192.168.191.139:2380 \
        --initial-cluster-state new > $ETCD_LOG 2>&1 &"
    echo " OK "
    ;;
stop)
    echo -n "shutdown etcd"
    pid=`ps -ef | grep etcd | grep -v grep | awk '{print $2}'`
    kill -9 $pid
    echo " OK "
    ;;
restart)
    $0 stop
    $0 start
    ;;
status)
    ProcNumber=`ps -ef | grep -w /usr/bin/etcd | grep -v grep | wc -l`
    if [ $ProcNumber -eq 0 ]; then
        echo "etcd is not running"
    else
        echo "etcd is running"
    fi
    ;;
*)
    echo "Usage: `basename $0` start|stop|restart|status"
    exit 1
esac
exit 0
```
node3:
```shell
vi /etc/init.d/etcd
```

```shell
#!/bin/bash
# chkconfig: 23456 70 35
# description: etcd start

export ETCD_LOG=/home/yd/logfile/etcd_log
export ETCD_OWNER="yd"

case "$1" in
start)
    echo -n "starting etcd"
    su - $ETCD_OWNER -c "nohup /usr/bin/etcd --name infra2 --data-dir=/home/yd/patroni-1.2.5/data/etcd \
        --initial-advertise-peer-urls http://192.168.191.139:2380 \
        --listen-peer-urls http://192.168.191.139:2380 \
        --listen-client-urls http://192.168.191.139:2379,http://127.0.0.1:2379 \
        --advertise-client-urls http://192.168.191.139:2379 \
        --initial-cluster-token etcd-cluster-9 \
        --initial-cluster infra0=http://192.168.191.137:2380,infra1=http://192.168.191.138:2380,infra2=http://192.168.191.139:2380 \
        --initial-cluster-state new > $ETCD_LOG 2>&1 &"
    echo " OK "
    ;;
stop)
    echo -n "shutdown etcd"
    pid=`ps -ef | grep etcd | grep -v grep | awk '{print $2}'`
    kill -9 $pid
    echo " OK "
    ;;
restart)
    $0 stop
    $0 start
    ;;
status)
    ProcNumber=`ps -ef | grep -w /usr/bin/etcd | grep -v grep | wc -l`
    if [ $ProcNumber -eq 0 ]; then
        echo "etcd is not running"
    else
        echo "etcd is running"
    fi
    ;;
*)
    echo "Usage: `basename $0` start|stop|restart|status"
    exit 1
esac
exit 0
```
Add etcd to the system services
Run on all three nodes (as root):

```shell
chmod +x /etc/init.d/etcd
chkconfig --add etcd
chkconfig --level 23456 etcd on
```

Check that the service was registered:

```shell
[root@node1 ~]# chkconfig --list etcd
etcd    0:off   1:off   2:on    3:on    4:on    5:on    6:on
```
Notes: the meaning of each etcd flag used in the scripts above:

- `--name infra0`: the name of this node.
- `--initial-advertise-peer-urls http://192.168.191.137:2380`: used by the other members; they exchange cluster information with this node through this address, so it must be reachable from the other nodes. With static configuration, this value must also appear in `--initial-cluster`.
- `--listen-client-urls http://192.168.191.137:2379,http://127.0.0.1:2379`: used by this node to listen for etcd client requests; an all-zeros IP means listen on every local interface.
- `--advertise-client-urls http://192.168.191.137:2379`: used by etcd clients to reach this node; it must be reachable from the client side.
- `--initial-cluster-token etcd-cluster-9`: distinguishes separate clusters; if several clusters run on the same hosts, give each a different token.
- `--initial-cluster infra0=http://192.168.191.137:2380,infra1=http://192.168.191.138:2380,infra2=http://192.168.191.139:2380`: used by this node; describes every member of the cluster, and this node uses it to contact the others.
- `--initial-cluster-state new`: whether this start creates a new cluster; valid values are `new` and `existing`. With `existing`, the node tries to join the existing members at startup.
- `--data-dir=/home/yd/patroni-1.2.5/data/etcd`: after the three etcd nodes start, each creates a data directory to store its state; subsequent starts must point at the same directory.
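Since `--initial-cluster` must list every member identically on all three nodes, it can help to generate the string once from a node table instead of typing it three times. A minimal sketch; the member names and IPs below are the ones used in this document's environment:

```shell
# name=ip pairs for the three etcd members in this deployment
nodes="infra0=192.168.191.137 infra1=192.168.191.138 infra2=192.168.191.139"

cluster=""
for n in $nodes; do
    name=${n%%=*}    # part before '='
    ip=${n#*=}       # part after '='
    # append the member, inserting a comma between entries
    cluster="${cluster:+$cluster,}${name}=http://${ip}:2380"
done

echo "$cluster"
# infra0=http://192.168.191.137:2380,infra1=http://192.168.191.138:2380,infra2=http://192.168.191.139:2380
```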
6. Installing HAProxy
On node1 and node2.
Change the IP and port on each `server` line in the configuration below, then put the following into /home/yd/patroni-1.2.5/haproxy.cfg; the file is identical on node1 and node2.

```shell
vi /home/yd/patroni-1.2.5/haproxy.cfg
```

```
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

frontend ft_postgresql
    bind *:5000
    default_backend bk_db

backend bk_db
    option httpchk
    server postgresql_192.168.191.137_5432 192.168.191.137:5432 maxconn 100 check port 8008
    server postgresql_192.168.191.138_5432 192.168.191.138:5432 maxconn 100 check port 8009
```
7. Installing Keepalived
7.1 Download the Keepalived 1.3.4 source package:
Keepalived 1.3.4 download: http://www.keepalived.org/software/keepalived-1.3.4.tar.gz
7.2 Build Keepalived 1.3.4 from source (root)
On node1 and node2:

```shell
tar -xzvf keepalived-1.3.4.tar.gz
cd /root/keepalived-1.3.4
./configure --prefix=/root/data/keepalived
make
make install
```
7.3 Install the Keepalived 1.3.4 files (root)
On node1 and node2.
Copy the keepalived binary to /usr/sbin:

```shell
cp /root/data/keepalived/sbin/keepalived /usr/sbin/
```

keepalived reads /etc/keepalived/keepalived.conf by default:

```shell
mkdir /etc/keepalived
cp /root/data/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
```

Copy the sysconfig file to /etc/sysconfig:

```shell
cp /root/data/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
```

Copy the init script to /etc/init.d:

```shell
cp /root/keepalived-1.3.4/keepalived/etc/init.d/keepalived /etc/init.d/
```
7.4 Edit the keepalived.conf parameter file (root)
On node1 and node2.
node1:
```shell
vi /etc/keepalived/keepalived.conf
```

Overwrite the file with the following:

```
! Configuration File for keepalived

global_defs {
    router_id node1
}

vrrp_instance VI_1 {
    state BACKUP                # set BACKUP on both master and standby
    interface eth0              # NIC to bind to
    virtual_router_id 51
    priority 100
    nopreempt                   # nopreempt only disables preemption when state is BACKUP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.191.16
    }
}

virtual_server 192.168.191.16 4330 {       # the VIP need not be configured on the NIC; keepalived maintains it internally
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.191.137 5000 {     # node1's real IP
        notify_down /etc/keepalived/down.sh    # invoked when the local port 5000 check fails, triggering a switchover
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 5000              # monitored service port (exposed by HAProxy)
        }
    }
}
```

```shell
vi /etc/keepalived/down.sh
```

```shell
#!/bin/sh
pkill keepalived
sleep 5                         # delay to keep the VIP from flapping back and forth
/etc/init.d/keepalived start
sleep 5                         # delay to keep the VIP from flapping back and forth
su - yd <<EOF
haproxy -f /home/yd/patroni-1.2.5/haproxy.cfg
EOF
```
node2:
```shell
vi /etc/keepalived/keepalived.conf
```

```
! Configuration File for keepalived

global_defs {
    router_id node2
}

vrrp_instance VI_1 {
    state BACKUP                # set BACKUP on both master and standby
    interface eth0              # NIC to bind to
    virtual_router_id 51
    priority 50
    nopreempt                   # nopreempt only disables preemption when state is BACKUP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.191.16
    }
}

virtual_server 192.168.191.16 4330 {       # the VIP need not be configured on the NIC; keepalived maintains it internally
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.191.138 5000 {     # node2's real IP
        notify_down /etc/keepalived/down.sh    # invoked when the local port 5000 check fails, triggering a switchover
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 5000              # monitored service port (exposed by HAProxy)
        }
    }
}
```

```shell
vi /etc/keepalived/down.sh
```

```shell
#!/bin/sh
pkill keepalived
sleep 5                         # delay to keep the VIP from flapping back and forth
/etc/init.d/keepalived start
sleep 5                         # delay to keep the VIP from flapping back and forth
su - yd <<EOF
haproxy -f /home/yd/patroni-1.2.5/haproxy.cfg
EOF
```
8. Starting the cluster components
The Patroni high-availability cluster is now fully built; start it as follows.
8.1 Start order
etcd ➤ Patroni ➤ HAProxy ➤ Keepalived
8.2 Start etcd (root)
Required on all three nodes:

```shell
service etcd start
```
8.3 Start Patroni (regular user)
On node1 and node2.
During testing it is better not to run Patroni in the background, so that you can watch its console output in real time.

node1:

```shell
cd /home/yd/patroni-1.2.5
./patroni.py postgres0.yml
```

node2:

```shell
cd /home/yd/patroni-1.2.5
./patroni.py postgres1.yml
```
8.4 Start HAProxy (regular user)
On node1 and node2:

```shell
nohup haproxy -f /home/yd/patroni-1.2.5/haproxy.cfg > /home/yd/logfile/haproxy_log 2>&1 &
```

Note:

```
[yd@node1 patroni-1.2.5]$ haproxy -f ./haproxy.cfg
[WARNING] 205/160129 (5536) : Server bk_db/postgresql_192.168.191.138_5432 is DOWN, reason: Layer7 wrong status, code: 503, info: "Service Unavailable", check duration: 6ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
```

This warning is not an error: HAProxy only proxies the primary node, and the message simply indicates that 192.168.191.138 (node2) is currently a standby.
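The reason only the primary receives traffic is that Patroni's REST API answers the `httpchk` probe with HTTP 200 on the primary and 503 on replicas, and HAProxy marks any backend returning 503 as DOWN. The loop below only simulates that decision with hard-coded status codes; on a live cluster you would probe the real check ports, e.g. `curl -s -o /dev/null -w '%{http_code}' http://192.168.191.137:8008`:

```shell
# Simulated check results: the primary answers 200, the replica answers 503.
# endpoint=code pairs are illustrative values, not live probes.
for entry in "192.168.191.137:8008=200" "192.168.191.138:8009=503"; do
    endpoint=${entry%%=*}
    code=${entry##*=}
    if [ "$code" -eq 200 ]; then
        echo "$endpoint UP (primary, receives traffic)"
    else
        echo "$endpoint DOWN (replica, code $code)"
    fi
done
```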
8.5 Start Keepalived
On node1 and node2:

```shell
/etc/init.d/keepalived start
```
8.6 Connection test
On node1 you can see that the virtual IP 192.168.191.16 has been brought up.

```
[root@node1 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ce:ee:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.191.138/24 brd 192.168.191.255 scope global eth0
    inet 192.168.191.16/32 scope global eth0
    inet6 fe80::20c:29ff:fece:ee39/64 scope link
       valid_lft forever preferred_lft forever
3: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 8e:fd:80:60:20:30 brd ff:ff:ff:ff:ff:ff
```

Now try logging in from node3 using the virtual IP and port 5000.

```
[root@node1 ~]# su - yd
[yd@node1 patroni-1.2.5]$ psql -h 192.168.191.16 -p 5000
psql (9.6.2)
Type "help" for help.

postgres=# create table test(id int);
CREATE TABLE
postgres=# insert into test select 1;
INSERT 0 1
```