1. Environment Preparation
1.1. Server Planning
| Hostname | Host IP | Role | PostgreSQL version | repmgr version | OS |
|---|---|---|---|---|---|
| pgmaster | 192.168.183.109 | Primary server | 14.2 | 5.3.3 | CentOS 7.9 x86_64 |
| pgslave1 | 192.168.183.110 | Standby server | 14.2 | 5.3.3 | CentOS 7.9 x86_64 |
| pgslave2 | 192.168.183.111 | Standby server | 14.2 | 5.3.3 | CentOS 7.9 x86_64 |
| pgwitness | 192.168.183.112 | Witness server | 14.2 | 5.3.3 | CentOS 7.9 x86_64 |
1.2. Version Requirements
repmgr must be installed on every server in the replication cluster. Building from source is recommended, and the repmgr version must match the database version.
1.3. Environment Dependencies
repmgr has a few prerequisites it needs in order to run at its best, and the system should provide certain minimal functionality. This guide uses CentOS 7.9 and the latest repmgr 5.3 release. In production, choose the operating system and the matching repmgr and PostgreSQL versions according to your actual situation.
repmgr download: https://www.repmgr.org/downloads.html (choose the tar.gz package)
PostgreSQL download: https://www.postgresql.org/ftp/source/ (choose the tar.gz package)
1.4. Communication Configuration
It is recommended to set up password-less SSH between the nodes for repmgr, which also simplifies day-to-day operations.
1.5. Database Installation
On the primary node: compile and install the database software, initialize the instance, and configure archiving (optional).
On the other nodes: compile and install the database software only.
For the detailed steps, see the document "01 Database Installation Manual".
scp /opt/CentOS-7-x86_64-DVD-2009.iso /opt/postgresql-14.2.tar.gz /opt/repmgr-5.3.3.tar.gz root@192.168.183.110:/opt
scp /opt/CentOS-7-x86_64-DVD-2009.iso /opt/postgresql-14.2.tar.gz /opt/repmgr-5.3.3.tar.gz root@192.168.183.111:/opt
scp /opt/CentOS-7-x86_64-DVD-2009.iso /opt/postgresql-14.2.tar.gz /opt/repmgr-5.3.3.tar.gz root@192.168.183.112:/opt
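Optionally, verify the packages arrived intact by comparing checksums on each node (a minimal check; the file names match the scp commands above):
md5sum /opt/postgresql-14.2.tar.gz /opt/repmgr-5.3.3.tar.gz
--run the same command on 192.168.183.110, .111 and .112 and confirm the hashes match the source host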
2. Installation Steps
2.1. Configure /etc/hosts
All nodes
--Node 1 192.168.183.109
hostnamectl set-hostname pgmaster
--Node 2 192.168.183.110
hostnamectl set-hostname pgslave1
--Node 3 192.168.183.111
hostnamectl set-hostname pgslave2
--Node 4 192.168.183.112
hostnamectl set-hostname pgwitness
--All nodes
cat >> /etc/hosts << "EOF"
#add by postgres
192.168.183.109 pgmaster
192.168.183.110 pgslave1
192.168.183.111 pgslave2
192.168.183.112 pgwitness
EOF
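A quick resolution check confirms that every node can now reach the others by hostname (a minimal check, run on any node):
for h in pgmaster pgslave1 pgslave2 pgwitness; do ping -c 1 $h | head -1; done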
2.2. SSH Mutual Trust Configuration
All nodes, as the postgres user
--Node 1 192.168.183.109
ssh-keygen
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.110
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.111
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.112
--Node 2 192.168.183.110
ssh-keygen
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.109
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.111
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.112
--Node 3 192.168.183.111
ssh-keygen
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.109
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.110
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.112
--Node 4 192.168.183.112
ssh-keygen
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.109
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.110
ssh-copy-id -i /home/postgres/.ssh/id_rsa.pub postgres@192.168.183.111
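To confirm that key-based login works, each node can try a non-interactive SSH to its peers; BatchMode makes the attempt fail instead of prompting if a key is missing (a minimal check, shown here from pgmaster):
ssh -o BatchMode=yes postgres@192.168.183.110 hostname
ssh -o BatchMode=yes postgres@192.168.183.111 hostname
ssh -o BatchMode=yes postgres@192.168.183.112 hostname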
2.3. Installing repmgr from Source
To install repmgr, the PostgreSQL prerequisites must be compiled and installed first. Most mainstream Linux distributions and other UNIX variants offer a simple way to install those prerequisites from packages. The target repmgr version must match the PostgreSQL version.
repmgr download: https://www.repmgr.org/downloads.html (choose the tar.gz package)
2.3.1. Install repmgr Dependencies
All nodes
yum check-update
yum groupinstall -y "Development Tools"
yum install -y yum-utils openjade docbook-dtds docbook-style-dsssl docbook-style-xsl
yum install -y flex libselinux-devel libxml2-devel libxslt-devel openssl-devel pam-devel readline-devel
2.3.2. Compile and Install
All nodes
tar -xvf /opt/repmgr-5.3.3.tar.gz -C /pgdb
chown -R postgres:postgres /pgdb
su - postgres
cd /pgdb/repmgr-5.3.3/
./configure
make && make install
After the source build, make install automatically places repmgr.control (and the related extension files) under PGHOME's extension/ directory, along with the corresponding repmgr and repmgrd binaries.
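To confirm the build landed where expected, check the extension files and binary versions (a minimal check; the paths assume the /pgdb/pgsql installation used in this guide and that its pg_config was on the PATH during configure):
ls /pgdb/pgsql/share/extension/ | grep repmgr
/pgdb/pgsql/bin/repmgr --version
/pgdb/pgsql/bin/repmgrd --version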
2.4. Service Configuration
2.4.1. Create the Password File
Every node
su - postgres -c "echo "192.168.183.109:5432:repmgr:repmgr:top@123" >> /home/postgres/.pgpass"
su - postgres -c "echo "192.168.183.110:5432:repmgr:repmgr:top@123" >> /home/postgres/.pgpass"
su - postgres -c "echo "192.168.183.111:5432:repmgr:repmgr:top@123" >> /home/postgres/.pgpass"
su - postgres -c "echo "192.168.183.112:5432:repmgr:repmgr:top@123" >> /home/postgres/.pgpass"
su - postgres -c "chmod 0600 /home/postgres/.pgpass"
2.4.2. Configure the Primary
/pgdb/data is the default PGDATA in this guide. Configuration follows these steps:
Create a user and a database for managing repmgr.
Run on node 1: 192.168.183.109 (pgmaster)
psql -Upostgres -W -d postgres -c "create user repmgr with password 'repmgr' superuser replication;"
psql -Upostgres -W -d postgres -c "create database repmgr owner repmgr;"
2.4.3. Database Parameter Configuration
Run on node 1: 192.168.183.109 (pgmaster)
#Streaming replication (primary/standby)
sed -i "/^#hot_standby/s/^#//" /pgdb/data/postgresql.conf #uncomment (allow read-only queries on standbys)
sed -i "/^#max_wal_senders/s/^#//" /pgdb/data/postgresql.conf #uncomment
sed -i "/^#wal_sender_timeout = 60s/s/^#//" /pgdb/data/postgresql.conf #uncomment
sed -i "/^#wal_keep_size = 0/s/#wal_keep_size = 0/wal_keep_size = 16MB/" /pgdb/data/postgresql.conf
# Switchover parameters; enable replication slots
sed -i "/^#max_replication_slots/s/^#//" /pgdb/data/postgresql.conf #uncomment
sed -i "/^#wal_log_hints = off/s/#wal_log_hints = off/wal_log_hints = on/" /pgdb/data/postgresql.conf #required by pg_rewind
#Automatic failover
sed -i "/^#shared_preload_libraries = ''/s/#shared_preload_libraries = ''/shared_preload_libraries = 'repmgr'/" /pgdb/data/postgresql.conf
sed -i "/^#full_page_writes = on/s/^#//" /pgdb/data/postgresql.conf #uncomment; required by pg_rewind
2.4.4. Client Access Configuration (pg_hba.conf)
Run on node 1: 192.168.183.109 (pgmaster)
cp /pgdb/data/pg_hba.conf /pgdb/data/pg_hba.confbak
echo "host all all 0.0.0.0/0 md5" >> /pgdb/data/pg_hba.conf
echo "local replication repmgr trust" >> /pgdb/data/pg_hba.conf
echo "host replication repmgr 127.0.0.1/32 trust" >> /pgdb/data/pg_hba.conf
echo "host replication repmgr 192.168.183.0/0 trust" >> /pgdb/data/pg_hba.conf
echo "local repmgr repmgr trust" >> /pgdb/data/pg_hba.conf
echo "host repmgr repmgr 127.0.0.1/32 trust" >> /pgdb/data/pg_hba.conf
echo "host repmgr repmgr 192.168.183.0/0 trust" >> /pgdb/data/pg_hba.conf
2.4.5. Restart the PostgreSQL Service
Run on node 1: 192.168.183.109 (pgmaster)
systemctl restart postgres
2.4.6. Create the repmgr.conf File
All nodes
--Node 1 192.168.183.109 (pgmaster)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=1 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.109' # node name, unique within the HA cluster
conninfo='host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
--Node 2 192.168.183.110 (pgslave1)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=2 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.110' # node name, unique within the HA cluster
conninfo='host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
--Node 3 192.168.183.111 (pgslave2)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=3 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.111' # node name, unique within the HA cluster
conninfo='host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
--Node 4 192.168.183.112 (pgwitness)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=4 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.112' # node name, unique within the HA cluster
conninfo='host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
2.4.7. Supplement: repmgr.conf Parameter Reference
To prevent repmgr from accidentally reading the wrong configuration file, always pass the path to repmgr.conf to repmgr commands with -f/--config-file.
repmgr.conf is the key configuration file for managing the repmgr service. To make the repmgr cluster serve the database architecture well, it helps to understand the syntax and meaning of its parameters and commands:
node_id: a unique integer greater than zero identifying the node. Required.
node_name: an arbitrary (but unique) string of at most 63 characters, containing only printable ASCII. Required.
conninfo: database connection string; every server in the cluster must be able to connect to the local node using this string. If repmgrd is used, consider setting connect_timeout explicitly in the conninfo string to bound how long a network connection attempt may take before being abandoned. Required.
data_directory: the PostgreSQL node's data directory. Required.
config_directory: if the PostgreSQL configuration files live outside the data directory, the directory containing the main postgresql.conf. Optional.
replication_user: the PostgreSQL user for replication connections; defaults to the user defined in conninfo if unset. Optional.
replication_type: must be physical (the default). Optional.
location: an arbitrary string defining the node's location; used during failover to check the visibility of the current primary. Optional.
use_replication_slots: whether to use physical replication slots. Note: when replication slots are used, max_replication_slots should be at least the number of standbys that will attach to the primary. Optional.
ssh_options: it is recommended to add -q to suppress extraneous SSH output such as login banners, plus an explicit ConnectTimeout, for example ssh_options='-q -o ConnectTimeout=10'. Optional.
log_level: DEBUG, INFO, NOTICE, WARNING, ERROR, ALERT, CRIT or EMERG; the default is INFO. Note that DEBUG produces a large amount of log output and should not be enabled in normal use. Optional.
log_facility: STDERR (the default) or, for syslog integration, one of LOCAL0 through LOCAL7 or USER. Optional.
log_file: if log_facility is set to STDERR, log output can be redirected to the specified file. Optional.
log_status_interval: makes repmgrd emit a status log line at the given interval in seconds (default 300). Optional.
monitor_interval_secs: reloadable; interval in seconds at which the standby checks the state of the primary node (default 2).
connection_check_type: how to probe whether the primary is available: ping (the default), connection (only establish a connection), or query (issue a SELECT).
reconnect_attempts: number of reconnection attempts when the primary is unreachable (default 6).
reconnect_interval: interval in seconds between reconnection attempts (default 10).
2.4.8. Supplement: repmgr node service Commands
With --list-actions, these commands only print the underlying service command that would be run for each action; drop --list-actions to actually execute it.
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node service --list-actions --action=stop
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node service --list-actions --action=start
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node service --list-actions --action=restart
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node service --list-actions --action=reload
2.5. Register the Primary Node with repmgr
Run on 192.168.183.109
For repmgr to manage the replication cluster, the primary node must be registered with repmgr. This installs the repmgr extension and metadata objects and adds a metadata record for the primary server.
2.5.1. Register the Local Server as the Primary Node
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf primary register
INFO: connecting to primary database...
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
NOTICE: primary node record (ID: 1) registered
2.5.2. View Cluster Information
The cluster now lists 192.168.183.109 as the primary server node.
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status
ID | Name | Role | Status | Upstream | repmgrd | PID | Paused? | Upstream last seen
----+-----------------+---------+-----------+----------+-------------+-----+---------+--------------------
1 | 192.168.183.109 | primary | * running | | not running | n/a | n/a | n/a
[postgres@pgmaster ~]$ psql -h 192.168.183.109 -Urepmgr -d repmgr
repmgr=# \x
Expanded display is on.
repmgr=# select * from repmgr.nodes;
-[ RECORD 1 ]----+----------------------------------------------------------------------------
node_id | 1
upstream_node_id |
active | t
node_name | 192.168.183.109
type | primary
location | location1
priority | 100
conninfo | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
repluser | repmgr
slot_name | repmgr_slot_1
config_file | /pgdb/repmgr-5.3.3/repmgr.conf
2.5.3. Start the repmgrd Daemon on the Primary
Every server in the replication cluster has its own record. When repmgrd is used, the upstream_node_id, active and type fields are updated whenever a node's status or role changes.
su - root
cat > /usr/lib/systemd/system/repmgr.service << "EOF"
[Unit]
Description=A replication manager, and failover management tool for PostgreSQL
After=syslog.target
After=network.target
After=postgres.service
[Service]
Type=forking
User=postgres
Group=postgres
PIDFile=/pgdb/repmgr-5.3.3/repmgrd.pid
Environment=REPMGRDCONF=/pgdb/repmgr-5.3.3/repmgr.conf
Environment=PIDFILE=/pgdb/repmgr-5.3.3/repmgrd.pid
ExecStart=/pgdb/repmgr-5.3.3/repmgrd -f /pgdb/repmgr-5.3.3/repmgr.conf -p /pgdb/repmgr-5.3.3/repmgrd.pid -d --verbose
ExecStop=/usr/bin/kill -TERM $MAINPID
ExecReload=/usr/bin/kill -HUP $MAINPID
TimeoutSec=300
[Install]
WantedBy=multi-user.target
EOF
chmod +x /usr/lib/systemd/system/repmgr.service
systemctl daemon-reload
systemctl enable repmgr.service
systemctl start repmgr.service
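Once the unit is started, confirm that the daemon is running and registered (a minimal check):
systemctl status repmgr.service
su - postgres -c "repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status"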
2.6. Clone the Standbys
Node 2: 192.168.183.110
Node 3: 192.168.183.111
There is no need to initialize PostgreSQL on a standby node before registering it; the standby can be deployed in one step with the repmgr tool. Before cloning a standby, test database connectivity with the following command:
--Test database connectivity; the IP is the primary node's IP
[postgres@slave ~]$ psql 'host=192.168.183.109 user=repmgr dbname=repmgr connect_timeout=2'
psql (14.5)
Type "help" for help.
repmgr=#
2.6.1. Review the Configuration Files
--Node 2 192.168.183.110 (pgslave1)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=2 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.110' # node name, unique within the HA cluster
conninfo='host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
--Node 3 192.168.183.111 (pgslave2)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=3 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.111' # node name, unique within the HA cluster
conninfo='host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
2.6.2. Test the Clone (Dry Run)
The database service on the standby must be stopped. Use the --dry-run option of the repmgr command to check whether the standby can be cloned:
[postgres@pgslave1 ~]$ repmgr -h 192.168.183.109 -U repmgr -d repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby clone --dry-run
NOTICE: destination directory "/pgdb/data" provided
INFO: connecting to source node
DETAIL: connection string is: host=192.168.183.109 user=repmgr dbname=repmgr
DETAIL: current installation size is 33 MB
INFO: "repmgr" extension is installed in database "repmgr"
INFO: replication slot usage not requested; no replication slot will be set up for this standby
INFO: parameter "max_wal_senders" set to 10
NOTICE: checking for available walsenders on the source node (2 required)
INFO: sufficient walsenders available on the source node
DETAIL: 2 required, 10 available
NOTICE: checking replication connections can be made to the source server (2 required)
INFO: required number of replication connections could be made to the source server
DETAIL: 2 replication connections required
NOTICE: standby will attach to upstream node 1
HINT: consider using the -c/--fast-checkpoint option
INFO: would execute:
/pgdb/pgsql/bin/pg_basebackup -l "repmgr base backup" -D /pgdb/data -h 192.168.183.109 -p 5432 -U repmgr -X stream
INFO: all prerequisites for "standby clone" are met
2.6.3. Perform the Clone
[postgres@pgslave1 ~]$ repmgr -h 192.168.183.109 -U repmgr -d repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby clone
NOTICE: destination directory "/pgdb/data" provided
INFO: connecting to source node
DETAIL: connection string is: host=192.168.183.109 user=repmgr dbname=repmgr
DETAIL: current installation size is 33 MB
INFO: replication slot usage not requested; no replication slot will be set up for this standby
NOTICE: checking for available walsenders on the source node (2 required)
NOTICE: checking replication connections can be made to the source server (2 required)
INFO: checking and correcting permissions on existing directory "/pgdb/data"
NOTICE: starting backup (using pg_basebackup)...
HINT: this may take some time; consider using the -c/--fast-checkpoint option
INFO: executing:
/pgdb/pgsql/bin/pg_basebackup -l "repmgr base backup" -D /pgdb/data -h 192.168.183.109 -p 5432 -U repmgr -X stream
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example: pg_ctl -D /pgdb/data start
HINT: after starting the server, you need to register this standby with "repmgr standby register"
Note: for cascading replication, add the --upstream-node-id parameter when cloning; it specifies the node the data stream is taken from.
2.6.4. Start the New Replica Databases
[postgres@pgslave1 ~]$ pg_ctl start -D /pgdb/data/
[postgres@pgslave2 ~]$ pg_ctl start -D /pgdb/data/
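After both replicas are up, the primary should report two walsender connections, one per standby (a minimal check, run on pgmaster):
psql -h 192.168.183.109 -U repmgr -d repmgr -c "select client_addr, state, sync_state from pg_stat_replication;"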
2.7. Register the Standby Nodes with repmgr
2.7.1. Register the Local Server as a Standby Node
Node 2 192.168.183.110
Node 3 192.168.183.111
--Test the registration (dry run)
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby register --dry-run
--Register
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby register
INFO: connecting to local node "192.168.183.110" (ID: 2)
INFO: connecting to primary database
WARNING: --upstream-node-id not supplied, assuming upstream node is primary (node ID: 1)
INFO: standby registration complete
NOTICE: standby node "192.168.183.110" (ID: 2) successfully registered
2.7.2. Start the repmgrd Daemon
Run on both node 2 and node 3
su - root
cat > /usr/lib/systemd/system/repmgr.service << "EOF"
[Unit]
Description=A replication manager, and failover management tool for PostgreSQL
After=syslog.target
After=network.target
After=postgres.service
[Service]
Type=forking
User=postgres
Group=postgres
PIDFile=/pgdb/repmgr-5.3.3/repmgrd.pid
Environment=REPMGRDCONF=/pgdb/repmgr-5.3.3/repmgr.conf
Environment=PIDFILE=/pgdb/repmgr-5.3.3/repmgrd.pid
ExecStart=/pgdb/repmgr-5.3.3/repmgrd -f /pgdb/repmgr-5.3.3/repmgr.conf -p /pgdb/repmgr-5.3.3/repmgrd.pid -d --verbose
ExecStop=/usr/bin/kill -TERM $MAINPID
ExecReload=/usr/bin/kill -HUP $MAINPID
TimeoutSec=300
[Install]
WantedBy=multi-user.target
EOF
chmod +x /usr/lib/systemd/system/repmgr.service
systemctl daemon-reload
systemctl enable repmgr.service
systemctl start repmgr.service
2.7.3. Check the repmgr Cluster Status
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status
ID | Name | Role | Status | Upstream | repmgrd | PID | Paused? | Upstream last seen
----+-----------------+---------+-----------+-----------------+---------+-------+---------+--------------------
1 | 192.168.183.109 | primary | * running | | running | 8488 | no | n/a
2 | 192.168.183.110 | standby | running | 192.168.183.109 | running | 12749 | no | 0 second(s) ago
3 | 192.168.183.111 | standby | running | 192.168.183.109 | running | 7548 | no | 1 second(s) ago
The repmgr service status shows that the primary and both standby nodes are now online.
2.8. Configure the Witness Service
2.8.1. Role of the Witness Node
The witness node's main job is to help establish a voting quorum when the primary becomes unavailable, so that the standbys can reach the required number of votes and one of the existing standbys can be chosen for promotion to take over the cluster. Put simply: when a standby can no longer reach the primary, it contacts the witness node. If the witness is also unreachable, the standby concludes that its own network has failed; if the witness can be reached, the primary is considered down. The witness acts as a kind of trusted arbiter.
The witness runs on the witness server and requires a freshly initialized PostgreSQL instance on that node. Follow the steps below to produce a fully functional repmgr witness:
2.8.2. Check the Password File
[postgres@pgwitness ~]$ cat /home/postgres/.pgpass
top@123
192.168.183.109:5432:repmgr:repmgr:top@123
192.168.183.110:5432:repmgr:repmgr:top@123
192.168.183.111:5432:repmgr:repmgr:top@123
192.168.183.112:5432:repmgr:repmgr:top@123
2.8.3. Initialize a New PostgreSQL Instance
/pgdb/pgsql/bin/initdb --username=postgres --pwfile=/home/postgres/.pgpass -D /pgdb/data --encoding=UTF8 --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8
2.8.4. Configuration Files
--Database configuration file
cp /pgdb/data/postgresql.conf /pgdb/data/postgresql.confbak
sed -i "/^#listen_addresses = 'localhost'/s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /pgdb/data/postgresql.conf
sed -i "s/^#port = 5432/port = 5432/" /pgdb/data/postgresql.conf
sed -i 's/max_connections = 100/max_connections = 500/' /pgdb/data/postgresql.conf
sed -i "/^#wal_level/s/^#//" /pgdb/data/postgresql.conf #去掉注释
sed -i 's/#archive_mode = off/archive_mode = on/' /pgdb/data/postgresql.conf
sed -i "/^#archive_command = ''/s/#archive_command = ''/archive_command ='cp %p \/pgdb\/pg_archive\/%f'/" /pgdb/data/postgresql.conf
sed -i "/^#log_destination = 'stderr'/s/#log_destination = 'stderr'/log_destination = 'csvlog'/" /pgdb/data/postgresql.conf
sed -i "/^#logging_collector = off/s/#logging_collector = off/logging_collector = on/" /pgdb/data/postgresql.conf
sed -i "/^#log_directory = 'log'/s/^#//" /pgdb/data/postgresql.conf #去掉注释
sed -i "/^#log_filename/s/^#//" /pgdb/data/postgresql.conf #去掉注释
sed -i "/^#log_file_mode/s/^#//" /pgdb/data/postgresql.conf #去掉注释
sed -i "/^#log_rotation_age/s/^#//" /pgdb/data/postgresql.conf #去掉注释
sed -i "/^#log_rotation_size/s/^#//" /pgdb/data/postgresql.conf #去掉注释
sed -i "/^shared_buffers = 128MB/s/shared_buffers = 128MB/shared_buffers = 1024MB/" /pgdb/data/postgresql.conf #物理内存25~40%
sed -i "/^#work_mem = 4MB/s/#work_mem = 4MB/work_mem = 30MB/" /pgdb/data/postgresql.conf
sed -i "/^#maintenance_work_mem = 64MB/s/#maintenance_work_mem = 64MB/maintenance_work_mem = 256MB/" /pgdb/data/postgresql.conf
sed -i "/^#temp_buffers = 8MB/s/#temp_buffers = 8MB/temp_buffers = 256MB/" /pgdb/data/postgresql.conf
sed -i "/^#shared_preload_libraries = ''/s/#shared_preload_libraries = ''/shared_preload_libraries = 'repmgr'/" /pgdb/data/postgresql.conf
--Client access file (pg_hba.conf)
cp /pgdb/data/pg_hba.conf /pgdb/data/pg_hba.confbak
echo "host all all 0.0.0.0/0 md5" >> /pgdb/data/pg_hba.conf
echo "local replication repmgr trust" >> /pgdb/data/pg_hba.conf
echo "host replication repmgr 127.0.0.1/32 trust" >> /pgdb/data/pg_hba.conf
echo "host replication repmgr 192.168.183.0/0 trust" >> /pgdb/data/pg_hba.conf
echo "local repmgr repmgr trust" >> /pgdb/data/pg_hba.conf
echo "host repmgr repmgr 127.0.0.1/32 trust" >> /pgdb/data/pg_hba.conf
echo "host repmgr repmgr 192.168.183.0/0 trust" >> /pgdb/data/pg_hba.conf
2.8.5.启动PostgreSQL服务
su - root
cat > /usr/lib/systemd/system/postgres.service << "EOF"
[Unit]
Description=PostgreSQL database server
After=network.target
[Service]
Type=forking
User=postgres
Group=postgres
Environment=PGPORT=5432
Environment=PGDATA=/pgdb/data
OOMScoreAdjust=-1000
ExecStart=/pgdb/pgsql/bin/pg_ctl start -D /pgdb/data
ExecStop=/pgdb/pgsql/bin/pg_ctl stop -D /pgdb/data -s -m fast
ExecReload=/pgdb/pgsql/bin/pg_ctl reload -D /pgdb/data -s
TimeoutSec=300
[Install]
WantedBy=multi-user.target
EOF
chmod +x /usr/lib/systemd/system/postgres.service
systemctl daemon-reload
systemctl enable postgres.service
systemctl start postgres.service
2.8.6. Create the Witness Database
su - postgres -c "/pgdb/pgsql/bin/psql -d postgres -h127.0.0.1 -p5432 -c \"create user repmgr with superuser password 'top@123';\""
su - postgres -c "/pgdb/pgsql/bin/psql -d postgres -h127.0.0.1 -p5432 -c \"create database repmgr owner repmgr;\""
2.8.7. Review the repmgr Configuration File
--Node 4 192.168.183.112 (pgwitness)
cat > /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# Basic settings
node_id=4 # node ID, unique identifier for each node in the HA cluster
node_name='192.168.183.112' # node name, unique within the HA cluster
conninfo='host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2' # connection string for this node's database
data_directory='/pgdb/data' # PostgreSQL data directory
replication_user='repmgr' # streaming replication user, repmgr by default
repmgr_bindir='/pgdb/repmgr-5.3.3' # repmgr software directory
pg_bindir='/pgdb/pgsql/bin' # PostgreSQL binary directory
# Logging
log_level=INFO #log level
log_facility=STDERR #log facility
log_file='/pgdb/repmgr-5.3.3/repmgrd.log'
log_status_interval=10 #emit a status log line at this interval in seconds (default 300)
EOF
2.9. Register the Witness Node with repmgr
2.9.1. Register the Local Server as the Witness Node
--Test the registration (dry run)
[postgres@pgwitness repmgr-5.3.3]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf -h 192.168.183.109 -U repmgr -d repmgr witness register --dry-run
INFO: connecting to witness node "192.168.183.112" (ID: 4)
INFO: connecting to primary node
INFO: prerequisites for registering the witness node are met
--Register
[postgres@pgwitness repmgr-5.3.3]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf -h 192.168.183.109 -U repmgr -d repmgr witness register
INFO: connecting to witness node "192.168.183.112" (ID: 4)
INFO: connecting to primary node
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
INFO: witness registration complete
NOTICE: witness node "192.168.183.112" (ID: 4) successfully registered
2.9.2. Start the repmgrd Daemon
Node 4
su - root
cat > /usr/lib/systemd/system/repmgr.service << "EOF"
[Unit]
Description=A replication manager, and failover management tool for PostgreSQL
After=syslog.target
After=network.target
After=postgres.service
[Service]
Type=forking
User=postgres
Group=postgres
PIDFile=/pgdb/repmgr-5.3.3/repmgrd.pid
Environment=REPMGRDCONF=/pgdb/repmgr-5.3.3/repmgr.conf
Environment=PIDFILE=/pgdb/repmgr-5.3.3/repmgrd.pid
ExecStart=/pgdb/repmgr-5.3.3/repmgrd -f /pgdb/repmgr-5.3.3/repmgr.conf -p /pgdb/repmgr-5.3.3/repmgrd.pid -d --verbose
ExecStop=/usr/bin/kill -TERM $MAINPID
ExecReload=/usr/bin/kill -HUP $MAINPID
TimeoutSec=300
[Install]
WantedBy=multi-user.target
EOF
chmod +x /usr/lib/systemd/system/repmgr.service
systemctl daemon-reload
systemctl enable repmgr.service
systemctl start repmgr.service
2.9.3. Check the repmgr Cluster Status
[postgres@pgwitness ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status
ID | Name | Role | Status | Upstream | repmgrd | PID | Paused? | Upstream last seen
----+-----------------+---------+-----------+-----------------+---------+-------+---------+--------------------
1 | 192.168.183.109 | primary | * running | | running | 8488 | no | n/a
2 | 192.168.183.110 | standby | running | 192.168.183.109 | running | 12749 | no | 0 second(s) ago
3 | 192.168.183.111 | standby | running | 192.168.183.109 | running | 7548 | no | 0 second(s) ago
4 | 192.168.183.112 | witness | * running | 192.168.183.109 | running | 7915 | no | 0 second(s) ago
The repmgr service status shows that all four nodes (primary, standbys and witness) are now online.
3. Service Management
3.1. Manual Failover
When the primary server crashes, its service becomes unavailable, or it is stopped by manual intervention or operator error, the repmgr cluster may not be able to handle the situation automatically. In that case, with the repmgrd daemon stopped, a manual switchover is performed so that a standby server takes over from the failed primary and the whole repmgr cluster remains available to clients. This is done with repmgr standby promote.
3.1.1. Show All Nodes Registered in the Cluster
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-----------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | primary | * running | | default | 100 | 1 | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | standby | running | 192.168.183.109 | default | 100 | 1 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | 192.168.183.109 | default | 100 | 1 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.109 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
Simulate a primary outage by stopping the primary's database service.
3.1.2. Stop the Primary
--Stop the primary
[postgres@pgmaster ~]$ pg_ctl stop -D /pgdb/data
--Show all registered nodes; run from another node
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+---------------+-------------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | primary | ? unreachable | ? | default | 100 | | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | standby | running | ? 192.168.183.109 | default | 100 | 1 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | ? 192.168.183.109 | default | 100 | 1 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | ? 192.168.183.109 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)
- node "192.168.183.109" (ID: 1) is registered as an active primary but is unreachable
- unable to connect to node "192.168.183.110" (ID: 2)'s upstream node "192.168.183.109" (ID: 1)
- unable to determine if node "192.168.183.110" (ID: 2) is attached to its upstream node "192.168.183.109" (ID: 1)
- unable to connect to node "192.168.183.111" (ID: 3)'s upstream node "192.168.183.109" (ID: 1)
- unable to determine if node "192.168.183.111" (ID: 3) is attached to its upstream node "192.168.183.109" (ID: 1)
- unable to connect to node "192.168.183.112" (ID: 4)'s upstream node "192.168.183.109" (ID: 1)
HINT: execute with --verbose option to see connection error messages
At this point the replication cluster is in a partially degraded state: both standbys keep accepting read-only connections while still trying to connect to the stopped primary. Note that the repmgr metadata tables have not yet been updated; running repmgr cluster show reports the discrepancy:
3.1.3. Promote slave1 to Primary
Run on slave1
--Promote slave1 to primary
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby promote
WARNING: 2 sibling nodes found, but option "--siblings-follow" not specified
DETAIL: these nodes will remain attached to the current primary:
192.168.183.111 (node ID: 3)
192.168.183.112 (node ID: 4, witness server)
NOTICE: promoting standby to primary
DETAIL: promoting server "192.168.183.110" (ID: 2) using pg_promote()
NOTICE: waiting up to 60 seconds (parameter "promote_check_timeout") for promotion to complete
NOTICE: STANDBY PROMOTE successful
DETAIL: server "192.168.183.110" (ID: 2) was successfully promoted to primary
--Check the cluster status
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-------------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | primary | - failed | ? | default | 100 | | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | * running | | default | 100 | 2 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | ? 192.168.183.109 | default | 100 | 1 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.110 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)
- unable to connect to node "192.168.183.111" (ID: 3)'s upstream node "192.168.183.109" (ID: 1)
- unable to determine if node "192.168.183.111" (ID: 3) is attached to its upstream node "192.168.183.109" (ID: 1)
HINT: execute with --verbose option to see connection error messages
The cluster status shows that the "primary unreachable" warnings have cleared, because the former standby slave1 has successfully taken over the primary role. Taking over alone is not enough for the reliability of the whole repmgr cluster, though: pgmaster has failed, slave1 has been promoted to primary, but slave2 is still a standby whose upstream is 192.168.183.109. Run repmgr standby follow so that the standby not yet following the new primary (slave2) rejoins the new cluster.
3.1.4. Make slave2 Follow the New Primary
Run on slave2
--Make slave2 follow the new primary
[postgres@pgslave2 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby follow
NOTICE: attempting to find and follow current primary
INFO: local node 3 can attach to follow target node 2
DETAIL: local node's recovery point: 0/E0000A0; follow target node's fork point: 0/E0000A0
NOTICE: setting node 3's upstream to node 2
WARNING: node "192.168.183.111" not found in "pg_stat_replication"
NOTICE: STANDBY FOLLOW successful
DETAIL: standby attached to upstream node "192.168.183.110" (ID: 2)
--Check the cluster status
[postgres@pgslave2 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-----------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | primary | - failed | ? | default | 100 | | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | * running | | default | 100 | 2 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | 192.168.183.110 | default | 100 | 2 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.110 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)
HINT: execute with --verbose option to see connection error messages
3.1.5. Rejoin the Old Primary to the New Cluster
Method 1: repmgr node rejoin
--Keep the old primary shut down
ps -ef | grep postgres    #should show no postgres processes
--Dry-run the rejoin; the IP is the new primary's IP; run on the old primary server
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node rejoin -d 'host=192.168.183.110 user=repmgr dbname=repmgr' --force-rewind --dry-run --verbose
--Rejoin the old primary; the IP is the new primary's IP; run on the old primary server
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node rejoin -d 'host=192.168.183.110 user=repmgr dbname=repmgr' --force-rewind --verbose
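If the rejoin succeeds, the old primary should come back as a standby following the new primary (a minimal check on the old primary):
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show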
Method 2: Re-clone as a New Standby
If method 1 fails, fall back to method 2.
Run on the old primary server
--Keep the old primary shut down
ps -ef | grep postgres    #should show no postgres processes
--Move the old data directory aside and create an empty one
[postgres@pgmaster]$ cd /pgdb
[postgres@pgmaster pgdb]$ mv data/ databak/
[postgres@pgmaster pgdb]$ mkdir -p data/
--Dry-run the clone of the new standby; the IP is the new primary's IP
repmgr -h 192.168.183.110 -U repmgr -d repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby clone --dry-run
--Clone the new standby; the IP is the new primary's IP
repmgr -h 192.168.183.110 -U repmgr -d repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby clone
--Start the new standby
[postgres@pgmaster pgdb]$ pg_ctl start -D /pgdb/data
--Register it with repmgr as a standby node
[postgres@pgmaster pgdb]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby register --dry-run
[postgres@pgmaster pgdb]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby register -F
--Start the repmgrd daemon
[postgres@pgmaster ~]# sudo systemctl start repmgr
--Check the cluster status
[postgres@pgmaster pgdb]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-----------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | standby | running | 192.168.183.110 | default | 100 | 2 | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | * running | | default | 100 | 2 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | 192.168.183.110 | default | 100 | 2 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.110 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3.2. Automatic Failover
3.2.1. Add Failover Parameters
Automatic failover is handled by the repmgrd daemon. With the setup from the previous sections in place, edit repmgr.conf and add the failover parameters below on the primary and on every standby node.
cat >> /pgdb/repmgr-5.3.3/repmgr.conf << "EOF"
# failover settings
failover='automatic'
promote_command='/pgdb/repmgr-5.3.3/repmgr standby promote -f /pgdb/repmgr-5.3.3/repmgr.conf --log-to-file'
follow_command='/pgdb/repmgr-5.3.3/repmgr standby follow -f /pgdb/repmgr-5.3.3/repmgr.conf --log-to-file --upstream-node-id=%n'
# high-availability settings
location='location1' # node location
priority=100
monitoring_history=true #enable monitoring history
monitor_interval_secs=5 #interval at which monitoring data is written
connection_check_type='ping'
reconnect_attempts=3 # attempts to reconnect to the primary before failing over (default 6)
reconnect_interval=5 #retry every 5 seconds
standby_disconnect_on_failover=true
use_replication_slots=true
EOF
3.2.2. Restart the Database Service so the Configuration Takes Effect
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node service --action=restart
3.2.3. Restart repmgrd on the Primary and Standbys
sudo systemctl restart repmgr
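After the restarts, repmgrd should be running on every node with the new failover settings picked up (a minimal check, run from any node):
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status
grep -E "^(failover|priority|reconnect)" /pgdb/repmgr-5.3.3/repmgr.conf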
3.2.4. Simulate a Primary Failure
--Check the cluster status
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-----------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | standby | running | 192.168.183.110 | default | 100 | 2 | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | * running | | default | 100 | 2 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | 192.168.183.110 | default | 100 | 2 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.110 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
--Stop the primary
[postgres@pgslave1 ~]$ pg_ctl stop -D /pgdb/data/
waiting for server to shut down.... done
server stopped
--Check the cluster status
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+---------------+-------------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | standby | running | ? 192.168.183.110 | default | 100 | 2 | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | ? unreachable | ? | default | 100 | | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | ? 192.168.183.110 | default | 100 | 2 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | ? 192.168.183.110 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)'s upstream node "192.168.183.110" (ID: 2)
- unable to determine if node "192.168.183.109" (ID: 1) is attached to its upstream node "192.168.183.110" (ID: 2)
- unable to connect to node "192.168.183.110" (ID: 2)
- node "192.168.183.110" (ID: 2) is registered as an active primary but is unreachable
- unable to connect to node "192.168.183.111" (ID: 3)'s upstream node "192.168.183.110" (ID: 2)
- unable to determine if node "192.168.183.111" (ID: 3) is attached to its upstream node "192.168.183.110" (ID: 2)
- unable to connect to node "192.168.183.112" (ID: 4)'s upstream node "192.168.183.110" (ID: 2)
HINT: execute with --verbose option to see connection error messages
--Check the local node's status
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node status
Node "192.168.183.109":
PostgreSQL version: 14.2
Total data size: 33 MB
Conninfo: host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
Role: standby
WAL archiving: disabled (on standbys "archive_mode" must be set to "always" to be effective)
Archive command: /usr/bin/lz4 -q -z %p /pgdb/pg_archive/%f.lz4
WALs pending archiving: 0 pending files
Replication connections: 0 (of maximal 10)
Replication slots: 0 physical (of maximal 10; 0 missing)
Upstream node: 192.168.183.110 (ID: 2)
Replication lag: 0 seconds
Last received LSN: 0/120000A0
Last replayed LSN: 0/120000A0
--Check the repmgr service status
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status
ID | Name | Role | Status | Upstream | repmgrd | PID | Paused? | Upstream last seen
----+-----------------+---------+---------------+-------------------+---------+-------+---------+--------------------
1 | 192.168.183.109 | standby | running | ? 192.168.183.110 | running | 49071 | no | 1659 second(s) ago
2 | 192.168.183.110 | primary | ? unreachable | ? | n/a | n/a | n/a | n/a
3 | 192.168.183.111 | standby | running | ? 192.168.183.110 | running | 21695 | no | 1662 second(s) ago
4 | 192.168.183.112 | witness | * running | ? 192.168.183.110 | running | 21013 | no | 1665 second(s) ago
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)'s upstream node "192.168.183.110" (ID: 2)
- unable to determine if node "192.168.183.109" (ID: 1) is attached to its upstream node "192.168.183.110" (ID: 2)
- unable to connect to node "192.168.183.110" (ID: 2)
- node "192.168.183.110" (ID: 2) is registered as an active primary but is unreachable
- unable to connect to node "192.168.183.111" (ID: 3)'s upstream node "192.168.183.110" (ID: 2)
- unable to determine if node "192.168.183.111" (ID: 3) is attached to its upstream node "192.168.183.110" (ID: 2)
- unable to connect to node "192.168.183.112" (ID: 4)'s upstream node "192.168.183.110" (ID: 2)
HINT: execute with --verbose option to see connection error messages
3.2.5. Check the Standby Logs
The log /pgdb/repmgr-5.3.3/repmgrd.log shows that slave1 has been promoted to primary:
[2023-08-21 00:34:10] [DETAIL] PQping() returned "PQPING_NO_RESPONSE"
[2023-08-21 00:34:10] [WARNING] unable to reconnect to node "192.168.183.109" (ID: 1) after 3 attempts
[2023-08-21 00:34:10] [INFO] 2 active sibling nodes registered
[2023-08-21 00:34:10] [INFO] 4 total nodes registered
[2023-08-21 00:34:10] [INFO] primary node "192.168.183.109" (ID: 1) and this node have the same location ("location1")
[2023-08-21 00:34:10] [INFO] local node's last receive lsn: 0/6049BE0
[2023-08-21 00:34:10] [INFO] checking state of sibling node "192.168.183.110" (ID: 2)
[2023-08-21 00:34:10] [INFO] node "192.168.183.110" (ID: 2) reports its upstream is node 1, last seen 18 second(s) ago
[2023-08-21 00:34:10] [INFO] standby node "192.168.183.110" (ID: 2) last saw primary node 18 second(s) ago
[2023-08-21 00:34:10] [INFO] last receive LSN for sibling node "192.168.183.110" (ID: 2) is: 0/6049BE0
[2023-08-21 00:34:10] [INFO] node "192.168.183.110" (ID: 2) has same LSN as current candidate "192.168.183.111" (ID: 3)
[2023-08-21 00:34:10] [INFO] node "192.168.183.110" (ID: 2) has same priority but lower node_id than current candidate "192.168.183.111" (ID: 3)
[2023-08-21 00:34:10] [INFO] checking state of sibling node "192.168.183.112" (ID: 4)
[2023-08-21 00:34:10] [INFO] node "192.168.183.112" (ID: 4) reports its upstream is node 1, last seen 20 second(s) ago
[2023-08-21 00:34:10] [INFO] witness node "192.168.183.112" (ID: 4) last saw primary node 20 second(s) ago
[2023-08-21 00:34:10] [INFO] visible nodes: 3; total nodes: 3; no nodes have seen the primary within the last 10 seconds
[2023-08-21 00:34:10] [NOTICE] promotion candidate is "192.168.183.110" (ID: 2)
[2023-08-21 00:34:10] [INFO] follower node awaiting notification from a candidate node
[2023-08-21 00:34:18] [NOTICE] attempting to follow new primary "192.168.183.110" (node ID: 2)
[2023-08-21 00:34:18] [NOTICE] redirecting logging output to "/pgdb/repmgr-5.3.3/repmgrd.log"
[2023-08-21 00:34:18] [INFO] local node 3 can attach to follow target node 2
[2023-08-21 00:34:18] [DETAIL] local node's recovery point: 0/6049BE0; follow target node's fork point: 0/6049BE0
[2023-08-21 00:34:18] [NOTICE] setting node 3's upstream to node 2
[2023-08-21 00:34:18] [WARNING] unable to reload server configuration
[2023-08-21 00:34:18] [NOTICE] restarting server using "sudo systemctl restart postgres"
[2023-08-21 00:34:20] [WARNING] unable to connect to old upstream node 1 to remove replication slot
[2023-08-21 00:34:20] [HINT] if reusing this node, you should manually remove any inactive replication slots
[2023-08-21 00:34:20] [NOTICE] STANDBY FOLLOW successful
[2023-08-21 00:34:20] [DETAIL] standby attached to upstream node "192.168.183.110" (ID: 2)
INFO: set_repmgrd_pid(): provided pidfile is /pgdb/repmgr-5.3.3/repmgrd.pid
[2023-08-21 00:34:20] [NOTICE] node "192.168.183.111" (ID: 3) now following new upstream node "192.168.183.110" (ID: 2)
[2023-08-21 00:34:20] [INFO] resuming standby monitoring mode
[2023-08-21 00:34:20] [DETAIL] following new primary "192.168.183.110" (ID: 2)
3.2.6. Check the Cluster Status: the Standby Has Been Promoted
[postgres@pgslave2 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-----------------+-----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | primary | - failed | ? | location1 | 100 | | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | * running | | location1 | 100 | 2 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | 192.168.183.110 | location1 | 100 | 1 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.110 | location1 | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)
HINT: execute with --verbose option to see connection error messages
4. Cluster Management
4.1. Check Cluster Status
4.1.1. Node-level Commands
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node status
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node check
4.1.2. Cluster-level Commands
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster crosscheck
4.1.3. Service-level Commands
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service status
4.1.4. Status Can Also Be Queried from the Metadata Tables
Connect to the repmgr database and check the cluster status:
SELECT * FROM repmgr.nodes;
4.2. Managing repmgrd
These commands can be run on any node. They are useful for routine maintenance, so that a planned shutdown of the primary does not trigger a failover.
4.2.1. Pause repmgrd
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service pause
4.2.2. Unpause repmgrd
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service unpause
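A typical maintenance window built from these two commands might look like the following sketch (pause/unpause can be run from any node; the pg_ctl steps run on the primary being maintained):
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service pause
pg_ctl stop -D /pgdb/data -m fast    #planned primary maintenance; repmgrd does not trigger a failover while paused
pg_ctl start -D /pgdb/data
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf service unpause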
4.3. Adding and Removing Standby Nodes
4.3.1. Add a Standby Node to the Cluster
After the node has been configured, run on that node:
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf standby register
4.3.2. Remove a Standby Node from the Cluster
repmgr standby unregister -f /pgdb/repmgr-5.3.3/repmgr.conf --node-id=3
4.4. Remove a Primary Node from the Cluster
After the node has been configured, run on that node:
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf primary unregister --node-id=1
5. Failure Handling
After the primary fails, a new primary is elected. If the failed primary is then started again without being repaired, it shows up in this state: ! running
To remove node 1, run the following command on node 2 or node 3; if it cannot be unregistered, append --force to the command to force the removal:
repmgr primary unregister --node-id 1
If a broken standby node has been repaired, it can be added back to the cluster, as sketched below.
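A repaired standby is usually brought back with node rejoin against the current primary, as in section 3.1.5 method 1 (a sketch using this guide's paths; the host below is assumed to be the current primary, here 192.168.183.110):
--keep PostgreSQL stopped on the repaired node, then run on that node
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node rejoin -h 192.168.183.110 -U repmgr -d repmgr --force-rewind --dry-run
repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node rejoin -h 192.168.183.110 -U repmgr -d repmgr --force-rewind
--if the rejoin is not possible, re-clone the node as in section 3.1.5, method 2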
6. Common Management Commands
repmgr primary register: install the repmgr extension in PostgreSQL and register the node as primary (adds the primary's record to the repmgr metadata)
repmgr primary unregister: unregister an inactive primary node
repmgr standby clone: clone data from another node onto a standby
repmgr standby register: register a standby node (adds the standby's record to the repmgr metadata)
repmgr standby unregister: remove a standby's record from the repmgr metadata
repmgr standby promote: promote a standby to primary
repmgr standby follow: make a standby follow the new primary
repmgr standby switchover: promote a standby to primary and demote the primary to standby
repmgr witness register: register a witness node (adds the witness's record to the repmgr metadata)
repmgr witness unregister: remove a witness node
repmgr node status: show a node's basic information and replication status
repmgr node check: run health checks on a node from a replication perspective
repmgr node rejoin: rejoin a failed node to the cluster
repmgr cluster show: show information for all nodes registered in the cluster
repmgr cluster matrix: run cluster show on every node and summarize the results
repmgr cluster crosscheck: cross-check connections between every pair of nodes
repmgr cluster event: output the event log
repmgr cluster cleanup: purge monitoring history
repmgr service status: show node and repmgrd status; repmgrd must be running for a switchover
7. Troubleshooting
1. fe_sendauth: no password supplied
The following error appears when registering the local server as the primary node; the cause is an incorrect port in .pgpass:
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf primary register
INFO: connecting to primary database...
ERROR: connection to database failed
DETAIL:
connection to server at "192.168.183.109", port 5432 failed: fe_sendauth: no password supplied
DETAIL: attempted to connect using:
user=repmgr connect_timeout=2 dbname=repmgr host=192.168.183.109 port=5432 fallback_application_name=repmgr options=-csearch_path=
Check that the password file entries are correct:
[postgres@pgmaster ~]$ cat .pgpass
top@123 --postgres password
192.168.183.109:5432:repmgr:repmgr:top@123 --node IP:port:database:replication user:password
192.168.183.110:5432:repmgr:repmgr:top@123
192.168.183.111:5432:repmgr:repmgr:top@123
192.168.183.112:5432:repmgr:repmgr:top@123
2. Error when rejoining the cluster
Problem: rejoining the old primary to the new cluster fails with the following error:
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node rejoin -h 192.168.183.109 -U repmgr -d repmgr --force-rewind
ERROR: database is still running in state "in production"
HINT: "repmgr node rejoin" cannot be executed on a running node
Analysis:
--Check the cluster state
[postgres@pgslave1 ~]$ pg_controldata -D /pgdb/data | grep 'Database cluster state'
Database cluster state: in production
[postgres@pgslave1 ~]$ psql -c "\x" -c "SELECT * FROM pg_stat_replication;"
Expanded display is on.
-[ RECORD 1 ]----+------------------------------
pid | 19777
usesysid | 16384
usename | repmgr
application_name | 192.168.183.111
client_addr | 192.168.183.111
client_hostname |
client_port | 15254
backend_start | 2023-08-20 14:47:53.265395+08
backend_xmin |
state | streaming
sent_lsn | 0/E002B08
write_lsn | 0/E002B08
flush_lsn | 0/E002B08
replay_lsn | 0/E002B08
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
reply_time | 2023-08-20 15:00:16.606724+08
[postgres@pgslave1 ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf cluster show
ID | Name | Role | Status | Upstream | Location | Priority | Timeline | Connection string
----+-----------------+---------+-----------+-----------------+----------+----------+----------+-----------------------------------------------------------------------------
1 | 192.168.183.109 | primary | - failed | ? | default | 100 | | host=192.168.183.109 port=5432 user=repmgr dbname=repmgr connect_timeout=2
2 | 192.168.183.110 | primary | * running | | default | 100 | 2 | host=192.168.183.110 port=5432 user=repmgr dbname=repmgr connect_timeout=2
3 | 192.168.183.111 | standby | running | 192.168.183.110 | default | 100 | 2 | host=192.168.183.111 port=5432 user=repmgr dbname=repmgr connect_timeout=2
4 | 192.168.183.112 | witness | * running | 192.168.183.110 | default | 0 | n/a | host=192.168.183.112 port=5432 user=repmgr dbname=repmgr connect_timeout=2
WARNING: following issues were detected
- unable to connect to node "192.168.183.109" (ID: 1)
HINT: execute with --verbose option to see connection error messages
--First shut down PostgreSQL on the old primary, change the IP in the command to the new primary's IP, then run on the old primary server
[postgres@pgmaster ~]$ repmgr -f /pgdb/repmgr-5.3.3/repmgr.conf node rejoin -h 192.168.183.110 -U repmgr -d repmgr --force-rewind
Reference: https://blog.csdn.net/weixin_46199817/article/details/119648490




