
Greenplum 6.12.1 Installation, Deployment, and Adding Mirrors Separately

Original article by 高云龙 (云和恩墨), 2022-09-06

Greenplum is a relational database aimed at data-warehouse workloads. It is built on top of PostgreSQL using an MPP (Massively Parallel Processing) architecture; in essence it is a tightly coupled database management system formed by multiple PostgreSQL instances working together.

Greenplum was founded in Silicon Valley in 2003 and released its first product in 2006. In July 2010 it was acquired by EMC, which made Greenplum the strategic core product of its analytics cloud and invested heavily in it. Greenplum entered the Chinese market in December 2008 and announced independent operations in China on January 1, 2010. In 2015, Pivotal embraced the open-source community and open-sourced Greenplum.

Greenplum consists of three components:

  • Master node: the entry point of the GP database system; it accepts client connections and submitted SQL statements and distributes the workload to the segments.
  • Segment nodes: responsible for data storage; each segment is an independent PostgreSQL database holding a portion of the data (see the example after this list).
  • Interconnect: handles communication between the different PostgreSQL instances.
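
Once the cluster built below is running, the MPP model can be observed directly: each row is hashed to one segment by the table's distribution key, and the hidden gp_segment_id column shows where each row landed. A minimal sketch (the table name and port are just this article's example values):

psql -p 65432 -d postgres -c "CREATE TABLE t_demo (id int) DISTRIBUTED BY (id);"
psql -p 65432 -d postgres -c "INSERT INTO t_demo SELECT generate_series(1, 1000);"
psql -p 65432 -d postgres -c "SELECT gp_segment_id, count(*) FROM t_demo GROUP BY 1 ORDER BY 1;"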

Environment Preparation

OS: CentOS 7

Disable SELinux

vim  /etc/selinux/config
SELINUX=disabled
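
The change to /etc/selinux/config only takes effect after a reboot. To check the current mode, and to drop to permissive mode immediately as a stopgap until the reboot, the standard commands are:

# show the current SELinux mode
getenforce
# switch the running system to permissive mode (fully disabling still requires the config change plus a reboot)
setenforce 0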

Disable the firewall

systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld

Configure system parameters

# vim /etc/sysctl.conf 

# kernel.shmall = _PHYS_PAGES / 2 # See Shared Memory Pages
kernel.shmall = 197951838
# kernel.shmmax = kernel.shmall * PAGE_SIZE 
kernel.shmmax = 810810728448
kernel.shmmni = 4096
vm.overcommit_memory = 2 # See Segment Host Memory
vm.overcommit_ratio = 95 # See Segment Host Memory

net.ipv4.ip_local_port_range = 10000 65535 # See Port Settings
kernel.sem = 250 2048000 200 8192
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ipfrag_high_thresh = 41943040
net.ipv4.ipfrag_low_thresh = 31457280
net.ipv4.ipfrag_time = 60
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0 # See System Memory
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
kernel.core_pattern=/var/core/core.%h.%t
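
The commented formulas at the top of the file follow the Greenplum documentation: kernel.shmall is half of the physical pages and kernel.shmmax is shmall multiplied by the page size. Rather than copying the numbers above verbatim, the values for a given host can be computed, for example:

# compute kernel.shmall and kernel.shmmax for this host
echo "kernel.shmall = $(expr $(getconf _PHYS_PAGES) / 2)"
echo "kernel.shmmax = $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))"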

# vim /etc/security/limits.conf
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
* soft  core unlimited

# sysctl -p

Disable RemoveIPC

# vim /etc/systemd/logind.conf
RemoveIPC=no

# service systemd-logind restart

Disable THP (transparent huge pages)

# grubby --update-kernel=ALL --args="transparent_hugepage=never"
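
The grubby change only applies at the next boot; after rebooting, the THP status can be confirmed like this (it is disabled when [never] is the selected value):

# check whether transparent huge pages are disabled
cat /sys/kernel/mm/transparent_hugepage/enabled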

Create the user

By convention the Greenplum OS user is gpadmin, while the database superuser is still postgres.

[root@node1 software]# groupadd gpadmin
[root@node1 software]# useradd -g gpadmin gpadmin
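
The ssh-copy-id step later prompts for the gpadmin password on each target node, so it helps to set one on every node first. A sketch (the password is a placeholder to replace):

# set a password for gpadmin (run as root)
echo 'gpadmin:CHANGE_ME' | chpasswd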

Install dependency packages

[root@node1 software]# yum -y install apr apr-util bash bzip2 curl krb5 krb5-devel libcurl libevent libxml2 libyaml zlib openldap openssh openssl openssl-libs perl readline rsync R sed tar zip 

Download

Open-source download location; this installation uses 6.12.1 as the example.

wget https://github.com/greenplum-db/gpdb/releases/download/6.12.1/greenplum-db-6.12.1-rhel7-x86_64.rpm

Installation and Deployment

Install GP

This step must be performed on every node. You can either upload the rpm to each server and install it there, or install it on one server first and then copy the installed directory to the other servers (a sketch of the copy follows the commands below).

[root@node1 software]# rpm --install ./greenplum-db-6.12.1-rhel7-x86_64.rpm --prefix=/opt/
[root@node1 software]# chown -R gpadmin: /opt/greenplum*
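
If the rpm is only installed on one server, one way to push the result to the others is to pack up the install tree and unpack it over SSH. A sketch, assuming the rpm created /opt/greenplum-db-6.12.1 with a /opt/greenplum-db symlink (which matches the paths used later in this article):

# copy the installed directory from node1 to node2 and node3
tar -C /opt -czf /tmp/greenplum-db.tar.gz greenplum-db-6.12.1
for h in node2 node3; do
    scp /tmp/greenplum-db.tar.gz ${h}:/tmp/
    ssh ${h} "tar -C /opt -xzf /tmp/greenplum-db.tar.gz && ln -sfn /opt/greenplum-db-6.12.1 /opt/greenplum-db && chown -R gpadmin: /opt/greenplum*"
done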

Set up mutual SSH trust

--Configure SSH on one node and enable passwordless login to the other nodes
[gpadmin@node1 ~]$ ssh-keygen -t rsa
[gpadmin@node1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub 192.168.122.157
[gpadmin@node1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub 192.168.122.68

--Create a hostfile containing the hostnames of all nodes
[gpadmin@node1 ~]$ vim hostfile_exkeys
node1
node2
node3

--Use gpssh-exkeys to establish trust between the machines
[gpadmin@node1 ~]$ gpssh-exkeys -f hostfile_exkeys

--Create the data directories
[gpadmin@node1 ~]$ source /opt/greenplum-db/greenplum_path.sh
[root@node1 ~]# gpssh -f /home/gpadmin/hostfile_exkeys -e "mkdir -p /data/gpdata && chown -R gpadmin: /data/gpdata"
[root@node1 ~]# su - gpadmin
[gpadmin@node1 ~]$ source /opt/greenplum-db/greenplum_path.sh
[gpadmin@node1 ~]$ gpssh -h node1 -e "mkdir -p /data/gpdata/gpmaster"
[gpadmin@node1 ~]$ gpssh -h node2 -h node3 -e "mkdir -p /data/gpdata/seg1/primary"
[gpadmin@node1 ~]$ gpssh -h node2 -h node3 -e "mkdir -p /data/gpdata/seg2/primary"

Configuration file

[gpadmin@node1 ~]$ cp /opt/greenplum-db/docs/cli_help/gpconfigs/gpinitsystem_config .
[gpadmin@node1 ~]$ cat gpinitsystem_config|egrep -v '#|^$'
ARRAY_NAME="Greenplum Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=6000
declare -a DATA_DIRECTORY=(/data/gpdata/seg1/primary /data/gpdata/seg2/primary)
MASTER_HOSTNAME=node1
MASTER_DIRECTORY=/data/gpdata/gpmaster
MASTER_PORT=65432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MACHINE_LIST_FILE=/home/gpadmin/hostfile_gpinitsystem

--Create the initialization hostfile; it only needs to list the segment node servers
[gpadmin@node1 ~]$ cat hostfile_gpinitsystem
node2
node3
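
With two DATA_DIRECTORY entries per host and two hosts listed in hostfile_gpinitsystem, gpinitsystem creates 2 × 2 = 4 primary segments (gpseg0 through gpseg3), which is exactly what the initialization summary below reports.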

Configure environment variables

[gpadmin@node1 ~]$ vim ~/.bashrc
source /opt/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/gpdata/gpmaster/gpseg-1
[gpadmin@node1 ~]$ source ~/.bashrc

Initialization

[gpadmin@node1 ~]$ gpinitsystem -c gpinitsystem_config -h hostfile_gpinitsystem
.
.
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:----------------------------------------
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:----------------------------------------
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:-node2 	6000 	node2 	/data/gpdata/seg1/primary/gpseg0 	2
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:-node2 	6001 	node2 	/data/gpdata/seg2/primary/gpseg1 	3
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:-node3 	6000 	node3 	/data/gpdata/seg1/primary/gpseg2 	4
20220828:18:55:12:004071 gpinitsystem:node1:gpadmin-[INFO]:-node3 	6001 	node3 	/data/gpdata/seg2/primary/gpseg3 	5

Continue with Greenplum creation Yy|Nn (default=N):
> Y
20220828:18:55:27:004071 gpinitsystem:node1:gpadmin-[INFO]:-Building the Master instance database, please wait...
20220828:18:55:31:004071 gpinitsystem:node1:gpadmin-[INFO]:-Starting the Master in admin mode
.
.
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-Process results...
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-   Successful segment starts                                            = 4
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-Successfully started 4 of 4 segment instances
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-Starting Master instance node1 directory /data/gpdata/gpmaster/gpseg-1
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-Command pg_ctl reports Master node1 instance active
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-No standby master configured.  skipping...
20220828:18:55:51:008133 gpstart:node1:gpadmin-[INFO]:-Database successfully started
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[WARN]:-*******************************************************
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[WARN]:-were generated during the array creation
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-Please review contents of log file
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20220828.log
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-To determine level of criticality
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[WARN]:-*******************************************************
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-Greenplum Database instance successfully created
20220828:18:55:51:004071 gpinitsystem:node1:gpadmin-[INFO]:-------------------------------------------------------

Check

[gpadmin@node1 ~]$ gpstate -b
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-Starting gpstate with args: -b
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.12.1 build commit:7ec4678f29dd922d7d44501f5fc344b5d0d4d49f Open Source'
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.12.1 build commit:7ec4678f29dd922d7d44501f5fc344b5d0d4d49f Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Nov 20 2020 18:43:31'
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-Obtaining Segment details from master...
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-Gathering data from segments...
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-Greenplum instance status summary
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Master instance                                = Active
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Master standby                                 = No master standby configured
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total segment instance count from metadata     = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Primary Segment Status
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total primary segments                         = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total primary segment valid (at master)        = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total primary segment failures (at master)     = 0
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number of postmaster.pid files missing   = 0
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number of postmaster.pid files found     = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing    = 0
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found      = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number of /tmp lock files missing        = 0
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number of /tmp lock files found          = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number postmaster processes missing      = 0
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Total number postmaster processes found        = 4
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Mirror Segment Status
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-----------------------------------------------------
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-   Mirrors not configured on this array
20220828:19:19:16:017060 gpstate:node1:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@node1 ~]$
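
Besides gpstate, the cluster layout can also be read straight from the catalog; a quick check from the master (port and database as configured in this article):

# list every instance with its role, port and data directory
psql -p 65432 -d postgres -c "SELECT dbid, content, role, preferred_role, port, hostname, datadir FROM gp_segment_configuration ORDER BY content, role;"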

Adding Mirrors

The standby is a mirror of the master node. When the master can no longer serve requests, you can manually run gpactivatestandby on the standby's server to promote the standby to master so that service can continue.

A mirror is the replica of a segment primary. When the cluster detects a primary failure, it automatically promotes the mirror to primary with no manual intervention; however, to restore each instance to its original role you need the gprecoverseg command.
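
As a sketch of the two failover paths just described, using this article's directory layout:

# promote the standby to master (run on the standby host after the master is confirmed down)
export MASTER_DATA_DIRECTORY=/data/gpdata/gpmaster/gpseg-1
gpactivatestandby -d $MASTER_DATA_DIRECTORY

# after a primary has failed over to its mirror, resynchronize the failed instances...
gprecoverseg
# ...and, once they are back in sync, return every segment to its preferred role
gprecoverseg -r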

A standby can be added directly at cluster initialization time by passing -s <standby_master_hostname> --mirror-mode=spread to gpinitsystem, which creates and starts the standby as part of the build.

Mirrors can likewise be created during installation of the Greenplum cluster by configuring them in the gpinitsystem_config file.

If they were not configured during installation, they can be added afterwards by following the steps below.

Adding a standby

Preparation

  • Add the gpadmin user and configure its environment variables
  • The Greenplum binaries are already installed (they can be copied from another node)
  • Passwordless access to every node in the cluster
  • The data directory has been created in advance

Here the standby is added directly on the node2 server.

--On the node2 server
[gpadmin@node2 gpdata]$ mkdir -p /data/gpdata/gpmaster

--On the node1 server
[gpadmin@node1 ~]$ gpinitstandby -s node2
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Checking for data directory /data/gpdata/gpmaster/gpseg-1 on node2
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:------------------------------------------------------
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:------------------------------------------------------
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum master hostname               = node1
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum master data directory         = /data/gpdata/gpmaster/gpseg-1
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum master port                   = 65432
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum standby master hostname       = node2
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum standby master port           = 65432
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum standby master data directory = /data/gpdata/gpmaster/gpseg-1
20220907:16:46:08:013783 gpinitstandby:node1:gpadmin-[INFO]:-Greenplum update system catalog         = On
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> Y
20220907:16:46:11:013783 gpinitstandby:node1:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20220907:16:46:12:013783 gpinitstandby:node1:gpadmin-[INFO]:-The packages on node2 are consistent.
20220907:16:46:12:013783 gpinitstandby:node1:gpadmin-[INFO]:-Adding standby master to catalog...
20220907:16:46:12:013783 gpinitstandby:node1:gpadmin-[INFO]:-Database catalog updated successfully.
20220907:16:46:12:013783 gpinitstandby:node1:gpadmin-[INFO]:-Updating pg_hba.conf file...
Warning: the RSA host key for 'node2' differs from the key for the IP address '192.168.122.157'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:1
Matching host key in /home/gpadmin/.ssh/known_hosts:5
20220907:16:46:13:013783 gpinitstandby:node1:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20220907:16:46:14:013783 gpinitstandby:node1:gpadmin-[INFO]:-Starting standby master
20220907:16:46:14:013783 gpinitstandby:node1:gpadmin-[INFO]:-Checking if standby master is running on host: node2  in directory: /data/gpdata/gpmaster/gpseg-1
20220907:16:46:15:013783 gpinitstandby:node1:gpadmin-[WARNING]:-Unable to cleanup previously started standby: 'Warning: the RSA host key for 'node2' differs from the key for the IP address '192.168.122.157'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:1
Matching host key in /home/gpadmin/.ssh/known_hosts:5
'
20220907:16:46:15:013783 gpinitstandby:node1:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20220907:16:46:16:013783 gpinitstandby:node1:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20220907:16:46:16:013783 gpinitstandby:node1:gpadmin-[INFO]:-Successfully created standby master on node2

--Check the standby status
[gpadmin@node1 ~]$ gpstate -f
20220907:16:46:21:014222 gpstate:node1:gpadmin-[INFO]:-Starting gpstate with args: -f
20220907:16:46:21:014222 gpstate:node1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.12.1 build commit:7ec4678f29dd922d7d44501f5fc344b5d0d4d49f Open Source'
20220907:16:46:21:014222 gpstate:node1:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.12.1 build commit:7ec4678f29dd922d7d44501f5fc344b5d0d4d49f Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Nov 20 2020 18:43:31'
20220907:16:46:21:014222 gpstate:node1:gpadmin-[INFO]:-Obtaining Segment details from master...
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-Standby master details
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-----------------------
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-   Standby address          = node2
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-   Standby data directory   = /data/gpdata/gpmaster/gpseg-1
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-   Standby port             = 65432
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-   Standby PID              = 17544
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:-   Standby status           = Standby host passive
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--------------------------------------------------------------
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--pg_stat_replication
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--------------------------------------------------------------
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--WAL Sender State: streaming
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--Sync state: sync
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--Sent Location: 0/C000000
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--Flush Location: 0/C000000
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--Replay Location: 0/C000000
20220907:16:46:22:014222 gpstate:node1:gpadmin-[INFO]:--------------------------------------------------------------

Possible error

[gpadmin@node1 ~]$ gpinitstandby -s node2
20220907:16:34:58:026356 gpinitstandby:node1:gpadmin-[ERROR]:-Failed to retrieve configuration information from the master.
20220907:16:34:58:026356 gpinitstandby:node1:gpadmin-[ERROR]:-Failed to create standby
20220907:16:34:58:026356 gpinitstandby:node1:gpadmin-[ERROR]:-Error initializing standby master: FATAL:  role "gpadmin" does not exist

Cause:
The master's environment variables did not include PGPORT, which defaults to 5432. On this server port 5432 is occupied by a PostgreSQL 14 instance, while GP listens on port 65432, so the master instance could not be found. The fix is simply to update ~/.bashrc:
source /opt/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/gpdata/gpmaster/gpseg-1
export PGPORT=65432
export PGUSER=gpadmin
export PGDATABASE=postgres
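
After re-sourcing ~/.bashrc, a quick connection test confirms that the client tools now reach the Greenplum master on 65432 rather than the PostgreSQL 14 instance on 5432:

source ~/.bashrc
psql -c "SELECT version();"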

Adding mirrors

Although a mirror can be deployed on the same server as its primary, to keep the segment tier highly available a mirror is normally placed on a different server from its primary. That way, even if every instance on one segment server becomes unavailable, the cluster's data remains complete and it can keep serving requests. For that reason, only the case where mirrors and primaries live on different servers is shown here.

Preparation

  • Passwordless login between all nodes
  • Data directories prepared in advance

--Create the directories
[gpadmin@node1 ~]$ gpssh -h node2 -h node3 -e "mkdir -p /data/gpdata/seg1/mirror"
[node3] mkdir -p /data/gpdata/seg1/mirror
[node2] mkdir -p /data/gpdata/seg1/mirror
[gpadmin@node1 ~]$ gpssh -h node2 -h node3 -e "mkdir -p /data/gpdata/seg2/mirror"
[node3] mkdir -p /data/gpdata/seg2/mirror
[node2] mkdir -p /data/gpdata/seg2/mirror
[gpadmin@node1 ~]$

--Generate and edit the configuration file
[gpadmin@node1 ~]$ gpaddmirrors -o mirror_config
[gpadmin@node1 ~]$ cat mirror_config
0|node3|8000|/data/gpdata/seg1/mirror/gpseg0
1|node3|8001|/data/gpdata/seg2/mirror/gpseg1
2|node2|8000|/data/gpdata/seg1/mirror/gpseg2
3|node2|8001|/data/gpdata/seg2/mirror/gpseg3

--Add the mirrors
[gpadmin@node1 ~]$ gpaddmirrors -i mirror_config
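
No output is shown above; once gpaddmirrors finishes, the mirrors can be verified with gpstate:

# -m lists the mirror segments, -e reports any primary/mirror pairs with problems
gpstate -m
gpstate -e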

Possible issue

gpaddmirrors:node1:gpadmin-[ERROR]:-gpaddmirrors: error: Value of port offset supplied via -p option produces ports outside of the valid rangeMirror port base range must be between 6432 and 61000

Cause 1: a bug in GP 6.12.1, fixed in GP 6.13.
Cause 2: the primary port numbers are below 6432 and need to be changed; note that in this deployment PORT_BASE is 6000, which falls below that limit and matches this cause.