
GBase 8c V5_2.0.x Installation and Deployment

zgp 2022-06-14

System Requirements

In the recommended deployment environment, you generally need one GTM server plus a cluster of three database servers hosting the Coordinator and Datanode nodes.

The recommended hardware and operating system configuration for the servers is shown in the following table:

| Resource | Physical machine | Virtual machine |
| --- | --- | --- |
| CPU | 2*10 cores or more | 2*10 cores or more |
| Memory | 128 GB or more | 8 GB or more |
| Disk | SSD, 500 GB or more | SSD/SAS/SATA, 100 GB or more |
| Network | 10 Gb or more | 1000 Mb or 10 Gb |
| Operating system | CentOS 7.5 or later | CentOS 7.5 or later |
| Run-as user | gbase | gbase |

Cluster Planning

Host, IP, and port plan:

| Hostname | IP | Role | Port | nodename | Path |
| --- | --- | --- | --- | --- | --- |
| node1 | 192.168.142.211 | DCS | 2379 | - | /home/gbase/deploy/bin/gha_ctl |
| | | Coordinator | 5432 | cn1 | /home/gbase/data/coord |
| | | Datanode | 5433 | dn1_1 | /home/gbase/data/datanode |
| node2 | 192.168.142.212 | DCS | 2379 | - | /home/gbase/deploy/bin/gha_ctl |
| | | Coordinator | 5432 | cn2 | /home/gbase/data/coord |
| | | Datanode | 5433 | dn2_1 | /home/gbase/data/datanode |
| node3 | 192.168.142.213 | DCS | 2379 | - | /home/gbase/deploy/bin/gha_ctl |
| | | Coordinator | 5432 | cn3 | /home/gbase/data/coord |
| | | Datanode | 5433 | dn3_1 | /home/gbase/data/datanode |
| gtm | 192.168.142.210 | GTM | 6666 | gtm1 | /home/gbase/data/gtm |
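The plan above refers to the machines by hostname. If you want those names to resolve on every node (purely a convenience; the deployment commands below use raw IPs), a minimal /etc/hosts sketch matching the addresses in the table would be:

192.168.142.210 gtm
192.168.142.211 node1
192.168.142.212 node2
192.168.142.213 node3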

Installation Preparation

Before starting the installation, make sure the installing user has sufficient privileges. The preparation steps below require adding the gbase user to the sudoers file.

Unless otherwise noted, perform the following operations on every node.

Create the user and configure sudoers

Create the gbase group and user on all nodes:

[root@localhost ~]# groupadd gbase

[root@localhost ~]# useradd -m -d /home/gbase gbase -g gbase

[root@localhost ~]# passwd gbase

Set the password when prompted, and remember it.

Add gbase to sudoers:

[root@localhost ~]# visudo

In the opened file, add the gbase user and its privileges at the following location:

## Allow root to run any commands anywhere

root ALL=(ALL) ALL

gbase ALL=(ALL) NOPASSWD:ALL

Once sudoers is configured, the database installation and configuration steps no longer require the root account.
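To confirm the entry took effect, you can switch to the gbase user and list its sudo privileges; the output should include (ALL) NOPASSWD: ALL:

[gbase@localhost ~]$ sudo -l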

Environment Settings

  1. Disable the firewall: the ports that GBase 8c nodes use to talk to each other must be open, or read/write requests and data cannot flow between them. In ordinary business scenarios the traffic between database nodes, and between the database and the application services, stays inside a trusted security zone, so unless there are special security requirements it is recommended to disable the firewall on each node. Otherwise, whitelist the ports listed under "Cluster Planning" in the firewall configuration.

[gbase@localhost ~]$ sudo systemctl stop firewalld.service

  2. Prevent the firewall from starting at boot:

[gbase@localhost ~]$ sudo systemctl disable firewalld.service

The system returns:

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

which indicates the operation succeeded.

  3. Disable SELinux:

[gbase@localhost ~]$ sudo vim /etc/selinux/config

Set SELINUX=disabled, then save and exit:

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

# enforcing - SELinux security policy is enforced.

# permissive - SELinux prints warnings instead of enforcing.

# disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of three values:

# targeted - Targeted processes are protected,

# minimum - Modification of targeted policy. Only selected processes are protected.

# mls - Multi Level Security protection.

SELINUXTYPE=targeted
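Note that SELINUX=disabled in the config file only takes effect after a reboot. To check the current mode and, if needed, drop to permissive immediately without rebooting:

[gbase@localhost ~]$ getenforce
[gbase@localhost ~]$ sudo setenforce 0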

  4. Synchronize system time:

The GBase 8c distributed database requires the clocks of its nodes to be synchronized to guarantee database consistency. This is normally done with the NTP service.

First check whether the NTP service is installed and running:

[gbase@localhost ~]$ sudo systemctl status ntpd.service

If the output shows running, the service is active. Otherwise proceed as follows.

If the system can reach the internet, start the service and it will synchronize against the configured NTP servers:

[gbase@localhost ~]$ sudo systemctl start ntpd.service

If the servers cannot reach the internet, configure the NTP service manually.

First confirm whether ntp is installed:

[gbase@localhost ~]$ rpm -qa|grep ntp

If ntp is installed, the output should look like:

python-ntplib-0.3.2-1.el7.noarch

ntpdate-4.2.6p5-29.el7.centos.x86_64

fontpackages-filesystem-1.44-8.el7.noarch

ntp-4.2.6p5-29.el7.centos.x86_64

If no ntp package is listed, remove the existing ntpdate and install ntp:

[gbase@localhost ~]$ sudo yum -y remove ntpdate-4.2.6p5-29.el7.centos.x86_64

[gbase@localhost ~]$ sudo yum -y install ntp

After installation, configure the NTP service on all nodes. First choose an NTP master node; this guide uses the gtm node.

Edit the ntp.conf configuration file:

[gbase@localhost ~]$ sudo vi /etc/ntp.conf

The NTP configuration differs between the master node and the other nodes. On the master node, add the following to the configuration file:

restrict 192.168.142.210 nomodify notrap nopeer noquery    # IP of the current node

restrict 192.168.142.2 mask 255.255.255.0 nomodify notrap  # gateway and netmask of the cluster subnet

# comment out the server 0..n lines and add:

server 127.127.1.0

fudge 127.127.1.0 stratum 10

The relevant part of the configuration file after the change:

# Permit all access over the loopback interface. This could

# be tightened as well, but to do so would effect some of

# the administrative functions.

restrict 192.168.142.210 nomodify notrap nopeer noquery

restrict 127.0.0.1

restrict ::1

# Hosts on local network are less restricted.

restrict 192.168.142.2 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.

# Please consider joining the pool (http://www.pool.ntp.org/join.html).

#server 0.centos.pool.ntp.org iburst

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst

server 127.127.1.0

fudge 127.127.1.0 stratum 10

On the other nodes, add the following to the configuration file:

restrict 192.168.142.211 nomodify notrap nopeer noquery    # IP of the current node

restrict 192.168.142.2 mask 255.255.255.0 nomodify notrap  # gateway and netmask of the cluster subnet

# comment out the server 0..n lines and point at the master node:

server 192.168.142.210

The relevant part of the configuration file after the change:

# the administrative functions.

restrict 192.168.142.211 nomodify notrap nopeer noquery

restrict 127.0.0.1

restrict ::1

# Hosts on local network are less restricted.

restrict 192.168.142.2 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.

# Please consider joining the pool (http://www.pool.ntp.org/join.html).

#server 0.centos.pool.ntp.org iburst

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst

server 192.168.142.210

Once every node is configured, start the NTP service on all nodes:

[gbase@localhost ~]$ sudo service ntpd start

Check whether the NTP server is reachable:

[gbase@localhost ~]$ ntpstat

The master node returns:

synchronised to local net (127.127.1.0) at stratum 6

time correct to within 7948 ms

polling server every 64 s

The other nodes return:

synchronised to NTP server (192.168.142.210) at stratum 7

time correct to within 903 ms

polling server every 64 s

Note: after NTP is configured, it takes 5-10 minutes for time synchronization to complete. Messages such as "unsynchronised, time server re-starting, polling server every 8 s" or "unsynchronised, polling server every 8 s" right after configuration are normal; wait a while and run ntpstat again.

Enable the service at boot:

[gbase@localhost ~]$ sudo chkconfig ntpd on

Note: in some virtual machine environments NTP cannot be enabled at boot and must be started manually after every reboot. The NTP service affects distributed deployment and consistency, so make sure it is in effect before continuing.
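To see which upstream server each node is actually using, along with its offset and jitter, ntpq's peer listing is also useful:

[gbase@localhost ~]$ ntpq -p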

Set up passwordless SSH for the gbase user

On all nodes, prepare the gbase user's SSH directory:

[gbase@localhost ~]$ mkdir ~/.ssh

[gbase@localhost ~]$ chmod 700 ~/.ssh

The cluster can be deployed from any node; this guide performs the GBase 8c deployment from the gtm node. On the machine that runs the deployment scripts, the gbase user must be able to ssh to all other machines without a password.

Configure passwordless login as follows (these steps run on the gtm node only):

Generate the key pair as the gbase user:

[gbase@localhost ~]$ ssh-keygen -t rsa

[gbase@localhost ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[gbase@localhost ~]$ chmod 600 ~/.ssh/authorized_keys

Copy the key file to the other nodes (this step prompts for the password):

[gbase@localhost ~]$ scp ~/.ssh/authorized_keys gbase@192.168.142.210:~/.ssh/

[gbase@localhost ~]$ scp ~/.ssh/authorized_keys gbase@192.168.142.211:~/.ssh/

[gbase@localhost ~]$ scp ~/.ssh/authorized_keys gbase@192.168.142.212:~/.ssh/

[gbase@localhost ~]$ scp ~/.ssh/authorized_keys gbase@192.168.142.213:~/.ssh/
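As a quick sanity check (a minimal sketch using the IPs from the cluster plan), loop over the nodes and confirm that ssh no longer prompts for a password:

for ip in 192.168.142.210 192.168.142.211 192.168.142.212 192.168.142.213; do
  ssh -o BatchMode=yes gbase@$ip hostname   # BatchMode makes ssh fail instead of prompting
done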

Installation and Deployment

GBase 8c can be deployed either manually, command by command, or from a configuration file; choose whichever you prefer.

Manual deployment

  1. Download the GBase 8c installation package and copy it to the target directory on the gtm node (/home/gbase/deploy in this example).

Note: deployment must run on a node with passwordless SSH configured; in this example that is the gtm node, so all deployment steps are performed there.

Extract the installation package:

[gbase@localhost deploy]$ tar xvf GBase8cV5_S2.0.0B17.tar.gz

Run gb_install.sh in the bin directory to configure the default installation path (if it is run more than once in the same directory, delete .gb_install.sh.completed first):

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gb_install.sh
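If you need to re-run the script in the same directory, remove the marker file first (the path below is an assumption; adjust it to wherever the file actually appears on your system):

[gbase@localhost deploy]$ rm -f /home/gbase/deploy/bin/.gb_install.sh.completed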

  2. Deploy the DCS cluster, listing the IP address and port of every planned DCS member:

Syntax:

gha_ctl CREATE dcs host:port ...

The DCS provides the high-availability function and should be deployed on at least three nodes.

The actual deployment command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl create dcs 192.168.142.211:2379 192.168.142.212:2379 192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}
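At this point you can already verify the DCS cluster itself with the MONITOR subcommand described later:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor dcs -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379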

Register the installation path and package name in the DCS (the cluster name defaults to gbase8c):

gha_ctl PREPARE version GBase8cV5_XXX.tar.gz installpath -l dcslist [-c cluster]

  • version is the version string of the installation package;
  • installpath is the installation path;
  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;

The actual deployment command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl prepare GBase8cV5_S2.0.0B17 /home/gbase/deploy/GBase8cV5_S2.0.0B17.tar.gz /home/gbase/install -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Copy the package to each host, extract it to the target path, and set the environment variables:

gha_ctl DEPLOY host ... -l dcslist

  • host is a node IP;
  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;

The actual deployment command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl deploy 192.168.142.211 192.168.142.212 192.168.142.213 192.168.142.210 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  3. Add the gtm node and record its information in the DCS:

gha_ctl ADD gtm name host port dir rest_port -l dcslist [-c cluster]

  • host, port, and dir are the node's IP address, port, and data directory;
  • rest_port is the node's REST service port (reported as restPort by monitor);
  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;

The actual deployment command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add gtm gtm1 192.168.142.210 6666 /home/gbase/data/gtm 8008 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  4. Add the Coordinator nodes and record their information in the DCS:

gha_ctl ADD coordinator name host port pooler dir proxy_port rest_port -l dcslist [-c cluster]

  • host, port, pooler, and dir are the node's IP address, port, connection-pool port, and data directory;
  • proxy_port is the proxy port;
  • rest_port is the node's REST service port (reported as restPort by monitor);
  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;

The actual deployment command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add coordinator cn1 192.168.142.211 5432 6667 /home/gbase/data/coord 6666 8009 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Add cn2 and cn3 in the same way:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add coordinator cn2 192.168.142.212 5432 6667 /home/gbase/data/coord 6666 8009 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add coordinator cn3 192.168.142.213 5432 6667 /home/gbase/data/coord 6666 8009 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

  5. Add the Datanode nodes and record their information in the DCS:

gha_ctl ADD datanode group name host port pooler dir proxy_port rest_port -l dcslist [-c cluster]

  • host, port, pooler, and dir are the node's IP address, port, connection-pool port, and data directory;
  • proxy_port is the proxy port;
  • rest_port is the node's REST service port (reported as restPort by monitor);
  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;

The actual deployment command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add datanode dn1 dn1_1 192.168.142.211 5433 6668 /home/gbase/data/datanode 7001 8011 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Add dn2 and dn3 in the same way:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add datanode dn2 dn2_1 192.168.142.212 5433 6668 /home/gbase/data/datanode 7001 8011 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl add datanode dn3 dn3_1 192.168.142.213 5433 6668 /home/gbase/data/datanode 7001 8011 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

  6. After all nodes are added, check the cluster status:

gha_ctl MONITOR all/gtm/coordinator/datanode/dcs -l dcslist

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;

The actual command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor all -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

If every component is reported as running, the deployment succeeded:

{

"cluster":"gbase8c",

"gtm":[

{

"host":"192.168.142.210",

"port":"6666",

"restPort":"8008",

"name":"gtm1",

"workDir":"/home/gbase/data/gtm",

"state":"running",

"role":"master"

}

],

"coordinator":[

{

"host":"192.168.142.211",

"port":"5432",

"restPort":"8009",

"name":"cn1",

"workDir":"/home/gbase/data/coord",

"state":"running"

},

{

"host":"192.168.142.212",

"port":"5432",

"restPort":"8009",

"name":"cn2",

"workDir":"/home/gbase/data/coord",

"state":"running"

},

{

"host":"192.168.142.213",

"port":"5432",

"restPort":"8009",

"name":"cn3",

"workDir":"/home/gbase/data/coord",

"state":"running"

}

],

"datanode":{

"dn1":[

{

"host":"192.168.142.211",

"port":"5433",

"restPort":"8011",

"name":"dn1_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn2":[

{

"host":"192.168.142.212",

"port":"5433",

"restPort":"8011",

"name":"dn2_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn3":[

{

"host":"192.168.142.213",

"port":"5433",

"restPort":"8011",

"name":"dn3_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

]

},

"dcs":{

"cluster_state":"healthy",

"members":[

{

"url":"http://192.168.142.213:2379",

"id":"62f9a805ba818264",

"name":"node_2",

"state":"healthy",

"isLeader":false

},

{

"url":"http://192.168.142.211:2379",

"id":"8ea592db62fb7d92",

"name":"node_0",

"state":"healthy",

"isLeader":true

},

{

"url":"http://192.168.142.212:2379",

"id":"fbc0fc018ba512fa",

"name":"node_1",

"state":"healthy",

"isLeader":false

}

]

}

}

The cluster now has 1 GTM node, 3 Coordinator nodes, and 3 Datanode nodes (the Datanodes have no standbys).
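The JSON above is easy to post-process. For example, assuming jq is installed (it is not part of the GBase 8c package), this one-liner prints a name/state summary of every component:

/home/gbase/deploy/bin/gha_ctl monitor all -l http://192.168.142.211:2379 | jq -r '.gtm[], .coordinator[], .datanode[][] | "\(.name) \(.state)"'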

Deployment via configuration file

  1. Download the installation package, copy it to the target directory on the gtm node (/home/gbase/deploy in this example), and extract it:

[gbase@localhost deploy]$ tar xvf GBase8cV5_S2.0.0B17.tar.gz

After every extraction, run gb_install.sh in the bin directory (if it is run more than once in the same directory, delete .gb_install.sh.completed first):

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gb_install.sh

  2. Add the package name and installation path to the configuration file:

gha_ctl SET INSTALL version GBase8cV5_XXX.tar.gz installpath [-c cluster] [-p confpath]

  • installpath is the installation path;
  • version is the version string of the installation package;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-p confpath] is the directory where the configuration file is saved; optional, defaults to /tmp;

The actual command is:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set install GBase8cV5_S2.0.0B17 /home/gbase/deploy/GBase8cV5_S2.0.0B17.tar.gz /home/gbase/install -c gbase8c -p /home/gbase

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  3. Add the DCS information to the configuration file, either one entry at a time or several at once. Syntax:

gha_ctl SET dcs host:port ... [-c cluster] [-p confpath]

  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-p confpath] is the directory where the configuration file is saved; optional, defaults to /tmp;

Three DCS members are added here; the actual command is:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set dcs 192.168.142.211:2379 192.168.142.212:2379 192.168.142.213:2379 -c gbase8c -p /home/gbase

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  4. Add the gtm information to the configuration file:

gha_ctl SET gtm name host port dir rest_port [-c cluster] [-p confpath]

  • host, port, and dir are the node's IP address, port, and data directory;
  • rest_port is the node's REST service port (reported as restPort by monitor);
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-p confpath] is the directory where the configuration file is saved; optional, defaults to /tmp;

The actual command is:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set gtm gtm1 192.168.142.210 6666 /home/gbase/data/gtm 8008 -c gbase8c -p /home/gbase

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  5. Add the Coordinator information to the configuration file:

gha_ctl SET coordinator name host port pooler dir proxy_port rest_port [-c cluster] [-p confpath]

  • host, port, pooler, and dir are the node's IP address, port, connection-pool port, and data directory;
  • proxy_port is the proxy port;
  • rest_port is the node's REST service port (reported as restPort by monitor);
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-p confpath] is the directory where the configuration file is saved; optional, defaults to /tmp;

The actual commands are:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set coordinator cn1 192.168.142.211 5432 6667 /home/gbase/data/coord 6666 8009 -c gbase8c -p /home/gbase

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set coordinator cn2 192.168.142.212 5432 6667 /home/gbase/data/coord 6666 8009 -c gbase8c -p /home/gbase

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set coordinator cn3 192.168.142.213 5432 6667 /home/gbase/data/coord 6666 8009 -c gbase8c -p /home/gbase

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  6. Add the Datanode information to the configuration file:

gha_ctl SET datanode group name host port pooler dir proxy_port rest_port [-c cluster] [-p confpath]

  • host, port, pooler, and dir are the node's IP address, port, connection-pool port, and data directory;
  • proxy_port is the proxy port;
  • rest_port is the node's REST service port (reported as restPort by monitor);
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-p confpath] is the directory where the configuration file is saved; optional, defaults to /tmp;

The actual commands are:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set datanode dn1 dn1_1 192.168.142.211 5433 6668 /home/gbase/data/datanode 6789 8011 -c gbase8c -p /home/gbase

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set datanode dn2 dn2_1 192.168.142.212 5433 6668 /home/gbase/data/datanode 6789 8011 -c gbase8c -p /home/gbase

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl set datanode dn3 dn3_1 192.168.142.213 5433 6668 /home/gbase/data/datanode 6789 8011 -c gbase8c -p /home/gbase

  7. Install the cluster:

gha_ctl INSTALL [-c cluster] [-p confpath]

  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-p confpath] is the directory where the configuration file is saved; optional, defaults to /tmp;

The actual command is:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl install -c gbase8c -p /home/gbase

A success message is returned:

{

"ret":0,

"msg":"Success"

}

  8. After installation, check the cluster status:

gha_ctl MONITOR all/gtm/coordinator/datanode/dcs -l dcslist

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;

The actual command is:

[gbase@localhost ~]$ /home/gbase/deploy/bin/gha_ctl monitor all -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

If every component is reported as running, the deployment succeeded:

{

"cluster":"gbase8c",

"gtm":[

{

"host":"192.168.142.210",

"port":"6666",

"restPort":"8008",

"name":"gtm1",

"workDir":"/home/gbase/data/gtm",

"state":"running",

"role":"master"

}

],

"coordinator":[

{

"host":"192.168.142.211",

"port":"5432",

"restPort":"8009",

"name":"cn1",

"workDir":"/home/gbase/data/coord",

"state":"running"

},

{

"host":"192.168.142.212",

"port":"5432",

"restPort":"8009",

"name":"cn2",

"workDir":"/home/gbase/data/coord",

"state":"running"

},

{

"host":"192.168.142.213",

"port":"5432",

"restPort":"8009",

"name":"cn3",

"workDir":"/home/gbase/data/coord",

"state":"running"

}

],

"datanode":{

"dn1":[

{

"host":"192.168.142.211",

"port":"5433",

"restPort":"8011",

"name":"dn1_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn2":[

{

"host":"192.168.142.212",

"port":"5433",

"restPort":"8011",

"name":"dn2_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn3":[

{

"host":"192.168.142.213",

"port":"5433",

"restPort":"8011",

"name":"dn3_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

]

},

"dcs":{

"cluster_state":"healthy",

"members":[

{

"url":"http://192.168.142.213:2379",

"id":"62f9a805ba818264",

"name":"node_2",

"state":"healthy",

"isLeader":false

},

{

"url":"http://192.168.142.211:2379",

"id":"8ea592db62fb7d92",

"name":"node_0",

"state":"healthy",

"isLeader":false

},

{

"url":"http://192.168.142.212:2379",

"id":"fbc0fc018ba512fa",

"name":"node_1",

"state":"healthy",

"isLeader":true

}

]

}

}

The cluster now has 1 GTM node, 3 Coordinator nodes, and 3 Datanode nodes (the Datanodes have no standbys).

Using the GBase 8c Database

Cluster status monitoring

MONITOR reports the status of the cluster's gtm, Coordinator, Datanode, or DCS nodes individually, or of all nodes at once. Syntax:

gha_ctl MONITOR all/gtm/coordinator/datanode/dcs -l dcslist

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;

The actual command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor all -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

If every component is reported as running, the cluster is healthy:

{

"cluster":"gbase8c",

"gtm":[

{

"host":"192.168.142.210",

"port":"6666",

"restPort":"8008",

"name":"gtm1",

"workDir":"/home/gbase/data/gtm",

"state":"running",

"role":"master"

}

],

"coordinator":[

{

"host":"192.168.142.211",

"port":"5432",

"restPort":"8009",

"name":"cn1",

"workDir":"/home/gbase/data/coord",

"state":"running"

},

{

"host":"192.168.142.212",

"port":"5432",

"restPort":"8009",

"name":"cn2",

"workDir":"/home/gbase/data/coord",

"state":"running"

},

{

"host":"192.168.142.213",

"port":"5432",

"restPort":"8009",

"name":"cn3",

"workDir":"/home/gbase/data/coord",

"state":"running"

}

],

"datanode":{

"dn1":[

{

"host":"192.168.142.211",

"port":"5433",

"restPort":"8011",

"name":"dn1_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn2":[

{

"host":"192.168.142.212",

"port":"5433",

"restPort":"8011",

"name":"dn2_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn3":[

{

"host":"192.168.142.213",

"port":"5433",

"restPort":"8011",

"name":"dn3_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

]

},

"dcs":{

"cluster_state":"healthy",

"members":[

{

"url":"http://192.168.142.213:2379",

"id":"62f9a805ba818264",

"name":"node_2",

"state":"healthy",

"isLeader":false

},

{

"url":"http://192.168.142.211:2379",

"id":"8ea592db62fb7d92",

"name":"node_0",

"state":"healthy",

"isLeader":true

},

{

"url":"http://192.168.142.212:2379",

"id":"fbc0fc018ba512fa",

"name":"node_1",

"state":"healthy",

"isLeader":false

}

]

}

}

The cluster now has 1 GTM node, 3 Coordinator nodes, and 3 Datanode nodes (the Datanodes have no standbys).

Starting and stopping cluster components

  1. Start commands:

Syntax for starting a GTM node:

gha_ctl START gtm name -l dcslist [-c cluster]

Syntax for starting a Coordinator node:

gha_ctl START coordinator name -l dcslist [-c cluster]

Syntax for starting a Datanode:

gha_ctl START datanode group name -l dcslist [-c cluster]

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  2. Stop commands:

Syntax for stopping a GTM node:

gha_ctl STOP gtm name -l dcslist [-c cluster]

Syntax for stopping a Coordinator node:

gha_ctl STOP coordinator name -l dcslist [-c cluster]

Syntax for stopping a Datanode:

gha_ctl STOP datanode group name -l dcslist [-c cluster]

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;

As an example, stop and then restart the dn2 node. The commands are:

Stop the dn2 node:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl stop datanode dn2 dn2_1 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Query the current Datanode status:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor datanode -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

dn1 and dn3 are reported as running, and dn2 as stopped:

{

"cluster":"gbase8c",

"datanode":{

"dn1":[

{

"host":"192.168.142.211",

"port":"5433",

"restPort":"8011",

"name":"dn1_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn2":[

{

"host":"192.168.142.212",

"port":"5433",

"restPort":"8011",

"name":"dn2_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"stopped"

}

],

"dn3":[

{

"host":"192.168.142.213",

"port":"5433",

"restPort":"8011",

"name":"dn3_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

]

}

}

The operation succeeded.

Start the dn2 node:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl start datanode dn2 dn2_1 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Query the current Datanode status:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor datanode -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

dn1, dn2, and dn3 are all reported as running:

{

"cluster":"gbase8c",

"datanode":{

"dn1":[

{

"host":"192.168.142.211",

"port":"5433",

"restPort":"8011",

"name":"dn1_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn2":[

{

"host":"192.168.142.212",

"port":"5433",

"restPort":"8011",

"name":"dn2_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn3":[

{

"host":"192.168.142.213",

"port":"5433",

"restPort":"8011",

"name":"dn3_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

]

}

}

The operation succeeded.

Component removal

REMOVE deletes a component from the cluster. You do not need to stop the component first; the cluster stops its service automatically.

Syntax for removing a GTM node:

gha_ctl REMOVE gtm name -l dcslist [-c cluster] [-f]

Syntax for removing a Coordinator node:

gha_ctl REMOVE coordinator name -l dcslist [-c cluster] [-f]

Syntax for removing a Datanode:

gha_ctl REMOVE datanode group name -l dcslist [-c cluster] [-f]

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;
  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-f] also removes the installation directory; optional, by default the directory is kept.

As an example, remove the dn1 node. The command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl remove datanode dn1 dn1_1 -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Now query the Datanode status:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor datanode -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

Only dn2 and dn3 are returned (dn1 has been removed):

{

"cluster":"gbase8c",

"datanode":{

"dn2":[

{

"host":"192.168.142.212",

"port":"5433",

"restPort":"8011",

"name":"dn2_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

],

"dn3":[

{

"host":"192.168.142.213",

"port":"5433",

"restPort":"8011",

"name":"dn3_1",

"workDir":"/home/gbase/data/datanode",

"role":"master",

"state":"running"

}

]

}

}

The operation succeeded.

Uninstalling the database cluster

gha_ctl UNINSTALL [-f] [-c cluster] -l dcslist

  • [-c cluster] is the cluster name; optional, defaults to gbase8c;
  • [-f] also removes the installation directory; optional, by default the directory is kept;
  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;

The actual command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl uninstall -f -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

Now query the cluster status:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl monitor all -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

No cluster information is returned:

{

"cluster":"gbase8c",

"gtm":[],

"coordinator":[],

"datanode":{},

"dcs":{

"cluster_state":"healthy",

"members":[

{

"url":"http://192.168.142.213:2379",

"id":"62f9a805ba818264",

"name":"node_2",

"state":"healthy",

"isLeader":true

},

{

"url":"http://192.168.142.211:2379",

"id":"8ea592db62fb7d92",

"name":"node_0",

"state":"healthy",

"isLeader":false

},

{

"url":"http://192.168.142.212:2379",

"id":"fbc0fc018ba512fa",

"name":"node_1",

"state":"healthy",

"isLeader":false

}

]

}

}

The operation succeeded.

Finally, destroy the DCS. Syntax:

gha_ctl DESTROY dcs -l dcslist

  • dcslist is the list of DCS addresses; usually a single node address is enough, since the other members synchronize automatically, but to ensure high availability you can also list all of them;

The actual command is:

[gbase@localhost deploy]$ /home/gbase/deploy/bin/gha_ctl destroy dcs -l http://192.168.142.211:2379,http://192.168.142.212:2379,http://192.168.142.213:2379

A success message is returned:

{

"ret":0,

"msg":"Success"

}

The database cluster has been completely removed.

Logging in to the database

Database login syntax:

gsql -p port -h host

As an example, log in as the gbase user to CN1 on node1. The command is:

[gbase@localhost ~]$ gsql -p 5432 -h 192.168.142.211

On success it returns:

gsql (GBase8c 5.2.0.0B17, based on PG 10.6 (GBase8cV5_S2.0.0B17))

Type "help" for help.

gbase=#

You can now use the GBase 8c database freely.
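As a quick smoke test, you can create and query a small table. This is only a sketch; DISTRIBUTE BY HASH is the distribution clause used by GBase 8c's Postgres-XC lineage, so check your version's SQL reference if it is rejected:

gbase=# CREATE TABLE t1 (id int, name varchar(20)) DISTRIBUTE BY HASH(id);
gbase=# INSERT INTO t1 VALUES (1, 'gbase'), (2, '8c');
gbase=# SELECT * FROM t1 ORDER BY id;
gbase=# DROP TABLE t1;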

Appendix:

Common error codes and their descriptions:

| Value | Msg | Description |
| --- | --- | --- |
| 0 | Success | Success |
| 80000101 | Invalid argument:%s | Invalid argument |
| 80000102 | Unsupport operation:%s | Unsupported command |
| 80000103 | Argument count error | Wrong number of arguments |
| 80000104 | Dcs list invalid | Invalid DCS list |
| 80000105 | Ip address invalid:%s | Invalid IP address |
| 80000106 | Port number invalid:%s | Invalid port number |
| 80000107 | Resource:%s already in use | Resource already in use |
| 80000108 | Directory:%s already exist | Directory already exists |
| 80000109 | Directory:%s not exist | Directory does not exist |
| 80000110 | Directory:%s empty | Directory is empty |
| 80000111 | Directory:%s not empty | Directory is not empty |
| 80000112 | Directory name:%s too long | Directory name too long |
| 80000113 | No such component:%s | Component does not exist (on start/stop of a component) |
| 80000201 | Run cmd failed:%s | Command execution failed (command run on a remote host over ssh) |
| 80000202 | No route to host:%s | Network unreachable |
| 80000203 | Not prepare package info or version mismatch | deploy attempted before prepare, or version mismatch during upgrade/rollback |
| 80000204 | Package:%s not exist | Installation package missing during deploy |
| 80000205 | Host:%s not deployed | add attempted before deploy |
| 80000206 | Soft link error | Invalid symbolic link (checked during deployment) |
| 80000207 | Start component:%s failed | Failed to start a component |
| 80000208 | Url requests failed:%s | HTTP POST/GET request failed |
| 80000301 | Transport endpoint unreach | Failed to connect to the DCS |
| 80000302 | Acquire lock timeout | Timed out acquiring the DCS lock |
| 80000303 | Write data to dcs failed | Failed to write to the DCS |
| 80000304 | Read data from dcs failed | Failed to read from the DCS |
| 80000305 | Host ips belong to different cluster | IPs given when creating the DCS cluster belong to different clusters |
| 80000306 | Dcs cluster not healthy | DCS cluster unhealthy |
| 80000401 | Switchover failed:%s | Switchover failed |
| 80000402 | Host:%s not in cluster | IP passed during upgrade is not in the cluster |
| 80000403 | Slave datanode replication state not streaming | Internal check during stop all; can be ignored |
| 80000501 | Component:%s not running | Component not running |
| 80000502 | Cluster:%s is busy | Cluster is currently being scaled out |
| 80000503 | Base backup failed | Backup failed |
| 80000601 | Sql connect to: %s failed | Failed to connect to the database |
| 80000602 | Sql cmd:%s execute failed | SQL command execution failed |
| 80000603 | Sql cmd result error | SQL result not as expected |
| 80000604 | Connection:%s state error | SQL connection state error |
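Since every gha_ctl call prints JSON in this shape, scripts can key off the ret field. A minimal sketch (again assuming jq is available; run_gha is a hypothetical helper, not part of the product):

run_gha() {
  out=$("$@")                                  # run the gha_ctl command passed as arguments
  ret=$(echo "$out" | jq -r '.ret // empty')   # monitor output has no ret field, so allow empty
  if [ -n "$ret" ] && [ "$ret" -ne 0 ]; then
    echo "gha_ctl error $ret: $(echo "$out" | jq -r '.msg')" >&2
    return 1
  fi
  echo "$out"
}

run_gha /home/gbase/deploy/bin/gha_ctl monitor dcs -l http://192.168.142.211:2379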
