
KingbaseES V8R3 Cluster: Case Study of a VIP Loaded Abnormally

Original article by jack, 2021-12-27

Case description:

On a KingbaseES V8R3 cluster, after the database service has been shut down or the host has been rebooted, the VIP can still be loaded on the NIC even though the cluster has not yet started normally. This article walks through the cause and the fix.


System environment:

# Database:
[kingbase@srv1 ~]$ ksql -h 192.168.2.254 -U SYSTEM -W 123456 -p 9999 TEST
ksql (V008R003C002B0061)

# Operating system:
[root@srv2 etc]# cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)



I. Normal startup and shutdown of a KingbaseES V8R3 cluster


1. Normal cluster startup (VIP is loaded)

Note: after a KingbaseES V8R3 cluster starts normally, it automatically adds the DB VIP and the Cluster VIP to the host NIC.


1) Start the cluster

[kingbase@srv1 bin]$ ./kingbase_monitor.sh start
-----------------------------------------------------------------------
start crontab kingbase position : [1]
Redirecting to /bin/systemctl restart crond.service
start crontab kingbase position : [1]
Redirecting to /bin/systemctl restart crond.service
ADD VIP NOW AT 2021-02-24 15:05:23 ON enp0s8
execute: [/sbin/ip addr add 192.168.2.254/24 dev enp0s8 label enp0s8:2]
execute: /sbin/arping -U 192.168.2.254 -I enp0s8 -w 1
ARPING 192.168.2.254 from 192.168.2.254 enp0s8
Sent 1 probes (1 broadcast(s))
Received 0 response(s)
wait kingbase recovery 5 sec...
start crontab kingbasecluster line number: [2]
Redirecting to /bin/systemctl restart crond.service
start crontab kingbasecluster line number: [2]
Redirecting to /bin/systemctl restart crond.service
......................
all started..
...
now we check again
=======================================================================
|             ip |          program | [status]
[  192.168.2.2] | [kingbasecluster] | [active]
[  192.168.2.3] | [kingbasecluster] | [active]
[  192.168.2.2] | [kingbase]        | [active]
[  192.168.2.3] | [kingbase]        | [active]
=======================================================================

2) Check the cluster status


[kingbase@srv1 bin]$ ksql -h 192.168.2.254 -U SYSTEM -W 123456 -p 54321 TEST
ksql (V008R003C002B0061)
Type "help" for help.

TEST=# select * from sys_stat_replication;
 pid  | usesysid | usename | application_name | client_addr | client_hostname | client_port |         backend_start         | backend_xmin |   state   | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
------+----------+---------+------------------+-------------+-----------------+-------------+-------------------------------+--------------+-----------+---------------+----------------+----------------+-----------------+---------------+------------
 5492 |       10 | SYSTEM  | node1            | 192.168.2.2 |                 |       49387 | 2021-02-24 15:05:21.977613+08 |              | streaming | 0/2313F9A0    | 0/2313F9A0     | 0/2313F9A0     | 0/2313F9A0      |             1 | sync
(1 row)

[kingbase@srv1 bin]$ ksql -h 192.168.2.254 -U SYSTEM -W 123456 -p 9999 TEST
ksql (V008R003C002B0061)
Type "help" for help.

TEST=# show pool_nodes;
 node_id |  hostname   | port  | status | lb_weight |  role   | select_cnt | load_balance_node | replication_delay
---------+-------------+-------+--------+-----------+---------+------------+-------------------+-------------------
 0       | 192.168.2.2 | 54321 | up     | 0.500000  | standby | 0          | false             | 0
 1       | 192.168.2.3 | 54321 | up     | 0.500000  | primary | 0          | true              | 0
(2 rows)


3) Check the cluster node IP configuration (VIPs are loaded)

Note: 192.168.2.253 is the Cluster VIP and 192.168.2.254 is the DB VIP.

[kingbase@srv1 bin]$ ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9b:c4:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.2.253/24 scope global secondary enp0s8:0
       valid_lft forever preferred_lft forever

[root@srv2 ~]# ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6d:e6:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.2.254/24 scope global secondary enp0s8:2
       valid_lft forever preferred_lft forever
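This VIP check boils down to grepping `ip addr` output for the VIP address. A minimal sketch, assuming the VIP addresses above (the `has_vip` helper is illustrative, not part of the Kingbase tooling):

```shell
# Hypothetical helper: decide from captured `ip addr` output whether a given
# VIP is currently configured on the NIC.
has_vip() {
  ip_text="$1"   # text of `ip addr show <dev>`
  vip="$2"       # VIP address without the prefix length, e.g. 192.168.2.254
  printf '%s\n' "$ip_text" | grep -qF "inet ${vip}/"
}

# Demo against a captured snippet; a live check would pass "$(ip addr show enp0s8)".
sample="inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
    inet 192.168.2.253/24 scope global secondary enp0s8:0"

has_vip "$sample" 192.168.2.253 && echo "Cluster VIP loaded"
```

The trailing `/` in the pattern keeps 192.168.2.25 from matching 192.168.2.253.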


2. Normal cluster shutdown

Note: when a KingbaseES V8R3 cluster shuts down normally, it releases the cluster VIPs (both the DB VIP and the Cluster VIP).


1) Stop the cluster services

[kingbase@srv1 bin]$ ./kingbase_monitor.sh stop
-----------------------------------------------------------------------
2021-02-24 15:09:46 KingbaseES automation beging...
2021-02-24 15:09:46 stop kingbasecluster [192.168.2.2] ...
remove status file /home/kingbase/cluster/kdb/run/kingbasecluster/kingbasecluster_status
DEL VIP NOW AT 2021-02-24 15:09:52 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:09:52 Done...
2021-02-24 15:09:52 stop kingbasecluster [192.168.2.3] ...
remove status file /home/kingbase/cluster/kdb/run/kingbasecluster/kingbasecluster_status
DEL VIP NOW AT 2021-02-24 15:09:58 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:09:59 Done...
2021-02-24 15:09:59 stop kingbase [192.168.2.2] ...
set /home/kingbase/cluster/kdb/db/data down now...
2021-02-24 15:10:02 Done...
2021-02-24 15:10:03 Del kingbase VIP [192.168.2.254/24] ...
DEL VIP NOW AT 2021-02-24 15:10:03 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:10:03 Done...
2021-02-24 15:10:03 stop kingbase [192.168.2.3] ...
set /home/kingbase/cluster/kdb/db/data down now...
2021-02-24 15:10:06 Done...
2021-02-24 15:10:07 Del kingbase VIP [192.168.2.254/24] ...
DEL VIP NOW AT 2021-02-24 15:10:08 ON enp0s8
execute: [/sbin/ip addr del 192.168.2.254/24 dev enp0s8]
Oprate del ip cmd end.
2021-02-24 15:10:08 Done...
......................
all stop..


2) Check the cluster node IP information (VIPs have been released)


[kingbase@srv1 bin]$ ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9b:c4:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever

[root@srv2 ~]# ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6d:e6:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever



II. Abnormal startup and shutdown of a KingbaseES V8R3 cluster

Scenario: the host system has just booted and the KingbaseES V8R3 cluster has not yet started normally, yet the VIP is already loaded on the NIC.


1. Host just booted, cluster not yet started

Checking the kingbase processes shows only the kingbasecluster service; the database service is not running, so the cluster has not started normally.

[root@srv2 etc]# cat /etc/cron.d/KINGBASECRON
*/1 * * * * kingbase /home/kingbase/cluster/kdb/db/bin/network_rewind.sh
*/1 * * * * root /home/kingbase/cluster/kdb/kingbasecluster/bin/restartcluster.sh


# Note: the cluster services are started by the crond entries above.

[root@srv1 ~]# ps -ef |grep kingbase
root  2283     1  0 10:09 ?  00:00:00 ./kingbasecluster -n
root  2305  2283  0 10:09 ?  00:00:00 kingbasecluster: watchdog
root  2412  2283  0 10:09 ?  00:00:00 kingbasecluster: lifecheck
root  2414  2412  0 10:09 ?  00:00:00 kingbasecluster: heartbeat receiver
root  2415  2412  0 10:09 ?  00:00:00 kingbasecluster: heartbeat sender
root  3106  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3107  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3108  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3109  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3110  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3111  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3112  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3113  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3114  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3115  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3116  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3117  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3118  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3119  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3120  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
root  3126  2283  0 10:10 ?  00:00:00 kingbasecluster: PCP: wait for connection request
root  3127  2283  0 10:10 ?  00:00:00 kingbasecluster: worker process
root  3128  2283  0 10:10 ?  00:00:00 kingbasecluster: wait for connection request
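The half-started state above (kingbasecluster up, database down) can be detected from `ps -ef` text. A sketch; the `cluster_state` helper is hypothetical, and the database-server pattern `bin/kingbase -D` is an assumption that may differ on your install:

```shell
# Hypothetical classifier for the node state seen in `ps -ef` output.
cluster_state() {
  ps_text="$1"
  cl=$(printf '%s\n' "$ps_text" | grep -c 'kingbasecluster')
  db=$(printf '%s\n' "$ps_text" | grep -c 'bin/kingbase -D')   # assumed DB process form
  if [ "$cl" -gt 0 ] && [ "$db" -eq 0 ]; then
    echo "cluster-only"   # the abnormal half-started state described here
  elif [ "$cl" -gt 0 ]; then
    echo "full"
  else
    echo "down"
  fi
}

# Demo against a captured snippet; a live check would pass "$(ps -ef)".
ps_half='root  2283     1  0 10:09 ?  00:00:00 ./kingbasecluster -n
root  2305  2283  0 10:09 ?  00:00:00 kingbasecluster: watchdog'
cluster_state "$ps_half"    # prints: cluster-only
```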

2. Check the cluster node IP information (VIP is already loaded)


[kingbase@srv1 bin]$ ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9b:c4:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.2.253/24 scope global secondary enp0s8:0
       valid_lft forever preferred_lft forever

[root@srv2 ~]# ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6d:e6:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.2.254/24 scope global secondary enp0s8:2
       valid_lft forever preferred_lft forever


3. Check the system messages log


The messages log shows that address records for the NIC's IPs, including the VIP addresses, are registered by the avahi-daemon system service.

(Figures: /var/log/messages entries showing avahi-daemon registering the VIP address records; screenshots not preserved.)
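The relevant log entries can be pulled out of /var/log/messages with a grep pipeline. A sketch; `vip_events` is a hypothetical helper, and the message format matches the avahi-daemon lines shown in the systemctl status output of section 5:

```shell
# Hypothetical filter: extract avahi-daemon register/withdraw events for the
# DB VIP (192.168.2.254, this cluster's address) from messages-log text.
vip_events() {
  printf '%s\n' "$1" \
    | grep 'avahi-daemon' \
    | grep -E 'Registering new address record|Withdrawing address record' \
    | grep -F '192.168.2.254'
}

# Demo against captured lines; a live check would pass "$(cat /var/log/messages)".
log='Feb 24 10:10:03 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 on enp0s8.IPv4.
Feb 24 10:54:53 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
Feb 24 10:09:46 srv2 avahi-daemon[732]: Registering new address record for 192.168.122.1 on virbr0.IPv4.'
vip_events "$log"
```

Only the two 192.168.2.254 events survive the filter; the unrelated virbr0 record is dropped.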

4. About the avahi-daemon service


Zeroconf

Zero configuration networking (zeroconf) is a specification for networking that configures itself: it automatically produces usable IP addresses without manual configuration or a dedicated configuration server.

The goal of zeroconf is to let non-expert users connect network devices such as computers and printers easily, with the whole network setup automated by software. Without zeroconf, users must configure services such as DHCP and DNS and other network settings by hand, which is difficult for non-technical and new users.

The Zeroconf specification was proposed by Apple.


Avahi

Avahi is an open-source implementation of the Zeroconf specification, commonly used on Linux. It includes a full implementation of multicast DNS (mDNS)/DNS-SD network services and is released under the LGPL. Another implementation of Zeroconf is Apple's Bonjour. Avahi and Bonjour are mutually compatible (naturally, since they implement the same specification, just as IE, Firefox, and Chrome all speak HTTP/1.1).

Avahi lets programs publish and discover services and hosts on a local network without any manual network configuration. For example, when a user plugs a computer running Avahi into a LAN, Avahi automatically broadcasts and discovers available printers, shared files, and other users to chat with, somewhat like receiving advertisements from the local network.

The process actually started on a Linux system is named avahi-daemon.

Unless you have compatible devices or services that use the zeroconf protocol, you should turn it off.

If you don't need it, stop the service directly:

# /etc/init.d/avahi-daemon stop    (or: service avahi-daemon stop)
# chkconfig avahi-daemon off


5. Disabling the avahi-daemon service on Linux

1) Check the avahi-daemon service status

[root@srv1 ~]# avahi-daemon
Daemon already running on PID 862
[root@srv1 ~]# avahi-daemon -h
avahi-daemon [options]
    -h --help          Show this help
    -D --daemonize     Daemonize after startup (implies -s)
    -s --syslog        Write log messages to syslog(3) instead of STDERR
    -k --kill          Kill a running daemon
    -r --reload        Request a running daemon to reload static services
    -c --check         Return 0 if a daemon is already running
    -V --version       Show version
    -f --file=FILE     Load the specified configuration file instead of /etc/avahi/avahi-daemon.conf
       --no-rlimits    Don't enforce resource limits
       --no-drop-root  Don't drop privileges
       --no-chroot     Don't chroot()
       --no-proc-title Don't modify process title
       --debug         Increase verbosity

# Viewed as a regular user
[kingbase@srv1 bin]$ systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since 三 2021-02-24 10:08:24 CST; 5h 3min ago
 Main PID: 862 (avahi-daemon)
   Status: "avahi-daemon 0.6.31 starting up."
    Tasks: 2
   CGroup: /system.slice/avahi-daemon.service
           ├─862 avahi-daemon: running [srv1.local]
           └─910 avahi-daemon: chroot helper

# Viewed as root
[root@srv1 ~]# systemctl status avahi-daemon.socket
● avahi-daemon.socket - Avahi mDNS/DNS-SD Stack Activation Socket
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.socket; enabled; vendor preset: enabled)
   Active: active (listening) since 三 2021-02-24 10:08:16 CST; 5h 4min ago
   Listen: /var/run/avahi-daemon/socket (Stream)

2月 24 10:08:16 srv1 systemd[1]: Listening on Avahi mDNS/DNS-SD Stack Activation Socket.

[root@srv2 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since 三 2021-02-24 10:08:59 CST; 5h 2min ago
 Main PID: 732 (avahi-daemon)
   Status: "avahi-daemon 0.6.31 starting up."
    Tasks: 2
   CGroup: /system.slice/avahi-daemon.service
           ├─732 avahi-daemon: running [srv2.local]
           └─792 avahi-daemon: chroot helper

2月 24 10:09:46 srv2 avahi-daemon[732]: Joining mDNS multicast group on interface virbr0.I....1.
2月 24 10:09:46 srv2 avahi-daemon[732]: New relevant interface virbr0.IPv4 for mDNS.
2月 24 10:09:46 srv2 avahi-daemon[732]: Registering new address record for 192.168.122.1 o...v4.
2月 24 10:09:46 srv2 avahi-daemon[732]: Withdrawing address record for fe80::5054:ff:fefb:...ic.
2月 24 10:10:03 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 o...v4.
2月 24 10:54:53 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
2月 24 10:55:07 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 o...v4.
2月 24 15:05:08 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
2月 24 15:05:23 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 o...v4.
2月 24 15:10:08 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
Hint: Some lines were ellipsized, use -l to show in full.

# Stop and disable the avahi-daemon service (on all nodes)
[root@srv1 ~]# systemctl stop avahi-daemon
Warning: Stopping avahi-daemon.service, but it can still be activated by:
  avahi-daemon.socket
[root@srv1 ~]# systemctl stop avahi-daemon.socket
[root@srv1 ~]# systemctl disable avahi-daemon.socket
Removed symlink /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
[root@srv1 ~]# systemctl disable avahi-daemon
Removed symlink /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@srv1 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

2月 24 10:55:31 srv1 avahi-daemon[862]: Registering new address record for 192.168.2.253 o...v4.
2月 24 15:04:44 srv1 avahi-daemon[862]: Withdrawing address record for 192.168.2.253 on enp0s8.
2月 24 15:05:45 srv1 avahi-daemon[862]: Registering new address record for 192.168.2.253 o...v4.
2月 24 15:09:47 srv1 avahi-daemon[862]: Withdrawing address record for 192.168.2.253 on enp0s8.
2月 24 15:12:51 srv1 avahi-daemon[862]: Got SIGTERM, quitting.
2月 24 15:12:51 srv1 systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
2月 24 15:12:51 srv1 avahi-daemon[862]: Leaving mDNS multicast group on interface virbr0.I....1.
2月 24 15:12:51 srv1 avahi-daemon[862]: Leaving mDNS multicast group on interface enp0s8.I....2.
2月 24 15:12:51 srv1 avahi-daemon[862]: Leaving mDNS multicast group on interface enp0s3.I...15.
2月 24 15:12:51 srv1 systemd[1]: Stopped Avahi mDNS/DNS-SD Stack.
Hint: Some lines were ellipsized, use -l to show in full.

[root@srv2 ~]# systemctl stop avahi-daemon.socket
[root@srv2 ~]# systemctl disable avahi-daemon.socket
Removed symlink /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
[root@srv2 ~]# systemctl stop avahi-daemon
[root@srv2 ~]# systemctl disable avahi-daemon
Removed symlink /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@srv2 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

2月 24 10:09:46 srv2 avahi-daemon[732]: Withdrawing address record for fe80::5054:ff:fefb:...ic.
2月 24 10:10:03 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 o...v4.
2月 24 10:54:53 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
2月 24 10:55:07 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 o...v4.
2月 24 15:05:08 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
2月 24 15:05:23 srv2 avahi-daemon[732]: Registering new address record for 192.168.2.254 o...v4.
2月 24 15:10:08 srv2 avahi-daemon[732]: Withdrawing address record for 192.168.2.254 on enp0s8.
2月 24 15:15:20 srv2 avahi-daemon[732]: Got SIGTERM, quitting.
2月 24 15:15:20 srv2 systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
2月 24 15:15:20 srv2 systemd[1]: Stopped Avahi mDNS/DNS-SD Stack.
Hint: Some lines were ellipsized, use -l to show in full.
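The per-node sequence above can be made idempotent. A sketch under the assumption that `systemctl is-enabled` reports `enabled`/`disabled` for both units; `avahi_plan` is a hypothetical helper that only prints the commands still needed, with the socket handled first because (as the warning above shows) a stopped service can be re-activated by avahi-daemon.socket:

```shell
# Hypothetical planner: given the is-enabled states of the two avahi units,
# print the systemctl commands that still need to run on this node.
avahi_plan() {
  sock="$1"   # state of avahi-daemon.socket, e.g. "enabled" or "disabled"
  svc="$2"    # state of avahi-daemon.service
  if [ "$sock" != "disabled" ]; then
    echo "systemctl stop avahi-daemon.socket && systemctl disable avahi-daemon.socket"
  fi
  if [ "$svc" != "disabled" ]; then
    echo "systemctl stop avahi-daemon && systemctl disable avahi-daemon"
  fi
}

# On a live node you would call:
#   avahi_plan "$(systemctl is-enabled avahi-daemon.socket)" "$(systemctl is-enabled avahi-daemon)"
avahi_plan enabled enabled
```

Re-running on an already-disabled node prints nothing, so the helper is safe to run repeatedly across all nodes.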


6. Reboot test

After the system has finished booting...

1) Check the avahi-daemon status

[root@srv1 bin]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

2) Check the cluster node IP information (VIP is no longer loaded)

[root@srv1 ~]# ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9b:c4:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever

[root@srv2 ~]# ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6d:e6:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever

3) Start the cluster normally via kingbase_monitor.sh

[root@srv1 bin]# ./kingbase_monitor.sh restart
-----------------------------------------------------------------------
2021-02-24 15:24:03 KingbaseES automation beging...
2021-02-24 15:24:03 stop kingbasecluster [192.168.2.2] ...
DEL VIP NOW AT 2021-02-24 15:24:04 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:24:04 Done...
2021-02-24 15:24:04 stop kingbasecluster [192.168.2.3] ...
DEL VIP NOW AT 2021-02-24 15:24:06 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:24:06 Done...
2021-02-24 15:24:06 stop kingbase [192.168.2.2] ...
2021-02-24 15:24:07 Done...
2021-02-24 15:24:08 Del kingbase VIP [192.168.2.254/24] ...
DEL VIP NOW AT 2021-02-24 15:24:09 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:24:09 Done...
2021-02-24 15:24:09 stop kingbase [192.168.2.3] ...
2021-02-24 15:24:10 Done...
2021-02-24 15:24:11 Del kingbase VIP [192.168.2.254/24] ...
DEL VIP NOW AT 2021-02-24 15:24:11 ON enp0s8
No VIP on my dev, nothing to do.
2021-02-24 15:24:11 Done...
......................
all stop..
start crontab kingbase position : [1]
Redirecting to /bin/systemctl restart crond.service
start crontab kingbase position : [1]
Redirecting to /bin/systemctl restart crond.service
ADD VIP NOW AT 2021-02-24 15:24:23 ON enp0s8
execute: [/sbin/ip addr add 192.168.2.254/24 dev enp0s8 label enp0s8:2]
execute: /sbin/arping -U 192.168.2.254 -I enp0s8 -w 1
ARPING 192.168.2.254 from 192.168.2.254 enp0s8
Sent 1 probes (1 broadcast(s))
Received 0 response(s)
wait kingbase recovery 5 sec...
start crontab kingbasecluster line number: [2]
Redirecting to /bin/systemctl restart crond.service
start crontab kingbasecluster line number: [2]
Redirecting to /bin/systemctl restart crond.service
......................
all started..
...
now we check again
=======================================================================
|             ip |          program | [status]
[  192.168.2.2] | [kingbasecluster] | [active]
[  192.168.2.3] | [kingbasecluster] | [active]
[  192.168.2.2] | [kingbase]        | [active]
[  192.168.2.3] | [kingbase]        | [active]
=======================================================================


After the cluster starts normally, the VIPs are loaded as expected:

[kingbase@srv1 bin]$ ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:9b:c4:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.2.253/24 scope global secondary enp0s8:0
       valid_lft forever preferred_lft forever

[root@srv2 ~]# ip add sh
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6d:e6:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet 192.168.2.254/24 scope global secondary enp0s8:2
       valid_lft forever preferred_lft forever
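As a final sanity check, the status table printed by kingbase_monitor.sh can be parsed to confirm every component is active before trusting the VIPs. A sketch; `all_active` is a hypothetical helper that relies only on the table format shown in the restart output:

```shell
# Hypothetical check: from the kingbase_monitor.sh status table, report
# whether every kingbase/kingbasecluster line is marked [active].
all_active() {
  total=$(printf '%s\n' "$1" | grep -c '\[kingbase')
  active=$(printf '%s\n' "$1" | grep '\[kingbase' | grep -c '\[active\]')
  [ "$total" -gt 0 ] && [ "$total" -eq "$active" ]
}

# Demo against the table captured above.
table='[  192.168.2.2] | [kingbasecluster] | [active]
[  192.168.2.3] | [kingbasecluster] | [active]
[  192.168.2.2] | [kingbase]        | [active]
[  192.168.2.3] | [kingbase]        | [active]'

all_active "$table" && echo "cluster fully active"
```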


III. Summary


1. The tests above were run on CentOS 7; on other systems, examine the system logs and address the issue accordingly.

2. Before stopping the avahi-daemon service in a production environment, coordinate with your system integrator.
