
Oracle Troubleshooting: CRS-2674: Start of 'ora.cssd' on 'rac2' failed

数据与人 2020-12-15

Background: the customer reported that a node of their Oracle RAC cluster had gone down.

1. Determine why the node went down

The restart was caused by a full archived-log area. The archive destination was USE_DB_RECOVERY_FILE_DEST (the default path), which had never been changed after installation. It should point to a dedicated archive directory, so the fix is to clean up the archived logs first and then move the archive destination.
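A minimal sketch of that cleanup, assuming the oracle environment variables are already set; the retention window (SYSDATE-3) and the dedicated directory /arch are hypothetical choices, not values from this incident:

```shell
# Hypothetical sketch: free the recovery area, then archive to a dedicated directory.

# 1. Remove archived logs older than three days (pick a window that matches
#    your backup policy; CROSSCHECK first so RMAN's catalog matches disk).
rman target / <<'EOF'
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-3';
EOF

# 2. Point archiving at a dedicated location (/arch is a hypothetical mount).
sqlplus -S / as sysdba <<'EOF'
ALTER SYSTEM SET log_archive_dest_1='LOCATION=/arch' SCOPE=BOTH SID='*';
ARCHIVE LOG LIST;
EOF
```

These commands run against a live instance, so treat them as a template to adapt rather than something to paste verbatim.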

2. Node 1 started normally, but node 2 would not come up and had no cluster services

Checking the cluster status on node rac2 returned an error:

    [grid@rac2 ~]$ /u01/app/11.2.0/grid/bin/crs_stat -t
    CRS-0184: Cannot communicate with the CRS daemon.

From this error we can tell that CRS itself is down. Trying to start it also fails; note that crsctl start crs must be run as root:

    [root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl start crs
    CRS-4640: Oracle High Availability Services is already active
    CRS-4000: Command Start failed, or completed with errors.

On a healthy node the same command returns:

    [root@rac2 bin]# /u01/app/11.2.0/grid/bin/crsctl start crs
    CRS-4123: Oracle High Availability Services has been started.


Checking CRS confirms the problem:

    [grid@rac2 ~]$ crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
    CRS-4534: Cannot communicate with Event Manager


Next, checking the IP addresses on rac2 with ifconfig -a showed that both the VIP and the SCAN IP were gone, so node rac2 had dropped out of the cluster.
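That check can be scripted: on a healthy node, ip addr (or ifconfig -a) lists the VIP as a secondary address on the public interface. A small sketch, where the VIP 192.168.56.202 and the sample interface output are hypothetical stand-ins for this cluster's real addresses:

```shell
#!/bin/sh
# Sketch: test whether a given VIP address appears in interface output.
# The VIP and sample output below are hypothetical.

has_vip() {
    # $1 = captured `ip addr` / `ifconfig -a` output, $2 = VIP to look for
    printf '%s\n' "$1" | grep -q "inet ${2}[/ ]"
}

healthy="    inet 192.168.56.102/24 scope global eth0
    inet 192.168.56.202/24 scope global secondary eth0:1"

if has_vip "$healthy" "192.168.56.202"; then
    echo "VIP present"
else
    echo "VIP missing"
fi
```

On the failed rac2, the same check against the live output of ip addr would report the VIP missing.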


3. Try to re-register node 2 with the cluster

    [root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g

    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    User ignored Prerequisites during installation
    Installing Trace File Analyzer
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded


4. Deconfigure node 2, then rerun root.sh

    [root@rac2 trace]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
    [root@rac2 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -verbose -deconfig -force
    [root@rac2 bin]# /u01/app/11.2.0/grid/root.sh


This raised an error:

    [root@rac2 install]# /u01/app/11.2.0/grid/crs/install/roothas.pl -verbose -deconfig -force
    Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /u01/app/11.2.0/grid/crs/install) at crsconfig_lib.pm line 703.
    BEGIN failed--compilation aborted at crsconfig_lib.pm line 703.
    Compilation failed in require at /u01/app/11.2.0/grid/crs/install/roothas.pl line 166.
    BEGIN failed--compilation aborted at /u01/app/11.2.0/grid/crs/install/roothas.pl line 166.

The perl-Env dependency package was missing. Install it with yum install perl-Env, which reports:

    Installed:
    perl-Env.noarch 0:1.04-2.el7


5. Deconfigure node 2

    [root@rac2 install]# /u01/app/11.2.0/grid/crs/install/roothas.pl -verbose -deconfig -force
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4000: Command Stop failed, or completed with errors.
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4000: Command Delete failed, or completed with errors.
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
    CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
    CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
    CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
    CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    Successfully deconfigured Oracle Restart stack


6. Re-register node 2 with the cluster

    [root@rac2 install]# /u01/app/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    User ignored Prerequisites during installation
    Installing Trace File Analyzer
    OLR initialization - successful
    Adding Clusterware entries to inittab
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Start of resource "ora.cssd" failed
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
    CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
    CRS-2674: Start of 'ora.cssd' on 'rac2' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'rac2'
    CRS-2681: Clean of 'ora.cssd' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
    CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac2'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'rac2' succeeded
    CRS-5804: Communication error with agent process
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start Cluster Synchorinisation Service in clustered mode at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1278.
    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

The start still failed.


7. CSSD would not start on the second node

Look for the cssd log files in the $GRID_HOME/log/rac2/cssd subdirectory and review them:

    /u01/app/11.2.0/grid/log/rac2/cssd
    2019-10-12 15:41:19.013: [ CSSD][3199571712]clssgmDiscEndpcl: gipcDestroy 0x8a28
    2019-10-12 15:41:19.064: [ CSSD][3181754112]clssgmWaitOnEventValue: after CmInfo State val 3, eval 1 waited 0
    2019-10-12 15:41:19.844: [ CSSD][3186484992]clssnmvDHBValidateNcopy: node 1, rac1, has a disk HB, but no network HB, DHB has rcfg 464729747, wrtcnt, 8055111, LATS 336904, lastSeqNo 8055110, uniqueness 1569234927, timestamp 1570866136/3845241248
    2019-10-12 15:41:20.064: [ CSSD][3181754112]clssgmWaitOnEventValue: after CmInfo State val 3, eval 1 waited 0
    2019-10-12 15:41:20.845: [ CSSD][3186484992]clssnmvDHBValidateNcopy: node 1, rac1, has a disk HB, but no network HB, DHB has rcfg 464729747, wrtcnt, 8055112, LATS 337904, lastSeqNo 8055111, uniqueness 1569234927, timestamp 1570866137/3845242248
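The key symptom in these lines is "has a disk HB, but no network HB": node rac1 is still writing disk heartbeats to the voting disk, but no network heartbeat is arriving over the private interconnect, so CSS on rac2 cannot rejoin the cluster. A quick way to spot the pattern; the two-line sample below simply reuses the log excerpt above in place of the real ocssd.log file:

```shell
#!/bin/sh
# Sketch: count disk-HB-without-network-HB messages in a cssd log.
# In practice, grep the real file under $GRID_HOME/log/<node>/cssd/;
# here an embedded sample (from the excerpt above) stands in for it.

sample='2019-10-12 15:41:19.844: [ CSSD][3186484992]clssnmvDHBValidateNcopy: node 1, rac1, has a disk HB, but no network HB
2019-10-12 15:41:20.845: [ CSSD][3186484992]clssnmvDHBValidateNcopy: node 1, rac1, has a disk HB, but no network HB'

count=$(printf '%s\n' "$sample" | grep -c 'has a disk HB, but no network HB')
echo "network-heartbeat failures seen: $count"
```

A steadily growing count for a peer node points at the interconnect, not at the storage or voting disks.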


8. Check node 2's interconnect heartbeat

    [grid@rac2 /]$ ping 20.20.20.201    # node 1's private interconnect IP
    PING 20.20.20.201 (20.20.20.201) 56(84) bytes of data.
    From 20.20.20.202 icmp_seq=1 Destination Host Unreachable
    From 20.20.20.202 icmp_seq=2 Destination Host Unreachable
    From 20.20.20.202 icmp_seq=3 Destination Host Unreachable
    From 20.20.20.202 icmp_seq=4 Destination Host Unreachable

The interconnect heartbeat was down. According to the customer, node 1's interconnect had already failed several times before, so the NIC itself was the likely culprit.

With the customer's approval, we restarted the NIC on node 1 and then restarted the cluster services; after that, the services on both nodes came up normally. We advised the customer to replace the NIC to eliminate the underlying risk.
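The recovery sequence can be sketched as follows; the interface name eth1 is hypothetical, and on a production cluster you would stop clusterware on the node before bouncing its interconnect NIC:

```shell
# Hypothetical sketch of the recovery steps (run as root on node 1;
# eth1 = private-interconnect NIC, a name assumed for illustration).

# 1. Stop clusterware on the node whose NIC is being bounced.
/u01/app/11.2.0/grid/bin/crsctl stop crs

# 2. Restart the private-interconnect interface.
ifdown eth1 && ifup eth1

# 3. Verify the interconnect from the peer node before restarting clusterware,
#    e.g. from rac2: ping 20.20.20.201 should now get replies.

# 4. Restart clusterware on this node, then confirm both nodes are healthy.
/u01/app/11.2.0/grid/bin/crsctl start crs
/u01/app/11.2.0/grid/bin/crsctl check cluster -all
```

These commands only make sense on the live cluster, so treat them as an outline of the procedure rather than a runnable script.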


