
The root.sh Script Execution Process During Oracle GI Installation

Original | All China Database Union | 2024-01-27

The most important step in a GI installation is running the root.sh script. It is also the step where problems most often occur, and those problems can be very awkward to troubleshoot. root.sh does its work mainly by calling the following scripts:

  • /u01/app/19.0.0/grid/install/utl/rootmacro.sh "$@"
  • /u01/app/19.0.0/grid/install/utl/rootinstall.sh
  • /u01/app/19.0.0/grid/rdbms/install/rootadd_rdbms.sh
  • /u01/app/19.0.0/grid/rdbms/install/rootadd_filemap.sh
  • /u01/app/19.0.0/grid/crs/config/rootconfig.sh $@

1. root.sh

First, let's look at the script's contents:

[grid@rac01:/u01/app/19.0.0/grid]$cat root.sh
#!/bin/sh
unset WAS_ROOTMACRO_CALL_MADE
. /u01/app/19.0.0/grid/install/utl/rootmacro.sh "$@"
. /u01/app/19.0.0/grid/install/utl/rootinstall.sh

#
# Invoke standalone rootadd_rdbms.sh
#
/u01/app/19.0.0/grid/rdbms/install/rootadd_rdbms.sh

/u01/app/19.0.0/grid/rdbms/install/rootadd_filemap.sh
/u01/app/19.0.0/grid/crs/config/rootconfig.sh $@
EXITCODE=$?
if [ $EXITCODE -ne 0 ]; then
        exit $EXITCODE
fi

Inspecting root.sh shows that it works by invoking the five scripts above, in order. Let's look at each of them in turn.

2. rootmacro.sh

#!/bin/sh
#
# $Id: rootmacro.sbs /main/22 2016/08/30 22:34:11 pkuruvad Exp $
# Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.
#
# root.sh
#
# This script is intended to be run by root.  The script contains
# all the product installation actions that require root privileges.
#
# IMPORTANT NOTES - READ BEFORE RUNNING SCRIPT
#
# (1) ORACLE_HOME and ORACLE_OWNER can be defined in user's
#     environment to override default values defined in this script.
#
# (2) The environment variable LBIN (defined within the script) points to
#     the default local bin area.  Three executables will be moved there as
#     part of this script execution.
#
# (3) Define (if desired) LOG variable to name of log file.

This means you can define ORACLE_HOME and ORACLE_OWNER in your environment to override the defaults defined in the script, which lets you target a specific ORACLE_HOME directory and owner as needed.

The LBIN environment variable, defined inside the script, points to the default local bin directory; during execution, three executables are moved there. LBIN exists so the script knows which local bin directory to use.

You may also define a LOG variable to name a log file. If you want the script's output recorded in a particular file, set LOG to that file's path; this makes it easy to capture the script's execution and results.

Inspecting the script's body shows that it mainly defines variables, including the log-location variable.
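
A minimal sketch of that defaulting logic, assuming typical 19c values (the real rootmacro.sh is much longer; the home and owner defaults are taken from this installation, and the LOG default is illustrative):

#!/bin/sh
# Sketch only: honor values already set in the caller's environment,
# otherwise fall back to the defaults baked in at install time.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/19.0.0/grid}
ORACLE_OWNER=${ORACLE_OWNER:-grid}

# LBIN: default local bin area; dbhome, oraenv and coraenv end up here.
LBIN=${LBIN:-/usr/local/bin}

# LOG: optional log file for the root script's output.
LOG=${LOG:-/tmp/root_`hostname`_`date +%F`.log}

# Flag checked by callers so the macro body runs only once per invocation.
WAS_ROOTMACRO_CALL_MADE=YES; export WAS_ROOTMACRO_CALL_MADE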

3. rootinstall.sh

With the variables prepared by the first script, the second script now runs.

This script mainly sets up oratab information and checks whether other products are already installed. On AIX it additionally has to decide whether certain files should be overwritten. A sketch of the oratab step follows.
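
A hedged sketch of the oratab handling, matching the "Creating /etc/oratab file..." message that appears later in the root.sh log (the real script also handles platform-specific locations such as /var/opt/oracle/oratab):

#!/bin/sh
# Sketch only: create an empty oratab owned by the software owner.
ORATAB=/etc/oratab
if [ ! -f $ORATAB ]; then
    echo "Creating $ORATAB file..."
    touch $ORATAB
    chown $ORACLE_OWNER $ORATAB    # ORACLE_OWNER is set by rootmacro.sh
    chmod 664 $ORATAB
fi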

4. rootadd_rdbms.sh

Inspecting this script shows that it operates on the installed software. Its main purpose is to perform a handful of root-privileged actions, chiefly setting file permissions and ownership.

Specifically, the script does the following (a condensed sketch follows the list):

  1. Sets variables such as ORACLE_HOME, CHOWN, CHMOD, RM, AWK, ECHO and CP, holding the paths of the commands it needs.
  2. Checks that the current user's UID is 0; if not, prints an error and exits.
  3. If $ORACLE_HOME/bin/oradism exists, changes its owner to root and sets its permissions to 4750.
  4. If $ORACLE_HOME/bin/oradism.old exists, deletes it.
  5. If $ORACLE_HOME/bin/extjob does not exist but $ORACLE_HOME/bin/extjobo does, copies extjobo to extjob.
  6. If $ORACLE_HOME/bin/extjob exists, changes its owner to root and sets its permissions to 4750.
  7. If $ORACLE_HOME/rdbms/admin/externaljob.ora exists, changes its owner to root and sets its permissions to 640.
  8. If $ORACLE_HOME/bin/jssu exists, changes its owner to root and sets its permissions to 4750.
  9. If $ORACLE_HOME/bin/extproc exists, sets the setgid bit (g+s) on it.
  10. If $ORACLE_HOME/rdbms/admin/externaljob.ora.orig exists, deletes it.
  11. If the directory $ORACLE_HOME/scheduler/wallet exists, sets its permissions to 0700.

In short, running as root, the script sets permissions and ownership on these files and directories to keep the installation secure and correct.
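
A condensed, hedged reconstruction of the pattern (only the first few steps are shown; the guards and modes follow the enumeration above):

#!/bin/sh
# Sketch of rootadd_rdbms.sh's permission fixes, per the list above.
ORACLE_HOME=/u01/app/19.0.0/grid

# Step 2: must run as root.
if [ "$(id -u)" -ne 0 ]; then
    echo "You must be logged in as root (uid=0) to run this script."
    exit 1
fi

# Step 3: oradism gets root ownership and the setuid bit (4750).
if [ -f $ORACLE_HOME/bin/oradism ]; then
    chown root $ORACLE_HOME/bin/oradism
    chmod 4750 $ORACLE_HOME/bin/oradism
fi

# Steps 4 and 10: stale copies are removed.
[ -f $ORACLE_HOME/bin/oradism.old ] && rm -f $ORACLE_HOME/bin/oradism.old

# Step 5: restore extjob from extjobo when only the latter exists.
if [ ! -f $ORACLE_HOME/bin/extjob ] && [ -f $ORACLE_HOME/bin/extjobo ]; then
    cp -p $ORACLE_HOME/bin/extjobo $ORACLE_HOME/bin/extjob
fi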

5. rootadd_filemap.sh

This script's main job is to create a directory tree under a fixed path and copy the file-mapping files and executables into it.

Specifically, it does the following (see the sketch after the list):

  1. Sets the ORACLE_HOME variable to /u01/app/19.0.0/grid, the Oracle software installation path.
  2. Sources $ORACLE_HOME/install/utl/rootmacro.sh, which supplies the variables and functions used for root-privileged work.
  3. Sets the ORCLFMAPLOC variable to /opt/ORCLfmap, the root directory for file mapping.
  4. Sets the FILEMAPLOC variable to $ORCLFMAPLOC/prot1_64, the concrete file-mapping directory for a 64-bit platform.
  5. Creates the $ORCLFMAPLOC directory if it does not exist.
  6. Creates the $FILEMAPLOC directory if it does not exist.
  7. Creates the $FILEMAPLOC/bin directory if it does not exist.
  8. Creates the $FILEMAPLOC/etc directory if it does not exist.
  9. Creates the $FILEMAPLOC/log directory if it does not exist.
  10. If the OSDBA_GROUP environment variable is non-empty, sets FMPUTL_GROUP to its value; otherwise sets it to root.
  11. Copies $ORACLE_HOME/bin/fmputl and $ORACLE_HOME/bin/fmputlhp to the $FILEMAPLOC/bin directory.
  12. Sets the permissions of $FILEMAPLOC/bin/fmputl to 550.
  13. Sets the group of $FILEMAPLOC/bin/fmputlhp to FMPUTL_GROUP and its permissions to 4550.
  14. If $FILEMAPLOC/etc/filemap.ora does not exist, copies $ORACLE_HOME/rdbms/install/filemap.ora to the $FILEMAPLOC/etc directory.

In short, the script creates the directory tree under the fixed path and copies the relevant files and executables into it so that file-mapping operations can be performed.
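
A hedged sketch of the whole script, following the enumeration above:

#!/bin/sh
# Sketch of rootadd_filemap.sh: build /opt/ORCLfmap and install fmputl.
ORACLE_HOME=/u01/app/19.0.0/grid
. $ORACLE_HOME/install/utl/rootmacro.sh

ORCLFMAPLOC=/opt/ORCLfmap
FILEMAPLOC=$ORCLFMAPLOC/prot1_64

# Steps 5-9: create the directory tree if missing.
for d in $ORCLFMAPLOC $FILEMAPLOC $FILEMAPLOC/bin $FILEMAPLOC/etc $FILEMAPLOC/log
do
    [ -d $d ] || mkdir $d
done

# Step 10: group allowed to run fmputlhp.
if [ -n "$OSDBA_GROUP" ]; then
    FMPUTL_GROUP=$OSDBA_GROUP
else
    FMPUTL_GROUP=root
fi

# Steps 11-13: install the utilities with restrictive modes.
cp $ORACLE_HOME/bin/fmputl $ORACLE_HOME/bin/fmputlhp $FILEMAPLOC/bin
chmod 550 $FILEMAPLOC/bin/fmputl
chgrp $FMPUTL_GROUP $FILEMAPLOC/bin/fmputlhp
chmod 4550 $FILEMAPLOC/bin/fmputlhp

# Step 14: seed a default filemap.ora only on first install.
[ -f $FILEMAPLOC/etc/filemap.ora ] || \
    cp $ORACLE_HOME/rdbms/install/filemap.ora $FILEMAPLOC/etc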

6. rootconfig.sh

This script performs the product-specific actions of an Oracle Grid Infrastructure installation.

It mainly determines whether this is an upgrade or a fresh install, runs relink and patching operations, and configures shared libraries. It is also where rootcrs.pl or roothas.pl gets invoked; those Perl scripts do the actual CRS cluster initialization.
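
A hedged sketch of that hand-off (the branching flag is illustrative, not the real test; the actual script derives the decision from crsconfig_params, but the perl invocation pattern is the documented one):

#!/bin/sh
# Sketch only: rootconfig.sh ends by launching the Perl configuration driver.
GI_HOME=/u01/app/19.0.0/grid

if [ "$HA_CONFIG" != "true" ]; then    # illustrative flag
    # Full clusterware stack: rootcrs.pl drives the 19 installation steps.
    $GI_HOME/perl/bin/perl -I$GI_HOME/perl/lib -I$GI_HOME/crs/install \
        $GI_HOME/crs/install/rootcrs.pl
else
    # Oracle Restart (single-instance HA): roothas.pl instead.
    $GI_HOME/perl/bin/perl -I$GI_HOME/perl/lib -I$GI_HOME/crs/install \
        $GI_HOME/crs/install/roothas.pl
fi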

7. Analysis of the root.sh log

Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:    Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.

From this point on, the output should come from the final script, rootconfig.sh.

Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.0.0/grid/crs/install/crsconfig_params

This reads the CRS configuration parameter file, which mainly holds CRS configuration information: user groups, node names, directories, ASM disk locations, network definitions, and so on.
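
An illustrative excerpt of the kind of entries crsconfig_params holds, reconstructed from values that surface later in this installation's logs (the exact key set varies by version, so treat this as an example, not the actual file):

ORACLE_OWNER=grid
ORA_DBA_GROUP=oinstall
CLUSTER_NAME=rac-cluster
NODE_NAME_LIST=rac01,rac02
NETWORKS="ens160"/172.16.220.0:public,"ens192"/10.10.10.0:asm,"ens192"/10.10.10.0:cluster_interconnect
ASM_DISK_GROUP=OCR
ASM_DISCOVERY_STRING=/dev/mapper/asm-*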

The log of current session can be found at:
  /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2023-08-14_07-41-50PM.log
2023/08/14 19:41:59 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2023/08/14 19:41:59 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2023/08/14 19:41:59 CLSRSC-363: User ignored prerequisites during installation
2023/08/14 19:41:59 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2023/08/14 19:42:01 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2023/08/14 19:42:02 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2023/08/14 19:42:02 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2023/08/14 19:42:02 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2023/08/14 19:42:45 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2023/08/14 19:42:46 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2023/08/14 19:42:53 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2023/08/14 19:43:08 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2023/08/14 19:43:08 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2023/08/14 19:43:13 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2023/08/14 19:43:13 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2023/08/14 19:43:36 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2023/08/14 19:43:42 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2023/08/14 19:43:47 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2023/08/14 19:43:52 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-230814PM074424.log for details.

2023/08/14 19:45:22 CLSRSC-482: Running command: '/u01/app/19.0.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 47839a16949a4f49bf7a7bf9de141d19.
Successful addition of voting disk f21c478e1a4e4f0dbfa47b7e6a3b3e16.
Successful addition of voting disk 78f8aa8545524f5cbf4387f6caa8c2b8.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   47839a16949a4f49bf7a7bf9de141d19 (/dev/mapper/asm-data01) [OCR]
 2. ONLINE   f21c478e1a4e4f0dbfa47b7e6a3b3e16 (/dev/mapper/asm-data02) [OCR]
 3. ONLINE   78f8aa8545524f5cbf4387f6caa8c2b8 (/dev/mapper/asm-ocr03) [OCR]
Located 3 voting disk(s).
2023/08/14 19:47:01 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2023/08/14 19:48:33 CLSRSC-343: Successfully started Oracle Clusterware stack
2023/08/14 19:48:33 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2023/08/14 19:49:59 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2023/08/14 19:50:31 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Those are the 19 installation steps. The log of the main work, performed by rootcrs.pl (or roothas.pl), is at /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2023-08-14_07-41-50PM.log; that is where the real detail lives.

8. Analysis of the rootcrs log

Checking the cluster parameter configuration file:

2023-08-14 19:41:50: Checking parameters from paramfile /u01/app/19.0.0/grid/crs/install/crsconfig_params to validate installer variables
...
2023-08-14 19:41:50: The configuration parameter file /u01/app/19.0.0/grid/crs/install/crsconfig_params  is valid

The first action validates the parameters in the crsconfig_params file.

2023-08-14 19:41:50: Save the ASM password file location: +OCR/orapwASM

The ASM password file is saved in the OCR disk group, at +OCR/orapwASM.

2023-08-14 19:41:51: Perform initialization tasks before configuring OLR
2023-08-14 19:41:51: Perform initialization tasks before configuring OCR
2023-08-14 19:41:51: Perform initialization tasks before configuring IOS
2023-08-14 19:41:51: Perform initialization tasks before configuring CHM
2023-08-14 19:41:51: Set CRS_ADR_DISABLE to stop CLSB initializing ADR

Initialization tasks before the OLR, OCR, IOS and CHM are configured:

2023-08-14 19:41:51: Listing the sub-dirs under [/u01/app/grid/diag/crs/rac01/crs] ...
2023-08-14 19:41:51: Set permission 0775 on sub directory '/u01/app/grid/diag/crs/rac01/crs'
2023-08-14 19:41:51: Listing the sub-dirs under [/u01/app/grid/diag/kfod/rac01/kfod] ...
2023-08-14 19:41:51: Set permission 0775 on sub directory '/u01/app/grid/diag/crs/rac01/crs'
2023-08-14 19:41:51: Set permission 0775 on sub directory '/u01/app/grid/diag/kfod/rac01/kfod'
2023-08-14 19:41:51: Processing crsconfig_files.sbs and copying wrapper scripts
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/usrvip, /u01/app/19.0.0/grid/bin/usrvip)
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/appvipcfg, /u01/app/19.0.0/grid/bin/appvipcfg)
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/srdtool, /u01/app/19.0.0/grid/bin/rdtool)
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/evm.auth, /u01/app/19.0.0/grid/evm/admin/conf/evm.auth)
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/evmdaemon.conf, /u01/app/19.0.0/grid/evm/admin/conf/evmdaemon.conf)
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/evmlogger.conf, /u01/app/19.0.0/grid/evm/admin/conf/evmlogger.conf)
2023-08-14 19:41:51: copy (/u01/app/19.0.0/grid/crs/utl/logging.properties, /u01/app/19.0.0/grid/srvm/admin/logging.properties)

Directory permissions are adjusted and the CRS wrapper scripts and configuration files are copied.

2023-08-14 19:41:51: set the permissions on the /u01/app/19.0.0/grid/jdk directory
2023-08-14 19:41:51: orig perm for COPYRIGHT is 0755, setting file perm to 0755
2023-08-14 19:41:51: orig perm for THIRDPARTYLICENSEREADME.txt is 0755, setting file perm to 0755
2023-08-14 19:41:51: orig perm for THIRDPARTYLICENSEREADME-JAVAFX.txt is 0755, setting file perm to 0755
...
2023-08-14 19:41:51: orig perm for XML__Parser.3 is 0700, setting file perm to 0750
2023-08-14 19:41:51: orig perm for Dumpvalue.3 is 0700, setting file perm to 0750
2023-08-14 19:41:51: orig perm for less.3 is 0700, setting file perm to 0750
2023-08-14 19:41:51: orig perm for Locale__Codes__Changes.3 is 0700, setting file perm to 0750
2023-08-14 19:41:51: orig perm for Pod__Text__Overstrike.3 is 0700, setting file perm to 0750
2023-08-14 19:41:51: orig perm for CPAN__Plugin.3 is 0700, setting file perm to 0750
2023-08-14 19:41:51: orig perm for Pod__Find.3 is 0700, setting file perm to 0750

Permissions are fixed on all the bundled dependencies (JDK files, Perl man pages, and so on); these dependencies are themselves worth a look.

2023-08-14 19:41:53: CheckHomeExists: check if home /u01/app/19.0.0/grid exists on nodes rac01 rac02
2023-08-14 19:41:53: Invoking "/u01/app/19.0.0/grid/bin/cluutil -doesHomeExist -nodelist rac01,rac02 -oraclehome /u01/app/19.0.0/grid"
2023-08-14 19:41:53: trace file=/u01/app/grid/crsdata/rac01/crsconfig/cluutil3.log
2023-08-14 19:41:53: Running as user grid: /u01/app/19.0.0/grid/bin/cluutil -doesHomeExist -nodelist rac01,rac02 -oraclehome /u01/app/19.0.0/grid
2023-08-14 19:41:54: Removing file /tmp/QdYr8Qo21S
2023-08-14 19:41:54: Successfully removed file: /tmp/QdYr8Qo21S
2023-08-14 19:41:54: pipe exit code: 0
2023-08-14 19:41:54: /bin/su successfully executed

The Grid home's existence is checked on both nodes.

2023-08-14 19:41:54: Oracle clusterware configuration has started
2023-08-14 19:41:54: Invoking "/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -state START"
2023-08-14 19:41:54: trace file=/u01/app/grid/crsdata/rac01/crsconfig/cluutil5.log

Cluster configuration begins.

2023-08-14 19:41:58:  "-ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_STACK -pname VERSION -pvalue 19.0.0.0.0" succeeded with status 0.
2023-08-14 19:41:58: Succeeded to add (property/value):('VERSION/'19.0.0.0.0') for checkpoint:ROOTCRS_STACK
2023-08-14 19:41:58: Invoking "/u01/app/19.0.0/grid/bin/cluutil -ckpt -global -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_FIRSTNODE -status"
2023-08-14 19:41:58: trace file=/u01/app/grid/crsdata/rac01/crsconfig/cluutil2.log
2023-08-14 19:41:58: Running as user grid: /u01/app/19.0.0/grid/bin/cluutil -ckpt -global -oraclebase /u01/app/grid -chkckpt -name ROOTCRS_FIRSTNODE -status
2023-08-14 19:41:59: Removing file /tmp/dhb5dRa8el
2023-08-14 19:41:59: Successfully removed file: /tmp/dhb5dRa8el
2023-08-14 19:41:59: pipe exit code: 0
2023-08-14 19:41:59: /bin/su successfully executed

Note that every step performs a checkpoint operation. This is why root.sh can be re-run: each execution consults the checkpoint file to determine how far the previous run progressed.
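
Based on the cluutil invocations visible in the trace, the checkpoint state can also be queried by hand, for example (run as the grid user):

# Report the state (START/SUCCESS/FAIL) of the ROOTCRS_STACK checkpoint,
# mirroring the -chkckpt calls that appear in the trace above.
/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid \
    -chkckpt -name ROOTCRS_STACK -status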

2023-08-14 19:41:59: The 'ROOTCRS_FIRSTNODE' status is START
2023-08-14 19:41:59: Global ckpt 'ROOTCRS_FIRSTNODE' state: START
2023-08-14 19:41:59: First node operations have not been done, and local node is installer node.
2023-08-14 19:41:59: Local node: rac01 is the first node.
2023-08-14 19:41:59: ret=1; localNode=rac01; isFirstNode=1
2023-08-14 19:41:59: Perform initialization tasks before configuring ASM
2023-08-14 19:41:59: Perform initialization tasks before configuring SRVM
2023-08-14 19:41:59: Perform initialization tasks before configuring OHASD
2023-08-14 19:41:59: Executed stage SetupEnv in 8 seconds
2023-08-14 19:41:59: Executing the [SetupTFA] step with checkpoint [null] ...
2023-08-14 19:41:59: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 594 '1' '19' 'SetupTFA'
2023-08-14 19:41:59: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 594 '1' '19' 'SetupTFA'
2023-08-14 19:41:59: Command output:
>  CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
...
2023-08-14 19:41:59: CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2023-08-14 19:41:59: Executed stage SetupTFA in 0 seconds
2023-08-14 19:41:59: Executing the [ValidateEnv] step with checkpoint [null] ...
2023-08-14 19:41:59: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 594 '2' '19' 'ValidateEnv'
2023-08-14 19:41:59: Executing cmd: /u01/app/19.0.0/grid/crs/install/tfa_setup -silent -crshome /u01/app/19.0.0/grid
2023-08-14 19:41:59: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 594 '2' '19' 'ValidateEnv'
2023-08-14 19:41:59: Command output:

This is step 1 from the root.sh log: installing the TFA tool.

2023-08-14 19:42:02: Oracle CRS home = /u01/app/19.0.0/grid
2023-08-14 19:42:02: Host name = rac01
2023-08-14 19:42:02: CRS user = grid
2023-08-14 19:42:02: Oracle CRS home = /u01/app/19.0.0/grid
2023-08-14 19:42:02: GPnP host = rac01
2023-08-14 19:42:02: Oracle GPnP home = /u01/app/19.0.0/grid/gpnp
2023-08-14 19:42:02: Oracle GPnP local home = /u01/app/19.0.0/grid/gpnp/rac01

The CRS configuration is checked:

2023-08-14 19:42:05: Creating local GPnP setup for clustered node...
2023-08-14 19:42:05: Oracle CRS home = /u01/app/19.0.0/grid
2023-08-14 19:42:05: Oracle GPnP wallets home = /u01/app/19.0.0/grid/gpnp/rac01/wallets
2023-08-14 19:42:05: Checking if GPnP setup exists
2023-08-14 19:42:05: /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer/cwallet.sso wallet must be created
2023-08-14 19:42:05: Removing old wallets/certificates, if any
2023-08-14 19:42:05: Creating wallets for Linux OS
2023-08-14 19:42:05: Invoking "/u01/app/19.0.0/grid/bin/cluutil -walletgen -orahome /u01/app/19.0.0/grid -rootdir /u01/app/19.0.0/grid/gpnp/rac01/wallets/root -rootcert /u01/app/19.0.0/grid/gpnp/rac01/wallets/root/b64certificate.txt -peerdir /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer -peercertreq /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer/certreq.txt -peercert /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer/cert.txt -padir /u01/app/19.0.0/grid/gpnp/rac01/wallets/pa -pacertreq /u01/app/19.0.0/grid/gpnp/rac01/wallets/pa/certreq.txt -pacert /u01/app/19.0.0/grid/gpnp/rac01/wallets/pa/cert.txt -prdrdir /u01/app/19.0.0/grid/gpnp/rac01/wallets/prdr"
2023-08-14 19:42:05: trace file=/u01/app/grid/crsdata/rac01/crsconfig/cluutil5.log
2023-08-14 19:42:05: Running as user grid: /u01/app/19.0.0/grid/bin/cluutil -walletgen -orahome /u01/app/19.0.0/grid -rootdir /u01/app/19.0.0/grid/gpnp/rac01/wallets/root -rootcert /u01/app/19.0.0/grid/gpnp/rac01/wallets/root/b64certificate.txt -peerdir /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer -peercertreq /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer/certreq.txt -peercert /u01/app/19.0.0/grid/gpnp/rac01/wallets/peer/cert.txt -padir /u01/app/19.0.0/grid/gpnp/rac01/wallets/pa -pacertreq /u01/app/19.0.0/grid/gpnp/rac01/wallets/pa/certreq.txt -pacert /u01/app/19.0.0/grid/gpnp/rac01/wallets/pa/cert.txt -prdrdir /u01/app/19.0.0/grid/gpnp/rac01/wallets/prdr
...

2023-08-14 19:42:38:    netlst="ens160"/172.16.220.0:public,"ens192"/10.10.10.0:asm,"ens192"/10.10.10.0:cluster_interconnect=
2023-08-14 19:42:38:    ocrid==
2023-08-14 19:42:38:    clusterguid==
2023-08-14 19:42:38: Checking if GPnP setup exists

...
2023-08-14 19:42:38: Checking if GPnP setup exists
2023-08-14 19:42:38: /u01/app/19.0.0/grid/gpnp/rac01/profiles/peer/profile.xml profile must be created
2023-08-14 19:42:38: Ready to parse: "ens160"/172.16.220.0:public,"ens192"/10.10.10.0:asm,"ens192"/10.10.10.0:cluster_interconnect
2023-08-14 19:42:38: iflist: '"ens160"/172.16.220.0:public,"ens192"/10.10.10.0:asm|cluster_interconnect'
2023-08-14 19:42:38: iflist: "ens160"/172.16.220.0:public
"ens192"/10.10.10.0:asm|cluster_interconnect
2023-08-14 19:42:38: idef="ens160"/172.16.220.0:public
2023-08-14 19:42:38: 1 => 'ens160','172.16.220.0','public'
2023-08-14 19:42:38: idef="ens192"/10.10.10.0:asm|cluster_interconnect
2023-08-14 19:42:38: 2 => 'ens192','10.10.10.0','asm,cluster_interconnect'
2023-08-14 19:42:38: gpnptool pars: -hnet=gen -gen:hnet_nm="*" -gen:net=net1 -net1:net_ip="172.16.220.0" -net1:net_ada="ens160" -net1:net_use="public" -gen:net=net2 -net2:net_ip="10.10.10.0" -net2:net_ada="ens192" -net2:net_use="asm,cluster_interconnect"
2023-08-14 19:42:38: OCRID is not available, hence not set in GPnP Profile
...
2023-08-14 19:42:39: GPnP peer profile create successfully completed.
2023-08-14 19:42:39: <--- GPnP peer profile successfully created
2023-08-14 19:42:39: GPnP local setup successfully created

GPnP is set up: wallets and certificates are created, and the peer profile (profile.xml) is generated from the network definitions.
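
Once created, the profile in effect can be dumped with gpnptool for a quick check (not part of root.sh itself):

# Print the current GPnP profile, including the network definitions above.
/u01/app/19.0.0/grid/bin/gpnptool get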

2023-08-14 19:42:57: Executing the step [ocr_ConfigFirstNode_step_1] to configure OCR on the first node
2023-08-14 19:42:57: Use DG name as the OCR location
2023-08-14 19:42:57: OCR locations: +OCR
2023-08-14 19:42:57: Oracle CRS home = /u01/app/19.0.0/grid
2023-08-14 19:42:57: Oracle CRS home = /u01/app/19.0.0/grid
2023-08-14 19:42:57: Oracle cluster name = rac-cluster
2023-08-14 19:42:57: OCR locations = +OCR
2023-08-14 19:42:57: Validating OCR
2023-08-14 19:42:57: Retrieving OCR location used by previous installations
2023-08-14 19:42:57: Opening file /etc/oracle/ocr.loc
2023-08-14 19:42:57: Value () is set for key=ocrconfig_loc
2023-08-14 19:42:57: Opening file /etc/oracle/ocr.loc
2023-08-14 19:42:57: Value () is set for key=ocrmirrorconfig_loc
2023-08-14 19:42:57: Opening file /etc/oracle/ocr.loc
2023-08-14 19:42:57: Value () is set for key=ocrconfig_loc3
2023-08-14 19:42:57: Opening file /etc/oracle/ocr.loc
2023-08-14 19:42:57: Value () is set for key=ocrconfig_loc4
2023-08-14 19:42:57: Opening file /etc/oracle/ocr.loc
2023-08-14 19:42:57: Value () is set for key=ocrconfig_loc5
2023-08-14 19:42:57: Checking if OCR sync file exists
...
2023-08-14 19:42:57: Verifying current OCR settings with user entered values
2023-08-14 19:42:57: Setting OCR locations in /etc/oracle/ocr.loc
2023-08-14 19:42:57: Validating OCR locations in /etc/oracle/ocr.loc
2023-08-14 19:42:57: Checking for existence of /etc/oracle/ocr.loc
2023-08-14 19:42:57: Backing up /etc/oracle/ocr.loc to /etc/oracle/ocr.loc.orig
2023-08-14 19:42:57: Setting ocr location +OCR
2023-08-14 19:42:57: set local_only=FALSE

The OCR is configured on node 1; the log also reveals the configuration keys kept in ocr.loc.
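
Per the log above (ocrconfig_loc set to +OCR, local_only=FALSE), the resulting /etc/oracle/ocr.loc looks roughly like this:

ocrconfig_loc=+OCR
local_only=FALSE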

2023-08-14 19:42:57: Executing the step [olr_ConfigCurrentNode_step_1] to configure OLR on the first node
2023-08-14 19:42:57: unlink /u01/app/grid/crsdata/rac01/olr/rac01_19.olr
2023-08-14 19:42:57: OLR location = /u01/app/grid/crsdata/rac01/olr/rac01_19.olr
2023-08-14 19:42:57: Oracle CRS Home = /u01/app/19.0.0/grid
2023-08-14 19:42:57: Validating /etc/oracle/olr.loc file for OLR location /u01/app/grid/crsdata/rac01/olr/rac01_19.olr
2023-08-14 19:42:57: /etc/oracle/olr.loc already exists. Backing up /etc/oracle/olr.loc to /etc/oracle/olr.loc.orig
2023-08-14 19:42:57: Done setting permissions on file /etc/oracle/olr.loc
2023-08-14 19:42:57: Validating for SI-CSS configuration
2023-08-14 19:42:57: Retrieving OCR main disk location
2023-08-14 19:42:57: Opening file /etc/oracle/ocr.loc
2023-08-14 19:42:57: Value (+OCR) is set for key=ocrconfig_loc
2023-08-14 19:42:57: Executing the step [olr_ConfigCurrentNode_step_2] to configure OLR on the first node
2023-08-14 19:42:57: Creating or upgrading Oracle Local Registry (OLR)
2023-08-14 19:42:57: Executing /u01/app/19.0.0/grid/bin/ocrconfig -local -upgrade grid oinstall
2023-08-14 19:42:57: Executing cmd: /u01/app/19.0.0/grid/bin/ocrconfig -local -upgrade grid oinstall
2023-08-14 19:42:57: OLR successfully created or upgraded

The local OLR is configured.
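
The companion /etc/oracle/olr.loc then points at the per-node OLR file named in the log; with this installation's paths it would contain:

olrconfig_loc=/u01/app/grid/crsdata/rac01/olr/rac01_19.olr
crs_home=/u01/app/19.0.0/grid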

>  CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
>End Command output
2023-08-14 19:42:54: CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2023-08-14 19:42:54: isZlinuxSeries: 0
2023-08-14 19:42:54: output of ip command for local node: default via 172.16.220.1 dev ens160 proto static metric 100
 default via 10.10.10.1 dev ens192 proto static metric 101
 10.10.10.0/24 dev ens192 proto kernel scope link src 10.10.10.51 metric 101
 172.16.220.0/24 dev ens160 proto kernel scope link src 172.16.220.51 metric 100
 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

2023-08-14 19:42:54: Getting the private ips of all nodes being configured
2023-08-14 19:42:54: The private ips of the local node is available locally. Skip the local node rac01
2023-08-14 19:42:54: Getting private ip addresses from node rac02
2023-08-14 19:42:54: Executing command: /usr/bin/ssh grid@rac02 "(/usr/sbin/ip route list)"
2023-08-14 19:42:54: Running as user grid: /usr/bin/ssh grid@rac02 "(/usr/sbin/ip route list)"
2023-08-14 19:42:54: Removing file /tmp/cXs5bFNsUo
2023-08-14 19:42:54: Successfully removed file: /tmp/cXs5bFNsUo
2023-08-14 19:42:54: pipe exit code: 0
2023-08-14 19:42:54: /bin/su successfully executed

2023-08-14 19:42:54: Status: 0 Command output: default via 172.16.220.1 dev ens160 proto static metric 100
 default via 10.10.10.1 dev ens192 proto static metric 101
 10.10.10.0/24 dev ens192 proto kernel scope link src 10.10.10.53 metric 101
 172.16.220.0/24 dev ens160 proto kernel scope link src 172.16.220.53 metric 100
 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

2023-08-14 19:42:54: output of ip command for node rac02: default via 172.16.220.1 dev ens160 proto static metric 100
 default via 10.10.10.1 dev ens192 proto static metric 101
 10.10.10.0/24 dev ens192 proto kernel scope link src 10.10.10.53 metric 101
 172.16.220.0/24 dev ens160 proto kernel scope link src 172.16.220.53 metric 100
 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

Private-network information for every node is gathered via ip route list (run locally on rac01 and over ssh on rac02).

2023-08-14 19:43:11: init file = /u01/app/19.0.0/grid/crs/init/init.ohasd.sles
2023-08-14 19:43:11: Copying file /u01/app/19.0.0/grid/crs/init/init.ohasd.sles to /etc/init.d directory
2023-08-14 19:43:11: Setting init.ohasd permission in /etc/init.d directory
2023-08-14 19:43:11: init file = /u01/app/19.0.0/grid/crs/init/ohasd.sles
2023-08-14 19:43:11: Copying file /u01/app/19.0.0/grid/crs/init/ohasd.sles to /etc/init.d directory
2023-08-14 19:43:11: Setting ohasd permission in /etc/init.d directory
2023-08-14 19:43:11: Removing "/etc/rc.d/rc3.d/S96ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc3.d/S96ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: Removing "/etc/rc.d/rc5.d/S96ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc5.d/S96ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: Removing "/etc/rc.d/rc0.d/K15ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc0.d/K15ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: Removing "/etc/rc.d/rc1.d/K15ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc1.d/K15ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: Removing "/etc/rc.d/rc2.d/K15ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc2.d/K15ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: Removing "/etc/rc.d/rc4.d/K15ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc4.d/K15ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: Removing "/etc/rc.d/rc6.d/K15ohasd"
2023-08-14 19:43:11: Creating a link "/etc/rc.d/rc6.d/K15ohasd" pointing to /etc/init.d/ohasd
2023-08-14 19:43:11: The file ohasd has been successfully linked to the RC directories
2023-08-14 19:43:11: Invoking "/u01/app/19.0.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/grid -writeckpt -name ROOTCRS_OHASD -state SUCCESS"

OHASD is configured: the init scripts are copied into /etc/init.d and linked into the runlevel (rc*.d) directories.
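
Since the CLSRSC-330 message earlier mentions oracle-ohasd.service, on a systemd host the resulting bootstrap unit can be checked along these lines:

# Confirm the ohasd bootstrap service is registered and running.
systemctl status oracle-ohasd.service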

2023-08-14 19:43:28: Registering resource type 'ora.haip.type'
2023-08-14 19:43:28: Executing /u01/app/19.0.0/grid/bin/crsctl add type ora.haip.type -basetype cluster_resource -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.haip.type -init
2023-08-14 19:43:28: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add type ora.haip.type -basetype cluster_resource -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.haip.type -init
2023-08-14 19:43:28: Type 'ora.haip.type' added successfully
2023-08-14 19:43:28: Registering resource type 'ora.evm.type'
2023-08-14 19:43:28: Executing /u01/app/19.0.0/grid/bin/crsctl add type ora.evm.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.evm.type -init
2023-08-14 19:43:28: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add type ora.evm.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.evm.type -init
2023-08-14 19:43:28: Type 'ora.evm.type' added successfully
2023-08-14 19:43:28: Registering resource type 'ora.mdns.type'
2023-08-14 19:43:28: Executing /u01/app/19.0.0/grid/bin/crsctl add type ora.mdns.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.mdns.type -init
2023-08-14 19:43:28: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add type ora.mdns.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.mdns.type -init
2023-08-14 19:43:28: Type 'ora.mdns.type' added successfully
2023-08-14 19:43:28: Registering resource type 'ora.gpnp.type'
2023-08-14 19:43:28: Executing /u01/app/19.0.0/grid/bin/crsctl add type ora.gpnp.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.gpnp.type -init
2023-08-14 19:43:28: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add type ora.gpnp.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.gpnp.type -init
2023-08-14 19:43:29: Type 'ora.gpnp.type' added successfully
2023-08-14 19:43:29: Registering resource type 'ora.gipc.type'
2023-08-14 19:43:29: Executing /u01/app/19.0.0/grid/bin/crsctl add type ora.gipc.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.gipc.type -init
2023-08-14 19:43:29: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add type ora.gipc.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.gipc.type -init
2023-08-14 19:43:29: Type 'ora.gipc.type' added successfully
2023-08-14 19:43:29: Registering resource type 'ora.cssd.type'
2023-08-14 19:43:29: Executing /u01/app/19.0.0/grid/bin/crsctl add type ora.cssd.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.cssd.type -init
2023-08-14 19:43:29: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add type ora.cssd.type -basetype ora.daemon.type -file /u01/app/19.0.0/grid/log/rac01/ohasd/ora.cssd.type -init

The OHASD resource types are registered (ora.haip.type, ora.evm.type, ora.mdns.type, ora.gpnp.type, ora.gipc.type, ora.cssd.type, and so on).
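
The types and the lower-stack resources built from them can be listed afterwards with crsctl (the -init flag targets the OHASD stack that this part of the log is operating on):

# Show the OHASD-managed (init) resources created from these types.
/u01/app/19.0.0/grid/bin/crsctl stat res -init -t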

>  CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
>End Command output
2023-08-14 19:43:52: CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2023-08-14 19:43:52: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl stop crs  -f
2023-08-14 19:43:52: Command output:
>  CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac01'
>  CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac01' has completed
>  CRS-4133: Oracle High Availability Services has been stopped.
...
2023-08-14 19:43:55: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl start crs -noautostart
...
2023-08-14 19:44:01: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl start crs -excl -cssonly
...
>  [DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-230814PM074424.log for details.
...

Initialization: the stack is stopped with crsctl stop crs -f, restarted with -noautostart and then in exclusive CSS-only mode, and the ASM disk groups are created.

2023-08-14 19:49:59: Executing cmd: /u01/app/19.0.0/grid/bin/clsecho -p has -f clsrsc -m 594 '19' '19' 'PostConfig'
2023-08-14 19:49:59: Command output:
>  CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
>End Command output
2023-08-14 19:49:59: CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2023-08-14 19:49:59: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add credmaint -path ASM -local
2023-08-14 19:49:59: Command output:
>  CRS-10405: (:CLSCRED0006:)Credential domain already exists.
>  CRS-4000: Command Add failed, or completed with errors.
>End Command output
2023-08-14 19:49:59: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl setperm credmaint -o grid -path ASM -local
2023-08-14 19:49:59: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl add credmaint -path GRIDHOME
2023-08-14 19:49:59: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl setperm credmaint -o grid -path GRIDHOME
2023-08-14 19:49:59: Check whether the backup disk group has been created
2023-08-14 19:49:59: Executing cmd: /u01/app/19.0.0/grid/bin/crsctl query crs releaseversion

Cluster post-configuration: credential domains are created and granted, the backup disk group is checked, and the release version is queried.

2023-08-14 19:50:14: best gpnp directory in home "/u01/app/19.0.0/grid/gpnp" is "rac01"  new seq=5 for cname=rac-cluster, cguid=
2023-08-14 19:50:14: Best gpnp node configuration is "rac01"
2023-08-14 19:50:14: Creating backup directory "/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014"
2023-08-14 19:50:14: Saving old cluster-wide stage in "/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01"
2023-08-14 19:50:14: source dir=/u01/app/19.0.0/grid/gpnp/wallets
2023-08-14 19:50:14: dest   dir=/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets
2023-08-14 19:50:14: creating directory /u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/root
2023-08-14 19:50:14: copying file=ewallet.p12
2023-08-14 19:50:14: source file path=/u01/app/19.0.0/grid/gpnp/wallets/root/ewallet.p12
2023-08-14 19:50:14: dest file path=/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/root/ewallet.p12
2023-08-14 19:50:14:   copy "/u01/app/19.0.0/grid/gpnp/wallets/root/ewallet.p12" => "/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/root/ewallet.p12"
2023-08-14 19:50:14:   set ownership on "/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/root/ewallet.p12" => (grid,oinstall)
2023-08-14 19:50:14: creating directory /u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/pa
2023-08-14 19:50:14: copying file=cwallet.sso
2023-08-14 19:50:14: source file path=/u01/app/19.0.0/grid/gpnp/wallets/pa/cwallet.sso
2023-08-14 19:50:14: dest file path=/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/pa/cwallet.sso
2023-08-14 19:50:14:   copy "/u01/app/19.0.0/grid/gpnp/wallets/pa/cwallet.sso" => "/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/pa/cwallet.sso"
2023-08-14 19:50:14:   set ownership on "/u01/app/19.0.0/grid/gpnp/gpnp_bcp__2023_8_14_195014/stg__rac01/wallets/pa/cwallet.sso" => (grid,oinstall)
2023-08-14 19:50:14: copying file=cwallet.sso.lck
2023-08-14 19:50:14: source file path=/u01/app/19.0.0/grid/gpnp/wallets/pa/cwallet.sso.lck

The GPnP configuration (wallets and profile) is backed up cluster-wide into a gpnp_bcp directory.

>  CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
>End Command output
2023-08-14 19:50:31: CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Configuration complete.
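
At this point the usual sanity checks can be run from the Grid home (standard crsctl commands, shown here as a quick post-install verification):

# Verify stack health on all nodes, then list the cluster resources.
/u01/app/19.0.0/grid/bin/crsctl check cluster -all
/u01/app/19.0.0/grid/bin/crsctl stat res -t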
