
Zero-Downtime Patching of Oracle Grid Infrastructure 19c on Linux

Original article by 许玉冲, 2023-10-07

In the past, when patching Grid Infrastructure (GI) in rolling mode, the database instances on the node being patched had to be shut down.


Starting with Oracle Database 19c Release Update (RU) 19.8, Oracle RAC database instances can keep running and remain accessible to database users during Oracle Grid Infrastructure patching. My patching exercise was done in the following environment:


   * Two-node Grid Infrastructure 19.8 cluster running on Oracle Linux 7 Update 8

   * Host names of the nodes are rac01.lab.dbaplus.ca and rac02.lab.dbaplus.ca

   * GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 are applied in out-of-place (OOP) mode

   * ACFS and AFD are in use
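
Before starting, it can help to record the currently active GI version and patch level so they can be compared after the out-of-place patching. An optional pre-check from the existing 19.8 home (the paths below match my environment):

[grid@rac01]$ /u01/app/19.8.0/grid/bin/crsctl query crs activeversion -f

[grid@rac01]$ /u01/app/19.8.0/grid/OPatch/opatch lspatches -oh /u01/app/19.8.0/grid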

   

1. Create new Oracle Grid Infrastructure (GI) 19c home and prepare Oracle GI Release Update (RU) 19.9 for patching


As the root user on all nodes, create the directories for the new GI home

[root@rac01]# mkdir -p /u01/app/19.9.0/grid

[root@rac01]# chown -R grid:oinstall /u01/app/19.9.0


As the grid user on the first node, download the Oracle Grid Infrastructure 19c image file and extract it into the newly created GI home

[grid@rac01]$ cd /u01/app/19.9.0/grid

[grid@rac01]$ unzip -q /u01/media/LINUX.X64_193000_grid_home.zip


As the grid user on the first node, download and install the latest version of OPatch (12.2.0.1.21) into the new GI home

[grid@rac01]$ cd /u01/app/19.9.0/grid

[grid@rac01]$ mv OPatch OPatch.old

[grid@rac01]$ unzip -q /u01/media/p6880880_122010_Linux-x86-64_12.2.0.1.21.zip
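
To confirm that the new OPatch is the one picked up in the new home, its version can be checked (an optional verification):

[grid@rac01]$ ./OPatch/opatch version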


As the grid user on the first node, download Oracle GI RU 19.9.0.0.201020 and extract the files into a staging/temporary directory

[grid@rac01]$ mkdir /u01/stage/RU

[grid@rac01]$ cd /u01/stage/RU

[grid@rac01]$ unzip -q /u01/media/p31750108_190000_Linux-x86-64.zip


As the grid user on the first node, download Oracle JavaVM Component Release Update (OJVM RU) 19.9.0.0.201020 and extract the files into a staging/temporary directory

[grid@rac01]$ mkdir /u01/stage/OJVM

[grid@rac01]$ cd /u01/stage/OJVM

[grid@rac01]$ unzip -q /u01/media/p31668882_190000_Linux-x86-64.zip
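
Optionally, before running gridSetup.sh with -applyRU/-applyOneOffs, the extracted OJVM one-off can be checked for conflicts against the new (still unconfigured) home. A hedged example; for the GI RU itself, the patch README describes the per-sub-patch checks:

[grid@rac01]$ cd /u01/app/19.9.0/grid

[grid@rac01]$ ./OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/stage/OJVM/31668882 -oh /u01/app/19.9.0/grid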


2. Perform a software-only GI installation with the RU applied


The GI software-only installation is optional; it minimizes the waiting time during the maintenance window because the new, patched home is prepared in advance.


Starting with Oracle Grid Infrastructure 18c, you can download and apply Release Updates (RUs) and one-off patches during an Oracle GI installation or upgrade with the following gridSetup.sh command options:


   gridSetup.sh -applyRU <patch_directory_location> -applyOneOffs <comma_separated_list_of_patch_directory_locations>


As the grid user on the first node, start the GI installation and apply the GI RU and the OJVM RU (as a one-off patch)

[grid@rac01]$ cd /u01/app/19.9.0/grid

[grid@rac01]$ ./gridSetup.sh -applyRU /u01/stage/RU/31750108 -applyOneOffs /u01/stage/OJVM/31668882

Preparing the home to patch...

Applying the patch /u01/stage/RU/31750108...

Successfully applied the patch.

Applying the patch /u01/stage/OJVM/31668882...

Successfully applied the patch.

The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2020-11-04_11-33-11AM/installerPatchActions_2020-11-04_11-33-11AM.log

Launching Oracle Grid Infrastructure Setup Wizard...


The response file for this session can be found at:

 /u01/app/19.9.0/grid/install/response/grid_2020-11-04_11-33-11AM.rsp


You can find the log of this install session at:

 /u01/app/oraInventory/logs/GridSetupActions2020-11-04_11-33-11AM/gridSetupActions2020-11-04_11-33-11AM.log


After successfully applying GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020, gridSetup.sh launches the graphical installation interface. Respond to the prompts as follows:


  * In the 'Select Configuration Option' screen, select the 'Set Up Software Only' option to perform a software-only installation of Oracle Grid Infrastructure. Click Next.

  * In the 'Cluster Node Information' screen, click the Add button to add the public host names of all cluster nodes (rac01.lab.dbaplus.ca and rac02.lab.dbaplus.ca). Click Next.

  * Respond to the prompts as needed to set up Oracle Grid Infrastructure

  * The Oracle Grid Infrastructure setup wizard prompts you to run the root.sh script [on each node].


Example of root.sh execution

[root@rac01]# /u01/app/19.9.0/grid/root.sh

Performing root user operation.


The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.9.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.


Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.


To configure Grid Infrastructure for a Cluster execute the following command as grid user:

/u01/app/19.9.0/grid/gridSetup.sh

This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.


The installer installed the new GI on all nodes, and the new GI home now has GI RU 19.9.0.0.201020 and OJVM RU 19.9.0.0.201020 applied.
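
Before switching to the new home, you can confirm that both patches are present in its inventory (an optional check; the listing should include the OJVM patch 31668882 and the sub-patches of GI RU 31750108):

[grid@rac01]$ /u01/app/19.9.0/grid/OPatch/opatch lspatches -oh /u01/app/19.9.0/grid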


3. Switch the GI home without shutting down database instances


As the grid user on the first node, execute gridSetup.sh with the following option to switch the GI home:


   gridSetup.sh -switchGridHome


If you did not perform the software-only installation (step 2), run gridSetup.sh with the following options to switch the GI home and apply the patches in one pass:


   gridSetup.sh -switchGridHome -applyRU <patch_directory_location> -applyOneOffs <comma_separated_list_of_patch_directory_locations>
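
For example, with the staging directories used earlier, that combined invocation would look like this (not used in this walkthrough, because the RU and the OJVM patch were already applied during the software-only installation in step 2):

[grid@rac01]$ /u01/app/19.9.0/grid/gridSetup.sh -switchGridHome -applyRU /u01/stage/RU/31750108 -applyOneOffs /u01/stage/OJVM/31668882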


Switch the GI home to the new patched home

[grid@rac01]$ /u01/app/19.9.0/grid/gridSetup.sh -switchGridHome

Launching Oracle Grid Infrastructure Setup Wizard...


You can find the log of this install session at:

 /u01/app/oraInventory/logs/cloneActions2020-11-05_11-05-00AM.log


Follow the steps in the configuration wizard to complete the Oracle Grid Infrastructure installation.


Note: During configuration, do NOT select the option to Automatically run configuration scripts.

   

When prompted to run root.sh, run it on the first node with the -transparent and -nodriverupdate flags, as shown below.


To watch the progress of the zero-downtime patching, open two new terminals before executing root.sh, as follows:


  * Terminal 1 (T1): Connect to the first node rac01 as the grid user. While root.sh is running on the first node, keep running the following commands to monitor the Oracle background processes and database instances


      ps -ef | grep 'd.bin' | grep -v grep

      ps -ef | grep pmon | grep -v grep


    The root.sh script takes a while to complete. To keep running the above commands, I created a script, process.sh, that repeats them every 2 seconds; it can be terminated by pressing Ctrl+C, or it exits on its own after about one hour (it should not take that long; root.sh took me about 20 minutes). The source code of the script can be found at the bottom of this post.


  * Terminal 2 (T2): Connect to the second node rac02 as the grid user and log into the ASM instance using sqlplus. While root.sh is running on the first node, keep running the following SQL statement to monitor the ASM clients connected to the ASM instance (+ASM2) running on the second node rac02:


      select instance_name,count(*) from v$asm_client group by instance_name order by 1;


    I created another script, asmclient.sh, to run this SQL every second; its source code is also at the bottom of this post.


3.1 Run root.sh on the first node


On the first node, as the root user, run 'root.sh -transparent -nodriverupdate'

[root@rac01]# /u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate

Performing root user operation.


The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.9.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]: 

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.


Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

LD_LIBRARY_PATH='/u01/app/19.8.0/grid/lib:/u01/app/19.9.0/grid/lib:'

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2020-11-05_11-14-18AM.log

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac01/crsconfig/rootcrs_rac01_2020-11-05_11-14-18AM.log

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac01/crsconfig/crs_prepatch_apply_oop_rac01_2020-11-05_11-14-18AM.log

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac01/crsconfig/crs_prepatch_apply_oop_rac01_2020-11-05_11-14-18AM.log

2020/11/05 11:14:26 CLSRSC-347: Successfully unlock /u01/app/19.9.0/grid

2020/11/05 11:14:27 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac01/crsconfig/crs_postpatch_apply_oop_rac01_2020-11-05_11-14-27AM.log

Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [481622232].

2020/11/05 11:14:44 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'

2020/11/05 11:27:19 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [481622232].

2020/11/05 11:28:46 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2020/11/05 11:28:47 CLSRSC-672: Post-patch steps for patching GI home successfully completed.


On T1, start process.sh just before (or while) starting root.sh on the first node; the script will keep running for about one hour unless you press Ctrl+C. The output is as follows:

================== Before root.sh started =====================

root     27131     1  0 Sep10 ?        05:56:43 /u01/app/19.8.0/grid/bin/ohasd.bin reboot _ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8

grid     27478     1  0 Sep10 ?        01:53:53 /u01/app/19.8.0/grid/bin/mdnsd.bin

grid     27480     1  0 Sep10 ?        04:55:13 /u01/app/19.8.0/grid/bin/evmd.bin

grid     27520     1  0 Sep10 ?        02:09:43 /u01/app/19.8.0/grid/bin/gpnpd.bin

grid     27584     1  0 Sep10 ?        05:54:32 /u01/app/19.8.0/grid/bin/gipcd.bin

root     27640     1  4 Sep10 ?        2-16:05:32 /u01/app/19.8.0/grid/bin/osysmond.bin

grid     27671     1  0 Sep10 ?        06:40:33 /u01/app/19.8.0/grid/bin/ocssd.bin

root     28227     1  0 Sep10 ?        04:58:39 /u01/app/19.8.0/grid/bin/octssd.bin reboot

root     28296     1  0 Sep10 ?        05:15:10 /u01/app/19.8.0/grid/bin/crsd.bin reboot


oracle     902     1  0 Sep25 ?        00:06:07 ora_pmon_DB19C01_2

oracle   19865     1  0 Sep17 ?        00:07:23 ora_pmon_DB19C02_2

oracle   22872     1  0 Sep10 ?        00:07:45 ora_pmon_DB12C01_2

grid     28754     1  0 Sep10 ?        00:04:27 asm_pmon_+ASM1

grid     28997     1  0 Sep10 ?        00:03:55 apx_pmon_+APX1

grid     30193     1  0 Sep10 ?        00:04:03 mdb_pmon_-MGMTDB


================== After root.sh started and ran a while =====================

root     21092     1  0 11:27 ?        00:00:00 /u01/app/grid/crsdata/rac01/csswd/oracsswd.bin

-----------------------------------------------

oracle     902     1  0 Sep25 ?        00:06:07 ora_pmon_DB19C01_2

oracle   19865     1  0 Sep17 ?        00:07:23 ora_pmon_DB19C02_2


================== After root.sh completed running =====================

root     21586     1  5 11:27 ?        00:00:03 /u01/app/19.9.0/grid/bin/ohasd.bin reboot CRS_AUX_DATA=CRS_AUXD_TGIP=yes;_ORA_BLOCKING_STACK_LOCALE=AMERICAN_AMERICA.AL32UTF8

grid     21764     1  0 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/gpnpd.bin

grid     21766     1  0 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/mdnsd.bin

grid     21768     1  2 11:27 ?        00:00:01 /u01/app/19.9.0/grid/bin/evmd.bin

grid     21945     1  1 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/gipcd.bin

grid     22064     1  4 11:27 ?        00:00:02 /u01/app/19.9.0/grid/bin/ocssd.bin -P

root     22227     1  0 11:27 ?        00:00:00 /u01/app/19.9.0/grid/bin/octssd.bin reboot

root     22300     1  8 11:27 ?        00:00:03 /u01/app/19.9.0/grid/bin/crsd.bin reboot

root     22319     1  4 11:27 ?        00:00:01 /u01/app/19.9.0/grid/bin/osysmond.bin

-----------------------------------------------

oracle     902     1  0 Sep25 ?        00:06:07 ora_pmon_DB19C01_2

oracle   19865     1  0 Sep17 ?        00:07:23 ora_pmon_DB19C02_2

grid     22748     1  0 11:27 ?        00:00:00 apx_pmon_+APX1

grid     23220     1  0 11:28 ?        00:00:00 asm_pmon_+ASM1

oracle   23546     1  0 11:28 ?        00:00:00 ora_pmon_DB12C01_2


The root.sh script shuts down the old-version database instance (DB12C01 is version 12.2.0.1, so it cannot stay up during the switch), the ASM instance, and the CRS stack running from the old GI home, and starts a watchdog process (oracsswd.bin) as a dummy CRS service from the crsdata directory (neither the old GI home nor the new GI home). It then brings up the CRS stack from the new GI home and stops the watchdog process. Finally, the ASM instance and the previously stopped database instances are brought back up.


How do the running 19c database instances access ASM storage while the ASM instance is down on the local node? The answer is that the ASM instance on the remote node takes over serving them.
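
This hand-off relies on Oracle Flex ASM, which lets a database instance connect to an ASM instance on another node. If you want to confirm it is enabled in your own cluster before patching, checks like the following can be used (run as the grid user with the ASM environment set; these commands are not part of the original procedure):

[grid@rac02]$ asmcmd showclustermode

[grid@rac02]$ srvctl config asm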


On T2, start asmclient.sh just before (or while) starting root.sh on the first node; the script will keep running for about one hour unless you press Ctrl+C. The output is as follows:

================== Before root.sh started on first node =====================

  +APX2         1

  +ASM2         1

  DB19C01_1     4

  DB12C01_1     4

  DB19C02_1     4

  rac02.lab.dbaplus.ca      1


================== After root.sh started and ASM instance is being shut down on first node =====================

  +APX2         1

  +ASM2         1

  DB19C01_1     4

  DB19C01_2     4

  DB12C01_1     4

  DB19C02_1     4

  DB19C02_2     4

  rac02.lab.dbaplus.ca      1


When the ASM instance (+ASM1) is shut down on the first node (rac01), the 19c database instances running on that node (DB19C01_2, DB19C02_2) are redirected to the ASM instance (+ASM2) running on the remote node (rac02). That is why the database instances can still access ASM storage while the local ASM instance is down.


3.2 Run root.sh on the second node (and on the remaining nodes for clusters with more than two nodes)


On the second node, as the root user, run 'root.sh -transparent -nodriverupdate'

[root@rac02]# /u01/app/19.9.0/grid/root.sh -transparent -nodriverupdate

Performing root user operation.


The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/19.9.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]: 

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.


Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

LD_LIBRARY_PATH='/u01/app/19.0.0/grid/lib:/u01/app/19.9.0/grid/lib:'

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac02/crsconfig/rootcrs_rac02_2020-11-05_12-33-56AM.log

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac02/crsconfig/rootcrs_rac02_2020-11-05_12-33-56AM.log

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_12-33-57AM.log

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac02/crsconfig/crs_prepatch_apply_oop_rac02_2020-11-05_12-33-57AM.log

2020/11/05 12:34:03 CLSRSC-347: Successfully unlock /u01/app/19.9.0/grid

2020/11/05 12:34:04 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.

Using configuration parameter file: /u01/app/19.9.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac02/crsconfig/crs_postpatch_apply_oop_rac02_2020-11-05_12-34-04AM.log

Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [481622232].

2020/11/05 12:34:17 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'

2020/11/05 12:47:01 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2862754030].

SQL Patching tool version 19.9.0.0.0 Production on Thu Nov  5 12:49:37 2020

Copyright (c) 2012, 2020, Oracle.  All rights reserved.


Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_21650_2020_11_05_12_49_37/sqlpatch_invocation.log


Connecting to database...OK

Gathering database info...done


Note:  Datapatch will only apply or rollback SQL fixes for PDBs

       that are in an open state, no patches will be applied to closed PDBs.

       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation

       (Doc ID 1585822.1)


Bootstrapping registry and package to current versions...done

Determining current state...done


Current state of interim SQL patches:

Interim patch 31219897 (OJVM RELEASE UPDATE: 19.8.0.0.200714 (31219897)):

  Binary registry: Not installed

  PDB CDB$ROOT: Applied successfully on 10-SEP-20 07.05.22.344726 PM

  PDB GIMR_DSCREP_10: Applied successfully on 10-SEP-20 07.05.26.197751 PM

  PDB PDB$SEED: Applied successfully on 10-SEP-20 07.05.24.262950 PM

Interim patch 31668882 (OJVM RELEASE UPDATE: 19.9.0.0.201020 (31668882)):

  Binary registry: Installed

  PDB CDB$ROOT: Not installed

  PDB GIMR_DSCREP_10: Not installed

  PDB PDB$SEED: Not installed


Current state of release update SQL patches:

  Binary registry:

    19.9.0.0.0 Release_Update 200930183249: Installed

  PDB CDB$ROOT:

    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 10-SEP-20 07.05.22.342186 PM

  PDB GIMR_DSCREP_10:

    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 10-SEP-20 07.05.26.195906 PM

  PDB PDB$SEED:

    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 10-SEP-20 07.05.24.260687 PM


Adding patches to installation queue and performing prereq checks...done

Installation queue:

  For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10

    The following interim patches will be rolled back:

      31219897 (OJVM RELEASE UPDATE: 19.8.0.0.200714 (31219897))

    Patch 31771877 (Database Release Update : 19.9.0.0.201020 (31771877)):

      Apply from 19.8.0.0.0 Release_Update 200703031501 to 19.9.0.0.0 Release_Update 200930183249

    The following interim patches will be applied:

      31668882 (OJVM RELEASE UPDATE: 19.9.0.0.201020 (31668882))


Installing patches...

Patch installation complete.  Total patches installed: 9


Validating logfiles...done

Patch 31219897 rollback (pdb CDB$ROOT): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31219897/23619699/31219897_rollback__MGMTDB_CDBROOT_2020Nov05_12_50_01.log (no errors)

Patch 31771877 apply (pdb CDB$ROOT): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31771877/23869227/31771877_apply__MGMTDB_CDBROOT_2020Nov05_12_50_01.log (no errors)

Patch 31668882 apply (pdb CDB$ROOT): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31668882/23790068/31668882_apply__MGMTDB_CDBROOT_2020Nov05_12_50_01.log (no errors)

Patch 31219897 rollback (pdb PDB$SEED): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31219897/23619699/31219897_rollback__MGMTDB_PDBSEED_2020Nov05_12_50_40.log (no errors)

Patch 31771877 apply (pdb PDB$SEED): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31771877/23869227/31771877_apply__MGMTDB_PDBSEED_2020Nov05_12_50_40.log (no errors)

Patch 31668882 apply (pdb PDB$SEED): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31668882/23790068/31668882_apply__MGMTDB_PDBSEED_2020Nov05_12_50_40.log (no errors)

Patch 31219897 rollback (pdb GIMR_DSCREP_10): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31219897/23619699/31219897_rollback__MGMTDB_GIMR_DSCREP_10_2020Nov05_12_50_39.log (no errors)

Patch 31771877 apply (pdb GIMR_DSCREP_10): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31771877/23869227/31771877_apply__MGMTDB_GIMR_DSCREP_10_2020Nov05_12_50_40.log (no errors)

Patch 31668882 apply (pdb GIMR_DSCREP_10): SUCCESS

  logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31668882/23790068/31668882_apply__MGMTDB_GIMR_DSCREP_10_2020Nov05_12_50_40.log (no errors)

SQL Patching tool complete on Thu Nov  5 12:51:05 2020

2020/11/05 12:51:31 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2020/11/05 12:51:33 CLSRSC-672: Post-patch steps for patching GI home successfully completed.


After root.sh has run successfully on all nodes, all Oracle Grid Infrastructure services run from the new Grid home.
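
The new cluster patch level reported by root.sh on the last node (2862754030, upgrade state NORMAL) can also be verified with crsctl (an optional check):

[grid@rac02]$ /u01/app/19.9.0/grid/bin/crsctl query crs activeversion -f

[grid@rac02]$ /u01/app/19.9.0/grid/bin/crsctl query crs softwarepatch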


Zero-downtime GI patching would be complete at this point if no ACFS file system existed and ASM storage did not use AFD. Unfortunately, both are in use here, so the next step is required.
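
If you are not sure whether ACFS or AFD is in use in your own environment, quick checks like these can help decide whether step 4 applies (run as the grid user; not part of the original write-up):

[grid@rac02]$ asmcmd afd_state

[grid@rac02]$ acfsutil registry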


4. Update operating system drivers for ACFS and/or AFD


As the grid user, validate the status of the Oracle drivers (ACFS and AFD) with 'crsctl query driver'

[grid@rac02]$ /u01/app/19.9.0/grid/bin/crsctl query driver activeversion -all

Node Name : rac01

Driver Name : ACFS

BuildNumber : 200626

BuildVersion : 19.0.0.0.0 (19.8.0.0.0)


Node Name : rac01

Driver Name : AFD

BuildNumber : 200626

BuildVersion : 19.0.0.0.0 (19.8.0.0.0)


Node Name : rac02

Driver Name : ACFS

BuildNumber : 200626

BuildVersion : 19.0.0.0.0 (19.8.0.0.0)


Node Name : rac02

Driver Name : AFD

BuildNumber : 200626

BuildVersion : 19.0.0.0.0 (19.8.0.0.0)


[grid@rac02]$ /u01/app/19.9.0/grid/bin/crsctl query driver softwareversion -all

Node Name : rac01

Driver Name : ACFS

BuildNumber : 200813.1

BuildVersion : 19.0.0.0.0 (19.9.0.0.0)


Node Name : rac01

Driver Name : AFD

BuildNumber : 200813.1

BuildVersion : 19.0.0.0.0 (19.9.0.0.0)


Node Name : rac02

Driver Name : ACFS

BuildNumber : 200813.1

BuildVersion : 19.0.0.0.0 (19.9.0.0.0)


Node Name : rac02

Driver Name : AFD

BuildNumber : 200813.1

BuildVersion : 19.0.0.0.0 (19.9.0.0.0)


The ACFS and AFD drivers are installed and in use. The driver software has been updated to the new version on both nodes, but the active drivers are still the old version. The new drivers cannot be activated at this point because the '-nodriverupdate' flag was passed to root.sh, which makes root.sh skip updating the operating system drivers for ACFS and AFD.


To activate the new driver version, run 'rootcrs.sh -updateosfiles' on each cluster node and restart the cluster nodes. Oops, where did Zero-Downtime go? At least we can postpone this downtime until the next planned OS maintenance window. Or it may be better not to use ACFS and AFD at all.
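
A rough per-node sketch of that driver update, as I understand it (one node at a time; stopping the CRS stack first is my assumption, so verify the exact steps against the RU README or My Oracle Support before using):

[root@rac01]# /u01/app/19.9.0/grid/bin/crsctl stop crs

[root@rac01]# /u01/app/19.9.0/grid/crs/install/rootcrs.sh -updateosfiles

Reboot the node (or restart the CRS stack), then verify the active driver versions:

[root@rac01]# /u01/app/19.9.0/grid/bin/crsctl query driver activeversion -all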


Source code of process.sh

#!/bin/bash
# Repeat the monitoring commands every 2 seconds for up to ~1 hour
# (1800 iterations); press Ctrl+C to stop earlier.
for i in {1..1800}
do
  echo
  echo "================== `date` ====================="
  # Clusterware daemons (ohasd.bin, crsd.bin, ocssd.bin, ...)
  ps -ef | grep 'd\.bin' | grep -v grep
  echo "-----------------------------------------------"
  # PMON processes of ASM and database instances
  ps -ef | grep pmon | grep -v grep
  sleep 2
done


Source code of asmclient.sh

#!/bin/bash
# Derive ORACLE_SID and ORACLE_HOME from the running ASM instance (asm_pmon_+ASMn)
F_TMP=`ps -ef | grep asm_pmon_+ASM | grep -v grep`
F_PID=`echo $F_TMP | awk -F' ' '{print $2}'`
export ORACLE_SID=`echo $F_TMP | awk -F'_' '{ print $NF }'`
export ORACLE_HOME=`pwdx $F_PID | awk -F' ' '{print $NF}' | sed 's/\/dbs\/*$//'`
# Query v$asm_client once per second (1800 iterations); press Ctrl+C to stop earlier.
for i in {1..1800}
do
  echo "
  ================== `date` ====================="
  $ORACLE_HOME/bin/sqlplus -S / as sysdba <<EOF
     set head off
     set feed off
     set pagesize 999
     col instance_name for a25
     select instance_name,count(*) from v\$asm_client group by instance_name order by 1;
EOF
  sleep 1
done

