
ASM Disk Header Corruption Prevents Mount [Archived Incident]

Original article by IT泥瓦工, 2023-05-06

A customer reported that clients in several departments could not connect to the database. Front-line colleagues found that one node had been evicted from the cluster because its NIC link kept dropping; after the affected node 2 was rebooted, ASM could not mount its disk groups.

1. Environment

OS: CentOS 6.9
Database: Oracle RAC 11.2.0.3

2. NIC Failure Log

Apr  4 16:24:28 db2 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Apr  4 16:24:31 db2 kernel: igb 0000:04:00.1: eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Apr  4 16:24:31 db2 kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Apr  4 16:30:39 db2 init: oracle-ohasd main process (3029) killed by TERM signal
Apr  4 16:30:39 db2 init: tty (/dev/tty2) main process (5243) killed by TERM signal
Apr  4 16:30:39 db2 init: tty (/dev/tty3) main process (5245) killed by TERM signal
Apr  4 16:30:39 db2 init: tty (/dev/tty4) main process (5247) killed by TERM signal
Apr  4 16:30:39 db2 init: tty (/dev/tty5) main process (5249) killed by TERM signal
Apr  4 16:30:39 db2 init: tty (/dev/tty6) main process (5251) killed by TERM signal
Apr  4 16:30:49 db2 kernel: igb 0000:04:00.1: eth1: igb: eth1 NIC Link is Down

3. GI Alert Log

ORA-27508: IPC error sending a message
Wed Apr 04 16:29:23 2018
IPC Send timeout detected. Receiver ospid 6780 [oracle@db1 (LMD0)]
Wed Apr 04 16:29:23 2018
Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_lmd0_6780.trc:
Wed Apr 04 16:31:08 2018
Detected an inconsistent instance membership by instance 2
Evicting instance 2 from cluster
Waiting for instances to leave: 2

Because the NIC link kept dropping, CRS evicted node 2. At this point the customer replaced the switch port and network cable, which resolved the NIC flapping.

4. After the Reboot, ASM on Node 2 Cannot Mount the Disk Groups

Node 2 GI log:

[/u01/app/11.2.0/grid/bin/oraagent.bin(9051)]CRS-5019:All OCR locations are on ASM disk groups [VOTEDISK], and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0/grid/log/db2/agent/ohasd/oraagent_grid/oraagent_grid.log".

Node 2 ASM log:

NOTE: No asm libraries found in the system
NOTE: Assigning number (3,2) to disk (/dev/asm-vote3)
NOTE: Assigning number (3,0) to disk (/dev/asm-vote1)
WARNING: GMON has insufficient disks to maintain consensus. minimum required is 3
GMON querying group 3 at 7 for pid 23, osid 10398
NOTE: group VOTEDISK: updated PST location: disk 0000 (PST copy 0)
NOTE: group VOTEDISK: updated PST location: disk 0002 (PST copy 1)
NOTE: cache dismounting (clean) group 3/0xDB585101 (VOTEDISK) 
NOTE: messaging CKPT to quiesce pins Unix process pid: 10398, image: oracle@db2 (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: lgwr not being msg'd to dismount
NOTE: cache dismounted group 3/0xDB585101 (VOTEDISK) 
NOTE: cache ending mount (fail) of group VOTEDISK number=3 incarn=0xdb585101
NOTE: cache deleting context for group VOTEDISK 3/0xdb585101
GMON dismounting group 3 at 8 for pid 23, osid 10398
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
NOTE: Disk  in mode 0x8 marked for de-assignment
ERROR: diskgroup VOTEDISK was not mounted

The disk group could not be mounted: the VOTEDISK group had lost one disk, (3,1). Node 1's ASM log shows failures at the same time, with disk groups dismounted:

Wed Apr 04 16:31:09 2018
NOTE: waiting for instance recovery of group 2
Wed Apr 04 16:31:10 2018
NOTE: SMON starting instance recovery for group ARCH domain 1 (mounted)
NOTE: F1X0 found on disk 0 au 2 fcn 0.0
NOTE: starting recovery of thread=2 ckpt=41.7360 group=1 (ARCH)
NOTE: SMON waiting for thread 2 recovery enqueue
NOTE: SMON about to begin recovery lock claims for diskgroup 1 (ARCH)
NOTE: SMON successfully validated lock domain 1
NOTE: advancing ckpt for group 1 (ARCH) thread=2 ckpt=41.7360
NOTE: SMON did instance recovery for group ARCH domain 1
NOTE: SMON starting instance recovery for group DATA domain 2 (mounted)
NOTE: SMON skipping disk 0 - no header
NOTE: cache initiating offline of disk 0 group DATA
NOTE: process _smon_+asm1 (6796) initiating offline of disk 0.3915955645 (DATA_0000) with mask 0x7e in group 2
WARNING: Disk 0 (DATA_0000) in group 2 in mode 0x7f is now being taken offline on ASM inst 1
NOTE: initiating PST update: grp = 2, dsk = 0/0xe968bdbd, mask = 0x6a, op = clear
WARNING: failed to online diskgroup resource ora.DATA.dg (unable to communicate with CRSD/OHASD)
WARNING: failed to online diskgroup resource ora.VOTEDISK.dg (unable to communicate with CRSD/OHASD)

All three disk groups were affected. Check the v$asm_disk view:

SQL> select name,path,header_status,mount_status from v$asm_disk;

NAME                            PATH                HEADER_STATU MOUNT_S
-----------------------  ------------------------   ----------- -------
DATA_0000                /dev/asm-data              PROVISIONED  CACHED
VOTEDISK_0001            /dev/asm-vote2             PROVISIONED  CACHED
VOTEDISK_0002            /dev/asm-vote3             PROVISIONED  CACHED
ARCH_0000                /dev/asm-arch              PROVISIONED  CACHED
VOTEDISK_0000            /dev/asm-vote1             MEMBER       CACHED

Of the five disks, four now have incorrect headers: they are in PROVISIONED status and no longer belong to any disk group. Something outside ASM must have altered the disk headers.

PROVISIONED - Disk is not part of a disk group and may be added to a disk group with the ALTER DISKGROUP statement. The PROVISIONED header status is different from the CANDIDATE header status in that PROVISIONED implies that an additional platform-specific action has been taken by an administrator to make the disk available for Oracle ASM


At this point we need kfed to inspect the ASM disk header and see whether it has been damaged:

kfed read /dev/asm-data
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1267702279 ; 0x00c: 0x4b8f9a07
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:            DATA_0000 ; 0x028: length=12
kfdhdb.grpname:                 DATA ; 0x048: length=7
kfdhdb.fgname:             DATA_0000 ; 0x068: length=12
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33032846 ; 0x0a8: HOUR=0xe DAYS=0x14 MNTH=0x2 YEAR=0x7e0
kfdhdb.crestmp.lo:           3868405760 ; 0x0ac: USEC=0x0 MSEC=0xcc SECS=0x29 MINS=0x39
kfdhdb.mntstmp.hi:             33067153 ; 0x0b0: HOUR=0x11 DAYS=0x4 MNTH=0x4 YEAR=0x7e2
kfdhdb.mntstmp.lo:           2590023680 ; 0x0b4: USEC=0x0 MSEC=0x28 SECS=0x26 MINS=0x26
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                 2097152 ; 0x0c4: 0x00200000
kfdhdb.pmcnt:                        20 ; 0x0c8: 0x00000014
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33032846 ; 0x0e4: HOUR=0xe DAYS=0x14 MNTH=0x2 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3867648000 ; 0x0e8: USEC=0x0 MSEC=0x1e8 SECS=0x28 MINS=0x39
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:            4930648 ; 0x198: 0x004b3c58
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

The header type reported by kfed is normal (KFBTYP_DISKHEAD), but kfdhdb.ub4spare[39] contains data where it should be 0. There are two known causes for this:

Case #1] 0xaa55 on a little-endian server like Linux, or 0x55aa on a big-endian server like Sun SPARC, indicates a boot signature (or magic number) from an MBR (Master Boot Record) partition.

Case #2] Local backup software (like Symantec image backup) touches the ASM disk header at kfdhdb.ub4spare[39] in the kfed output.

This issue can happen outside ASM when an OS tool (or a human) writes partition information to the affected device.
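Case #1 can be spotted without kfed: an MBR boot signature is simply the bytes 0x55 0xAA at byte offset 510 of the disk's first sector. A minimal sketch (the function name and device path are illustrative, not part of any Oracle tool):

```python
MBR_SIGNATURE_OFFSET = 510  # last two bytes of the first 512-byte sector

def has_mbr_signature(device_path):
    """Return True if the first sector carries the 0x55AA MBR boot signature.

    On a disk handed over to ASM this signature should normally be absent;
    its presence suggests an OS tool (or a human) wrote a partition table
    over the ASM header (Case #1 above).
    """
    with open(device_path, "rb") as f:
        sector = f.read(512)
    # The on-disk layout is byte 510 = 0x55, byte 511 = 0xAA regardless of
    # host endianness, so compare the raw bytes directly.
    return len(sector) >= 512 and sector[MBR_SIGNATURE_OFFSET:MBR_SIGNATURE_OFFSET + 2] == b"\x55\xaa"
```

Running it against /dev/asm-data (as root) would flag a disk whose header block has been overlaid with a partition table.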


Now the cause is fairly clear. The field team reported that the customer had just deployed a rapid-failover disaster-recovery appliance; a virtual machine inside that DR system mirrors the ASM disks one-to-one, and the suspicion is that its initialization rewrote the ASM disk headers.

5. Repairing the Disk Headers

We need the original header contents for these disks. Fortunately, the disk-header copies that ASM backs up automatically were intact (this feature exists only in 10.2.0.5 and later):

kfed read /dev/asm-data aun=1 blkn=254
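The backup address in that command follows a simple rule, as commonly documented for 11g ASM: the backup header sits in allocation unit 1, in the second-to-last metadata block of that AU, i.e. blkn = ausize/blksize - 2. A quick sketch using the values from the kfed output above (kfdhdb.ausize = 1048576, kfdhdb.blksize = 4096):

```python
def asm_header_backup_location(au_size, block_size=4096):
    """Return (aun, blkn) of ASM's automatic disk-header backup.

    Rule of thumb for 11g ASM: the backup copy lives in allocation
    unit 1, in the second-to-last metadata block of that AU.
    """
    blocks_per_au = au_size // block_size
    return 1, blocks_per_au - 2

# 1 MiB allocation units, 4 KiB metadata blocks (this system's values):
aun, blkn = asm_header_backup_location(1048576)
print(f"kfed read /dev/asm-data aun={aun} blkn={blkn}")
# → kfed read /dev/asm-data aun=1 blkn=254
```

For a disk group created with 4 MiB AUs the same rule would give aun=1 blkn=1022.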

With the backup copies verified intact, running the kfed repair command against each affected disk restores the primary header from that backup, for example:

kfed repair /dev/asm-data

6. Cluster Recovery Complete

The cluster was then restarted (crsctl stop crs / crsctl start crs) and ASM mounted the disk groups normally. Finally, disk-header backups were taken as a precaution.
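For that final backup step, a raw dump of each disk's first metadata block is enough to restore a clobbered header later with dd, without relying on ASM's internal copy. A minimal sketch, assuming this system's udev device names (backup_asm_headers is a hypothetical helper, not an Oracle utility):

```python
import os

def backup_asm_headers(devices, out_dir, nbytes=4096):
    """Dump the first `nbytes` of each device (the ASM disk header block)
    to <out_dir>/<device-name>.hdr -- the equivalent of
    `dd if=<device> of=<file> bs=4096 count=1`."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for dev in devices:
        with open(dev, "rb") as src:
            header = src.read(nbytes)
        dst_path = os.path.join(out_dir, os.path.basename(dev) + ".hdr")
        with open(dst_path, "wb") as dst:
            dst.write(header)
        saved.append(dst_path)
    return saved

# Example (run as root on the affected system):
# backup_asm_headers(["/dev/asm-data", "/dev/asm-arch", "/dev/asm-vote1",
#                     "/dev/asm-vote2", "/dev/asm-vote3"], "/root/asm_hdr_backup")
```

Redirecting `kfed read <disk>` output to a file is an equally common text-form backup; keeping both makes a future repair straightforward.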
