The following operations are recorded as personal notes.
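Starting point: /dev/md3 is an 8-disk NVMe RAID 10 with two hot spares, as shown by the device table from mdadm --detail /dev/md3 (only the tail of that output was captured):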
    Number   Major   Minor   RaidDevice State
       0     259       10        0      active sync set-A   /dev/nvme2n1
       1     259       12        1      active sync set-B   /dev/nvme3n1
       2     259       17        2      active sync set-A   /dev/nvme4n1
       3     259       15        3      active sync set-B   /dev/nvme5n1
       4     259       25        4      active sync set-A   /dev/nvme6n1
       5     259       14        5      active sync set-B   /dev/nvme7n1
       6     259       16        6      active sync set-A   /dev/nvme8n1
       7     259       24        7      active sync set-B   /dev/nvme9n1

       8     259       23        -      spare   /dev/nvme10n1
       9     259       22        -      spare   /dev/nvme11n1
[oracle@netappdb01 ~]$ df -hl
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  126G     0  126G   0% /dev
tmpfs                     126G     0  126G   0% /dev/shm
tmpfs                     126G   27M  126G   1% /run
tmpfs                     126G     0  126G   0% /sys/fs/cgroup
/dev/md2                  1.8T   16G  1.8T   1% /
/dev/md0                  505M  151M  354M  30% /boot
/dev/md1                  128M   12M  117M   9% /boot/efi
tmpfs                      26G     0   26G   0% /run/user/1000
/dev/mapper/vgdata-lvu01  1.0T   34M  1.0T   1% /u01
1. Remove the old array:
[root@localhost ~]# umount /u01
[root@localhost ~]# mdadm --stop /dev/md3
mdadm: Cannot get exclusive access to /dev/md3:Perhaps a running process, mounted filesystem or active volume group?
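mdadm cannot stop the array while something still holds it open; here the holder is the LVM volume group sitting on top of /dev/md3. A quick way to check what is stacked on the device (standard tools, not captured in the original session):

[root@localhost ~]# lsblk /dev/md3   # show the LVs layered on top of the array
[root@localhost ~]# dmsetup ls       # list active device-mapper (LVM) targets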
Delete the LVM first:
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgdata/lvdata
  LV Name                lvdata
  VG Name                vgdata
  LV UUID                XWmuaW-4gf7-OzQd-qfpp-mrBP-fyZZ-xJULkF
  LV Write Access        read/write
  LV Creation host, time netappdb01, 2022-12-26 10:20:29 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 TiB
  Current LE             524288
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vgdata/lvu01
  LV Name                lvu01
  VG Name                vgdata
  LV UUID                wd7LxU-CX4e-68LG-IQYA-cF4Y-BN1w-Exu3rz
  LV Write Access        read/write
  LV Creation host, time netappdb01, 2022-12-26 10:29:25 +0800
  LV Status              available
  # open                 0
  LV Size                1.00 TiB
  Current LE             262144
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
[root@localhost ~]# lvremove /dev/vgdata/lvu01
Do you really want to remove active logical volume vgdata/lvu01? [y/n]: y
Logical volume "lvu01" successfully removed
[root@localhost ~]# lvremove /dev/vgdata/lvdata
Do you really want to remove active logical volume vgdata/lvdata? [y/n]: y
Logical volume "lvdata" successfully removed
[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               vgdata
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <6.99 TiB
  PE Size               4.00 MiB
  Total PE              1831290
  Alloc PE / Size       0 / 0
  Free  PE / Size       1831290 / <6.99 TiB
  VG UUID               CPOzTZ-eCVr-isfY-OHye-oeNB-E52L-nzjqLk
[root@localhost ~]# vgremove vgdata
Volume group "vgdata" successfully removed
[root@localhost ~]# pvdisplay
"/dev/md3" is a new physical volume of "<6.99 TiB"
--- NEW Physical volume ---
PV Name /dev/md3
VG Name
PV Size <6.99 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID xu2fX1-DW9P-jJA9-q6BL-6RSj-qJ7x-vaWBE1
[root@localhost ~]# pvremove /dev/md3
Labels on physical volume "/dev/md3" successfully wiped.
[root@localhost ~]# mdadm --stop /dev/md3
mdadm: stopped /dev/md3
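Stopping the array leaves the old RAID superblocks on the member disks, which is why mdadm warns below that each device "appears to be part of a raid array". If the old metadata should be wiped completely before reuse, a sketch:

[root@localhost ~]# mdadm --zero-superblock /dev/nvme{2..11}n1   # erase md metadata from every former member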
2. Create the RAID 5 array:
[root@localhost ~]# mdadm --create /dev/md3 --level=5 -n 5 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
mdadm: /dev/nvme2n1 appears to be part of a raid array:
level=raid10 devices=8 ctime=Fri Dec 23 17:58:46 2022
mdadm: /dev/nvme3n1 appears to be part of a raid array:
level=raid10 devices=8 ctime=Fri Dec 23 17:58:46 2022
mdadm: /dev/nvme4n1 appears to be part of a raid array:
level=raid10 devices=8 ctime=Fri Dec 23 17:58:46 2022
mdadm: /dev/nvme5n1 appears to be part of a raid array:
level=raid10 devices=8 ctime=Fri Dec 23 17:58:46 2022
mdadm: /dev/nvme6n1 appears to be part of a raid array:
level=raid10 devices=8 ctime=Fri Dec 23 17:58:46 2022
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.
[root@localhost ~]# mdadm --detail /dev/md3
/dev/md3:
           Version : 1.2
     Creation Time : Wed Dec 28 16:07:39 2022
        Raid Level : raid5
        Array Size : 7500967936 (6.99 TiB 7.68 TB)
     Used Dev Size : 1875241984 (1788.37 GiB 1920.25 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Dec 28 16:08:30 2022
             State : clean, degraded, recovering
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 0% complete

              Name : netappdb01:3 (local to host netappdb01)
              UUID : d720bb05:73972fb6:154e56c9:692efd26
            Events : 11

    Number   Major   Minor   RaidDevice State
       0     259       10        0      active sync   /dev/nvme2n1
       1     259       12        1      active sync   /dev/nvme3n1
       2     259       17        2      active sync   /dev/nvme4n1
       3     259       15        3      active sync   /dev/nvme5n1
       5     259       25        4      spare rebuilding   /dev/nvme6n1
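The initial parity sync runs in the background; it can be followed with something like:

[root@localhost ~]# watch -n 5 cat /proc/mdstat   # refresh the rebuild progress every 5 seconds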
This time that was not used; the RAID was assembled directly, and the sector size is still 512:
[root@netappdb01 /]# nvme format /dev/nvme2n1 -l 1
Success formatting namespace:1
..
[root@netappdb01 /]# nvme format /dev/nvme11n1 -l 1
Success formatting namespace:1
[root@netappdb01 /]# fdisk -l /dev/nvme2n1
Disk /dev/nvme2n1: 1920.4 GB, 1920383410176 bytes, 3750748848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
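nvme format -l 1 selects LBA format 1 of the namespace; which LBA formats a drive actually offers (e.g. 512 B vs 4 KiB data size) can be checked beforehand with nvme-cli, roughly:

[root@netappdb01 /]# nvme id-ns /dev/nvme2n1 -H | grep "LBA Format"   # list supported LBA formats; the one in use is marked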
[root@netappdb01 /]# mdadm --create /dev/md3 --auto=yes --level=10 --chunk=512K --raid-devices=8 /dev/nvme{2,3,4,5,6,7,8,9}n1 --spare-devices=2 /dev/nvme{10,11}n1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.
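To keep the md3 name stable across reboots, it is common to record the array in mdadm.conf and regenerate the initramfs; a sketch assuming a RHEL-style system like this one:

[root@localhost ~]# mdadm --detail --scan | grep md3 >> /etc/mdadm.conf   # append the ARRAY line for md3
[root@localhost ~]# dracut -f                                             # rebuild the initramfs so it assembles the array early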
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 nvme6n1[5] nvme5n1[3] nvme4n1[2] nvme3n1[1] nvme2n1[0]
      7500967936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [>....................]  recovery =  1.5% (28285696/1875241984) finish=149.0min speed=206560K/sec
      bitmap: 0/14 pages [0KB], 65536KB chunk

md1 : active raid1 nvme1n1p3[1] nvme0n1p3[0]
      131008 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md2 : active raid1 nvme0n1p4[0] nvme1n1p4[1]
      1857808384 blocks super 1.2 [2/2] [UU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

md127 : active raid1 nvme0n1p1[2] nvme1n1p1[1]
      16759808 blocks super 1.2 [2/2] [UU]

md0 : active raid1 nvme0n1p2[0] nvme1n1p2[1]
      523264 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
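If the resync speed shown above is being capped, the kernel's md rate limits can be raised temporarily (standard dev.raid tunables; the values are only an example):

[root@localhost ~]# sysctl -w dev.raid.speed_limit_min=500000    # resync floor, KB/s per device
[root@localhost ~]# sysctl -w dev.raid.speed_limit_max=5000000   # resync ceiling, KB/s per device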
[root@localhost ~]# parted /dev/md3
GNU Parted 3.1
Using /dev/md3
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) quit
Information: You may need to update /etc/fstab.
[root@netappdb01 dbadmin]# pvcreate /dev/md3
Device /dev/md3 excluded by a filter
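pvcreate refuses the device because the GPT label just written with parted is still on /dev/md3, and by default LVM filters out whole devices that carry a partition table; wiping the signatures clears the way: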
[root@localhost ~]# wipefs -a /dev/md3
/dev/md3: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/md3: 8 bytes were erased at offset 0x6fc5ebffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/md3: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/md3: calling ioctl() to re-read partition table: Success
[root@localhost ~]# pvcreate /dev/md3
Physical volume "/dev/md3" successfully created.
[root@localhost ~]# vgcreate vgdata /dev/md3
Volume group "vgdata" successfully created
[root@localhost ~]# lvcreate -l 100%FREE -n lvdata vgdata
Logical volume "lvdata" created.
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgdata/lvdata
  LV Name                lvdata
  VG Name                vgdata
  LV UUID                OfjTQC-MjGv-hz5p-DZd7-P2hg-aiFL-KGfMTZ
  LV Write Access        read/write
  LV Creation host, time netappdb01, 2022-12-29 09:25:34 +0800
  LV Status              available
  # open                 0
  LV Size                <6.99 TiB
  Current LE             1831290
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
[root@localhost ~]# mkfs.xfs /dev/vgdata/lvdata
meta-data=/dev/vgdata/lvdata     isize=512    agcount=32, agsize=58601344 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1875240960, imaxpct=5
         =                       sunit=128    swidth=512 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
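Note that mkfs.xfs picked up the RAID geometry by itself: sunit=128 blocks × 4 KiB = 512 KiB matches the chunk size, and swidth=512 blocks = 4 × sunit matches the four data disks of the 5-disk RAID 5.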
[root@localhost ~]# umount /u01
[root@localhost ~]# lvremove /dev/vgdata/lvdata
Do you really want to remove active logical volume vgdata/lvdata? [y/n]: y
Logical volume "lvdata" successfully removed
[root@localhost ~]# lvdisplay
[root@localhost ~]# lvcreate -L 2T -n lvu01 vgdata
WARNING: xfs signature detected on /dev/vgdata/lvu01 at offset 0. Wipe it? [y/n]: y
Wiping xfs signature on /dev/vgdata/lvu01.
Logical volume "lvu01" created.
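(For the mount of /dev/vgdata/lvdata below to succeed, lvdata must have been recreated from the remaining space and formatted again; that step was not captured in this log, but would look roughly like:)

[root@localhost ~]# lvcreate -l 100%FREE -n lvdata vgdata   # hypothetical: rebuild lvdata from the leftover space
[root@localhost ~]# mkfs.xfs /dev/vgdata/lvdata             # hypothetical: new filesystem for /data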
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/vgdata/lvu01 /u01
[root@localhost ~]# mount /dev/vgdata/lvdata /data
[root@localhost ~]# df -hl
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  126G     0  126G   0% /dev
tmpfs                     126G     0  126G   0% /dev/shm
tmpfs                     126G   27M  126G   1% /run
tmpfs                     126G     0  126G   0% /sys/fs/cgroup
/dev/md2                  1.8T   16G  1.8T   1% /
/dev/md0                  505M  151M  354M  30% /boot
/dev/md1                  128M   12M  117M   9% /boot/efi
tmpfs                      26G     0   26G   0% /run/user/1000
/dev/mapper/vgdata-lvu01  2.0T   34M  2.0T   1% /u01
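As parted hinted earlier, these mounts survive a reboot only if they are added to /etc/fstab; a minimal sketch (the mount options are an assumption):

/dev/mapper/vgdata-lvu01    /u01    xfs    defaults    0 0
/dev/mapper/vgdata-lvdata   /data   xfs    defaults    0 0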