LVM Management on Kylin V10

Original article by yunbhuahua · 2024-07-26

Preface

Logical Volume Manager (LVM) provides a flexible and efficient way to manage storage resources, making it much simpler to extend, shrink, and remove storage. Following the previous article on creating LVM volumes, this article walks through these operations in detail, so that storage capacity can be adjusted dynamically to match business needs and storage utilization can be improved.

1. LV Extension

1.1 Check the current LV configuration

[root@db1 ~]# lvs
  LV     VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup klas  -wi-a----- <31.81g
  root   klas  -wi-ao----  65.14g
  swap   klas  -wi-ao----   2.04g
  lv01   vg_01 -wi-ao----  25.00g

Logical volume lv01 belongs to volume group vg_01.

1.2 Check the usage of volume group vg_01

[root@db1 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  klas    1   3   0 wz--n- <99.00g      0
  vg_01   2   1   0 wz--n-  39.99g 14.99g

vg_01 has 14.99 GiB of free space.
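The same free-space figure can also be pulled out in a script-friendly form. A minimal sketch using standard LVM reporting options (not part of the original session):

# Print only vg_01's free space in GiB, without headers
vgs --noheadings --units g -o vg_free vg_01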

1.3 Extend lv01

1) Extend with lvextend


  • Extend by a specified amount


lvextend -L +2G vg_01/lv01
  Size of logical volume vg_01/lv01 changed from 25.00 GiB (6400 extents) to 27.00 GiB (6912 extents).
  Logical volume vg_01/lv01 successfully resized.
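Note that lvextend only grows the logical volume itself; the filesystem on it must also be grown before the new space becomes usable. A minimal sketch, assuming the ext4 filesystem on /data that appears later in this article (an XFS filesystem would use xfs_growfs instead):

# Grow the ext4 filesystem to fill the extended LV (ext4 supports online growth)
resize2fs /dev/vg_01/lv01
# Verify the usable size at the mount point
df -Th /data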


  • Check the result


[root@db1 ~]# lvdisplay /dev/vg_01/lv01
  --- Logical volume ---
  LV Path                /dev/vg_01/lv01
  LV Name                lv01
  VG Name                vg_01
  LV UUID                4Z9OQ1-d73H-JTKd-HSHO-E4Zy-Pdi0-nSpQGS
  LV Write Access        read/write
  LV Creation host, time db1, 2024-07-02 05:26:25 +0800
  LV Status              available
  # open                 1
  LV Size                27.00 GiB
  Current LE             6912
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

2) Extend by a specified number of PEs


  • Check the PE size


[root@db1 ~]# vgdisplay vg_01
  --- Volume group ---
  VG Name               vg_01
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB       --> the PE size is 4 MiB
  Total PE              10238
  Alloc PE / Size       6912 / 27.00 GiB
  Free PE / Size        3326 / 12.99 GiB
  VG UUID               VkDVkl-gkSu-eJoF-qC8R-KQcZ-Ux2B-KsdHBG


  • Extend by 2 GiB


Number of PEs to add: 2048 MiB / 4 MiB = 512

lvextend -l +512 /dev/vg_01/lv01
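The same extent arithmetic can be scripted rather than done by hand; a small sketch using the values from this example (2048 MiB target, 4 MiB PE size):

TARGET_MIB=2048                      # desired growth in MiB
PE_MIB=4                             # PE size reported by vgdisplay
echo $(( TARGET_MIB / PE_MIB ))      # prints 512, the number of extents to add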


  • Check the result


[root@db1 ~]# lvdisplay /dev/vg_01/lv01
  --- Logical volume ---
  LV Path                /dev/vg_01/lv01
  LV Name                lv01
  VG Name                vg_01
  LV UUID                4Z9OQ1-d73H-JTKd-HSHO-E4Zy-Pdi0-nSpQGS
  LV Write Access        read/write
  LV Creation host, time db1, 2024-07-02 05:26:25 +0800
  LV Status              available
  # open                 1
  LV Size                29.00 GiB    --> extended from 27 GiB to 29 GiB
  Current LE             7424
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

3) Extend with all remaining VG space


  • Allocate all remaining VG capacity


[root@db1 ~]# lvextend -l +100%FREE /dev/vg_01/lv01
  Size of logical volume vg_01/lv01 changed from 29.00 GiB (7424 extents) to 39.99 GiB (10238 extents).
  Logical volume vg_01/lv01 successfully resized.
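As a side note, lvextend can also resize the filesystem in the same step through its -r (--resizefs) option; a hedged example, not taken from the original session:

# Extend the LV by all free space and grow the filesystem in one command
lvextend -r -l +100%FREE /dev/vg_01/lv01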


  • Check the result


[root@db1 ~]# lvdisplay /dev/vg_01/lv01
  --- Logical volume ---
  LV Path                /dev/vg_01/lv01
  LV Name                lv01
  VG Name                vg_01
  LV UUID                4Z9OQ1-d73H-JTKd-HSHO-E4Zy-Pdi0-nSpQGS
  LV Write Access        read/write
  LV Creation host, time db1, 2024-07-02 05:26:25 +0800
  LV Status              available
  # open                 1
  LV Size                39.99 GiB    --> the LV now occupies the entire VG
  Current LE             10238
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

1.4 Extend the VG

All of the VG's capacity has now been allocated, so before the LV can be extended any further, the VG itself must be extended.

[root@db1 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  klas    1   3   0 wz--n- <99.00g     0
  vg_01   2   1   0 wz--n-  39.99g     0

1) Confirm the new disk

[root@db1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
└─vg_01-lv01    253:3    0   40G  0 lvm  /data
sdb               8:16   0   20G  0 disk
└─vg_01-lv01    253:3    0   40G  0 lvm  /data
sdc               8:32   0   20G  0 disk            ==> sdc is the newly added disk, 20 GiB
sr0              11:0    1    4G  0 rom
nvme0n1         259:0    0  100G  0 disk
├─nvme0n1p1     259:1    0    1G  0 part /boot
└─nvme0n1p2     259:2    0   99G  0 part
  ├─klas-root   253:0    0 65.1G  0 lvm  /
  ├─klas-swap   253:1    0    2G  0 lvm  [SWAP]
  └─klas-backup 253:2    0 31.8G  0 lvm

2) Create the PV

[root@db1 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.

3) Add the PV to the VG

[root@db1 ~]# vgextend vg_01 /dev/sdc
  Volume group "vg_01" successfully extended
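Before allocating from the enlarged VG, it can be worth confirming that the new PV has actually joined it; for example:

# List each PV with its VG membership, size, and free space
pvs -o pv_name,vg_name,pv_size,pv_free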

4) Allocate space to the LV


  • Confirm the VG's free space


[root@db1 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  klas    1   3   0 wz--n- <99.00g       0
  vg_01   3   1   0 wz--n- <59.99g <20.00g


  • Add space to the LV


[root@db1 ~]# lvextend -L +2G vg_01/lv01
  Size of logical volume vg_01/lv01 changed from 39.99 GiB (10238 extents) to 41.99 GiB (10750 extents).
  Logical volume vg_01/lv01 successfully resized.


  • Confirm the extension succeeded


[root@db1 ~]# lvs
  LV     VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup klas  -wi-a----- <31.81g
  root   klas  -wi-ao----  65.14g
  swap   klas  -wi-ao----   2.04g
  lv01   vg_01 -wi-ao----  41.99g

2. LVM Shrinking

2.1 Unmount the mount point

# Check the mount point
mount -l        # or: df -h
# Unmount it
umount /data

If the filesystem is not unmounted first, the shrink will fail with a message that online shrinking is not supported:

[root@db1 ~]# resize2fs /dev/vg_01/lv01 5G
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/vg_01/lv01 is mounted on /data; on-line resizing required
resize2fs: On-line shrinking not supported

2.2 Check logical volume lv01

[root@db1 ~]# lvs
  LV     VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup klas  -wi-a----- <31.81g
  root   klas  -wi-ao----  65.14g
  swap   klas  -wi-ao----   2.04g
  lv01   vg_01 -wi-a-----  39.99g

2.3 Shrink lv01


  • Check the filesystem on the logical volume


[root@db1 ~]# e2fsck -f /dev/vg_01/lv01
e2fsck 1.45.6 (20-Mar-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
/lost+found not found.  Create<y>? yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information


  • Shrink the filesystem, then the LV (a condensed sketch of the full sequence follows the output below)


[root@db1 ~]# resize2fs /dev/vg_01/lv01 5G
resize2fs 1.45.6 (20-Mar-2020)
Resizing the filesystem on /dev/vg_01/lv01 to 1310720 (4k) blocks.
The filesystem on /dev/vg_01/lv01 is now 1310720 (4k) blocks long.

[root@db1 ~]# lvs
  LV     VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup klas  -wi-a----- <31.81g
  root   klas  -wi-ao----  65.14g
  swap   klas  -wi-ao----   2.04g
  lv01   vg_01 -wi-a-----  39.99g

[root@db1 ~]# lvreduce -L 5G /dev/vg_01/lv01 5G
  Command does not accept argument: 5G.
[root@db1 ~]# lvreduce -L 5G /dev/vg_01/lv01
  WARNING: Reducing active logical volume to 5.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg_01/lv01? [y/n]:
  WARNING: Invalid input ''.
Do you really want to reduce vg_01/lv01? [y/n]: y
  Size of logical volume vg_01/lv01 changed from 39.99 GiB (10238 extents) to 5.00 GiB (1280 extents).
  Logical volume vg_01/lv01 successfully resized.
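Putting the shrink steps together, the order matters: the filesystem must be shrunk before the logical volume, and the LV must not be reduced below the filesystem's new size. A condensed sketch of the sequence used above (ext4 only; XFS cannot be shrunk):

umount /data                        # 1. unmount (ext4 cannot be shrunk online)
e2fsck -f /dev/vg_01/lv01           # 2. check the filesystem
resize2fs /dev/vg_01/lv01 5G        # 3. shrink the filesystem first
lvreduce -L 5G /dev/vg_01/lv01      # 4. then shrink the LV to the same size
mount /dev/vg_01/lv01 /data         # 5. remount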


  • Confirm the shrink succeeded


[root@db1 ~]# lvs
  LV     VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup klas  -wi-a----- <31.81g
  root   klas  -wi-ao----  65.14g
  swap   klas  -wi-ao----   2.04g
  lv01   vg_01 -wi-a-----   5.00g

2.4 Remount


  • Mount


mount -a        # or: mount /dev/vg_01/lv01 /data
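If the volume should come back automatically at boot (and for mount -a to work), it needs an /etc/fstab entry; a hypothetical line for this volume, with the ext4 type and default options assumed rather than taken from the article:

# /etc/fstab
/dev/vg_01/lv01  /data  ext4  defaults  0 0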


  • Verify the mount point


df -Th

3. LVM Removal

3.1 Unmount the filesystem

umount /data

3.2 Remove the LV

[root@db1 ~]# lvremove /dev/vg_01/lv01
Do you really want to remove active logical volume vg_01/lv01? [y/n]: y
  Logical volume "lv01" successfully removed

3.3 Remove the VG

[root@db1 ~]# vgremove vg_01
  Volume group "vg_01" successfully removed

3.4 Remove the PVs

[root@db1 ~]# pvremove /dev/sda /dev/sdb /dev/sdc
  Labels on physical volume "/dev/sda" successfully wiped.
  Labels on physical volume "/dev/sdb" successfully wiped.
  Labels on physical volume "/dev/sdc" successfully wiped.

Or, equivalently:

[root@db1 ~]# pvremove /dev/sd{a,b,c}
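After the cleanup, the reporting commands should no longer list the removed objects; a quick check:

lvs    # lv01 should be gone
vgs    # vg_01 should be gone
pvs    # /dev/sda, /dev/sdb, and /dev/sdc should no longer appear as PVs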

4. Conclusion

As a mature and powerful storage management technology, LVM gives enterprises and individuals a flexible, efficient, and reliable storage solution. With the walkthrough above, you should now have a deeper understanding of LVM and know how to manage and optimize it in a real environment. As the technology evolves, LVM will continue to play a central role in storage, helping organizations build more robust and scalable storage infrastructure for the challenges ahead.
