I have written about LVM operations before, in (1) LVM series, creation: a complete step-by-step walkthrough of building LVM volumes, covering linear and striped modes, and (2) LVM series, resizing: growing and shrinking an LV (linear mode). Today's post covers how to grow or shrink an LV that was created with stripes.
==============================================================================================

Test environment: kernels 2.6.18-128.7 and 2.6.18-194.1, with LVM2.
If anything below is unclear, read the two articles mentioned above first.

I. Adding disks to the VG
1. Check the current LVM state:
[php]
[root@localhost mapper]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0 vgttt lvm2 a- 40.00G 20.00G
/dev/mapper/mpath2 vgttt lvm2 a- 40.00G 20.00G
/dev/mapper/mpath3 vgttt lvm2 a- 40.00G 20.00G

[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vgttt 3 1 0 wz--n- 119.98G 59.98G

[root@localhost ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lvttt vgttt -wi-a- 60.00G

[/php]

2. Add the new disks, or rescan the new storage. In this test, three new LUNs were presented from the storage array, each the same size as the existing disks. Note that if the existing LV is striped, you must add PVs to the VG in multiples of the stripe count. If you have forgotten how many stripes the LV was created with, check it like this:
[php]
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 60.00G /dev/mapper/mpath0:0-5119 /dev/mapper/mpath2:0-5119 /dev/mapper/mpath3:0-5119
[/php]
From this output, lvttt was created with 3 stripes.
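The stripe count can also be read off by counting the PE-range entries in one segment of that output. A minimal sketch, with the sample segment line copied from the `lvs` output above:

```shell
# Count stripes by counting the PE-range entries of one LV segment.
# The sample line is taken from the lvs output above.
segline='/dev/mapper/mpath0:0-5119 /dev/mapper/mpath2:0-5119 /dev/mapper/mpath3:0-5119'
stripes=$(echo "$segline" | wc -w)   # one PV:range entry per stripe
echo "$stripes"
```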

3. Initialize the new disks as PVs:
[php]
[root@localhost ~]# pvcreate /dev/mapper/mpath4
Physical volume "/dev/mapper/mpath4" successfully created
[root@localhost ~]# pvcreate /dev/mapper/mpath5
Physical volume "/dev/mapper/mpath5" successfully created
[root@localhost ~]# pvcreate /dev/mapper/mpath6
Physical volume "/dev/mapper/mpath6" successfully created
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0 vgttt lvm2 a- 40.00G 20.00G
/dev/mapper/mpath2 vgttt lvm2 a- 40.00G 20.00G
/dev/mapper/mpath3 vgttt lvm2 a- 40.00G 20.00G
/dev/mapper/mpath4 lvm2 -- 40.00G 40.00G
/dev/mapper/mpath5 lvm2 -- 40.00G 40.00G
/dev/mapper/mpath6 lvm2 -- 40.00G 40.00G
[/php]

4. Add the new PVs to the VG:
[php]
[root@localhost ~]# vgextend vgttt /dev/mapper/mpath4 /dev/mapper/mpath5 /dev/mapper/mpath6
Volume group "vgttt" successfully extended
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vgttt 6 1 0 wz--n- 239.98G 179.96G
[/php]

Adding the disks is straightforward; nothing complicated there.

II. Growing the LV
1. If the LV is mounted and in use, umount it first;
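A small sketch of that pre-flight check, assuming the LV device path /dev/vgttt/lvttt used throughout this article. Note that matching the literal path in /proc/mounts is simplistic; a device-mapper volume may also show up there as /dev/mapper/vgttt-lvttt:

```shell
#!/bin/sh
# Unmount the LV only if it is currently mounted (path check is simplistic;
# the device may also appear as /dev/mapper/vgttt-lvttt in /proc/mounts).
LV=/dev/vgttt/lvttt
if grep -qs "^$LV " /proc/mounts; then
    umount "$LV" && echo "unmounted"
else
    echo "not mounted"
fi
```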

2. Check the LV again and double-check its stripe count:
[php]
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 60.00G /dev/mapper/mpath0:0-5119 /dev/mapper/mpath2:0-5119 /dev/mapper/mpath3:0-5119
[/php]
From this output, lvttt was created with 3 stripes.

3. Grow the LV:
First grow it by 30G; when extending, there is no need to specify the stripe count again.
[php]
[root@localhost ~]# lvextend -L +30G /dev/vgttt/lvttt
Using stripesize of last segment 64.00 KB
Extending logical volume lvttt to 90.00 GB
Logical volume lvttt successfully resized
[root@localhost ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lvttt vgttt -wi-a- 90.00G
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 90.00G /dev/mapper/mpath0:0-7679 /dev/mapper/mpath2:0-7679 /dev/mapper/mpath3:0-7679
[/php]
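The numbers above can be sanity-checked by hand. Assuming the default 4 MiB extent size, each PV's range 0-7679 is 7680 extents, i.e. 30 GiB per stripe, and three stripes give the 90 GiB total:

```shell
# Verify the PE math from the lvs output above (4 MiB extents assumed).
EXTENT_MIB=4
PER_PV=$((7679 + 1))   # PE range 0-7679 on each PV
STRIPES=3
echo $((PER_PV * EXTENT_MIB * STRIPES / 1024))GiB   # total LV size
```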

By extending the LV a few more times, as below, you can see that allocation keeps striping across the original 3 devices until they are full, and only then moves on to the 3 newly added devices.
[php]
[root@localhost ~]# lvextend -L +30G /dev/vgttt/lvttt
Using stripesize of last segment 64.00 KB
Extending logical volume lvttt to 120.00 GB
Logical volume lvttt successfully resized
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 120.00G /dev/mapper/mpath0:0-10238 /dev/mapper/mpath2:0-10238 /dev/mapper/mpath3:0-10238
lvttt -wi-a- 120.00G /dev/mapper/mpath4:0-0 /dev/mapper/mpath5:0-0 /dev/mapper/mpath6:0-0
[/php]
[php]
[root@localhost ~]# lvextend -L +30G /dev/vgttt/lvttt
Using stripesize of last segment 64.00 KB
Extending logical volume lvttt to 150.00 GB
Logical volume lvttt successfully resized
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 150.00G /dev/mapper/mpath0:0-10238 /dev/mapper/mpath2:0-10238 /dev/mapper/mpath3:0-10238
lvttt -wi-a- 150.00G /dev/mapper/mpath4:0-2560 /dev/mapper/mpath5:0-2560 /dev/mapper/mpath6:0-2560
[/php]
[php]
[root@localhost ~]# lvextend -L +30G /dev/vgttt/lvttt
Using stripesize of last segment 64.00 KB
Extending logical volume lvttt to 180.00 GB
Logical volume lvttt successfully resized
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 180.00G /dev/mapper/mpath0:0-10238 /dev/mapper/mpath2:0-10238 /dev/mapper/mpath3:0-10238
lvttt -wi-a- 180.00G /dev/mapper/mpath4:0-5120 /dev/mapper/mpath5:0-5120 /dev/mapper/mpath6:0-5120
[/php]
[php]
[root@localhost ~]# lvextend -L +30G /dev/vgttt/lvttt
Using stripesize of last segment 64.00 KB
Extending logical volume lvttt to 210.00 GB
Logical volume lvttt successfully resized
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 210.00G /dev/mapper/mpath0:0-10238 /dev/mapper/mpath2:0-10238 /dev/mapper/mpath3:0-10238
lvttt -wi-a- 210.00G /dev/mapper/mpath4:0-7680 /dev/mapper/mpath5:0-7680 /dev/mapper/mpath6:0-7680
[/php]
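The final layout checks out arithmetically as well (again assuming 4 MiB extents): the three original PVs are completely full, and the remainder of the 210 GB lands evenly on the three new ones:

```shell
# Verify the final 210G layout from the lvs output above.
EXTENT_MIB=4                          # assumed default PE size
OLD=$((3 * (10238 + 1) * EXTENT_MIB)) # mpath0/2/3: PE 0-10238 each, full
NEW=$((3 * (7680 + 1) * EXTENT_MIB))  # mpath4/5/6: PE 0-7680 each
echo $(((OLD + NEW) / 1024))GiB       # total LV size
```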

If you mount the LV right after extending it, you may find the visible capacity has not changed. In that case, umount it and run resize2fs to grow the filesystem, for example:
[php]
[root@localhost ~]# resize2fs /dev/vgttt/lvttt
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/vgttt/lvttt to 55050240 (4k) blocks.

The filesystem on /dev/vgttt/lvttt is now 55050240 blocks long.

[/php]
Mount the LV again and the capacity should now be correct.
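The block count reported by resize2fs is consistent with the new LV size; with 4 KiB blocks, 55050240 blocks is exactly 210 GiB:

```shell
# Convert the resize2fs block count above into GiB.
BLOCKS=55050240
BLOCK_KIB=4
echo $((BLOCKS * BLOCK_KIB / 1024 / 1024))GiB
```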

III. Shrinking the LV
1. As always, umount the LV first; it is a good habit to keep;

2. Shrink the LV, for example by 30G:
[php]
[root@localhost ~]# lvreduce -L -30G -r /dev/vgttt/lvttt
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/dev/mapper/vgttt-lvttt: clean, 11/27525120 files, 911703/55050240 blocks
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/mapper/vgttt-lvttt to 47185920 (4k) blocks.
The filesystem on /dev/mapper/vgttt-lvttt is now 47185920 blocks long.

Reducing logical volume lvttt to 180.00 GB
Logical volume lvttt successfully resized
[root@localhost ~]# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
lvttt vgttt -wi-a- 180.00G
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 180.00G /dev/mapper/mpath0:0-10238 /dev/mapper/mpath2:0-10238 /dev/mapper/mpath3:0-10238
lvttt -wi-a- 180.00G /dev/mapper/mpath4:0-5120 /dev/mapper/mpath5:0-5120 /dev/mapper/mpath6:0-5120
[/php]
The -r flag is the critical part here: it shrinks the filesystem before shrinking the LV. If you reduce the LV directly without it, you will very likely corrupt the data!
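The target block count in the output lines up with this: shrinking to 180 GiB with 4 KiB blocks means 47185920 blocks, exactly what resize2fs reports:

```shell
# Expected (4k) block count for a 180 GiB filesystem.
TARGET_GIB=180
BLOCK_KIB=4
echo $((TARGET_GIB * 1024 * 1024 / BLOCK_KIB))
```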

3. Mount it again and verify that the capacity has decreased.

IV. Removing a PV from the VG
If a PV is unused, it can be removed directly, since no data lives on it. But if the PV is in use, forcing it out will corrupt data; you must first migrate its data to free PVs, and only then remove it.
1. Check the PV status:
[php]
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath2 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath3 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath4 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath5 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath6 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath7 lvm2 -- 40.00G 40.00G
[/php]
At this point, /dev/mapper/mpath0 through /dev/mapper/mpath6 all hold data; only /dev/mapper/mpath7 is completely free.

2. Add the free PV to the VG:
[php]
[root@localhost ~]# vgextend vgttt /dev/mapper/mpath7
Volume group "vgttt" successfully extended
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath2 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath3 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath4 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath5 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath6 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath7 vgttt lvm2 a- 40.00G 40.00G
[/php]
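Before running pvmove, it is worth confirming the destination has enough free space for everything allocated on the source. A minimal sketch using the pvs numbers above (mpath0 is fully allocated, so all 40 G must move; mpath7 is fully free):

```shell
# Check that the free PV can absorb the source PV's allocated extents.
SRC_USED_G=40   # mpath0: PSize 40.00G, PFree 0  -> 40G allocated
DST_FREE_G=40   # mpath7: PFree 40.00G
if [ "$DST_FREE_G" -ge "$SRC_USED_G" ]; then
    echo "enough free space for pvmove"
else
    echo "need more free PVs"
fi
```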

3. To remove the PV /dev/mapper/mpath0, first migrate its data onto the free PVs, e.g.:
[php]
[root@localhost ~]# pvmove /dev/mapper/mpath0
/dev/mapper/mpath0: Moved: 0.5%
/dev/mapper/mpath0: Moved: 1.1%
/dev/mapper/mpath0: Moved: 1.6%
/dev/mapper/mpath0: Moved: 2.2%
/dev/mapper/mpath0: Moved: 2.7%
… …
/dev/mapper/mpath0: Moved: 99.1%
/dev/mapper/mpath0: Moved: 99.7%
/dev/mapper/mpath0: Moved: 100.0%
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0 vgttt lvm2 a- 40.00G 40.00G
/dev/mapper/mpath2 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath3 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath4 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath5 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath6 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath7 vgttt lvm2 a- 40.00G 0
[/php]
[php]
[root@localhost ~]# lvs -o lv_name,lv_attr,lv_size,seg_pe_ranges
LV Attr LSize PE Ranges
lvttt -wi-a- 180.00G /dev/mapper/mpath7:0-10238 /dev/mapper/mpath2:0-10238 /dev/mapper/mpath3:0-10238
lvttt -wi-a- 180.00G /dev/mapper/mpath4:0-5120 /dev/mapper/mpath5:0-5120 /dev/mapper/mpath6:0-5120
[/php]

4. As the output above shows, the data from /dev/mapper/mpath0 has been migrated to /dev/mapper/mpath7, so /dev/mapper/mpath0 no longer holds any data and can be removed from the group:
[php]
[root@localhost ~]# vgreduce vgttt /dev/mapper/mpath0
Removed "/dev/mapper/mpath0" from volume group "vgttt"
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vgttt 6 1 0 wz--n- 239.98G 59.98G
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpath0 lvm2 -- 40.00G 40.00G
/dev/mapper/mpath2 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath3 vgttt lvm2 a- 40.00G 0
/dev/mapper/mpath4 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath5 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath6 vgttt lvm2 a- 40.00G 19.99G
/dev/mapper/mpath7 vgttt lvm2 a- 40.00G 0
[/php]