While there are many excellent tutorials about creating and using LVM on Linux, not many show you how to remove disks from an LVM Volume Group (VG) and reclaim storage, or how to remove a Logical Volume (LV) from your LVM set-up.
Here is what I did:
Use -t to TEST ANY LVM action first!
We are going to release 1 TB from LVM. The Volume Group was extended with 1 TB of storage to serve as a cheap NFS/CIFS file server when we set up our data center. That server has since been replaced by a NAS, so the storage is no longer needed.
1) check LVM; note the four 256 GB LUNs
[root@server ~]# pvscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Walking through all physical volumes
PV /dev/sdb1 VG vgdata lvm2 [50.00 GB / 0 free]
PV /dev/sdc1 VG vgdata lvm2 [256.00 GB / 0 free]
PV /dev/sdd1 VG vgdata lvm2 [256.00 GB / 0 free]
PV /dev/sde1 VG vgdata lvm2 [256.00 GB / 0 free]
PV /dev/sdf1 VG vgdata lvm2 [256.00 GB / 0 free]
PV /dev/sdg VG vgdata lvm2 [10.00 GB / 5.00 GB free]
PV /dev/sda3 VG system lvm2 [7.88 GB / 0 free]
Total: 7 [1.07 TB] / in use: 7 [1.07 TB] / in no VG: 0 [0 ]
[root@server ~]# vgscan -v
Wiping cache of LVM-capable devices
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
Finding volume group "vgdata"
Found volume group "vgdata" using metadata type lvm2
Finding volume group "system"
Found volume group "system" using metadata type lvm2
[root@server ~]# lvscan -v
Finding all logical volumes
ACTIVE '/dev/vgdata/lvuln' [54.99 GB] inherit
ACTIVE '/dev/vgdata/lvshare' [1.00 TB] inherit
ACTIVE '/dev/system/root' [4.03 GB] inherit
ACTIVE '/dev/system/tmp' [1.84 GB] inherit
ACTIVE '/dev/system/var' [2.00 GB] inherit
[root@server ~]#
From the lvscan output, we see that a separate Logical Volume (LV), lvshare, was added to the VG vgdata; it holds the entire 1 TB.
2) check the Volume Group
[root@server ~]# vgdisplay /dev/vgdata
--- Volume group ---
VG Name vgdata
System ID
Format lvm2
Metadata Areas 6
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 256
Cur LV 2
Open LV 1
Max PV 256
Cur PV 6
Act PV 6
VG Size 1.06 TB
PE Size 2.00 MB
Total PE 555002
Alloc PE / Size 552443 / 1.05 TB
Free PE / Size 2559 / 5.00 GB
VG UUID 30mFGo-gcFZ-Dmx2-atLC-Wh44-qGL4-MCfwia
The LVM is healthy. Note that vgdata is resizable and currently has 6 PVs associated with it, all of them in use; extents are 2 MB each and there are 555,002 of them, which is just over 1 TB; currently 5 GB of capacity is free.
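As a quick sanity check, the sizes vgdisplay reports follow directly from the extent counts. A small sketch using the numbers above:

```shell
# PE Size is 2 MB; multiply by the extent counts from vgdisplay
echo "VG size: $((555002 * 2)) MB"   # Total PE -> ~1.06 TB
echo "Free:    $((2559 * 2)) MB"     # Free PE  -> ~5 GB
```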
Since vgdata contains two separate LVs, we can remove the LV lvshare without touching lvuln.
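Before removing an LV that carries a filesystem, make sure nothing still has it mounted. A hedged sketch (vgdata-lvshare is the device-mapper name LVM generates for /dev/vgdata/lvshare):

```shell
# Refuse to proceed if the LV is still mounted anywhere
if grep -q vgdata-lvshare /proc/mounts; then
    echo "lvshare is still mounted -- unmount it first"
else
    echo "lvshare is not mounted, safe to remove"
fi
```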
[root@server ~]# lvremove -t -v /dev/vgdata/lvshare
Test mode: Metadata will NOT be updated.
Using logical volume(s) on command line
Do you really want to remove active logical volume "lvshare"? [y/n]: y
Test mode: Skipping archiving of volume group.
Found volume group "vgdata"
Found volume group "vgdata"
Releasing logical volume "lvshare"
Test mode: Skipping volume group backup.
Logical volume "lvshare" successfully removed
Test mode: Wiping internal cache
Wiping internal VG cache
[root@server ~]# lvremove -v /dev/vgdata/lvshare
Using logical volume(s) on command line
Do you really want to remove active logical volume "lvshare"? [y/n]: y
Archiving volume group "vgdata" metadata (seqno 10).
Found volume group "vgdata"
Removing vgdata-lvshare (253:4)
Found volume group "vgdata"
Releasing logical volume "lvshare"
Creating volume group backup "/etc/lvm/backup/vgdata" (seqno 11).
Logical volume "lvshare" successfully removed
[root@server ~]#
We've now deallocated the LV lvshare from the VG vgdata.
We still need to disassociate the 4 LUNs (PVs) from the VG:
[root@server ~]# vgdisplay -v /dev/vgdata
Using volume group(s) on command line
Finding volume group "vgdata"
--- Volume group ---
VG Name vgdata
System ID
Format lvm2
Metadata Areas 6
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 256
Cur LV 1
Open LV 1
Max PV 256
Cur PV 6
Act PV 6
VG Size 1.06 TB
PE Size 2.00 MB
Total PE 555002
Alloc PE / Size 28155 / 54.99 GB
Free PE / Size 526847 / 1.00 TB
VG UUID 30mFGo-gcFZ-Dmx2-atLC-Wh44-qGL4-MCfwia
--- Logical volume ---
LV Name /dev/vgdata/lvuln
VG Name vgdata
LV UUID xrPs1G-nKkd-0Mxp-JOfB-QYIk-zJKz-MwI2V5
LV Write Access read/write
LV Status available
# open 1
LV Size 54.99 GB
Current LE 28155
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Physical volumes ---
PV Name /dev/sdb1
PV UUID HkcRpz-mDRN-3SBC-Dtq3-f2tj-x8QA-J7slPT
PV Status allocatable
Total PE / Free PE 25599 / 4
PV Name /dev/sdc1
PV UUID WMYlf1-0LPe-yasi-jdeC-GVKl-hTeI-ttZzoC
PV Status allocatable
Total PE / Free PE 131071 / 131071
PV Name /dev/sdd1
PV UUID MsoDqY-uflF-Rw30-JEMK-tZ8k-VmB2-F2wBfK
PV Status allocatable
Total PE / Free PE 131071 / 131071
PV Name /dev/sde1
PV UUID e0c037-vSBy-ATdS-9Cd3-2X2p-yroz-2dBOD3
PV Status allocatable
Total PE / Free PE 131071 / 131071
PV Name /dev/sdf1
PV UUID ntIMPv-A8Vs-p7wk-KbP3-IsKk-1oMS-BJvZp5
PV Status allocatable
Total PE / Free PE 131071 / 131071
PV Name /dev/sdg
PV UUID v1mkgm-OvIW-RvS1-6Vnf-yvll-BSL1-wm3ZDw
PV Status allocatable
Total PE / Free PE 5119 / 2559
[root@server ~]# vgreduce -t -v /dev/vgdata /dev/sdc1
Test mode: Metadata will NOT be updated.
Finding volume group "vgdata"
Using physical volume(s) on command line
Test mode: Skipping archiving of volume group.
Removing "/dev/sdc1" from volume group "vgdata"
Test mode: Skipping volume group backup.
Removed "/dev/sdc1" from volume group "vgdata"
Test mode: Wiping internal cache
Wiping internal VG cache
[root@server ~]# vgreduce -a -t -v /dev/vgdata
Test mode: Metadata will NOT be updated.
Finding volume group "vgdata"
Using all physical volume(s) in volume group
Physical volume "/dev/sdb1" still in use
Test mode: Skipping archiving of volume group.
Removing "/dev/sdc1" from volume group "vgdata"
Test mode: Skipping volume group backup.
Removed "/dev/sdc1" from volume group "vgdata"
Test mode: Skipping archiving of volume group.
Removing "/dev/sdd1" from volume group "vgdata"
Test mode: Skipping volume group backup.
Removed "/dev/sdd1" from volume group "vgdata"
Test mode: Skipping archiving of volume group.
Removing "/dev/sde1" from volume group "vgdata"
Test mode: Skipping volume group backup.
Removed "/dev/sde1" from volume group "vgdata"
Test mode: Skipping archiving of volume group.
Removing "/dev/sdf1" from volume group "vgdata"
Test mode: Skipping volume group backup.
Removed "/dev/sdf1" from volume group "vgdata"
Physical volume "/dev/sdg" still in use
Test mode: Wiping internal cache
Wiping internal VG cache
Now for the real thing: we remove all unused PVs from the VG vgdata. Note how sdb1 and sdg, both still in use by the LV lvuln, are skipped!
[root@server ~]# vgreduce -a -v /dev/vgdata
Finding volume group "vgdata"
Using all physical volume(s) in volume group
Physical volume "/dev/sdb1" still in use
Archiving volume group "vgdata" metadata (seqno 11).
Removing "/dev/sdc1" from volume group "vgdata"
Creating volume group backup "/etc/lvm/backup/vgdata" (seqno 12).
Removed "/dev/sdc1" from volume group "vgdata"
Archiving volume group "vgdata" metadata (seqno 12).
Removing "/dev/sdd1" from volume group "vgdata"
Wiping cache of LVM-capable devices
Creating volume group backup "/etc/lvm/backup/vgdata" (seqno 13).
Removed "/dev/sdd1" from volume group "vgdata"
Archiving volume group "vgdata" metadata (seqno 13).
Removing "/dev/sde1" from volume group "vgdata"
Wiping cache of LVM-capable devices
Creating volume group backup "/etc/lvm/backup/vgdata" (seqno 14).
Removed "/dev/sde1" from volume group "vgdata"
Archiving volume group "vgdata" metadata (seqno 14).
Removing "/dev/sdf1" from volume group "vgdata"
Wiping cache of LVM-capable devices
Creating volume group backup "/etc/lvm/backup/vgdata" (seqno 15).
Removed "/dev/sdf1" from volume group "vgdata"
Physical volume "/dev/sdg" still in use
[root@server ~]#
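The freed LUNs are no longer part of any VG, but they still carry LVM labels. If the storage is being retired for good, the labels can be wiped with pvremove. A hedged sketch that only prints the commands (drop the echo to run them, and dry-run with -t first):

```shell
# Print the pvremove commands for the four freed LUNs
# (drop "echo" to actually wipe the labels; test with -t first)
for dev in /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1; do
    echo pvremove -v "$dev"
done
```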
Check our status:
[root@server ~]# vgdisplay -v /dev/vgdata
Using volume group(s) on command line
Finding volume group "vgdata"
--- Volume group ---
VG Name vgdata
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 15
VG Access read/write
VG Status resizable
MAX LV 256
Cur LV 1
Open LV 1
Max PV 256
Cur PV 2
Act PV 2
VG Size 60.00 GB
PE Size 2.00 MB
Total PE 30718
Alloc PE / Size 28155 / 54.99 GB
Free PE / Size 2563 / 5.01 GB
VG UUID 30mFGo-gcFZ-Dmx2-atLC-Wh44-qGL4-MCfwia
--- Logical volume ---
LV Name /dev/vgdata/lvuln
VG Name vgdata
LV UUID xrPs1G-nKkd-0Mxp-JOfB-QYIk-zJKz-MwI2V5
LV Write Access read/write
LV Status available
# open 1
LV Size 54.99 GB
Current LE 28155
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Physical volumes ---
PV Name /dev/sdb1
PV UUID HkcRpz-mDRN-3SBC-Dtq3-f2tj-x8QA-J7slPT
PV Status allocatable
Total PE / Free PE 25599 / 4
PV Name /dev/sdg
PV UUID v1mkgm-OvIW-RvS1-6Vnf-yvll-BSL1-wm3ZDw
PV Status allocatable
Total PE / Free PE 5119 / 2559
[root@server ~]#
Note the new status of the Volume Group (VG):
- resizable
- only 2 PVs, both in use
- 60 GB capacity, 5 GB free
- a few extents free on sdb1 and 2,559 free on sdg; since extents are 2 MB each, that's roughly 5 GB
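The same extent arithmetic confirms the new numbers, a quick sketch:

```shell
# 2 MB extents again: Total PE 30718, Free PE 2563
echo "VG size: $((30718 * 2)) MB"   # ~60 GB
echo "Free:    $((2563 * 2)) MB"    # ~5 GB
```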