On Thu, Feb 14, 2013 at 11:58 PM, Ted Miller tedlists@sbcglobal.net wrote:
On 02/04/2013 06:40 PM, Robert Heller wrote:
I am planning to increase the disk space on my desktop system. It is running CentOS 5.9 w/XEN. I have two 160Gig 2.5" (laptop) SATA drives in two slots of a 4-slot hot-swap bay, configured like this:
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         125     1004031   fd  Linux raid autodetect
/dev/sda2             126       19457   155284290   fd  Linux raid autodetect
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         125     1004031   fd  Linux raid autodetect
/dev/sdb2             126       19457   155284290   fd  Linux raid autodetect
sauron.deepsoft.com% cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1003904 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      155284224 blocks [2/2] [UU]

unused devices: <none>
That is, I have two RAID1 arrays: a small (1Gig) one mounted as /boot and a larger 148Gig one that is an LVM Volume Group (which contains a pile of file systems, some for DOM0 and some for other VMs). What I plan on doing is getting a pair of 320Gig 2.5" (laptop) SATA disks and failing the existing disks over to this new pair. I believe I can then 'grow' the second RAID array to something like ~300Gig. My question is: what happens to the LVM Volume Group? Will it grow when the RAID array grows?
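For reference, that fail-over-and-grow sequence would look roughly like the following (a sketch only; /dev/sdc stands in for whichever name the first new 320Gig disk gets, and the same dance is repeated for the second one):

# partition the new disk like the old one, but with a larger second
# partition (both partitions type fd, Linux raid autodetect)
fdisk /dev/sdc

# swap one old disk out of both arrays and the new one in
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sdc2

# wait for the resync to finish (watch /proc/mdstat), reinstall grub on
# the new disk so it stays bootable, then repeat with /dev/sdb and the
# second new disk; once both members are the larger size:
mdadm --grow /dev/md1 --size=max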
Not on its own, but you can grow it. I believe the recommended way to do the LVM volume is to partition the new drive as type fd.
LVM is 8e
Software RAID is fd
- install new PV on new partition (will be new, larger size)
- make new PV part of old volume group
- migrate all volumes on old PV onto new PV
- remove old PV from volume group
You have to do this separately for each drive, but it isn't very hard. Of course your boot partition will have to be handled separately.
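In command form that comes down to roughly this (a sketch; /dev/md2 is only a placeholder for wherever the new, larger PV ends up living, and 'sauron' is the VG name from the vgdisplay output below):

pvcreate /dev/md2           # install the new PV on the new device
vgextend sauron /dev/md2    # make the new PV part of the old volume group
pvmove /dev/md1 /dev/md2    # migrate all volumes on the old PV onto the new PV
vgreduce sauron /dev/md1    # remove the old PV from the volume group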
This is what I said ;) http://lists.centos.org/pipermail/centos/2013-February/131917.html
Or should I leave /dev/md1 at its current size, create a new RAID array, add that as a second PV, and grow the Volume Group that way?
That is a solution to a different problem. You would end up with a VG of about 450 GB total. If that is what you want to do, that works too.
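If that is the route you take, it is only a few commands (again a sketch; /dev/sdc3 and /dev/sdd3 are placeholders for the extra partitions on the new disks, and /dev/md2 for the new array):

mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
pvcreate /dev/md2           # new PV on the new array
vgextend sauron /dev/md2    # the VG now spans both PVs (~450 GB total)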
He has to leave /dev/md1 at its current size ... it's a raid1.
The documentation is not clear as to what happens -- the VG is marked 'resizable'.
sauron.deepsoft.com% sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               sauron
  PV Size               148.09 GB / not usable 768.00 KB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              37911
  Free PE               204
  Allocated PE          37707
  PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee
sauron.deepsoft.com% sudo vgdisplay
  --- Volume group ---
  VG Name               sauron
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  65
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                17
  Open LV               12
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               148.09 GB
  PE Size               4.00 MB
  Total PE              37911
  Alloc PE / Size       37707 / 147.29 GB
  Free  PE / Size       204 / 816.00 MB
  VG UUID               qG8gCf-3vou-7dp2-Ar0B-p8jz-eXZF-3vOONr
Doesn't look like anyone answered your question, so I'll tell you that the answer is "Yes".
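In practice, once /dev/md1 has been grown, a pvresize makes the extra space show up in the VG, and from there you grow whichever LVs need it (a sketch; 'somelv' is just an example LV name):

pvresize /dev/md1                     # PV now covers the larger /dev/md1
vgdisplay sauron                      # Free PE should show the new space
lvextend -L +100G /dev/sauron/somelv  # grow an example LV by 100G
resize2fs /dev/sauron/somelv          # then grow the filesystem inside it (assuming ext3)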
Ted Miller
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos