[CentOS] Large RAID volume issues

Ross S. W. Walker rwalker at medallion.com
Mon Feb 4 21:49:08 UTC 2008


Rob Lines wrote:
> On Feb 4, 2008 3:34 PM, Ross S. W. Walker 
> <rwalker at medallion.com> wrote:
> 
> 
> 	Rob Lines wrote:
> 	> On Feb 4, 2008 3:16 PM, John R Pierce 
> <pierce at hogranch.com> wrote:
> 	>
> 	>       with LVM, you could join several smaller logical
> 	> drives, maybe 1TB each,
> 	>       into a single volume set, which could then contain
> 	> various file systems.
> 	>
> 	>
> 	> That looks like it may be the way we end up going.  The main
> 	> reason was to keep the amount of overhead and 'stuff' required
> 	> to revive it in the event of a server issue to a minimum.  That
> 	> was one of the reasons for going with an enclosure that handles
> 	> all the RAID internally and just presents itself to the server
> 	> as a single drive.  We had been trying to avoid LVM, as we had
> 	> run into problems recovering it with Knoppix in the past.
> 	>
> 	> It looks like we will probably just end up breaking it up into
> 	> smaller chunks unless I can find a way for the enclosure to use
> 	> 512-byte sectors and still have volumes larger than 2 TB.
> 	
> 	
> 	LVM is very well supported these days.
> 	
> 	In fact I default to LVM for all my OS and external storage
> 	configurations here, as it provides greater flexibility and
> 	manageability than raw disks/partitions.
> 	
> 
> How easy is it to migrate to a new OS install?  Given the
> situation I described, with a single 6 TB 'drive' using LVM,
> if the server goes down and we have to rebuild it from scratch
> or move the storage to another machine (all running CentOS 5),
> how easy is that?

Moving an external array to a new server is as easy as plugging
it in and importing the volume group (vgimport).
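
The sequence looks roughly like this (a sketch; "VG_Name" and
"LV_Name" stand in for whatever you actually named things):

vgexport VG_Name       # on the old server, if it is still alive
pvscan                 # on the new server, let LVM find the PVs
vgimport VG_Name       # take ownership of the volume group
vgchange -ay VG_Name   # activate its logical volumes
mount /dev/VG_Name/LV_Name /mnt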

Typically I name my OS volume groups "CentOS" and give
semi-descriptive names to my external array volume groups, such
as "Exch-SQL" or "VM_Guests".

You could also have a hot-standby server activate the volume
group via heartbeat when the first server goes down, provided
your storage allows multiple initiators to attach to it.
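
The takeover step on the standby then boils down to something
like this (a sketch; in practice the heartbeat resource agent
would run it for you):

vgchange -ay VG_Name             # activate the VG on the standby
mount /dev/VG_Name/LV_Name /mnt  # never mounted on both nodes at
                                 # once with a non-cluster filesystem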

> We are still checking with the vendor for a solution to move
> back to 512-byte sectors rather than the 2 KB ones.  Hopefully
> they'll come up with something.

I wish you luck here, but in my experience once an array is
created with a set sector size or chunk size, changing these
usually involves re-creating the array.

LVM itself might be able to handle the larger sector size,
though, since there is no need to create a partition table on
the disk, but future migration compatibility could be
questionable.

To create a VG out of it:

pvcreate /dev/sdb                  # label the whole disk as an LVM PV
vgcreate VG_Name /dev/sdb          # create the volume group on it
lvcreate -L 4T -n LV_Name VG_Name  # carve out a 4 TB logical volume
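
Then put a filesystem on the new LV, for example (assuming ext3,
which is what CentOS 5 ships by default):

mkfs.ext3 /dev/VG_Name/LV_Name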

If you get a new external array, say /dev/sdc, and want to move
all the data from the old one to the new one online and then
remove the old one:

pvcreate /dev/sdc          # label the new disk
vgextend VG_Name /dev/sdc  # add it to the volume group
pvmove /dev/sdb /dev/sdc   # migrate all extents off the old disk, online
vgreduce VG_Name /dev/sdb  # drop the old disk from the VG
pvremove /dev/sdb          # clear the LVM label from the old disk

Then take /dev/sdb offline.
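
Before pulling it you can double-check that the VG now lives
entirely on the new disk, e.g.:

pvs          # /dev/sdb should no longer be listed
vgs VG_Name  # capacity should match the new array alone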

-Ross

PS You might want to remove any existing MBR/GPT structures from
/dev/sdb before you pvcreate it, with:

dd if=/dev/zero of=/dev/sdb bs=512 count=63

That wipes the first track, which clears the MBR and the primary
GPT header.
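
Note that GPT also keeps a backup header in the last sectors of
the disk, so if the disk ever carried a GPT you would want to
clear that too.  Something like this should do it (a sketch,
assuming blockdev from util-linux is available):

dd if=/dev/zero of=/dev/sdb bs=512 count=63 \
   seek=$(( $(blockdev --getsz /dev/sdb) - 63 ))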





