Ugo Bellavance wrote:
Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> forms VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> forms VolGroup00 with md1
sda, sdd -> 36 GB 10k SCSI HDDs
sdb, sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it would be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180 GB) of space available in total, instead of 3x36 (108 GB).
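For reference, usable RAID5 capacity is (number of members - 1) times the size of the smallest member, which is where the figures above come from. A quick sanity check of the arithmetic (the numbers are just the ones from this thread):

```shell
# RAID5 usable capacity = (members - 1) * smallest member size
n=6        # six 36 GB drives in one array
size=36    # GB per member
echo "RAID5 usable: $(( (n - 1) * size )) GB"

# versus three independent 36 GB RAID1 mirrors
echo "RAID1 usable: $(( 3 * size )) GB"
```

Note that mixing 18 GB and 36 GB members in one RAID5 would cap every member at 18 GB.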
180? You mean 2 x 5x18?
Oh, I just realized I have 2x18 and 4x36. I have two other 36 GB HDDs here. Maybe I could have a 6x36 RAID5 this way. Does it matter if four of the HDDs are 10K rpm and two are 7200 rpm?
What about RAID6? I don't think I need fault tolerance for two HDD failures...
What are you trying to accomplish storage wise?
Is this for commercial or personal use?
If it is for personal use, then how it is set up isn't as critical, but if it is for commercial use then you need to target your storage to your application.
Yes and without rebooting :-)
- break the 36GB mirror (using mdadm, fail /dev/sdd2, and then remove it),
- break the 18GB mirror (using mdadm, fail /dev/sde1, and then remove it),
- create sd[cf][123] of 200 MB, 18 GB, 18 GB (the 200 MB partition is useless, but it keeps the same partitioning scheme),
- create a _degraded_ RAID5 with sd[cf]2, sdd2 and sde1, named /dev/mdX
- vgextend your VolGroup00 to use this new partition.
# pvcreate /dev/mdX
# vgextend VolGroup00 /dev/mdX
- then move all PE on md1 to mdX
# pvmove /dev/md1 /dev/mdX
- then remove md1 from the VG
# vgreduce VolGroup00 /dev/md1
- now you don't need md1 anymore; stop it (sorry, I'm less skilled with the mdadm command, and don't have the manual page at hand),
- now add /dev/sda2 to your _degraded_ RAID5
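The steps above could be sketched roughly as follows. The device names and array layout are taken from this thread; I've used /dev/md3 for the thread's /dev/mdX, and the sfdisk partition-table copy is my assumption for how to replicate the partitioning scheme. Double-check every device name against your own layout before running anything, since a typo here destroys data:

```shell
# 1. Break the mirrors: fail and remove one half of each RAID1
mdadm /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2
mdadm /dev/md2 --fail /dev/sde1 --remove /dev/sde1

# 2. Copy the partition layout from an existing drive to the new ones
#    (assumes sda's scheme is the one you want on sdc and sdf)
sfdisk -d /dev/sda | sfdisk /dev/sdc
sfdisk -d /dev/sda | sfdisk /dev/sdf

# 3. Create the degraded RAID5, leaving one slot "missing" for now
mdadm --create /dev/md3 --level=5 --raid-devices=5 \
      /dev/sdc2 /dev/sdf2 /dev/sdd2 /dev/sde1 missing

# 4. Move the LVM data over and retire the old PV
pvcreate /dev/md3
vgextend VolGroup00 /dev/md3
pvmove /dev/md1 /dev/md3
vgreduce VolGroup00 /dev/md1

# 5. Stop the old array and add its member to complete the RAID5
mdadm --stop /dev/md1
mdadm /dev/md3 --add /dev/sda2
```

While the array is degraded (and again while it rebuilds after step 5) a single disk failure loses the data, so have a backup before starting.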
If you want this to be reconfigured on the fly without ever rebooting then you may find your options limited on which RAID levels you can choose.
Typically I keep the system disks in a RAID1 and the data disks on separate RAID arrays, set up depending on the application.
Scratch or temp files -> RAID0
File serving -> RAID5 or RAID6 (depending on disk size and # of disks)
Databases, large email, many VMs -> RAID10
Let us know what you want the storage for and we can suggest a configuration.
Off the top of my head, though, I would use the 18 GB drives for the OS and keep the four 36 GB drives for application data, either as RAID10 or RAID5.
-Ross