Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added two 36 GB 10K SCSI drives to it; they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
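Roughly, I imagine that would look something like the following (assuming the LV behind /vz is /dev/VolGroup00/vz and holds ext3; the names on my box may differ, and the exact size to add would need checking with vgdisplay first):

# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# pvcreate /dev/md3
# vgextend VolGroup00 /dev/md3
# lvextend -L +33G /dev/VolGroup00/vz
# resize2fs /dev/VolGroup00/vz

(resize2fs can grow ext3 online on recent kernels; older setups would need ext2online or an unmount first.)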
However, if I could convert that to RAID5 (a re-install would be possible, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180 GB) of space available in total, instead of 3x36 (108 GB).
Also, please note that this is a dual PIII 1.2 GHz Tualatin. I'm running OpenVZ with virtual machines that are not really CPU-intensive, but I would not like to see my processors spending most of their time computing XORs for the RAID5.
Any suggestions/tips welcome.
Regards,
Ugo
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it could be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180) of space available total, instead of 3*36 (108).
180, you mean 2 X 5x18
Yes and without rebooting :-)
- break the 36GB mirror (using mdadm, fail /dev/sdd2, and then remove it),
- break the 18GB mirror (using mdadm, fail /dev/sde1, and then remove it),
- create sd[cf][123] of 200MB, 18G, 18G (the 200MB partition is useless, but it keeps the same partitioning scheme),
- create a _degraded_ RAID5 with sd[cfd]2 sde1, named /dev/mdX,
- vgextend your VolGroup00 to use this new partition:
# pvcreate /dev/mdX
# vgextend VolGroup00 /dev/mdX
- then move all PEs from md1 to mdX:
# pvmove /dev/md1 /dev/mdX
- then remove md1 from the VG:
# vgreduce VolGroup00 /dev/md1
- now you don't need md1 anymore, so stop it (sorry, I'm less skilled with the mdadm commands without the manual page),
- now add /dev/sda2 to your _degraded_ RAID5.
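From memory, the mdadm side of those steps would be roughly the following; please double-check against the man page, and note that /dev/mdX is just a placeholder for a free md number:

# mdadm /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2
# mdadm /dev/md2 --fail /dev/sde1 --remove /dev/sde1
# mdadm --create /dev/mdX --level=5 --raid-devices=5 /dev/sd[cdf]2 /dev/sde1 missing
# mdadm --stop /dev/md1
# mdadm /dev/mdX --add /dev/sda2

The LVM steps (pvcreate, vgextend, pvmove, vgreduce) go between the --create and the --stop, as in the list above. The keyword "missing" is what makes the new array start degraded, and the final --add fills that empty slot and triggers the rebuild.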
Et voilà for the first RAID5.
Do the same with the second one.
Regards
Also, please note that this is a dual PIII 1.2 GHz Tualatin. I'm running OpenVZ with virtual machines that are not really CPU-intensive, but I would not like to see my processors spending most of their time computing XORs for the RAID5.
Any suggestions/tips welcome.
Regards,
Ugo
Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it could be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180) of space available total, instead of 3*36 (108).
180, you mean 2 X 5x18
Oh, I just realized I have 2x18 and 4x36. I have 2 other 36 GB HDDs here. Maybe I could have a 6x36 RAID5 this way. Does it matter if I have 4 HDDs that are 10K rpm and 2 that are 7200 rpm?
What about raid 6? I don't think I need fault tolerance for 2 HDD failures...
Yes and without rebooting :-)
- break the 36GB mirror (using mdadm, fail /dev/sdd2, and then remove it),
- break the 18GB mirror (using mdadm, fail /dev/sde1, and then remove it),
- create sd[cf][123] of 200MB, 18G, 18G (the 200MB partition is useless, but it keeps the same partitioning scheme),
- create a _degraded_ RAID5 with sd[cfd]2 sde1, named /dev/mdX,
- vgextend your VolGroup00 to use this new partition:
# pvcreate /dev/mdX
# vgextend VolGroup00 /dev/mdX
- then move all PEs from md1 to mdX:
# pvmove /dev/md1 /dev/mdX
- then remove md1 from the VG:
# vgreduce VolGroup00 /dev/md1
- now you don't need md1 anymore, so stop it (sorry, I'm less skilled with the mdadm commands without the manual page),
- now add /dev/sda2 to your _degraded_ RAID5.
On Nov 29, 2007 4:21 PM, Ugo Bellavance ugob@lubik.ca wrote:
Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it could be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180) of space available total, instead of 3*36 (108).
180, you mean 2 X 5x18
Oh, I just realized I have 2x18 and 4x36. I have 2 other 36 GB HDDs here. Maybe I could have a 6x36 RAID5 this way.
Drop the 2x18 to avoid the extra complexity, or keep them mirrored, maybe in another VG, to back up/store more critical data.
Does it matter if I have 4 HDDs that are 10K rpm and 2 that are 7200 rpm?
No.
What about raid 6? I don't think I need fault tolerance for 2 HDD failures...
RAID 6? More disks, more problems! Use it as a hot spare instead.
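Adding a disk to an array that is already complete just turns it into a hot spare, something like this (mdX and the partition name are placeholders):

# mdadm /dev/mdX --add /dev/sdf1

mdadm then rebuilds onto the spare automatically as soon as a member fails.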
Yes and without rebooting :-)
- break the 36GB mirror (using mdadm, fail /dev/sdd2, and then remove it),
- break the 18GB mirror (using mdadm, fail /dev/sde1, and then remove it),
- create sd[cf][123] of 200MB, 18G, 18G (the 200MB partition is useless, but it keeps the same partitioning scheme),
- create a _degraded_ RAID5 with sd[cfd]2 sde1, named /dev/mdX,
- vgextend your VolGroup00 to use this new partition:
# pvcreate /dev/mdX
# vgextend VolGroup00 /dev/mdX
- then move all PEs from md1 to mdX:
# pvmove /dev/md1 /dev/mdX
- then remove md1 from the VG:
# vgreduce VolGroup00 /dev/md1
- now you don't need md1 anymore, so stop it (sorry, I'm less skilled with the mdadm commands without the manual page),
- now add /dev/sda2 to your _degraded_ RAID5.
Alain Spineux wrote:
On Nov 29, 2007 4:21 PM, Ugo Bellavance ugob@lubik.ca wrote:
Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it could be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180) of space available total, instead of 3*36 (108).
180, you mean 2 X 5x18
Oh, I just realized I have 2x18 and 4x36. I have 2 other 36 GB HDDs here. Maybe I could have a 6x36 RAID5 this way.
Drop the 2x18 to avoid the extra complexity, or keep them mirrored, maybe in another VG, to back up/store more critical data.
OK, but the server can only hold 6 HDDs.
So, as a first step, how would I replace the current 2x18 with the 2x36 that I put in the server yesterday?
(BTW thanks a lot for your help, it is very interesting)
Regards,
Ugo
Ugo Bellavance wrote:
Alain Spineux wrote:
On Nov 29, 2007 4:21 PM, Ugo Bellavance ugob@lubik.ca wrote:
Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it could be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180) of space available total, instead of 3*36 (108).
180, you mean 2 X 5x18
Oh, I just realized I have 2x18 and 4x36. I have 2 other 36 GB HDDs here. Maybe I could have a 6x36 RAID5 this way.
Drop the 2x18 to avoid the extra complexity, or keep them mirrored, maybe in another VG, to back up/store more critical data.
Ok, but the server can only have 6 HDD.
So, first step, how would I replace the current 2X18 by the 2x36 that I put in the server yesterday?
(BTW thanks a lot for your help, it is very interesting)
If you wanted to reconfigure this all online, it could be done, but it would be tricky depending on your current LV allocations and layout.
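To see what you are dealing with first, the current layout can be checked with something like:

# pvs
# lvs -o +devices
# pvdisplay -m

That shows which PVs each LV actually sits on and how contiguous the extents are.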
-Ross
Ugo Bellavance wrote:
Alain Spineux wrote:
On Nov 29, 2007 6:59 AM, Ugo Bellavance ugob@lubik.ca wrote:
Hi,
This is my current config:
/dev/md0 -> 200 MB -> sda1 + sdd1 -> /boot
/dev/md1 -> 36 GB -> sda2 + sdd2 -> form VolGroup00 with md2
/dev/md2 -> 18 GB -> sdb1 + sde1 -> form VolGroup00 with md1
sda,sdd -> 36 GB 10k SCSI HDDs
sdb,sde -> 18 GB 10k SCSI HDDs
I have added 2 36 GB 10K SCSI drives in it, they are detected as sdc and sdf.
What should I do if I want to optimize disk space?
The simplest solution would be to create /dev/md3 out of sdc1 and sdf1, and add it to the VG, and increase the size of my /vz logical volume.
However, if I could convert that to a RAID5 (it could be possible to re-install, but I would rather not), I could have 6 drives in RAID5, so I'd have 5x36 GB (180) of space available total, instead of 3*36 (108).
180, you mean 2 X 5x18
Oh, I just realized I have 2X18 and 4X36. I have 2 other 36 GB HDD here. Maybe I could have a 6x36 RAID5 this way. Does it matter if I have 4 HDD that are 10K and 2 7200 rpm?
What about raid 6? I don't think I need fault tolerance for 2 HDD failures...
What are you trying to accomplish, storage-wise?
Is this for commercial or personal use?
If for personal use, then it isn't as critical how it is setup, but if this is for commercial use then you need to target your storage to your application.
Yes and without rebooting :-)
- break the 36GB mirror (using mdadm, fail /dev/sdd2, and then remove it),
- break the 18GB mirror (using mdadm, fail /dev/sde1, and then remove it),
- create sd[cf][123] of 200MB, 18G, 18G (the 200MB partition is useless, but it keeps the same partitioning scheme),
- create a _degraded_ RAID5 with sd[cfd]2 sde1, named /dev/mdX,
- vgextend your VolGroup00 to use this new partition:
# pvcreate /dev/mdX
# vgextend VolGroup00 /dev/mdX
- then move all PEs from md1 to mdX:
# pvmove /dev/md1 /dev/mdX
- then remove md1 from the VG:
# vgreduce VolGroup00 /dev/md1
- now you don't need md1 anymore, so stop it (sorry, I'm less skilled with the mdadm commands without the manual page),
- now add /dev/sda2 to your _degraded_ RAID5.
If you want this to be reconfigured on the fly without ever rebooting then you may find your options limited on which RAID levels you can choose.
Typically I keep the system disks in a RAID1 and the data disks on separate RAID arrays setup depending on the application.
Scratch or temp files -> RAID0
File serving -> RAID5 or RAID6 (depending on disk size and # of disks)
Databases, large email, many VMs -> RAID10
Let us know what you want the storage for and we can suggest a configuration.
Top of my head though, I would use the 18GB for the OS and keep the 4 36GB for application data either as a RAID10 or RAID5.
-Ross
Ross S. W. Walker wrote:
What are you trying to accomplish storage wise?
Is this for commercial or personal use?
Commercial, but non-critical use.
If for personal use, then it isn't as critical how it is setup, but if this is for commercial use then you need to target your storage to your application.
Nothing really I/O-demanding. I'm running an OpenVZ server with many virtual machines on it, but the load is very low.
If you want this to be reconfigured on the fly without ever rebooting then you may find your options limited on which RAID levels you can choose.
Typically I keep the system disks in a RAID1 and the data disks on separate RAID arrays setup depending on the application.
Scratch or temp files -> RAID0
File serving -> RAID5 or RAID6 (depending on disk size and # of disks)
Databases, large email, many VMs -> RAID10
Let us know what you want the storage for and we can suggest a configuration.
Top of my head though, I would use the 18GB for the OS and keep the 4 36GB for application data either as a RAID10 or RAID5.
That would make sense: use a RAID1 of the 18GB drives for /, /boot and /var, and a RAID4 with the 4 36GB HDDs for /vz (OpenVZ's virtual machines are located there).
Makes sense?
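Something roughly like this for the /vz side, I guess (shown with --level=5; the device names, the VG name and the LV size below are only placeholders):

# mdadm --create /dev/mdX --level=5 --raid-devices=4 /dev/sd[cdef]1
# pvcreate /dev/mdX
# vgcreate VolGroupVZ /dev/mdX
# lvcreate -n vz -L 100G VolGroupVZ
# mkfs.ext3 /dev/VolGroupVZ/vz
# mount /dev/VolGroupVZ/vz /vz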
Thanks,
Ugo
Ugo Bellavance wrote:
Ross S. W. Walker wrote:
What are you trying to accomplish storage wise?
Is this for commercial or personal use?
Commercial, but non-critical use.
If for personal use, then it isn't as critical how it is setup, but if this is for commercial use then you need to target your storage to your application.
Nothing really very IO demanding. Running an OpenVZ server with many Virtual machines on it, but load is very low.
If you want this to be reconfigured on the fly without ever rebooting then you may find your options limited on which RAID levels you can choose.
Typically I keep the system disks in a RAID1 and the data disks on separate RAID arrays setup depending on the application.
Scratch or temp files -> RAID0
File serving -> RAID5 or RAID6 (depending on disk size and # of disks)
Databases, large email, many VMs -> RAID10
Let us know what you want the storage for and we can suggest a configuration.
Top of my head though, I would use the 18GB for the OS and keep the 4 36GB for application data either as a RAID10 or RAID5.
That would make sense. Use RAID1 18GB for /, /boot and /var and use a RAID4 with 4 36GB HDD for /vz (OpenVZ's virtual machines are located there).
Makes sense?
Makes sense to me. I have found in my environment that VMs generate a lot of random I/O, so a RAID10 may be better suited, though it means 72GB of usable space instead of 108GB.
Also, by using growing or sparse files for the vz images, a volume can get fragmented pretty quickly. To keep that from happening, think about creating LVs with separate small file systems to hold each vz image. If the LVs start running out of space, you can grow them and the file systems as needed, which will reduce the fragmentation tremendously. You will still end up with the LV extents fragmented, but since they are larger it isn't as serious a performance issue.
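Growing one of those per-container file systems later is then just something like this (hypothetical LV name, assuming ext3 with online resize support):

# lvextend -L +2G /dev/VolGroup00/vm101
# resize2fs /dev/VolGroup00/vm101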
-Ross
Ross S. W. Walker wrote:
Ugo Bellavance wrote:
Ross S. W. Walker wrote:
What are you trying to accomplish storage wise?
Is this for commercial or personal use?
Commercial, but non-critical use.
If for personal use, then it isn't as critical how it is setup, but if this is for commercial use then you need to target your storage to your application.
Nothing really very IO demanding. Running an OpenVZ server with many Virtual machines on it, but load is very low.
If you want this to be reconfigured on the fly without ever rebooting then you may find your options limited on which RAID levels you can choose.
Typically I keep the system disks in a RAID1 and the data disks on separate RAID arrays setup depending on the application.
Scratch or temp files -> RAID0
File serving -> RAID5 or RAID6 (depending on disk size and # of disks)
Databases, large email, many VMs -> RAID10
Let us know what you want the storage for and we can suggest a configuration.
Top of my head though, I would use the 18GB for the OS and keep the 4 36GB for application data either as a RAID10 or RAID5.
That would make sense. Use RAID1 18GB for /, /boot and /var and use a RAID4 with 4 36GB HDD for /vz (OpenVZ's virtual machines are located there).
Makes sense?
Makes sense to me, I have found in my environment that VMs generate a lot of random io, so a RAID10 may be better suited, though it means 72GB of useable space instead of 108GB.
I understand, but my I/O is not very significant... I think I'll use the RAID5 for /vz.
Also, by using growing or sparse files for the vz images, a volume can get fragmented pretty quickly. To keep that from happening, think about creating LVs with separate small file systems to hold each vz image. If the LVs start running out of space, you can grow them and the file systems as needed, which will reduce the fragmentation tremendously. You will still end up with the LV extents fragmented, but since they are larger it isn't as serious a performance issue.
OpenVZ doesn't work with VZ image files; it works on a regular filesystem.
Thanks a lot for your advice.
Ugo