I performed a test install of the CentOS 5 beta on a system with two ~60 GB drives. When installing CentOS 4 on this system, I normally work through setting up Software RAID with identically sized partitions on each drive.
For my test, the CentOS installer only presented a single drive. I took the default and let it do what it wanted. After looking over the system I have verified that /boot is using DMRAID, but it isn't clear whether the rest of the drive, which holds the root file system, is using both drives; I think it might be.
Using fdisk, I can see both disks are carved out identically:
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *            1          13      104391   83  Linux
/dev/hda2               14        7297    58508730   8e  Linux LVM

/dev/hdd1   *            1          13      104391   83  Linux
/dev/hdd2               14        7297    58508730   8e  Linux LVM
Using dmraid, it *looks* like the drives are mirrored in their entirety:
dmraid -r -D
/dev/hda: pdc, "pdc_bacfgfjaf", mirror, ok, 117231345 sectors, data@ 0
/dev/hdd: pdc, "pdc_bacfgfjaf", mirror, ok, 117231345 sectors, data@ 0
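One way to cross-check this at the device-mapper level (just a sketch, untested on this box, using the set name reported above) would be:

dmsetup table pdc_bacfgfjaf
dmsetup status pdc_bacfgfjaf

If the table line shows a "mirror" target with both hda and hdd underneath it, the whole-disk set really is being mirrored by device-mapper.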
But then using df it is less obvious:
df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   54G 1009M   50G   2% /
/dev/mapper/pdc_bacfgfjafp1       99M   11M   83M  12% /boot
tmpfs                            125M     0  125M   0% /dev/shm
pvs shows VolGroup00 as using pdc_bacfgfjafp2:
  PV                          VG         Fmt  Attr PSize  PFree
  /dev/mapper/pdc_bacfgfjafp2 VolGroup00 lvm2 a-   55.78G 32.00M
which is RAID 1?
Seems to be:
dmraid -s
*** Active Set
name   : pdc_bacfgfjaf
size   : 117231232
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
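To trace the whole stack from the logical volume down to the physical disks, something like this should work (untested sketch, using the names the installer created above):

dmsetup deps VolGroup00-LogVol00
dmsetup deps pdc_bacfgfjafp2
dmsetup deps pdc_bacfgfjaf

Following the major:minor numbers down, LogVol00 should depend on pdc_bacfgfjafp2, which sits on the pdc_bacfgfjaf set, which in turn should depend on both hda and hdd if the root file system really spans both drives.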
Am I understanding correctly that, by default, CentOS 5 creates a full-drive mirrored RAID, and then, for everything other than the boot partition, lays LVM over this mirrored RAID, presumably to allow adjustments later?
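If that is the point of the LVM layer, then later adjustments would presumably be the usual LVM steps, e.g. growing the root file system (untested sketch, using the volume names above and assuming free extents in VolGroup00):

lvextend -L +5G /dev/VolGroup00/LogVol00
resize2fs /dev/mapper/VolGroup00-LogVol00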
Thank you in advance,
Brett
On 02/04/07, Brett Serkez <bserkez@gmail.com> wrote:
For my test, the CentOS installer only presented a single drive. I took the default and let it do what it wanted. After looking over the system I have verified that /boot is using DMRAID, but it isn't clear whether the rest of the drive, which holds the root file system, is using both drives; I think it might be.
Why not: # cat /proc/mdstat ?
Why not: # cat /proc/mdstat ?
This was the start of my confusion:
# cat /proc/mdstat
Personalities :
unused devices: <none>
and thus the work above to discern what the install had done.
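My current understanding (which may be wrong) is that dmraid sets are handled by device-mapper rather than the md driver, so /proc/mdstat stays empty even with an active mirror; the equivalent checks seem to be:

dmsetup ls
dmsetup status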
On the surface it seems that, starting with CentOS 5, the installer automatically assumes that if you have two drives you want to use them as RAID 1 (mirroring). I'm not sure if this can be overridden.
Further, it seems to assume that you want to use LVM2, which can be overridden by choosing to manually partition the DMRAID drive.
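If someone wants the old CentOS 4 style layout, I believe (untested) the installer can be told to ignore the fakeraid metadata by booting with the nodmraid option:

boot: linux nodmraid

after which both drives should show up separately and Software RAID partitions can be set up by hand.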
I'm still not 100% sure my understanding is correct.
Brett
Brett Serkez spake the following on 4/1/2007 3:04 PM:
I performed a test install of the CentOS 5 beta on a system with two ~60 GB drives. When installing CentOS 4 on this system, I normally work through setting up Software RAID with identically sized partitions on each drive.
For my test, the CentOS installer only presented a single drive. I took the default and let it do what it wanted. After looking over the system I have verified that /boot is using DMRAID, but it isn't clear whether the rest of the drive, which holds the root file system, is using both drives; I think it might be.
Using fdisk, I can see both disks are carved out identically:
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *            1          13      104391   83  Linux
/dev/hda2               14        7297    58508730   8e  Linux LVM

/dev/hdd1   *            1          13      104391   83  Linux
/dev/hdd2               14        7297    58508730   8e  Linux LVM
Using dmraid, it *looks* like the drives are mirrored in their entirety:
dmraid -r -D
/dev/hda: pdc, "pdc_bacfgfjaf", mirror, ok, 117231345 sectors, data@ 0
/dev/hdd: pdc, "pdc_bacfgfjaf", mirror, ok, 117231345 sectors, data@ 0
But then using df it is less obvious:
df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   54G 1009M   50G   2% /
/dev/mapper/pdc_bacfgfjafp1       99M   11M   83M  12% /boot
tmpfs                            125M     0  125M   0% /dev/shm
pvs shows VolGroup00 as using pdc_bacfgfjafp2:
  PV                          VG         Fmt  Attr PSize  PFree
  /dev/mapper/pdc_bacfgfjafp2 VolGroup00 lvm2 a-   55.78G 32.00M
which is RAID 1?
Seems to be:
dmraid -s
*** Active Set
name   : pdc_bacfgfjaf
size   : 117231232
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
Am I understanding correctly that, by default, CentOS 5 creates a full-drive mirrored RAID, and then, for everything other than the boot partition, lays LVM over this mirrored RAID, presumably to allow adjustments later?
Thank you in advance,
Brett
Does the board have an onboard fakeraid? I think that is what DMRAID does. It supports some of the fakeraid controllers, although it is much less mature than plain software raid.
Does the board have an onboard fakeraid? I think that is what DMRAID does. It supports some of the fakeraid controllers, although it is much less mature than plain software raid.
No, it is an older PIII with two IDE hard drives plugged into the motherboard. To the best of my knowledge, this is not the case.
Brett
Brett Serkez spake the following on 4/2/2007 11:36 AM:
Does the board have an onboard fakeraid? I think that is what DMRAID does. It supports some of the fakeraid controllers, although it is much less mature than plain software raid.
No, it is an older PIII with two IDE hard drives plugged into the motherboard. To the best of my knowledge, this is not the case.
Brett
It was common for some boards in that era to still have a Promise IDE chip embedded in them.
No, it is an older PIII with two IDE hard drives plugged into the motherboard. To the best of my knowledge, this is not the case.
It was common for some boards in that era to still have a Promise IDE chip embedded in them.
I rebooted so I could check the BIOS, and also looked over the boot messages with dmesg. I see no evidence of any sort of RAID controller. Here are the relevant dmesg messages:
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
ICH: IDE controller at PCI slot 0000:00:1f.1
ICH: chipset revision 2
ICH: not 100% native mode: will probe irqs later
    ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio
    ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:DMA
Probing IDE interface ide0...
hda: Maxtor 36147H8, ATA DISK drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: CRD-8400B, ATAPI CD/DVD-ROM drive
hdd: Maxtor 4D060H3, ATA DISK drive
hdc: Disabling (U)DMA for CRD-8400B (blacklisted)
ide1 at 0x170-0x177,0x376 on irq 15
hda: max request size: 128KiB
hda: 117231408 sectors (60022 MB) w/512KiB Cache, CHS=65535/16/63, UDMA(66)
hda: cache flushes not supported
 hda: hda1 hda2
hdd: max request size: 128KiB
hdd: 120069936 sectors (61475 MB) w/2048KiB Cache, CHS=65535/16/63, UDMA(66)
hdd: cache flushes not supported
 hdd: hdd1 hdd2
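For what it's worth, a quick check for an embedded Promise chip would be something like (the grep pattern is just a guess at the vendor string):

lspci | grep -i promise

which should come back empty if the board really has no such controller.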
Brett
Brett Serkez spake the following on 4/2/2007 2:06 PM:
No, it is an older PIII with two IDE hard drives plugged into the motherboard. To the best of my knowledge, this is not the case.
It was common for some boards in that era to still have a Promise IDE chip embedded in them.
I rebooted so I could check the BIOS, and also looked over the boot messages with dmesg. I see no evidence of any sort of RAID controller. Here are the relevant dmesg messages:
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
ICH: IDE controller at PCI slot 0000:00:1f.1
ICH: chipset revision 2
ICH: not 100% native mode: will probe irqs later
    ide0: BM-DMA at 0xffa0-0xffa7, BIOS settings: hda:DMA, hdb:pio
    ide1: BM-DMA at 0xffa8-0xffaf, BIOS settings: hdc:DMA, hdd:DMA
Probing IDE interface ide0...
hda: Maxtor 36147H8, ATA DISK drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: CRD-8400B, ATAPI CD/DVD-ROM drive
hdd: Maxtor 4D060H3, ATA DISK drive
hdc: Disabling (U)DMA for CRD-8400B (blacklisted)
ide1 at 0x170-0x177,0x376 on irq 15
hda: max request size: 128KiB
hda: 117231408 sectors (60022 MB) w/512KiB Cache, CHS=65535/16/63, UDMA(66)
hda: cache flushes not supported
 hda: hda1 hda2
hdd: max request size: 128KiB
hdd: 120069936 sectors (61475 MB) w/2048KiB Cache, CHS=65535/16/63, UDMA(66)
hdd: cache flushes not supported
 hdd: hdd1 hdd2
Brett
You might get better performance if the second drive is a master instead of a slave. Looking at the dmraid man page, is it possible that the drives were part of a RAID setup previously? dmraid might be seeing the leftover metadata and thinking they form a RAID array.
http://www.linuxmanpages.com/man8/dmraid.8.php
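If it is stale metadata from a previous fakeraid setup, the man page suggests it can be erased per disk, e.g. (untested, and destructive to the dmraid set, so only before a reinstall or with good backups):

dmraid -r -E /dev/hda
dmraid -r -E /dev/hdd

After that the installer should see two plain drives again.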