Hello, My "hardware" (?) RAID system seems to work but says "duplicate PV" while booting, I don't think I was reading them before. Any clues will be appreciated.
From what I recall:
1) RAID 1 was set up (using the firmware setup program) on a machine with an Intel S3200 SHV Server Board.
2) Installed CentOS 5.1, default LVM style. Anaconda saw a single 500GB disk, so I assumed this was a true hardware RAID system. Am I wrong here?
3) Then I wanted to shrink LogVol00 to make room for a new, data-only filesystem on its own LV. Started by booting with the rescue CD, lvscanned the disk, lvchanged -a y. Intended to resize the root filesystem with resize2fs. Was asked to fsck first, which I did (getting many errors, by the way). Fixed them all (fingers crossed); fsck then said everything was OK and resize2fs worked happily.
4) Rebooted the installed system. Now "Duplicate PV" shows at boot. Honestly, I don't know whether this was being displayed before (this is an inherited server). The message shows on the screen, but no record of it is kept in any log file.
5) Everything seems to work well anyway. I created a new LV as I wished; it's just this message that keeps me thinking...
Should I care? Should I fix it? Is it a true RAID board? Would I be better off going with software RAID 1?
lspci says:
00:00.0 Host bridge: Intel Corporation Server DRAM Controller
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02)
00:1f.2 RAID bus controller: Intel Corporation 82801 SATA RAID Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
02:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
03:01.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
03:02.0 Ethernet controller: Intel Corporation 82541GI Gigabit Ethernet Controller (rev 05)
Thank you in advance
Eduardo Grosclaude wrote:
Hello, My "hardware" (?) RAID system seems to work but says "duplicate PV" while booting, I don't think I was reading them before. Any clues will be appreciated. From what I recall:
- RAID 1 was setup (using firmware setup program) on a
machine with Intel S3200 SHV Server Board. 2) Installed Centos 5.1, default LVM style. Anaconda saw a single 500GB disk so I assumed this was a true hardware RAID system. Am I wrong here? 3) Then wanted to reduce LogVol00 so as to make room for a new, data only filesystem on its own LV. Started by booting with rescue CD, lvscanned the disk, lvchanged -a y. Intended to resize root filesystem with resize2fs. Was asked to fsck, which I did (by the way, getting many errors). Fixed them all (fingers crossed), fsck again said ok. Then resize2fs worked happily. 4) Rebooted the installed system. Now "Duplicate PV" shows at boot. Honestly I don't know whether this was being displayed before (this is an inherited server). This message shows at the screen but no record of it is kept on any log file. 5) Everything seems to work well anyway. I created a new LV as I wished, just this message keeps me thinking...
Should I care? Should I fix it? Is it a true RAID board? Should I be better off going software-RAID 1?
lspci says
More informative output would be:
# sfdisk -d
# pvs
# vgs
There might be a disk from an old RAID1 set in there.
-Ross
Ross S. W. Walker wrote:
Eduardo Grosclaude wrote:
Hello, My "hardware" (?) RAID system seems to work but says "duplicate PV" while booting, I don't think I was reading
Could just be that lvm is finding your PV through another path - lvm.conf can be set up to only scan specific devices.
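For example, a minimal sketch of such a filter (assuming the "real" path is /dev/sda and everything else should be ignored -- substitute whatever devices your system actually presents), placed in the devices { } section of /etc/lvm/lvm.conf:

    # accept sda and its partitions, reject all other block devices
    filter = [ "a|^/dev/sda|", "r|.*|" ]

After editing, run pvscan to confirm the duplicate warning is gone before trusting it across a reboot.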
There might be a disk from an old RAID1 set in there.
I'll second that. I forgot to zero out one of my disks from a test raid setup, and when I rebooted for the 5.2 upgrade, lvm refused to start - duplicate uuid, IIRC. 5.1 + updates didn't present the problem, so something was changed in that regard for 5.2.
mdadm --examine <pv device(s)> will tell if there's raid metadata there, --zero-superblock will erase it.
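Roughly (a sketch only -- substitute the real partition names, and be absolutely sure the device is not a member of a live array before zeroing anything, since --zero-superblock is destructive):

    mdadm --examine /dev/sdb2         # print any md superblock found on the partition
    mdadm --zero-superblock /dev/sdb2 # wipe that metadata once you are sure it is stale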
Toby Bluhm wrote:
Ross S. W. Walker wrote:
Eduardo Grosclaude wrote:
Hello, My "hardware" (?) RAID system seems to work but says
Never mind, mdadm doesn't apply to HW raid.
mdadm --examine <pv device(s)> will tell if there's raid metadata there, --zero-superblock will erase it.
Toby Bluhm wrote:
Toby Bluhm wrote:
Ross S. W. Walker wrote:
Eduardo Grosclaude wrote:
Hello, My "hardware" (?) RAID system seems to work but says
Never mind, mdadm don't apply with HW raid.
Ah, but it would if a hardware RAID1 mirror were broken, a new disk stuck in, then later the old disk was inserted into the enclosure and it was presented as a regular disk...
Though he would need to determine if that is actually the case, verify it is actually not part of any existing RAID set, then remove its LVM metadata.
If it is just a "fake" RAID not abstracting the physical disks properly then he just needs to filter them out in lvm.conf.
Key is to make sure it isn't the "fake" RAID scenario or it will have disastrous consequences.
-Ross
Eduardo Grosclaude wrote:
- Rebooted the installed system. Now "Duplicate PV" shows at boot. Honestly
To me it sounds likely that the raid controller is shitty and is presenting two sets of devices to the OS, one likely being the "RAID" device and the other a more generic device(s).
What does 'dmesg' say? Do you see more devices than you think you should have on the system?
As long as LVM is using the "right" one, I think there shouldn't be a problem. As another poster mentioned, show your LVM configuration; you may want to add a filter to /etc/lvm/lvm.conf to make sure it uses the right one.
I've only seen this condition myself when:
1) Using multipathing software and multiple links to the same storage (in which case I adjust lvm.conf to account for this)
2) Snapshot a volume on an array and export it to the same host that had the master (in which case I decided that wasn't the best way to accomplish what I wanted).
nate
Ross, Nate, Tony, thanks for your prompt response
On Mon, Jul 28, 2008 at 2:51 PM, nate centos@linuxpowered.net wrote:
Eduardo Grosclaude wrote:
- Rebooted the installed system. Now "Duplicate PV" shows at boot.
Honestly
To me it sounds likely that the raid controller is shitty and is presenting two sets of devices to the OS, one likely being the "RAID" device and the other a more generic device(s).
What does 'dmesg' say? Do you see more devices than you think you should have on the system?
dmesg says nothing about this, the message only appears at console when booting or otherwise using the PVs:
[root@myserver ~]# pvs
  Found duplicate PV 8D7K2wg15HqD0l9HxZCz7QlDfpqJOhXT: using /dev/sdb2 not /dev/sda2
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sdb2  VolGroup00 lvm2 a-   465,62G    0
[root@myserver ~]# lvs
  Found duplicate PV 8D7K2wg15HqD0l9HxZCz7QlDfpqJOhXT: using /dev/sdb2 not /dev/sda2
  LV       VG         Attr   LSize   Origin Snap% Move Log Copy%
  LogVol00 VolGroup00 -wi-ao 150,00G
  LogVol01 VolGroup00 -wi-ao   1,94G
  LogVol02 VolGroup00 -wi-ao 313,69G
[root@myserver ~]# sfdisk -d
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=     63, size=   208782, Id=83, bootable
/dev/sda2 : start= 208845, size=976543155, Id=8e
/dev/sda3 : start=      0, size=        0, Id= 0
/dev/sda4 : start=      0, size=        0, Id= 0
# partition table of /dev/sdb
unit: sectors

/dev/sdb1 : start=     63, size=   208782, Id=83, bootable
/dev/sdb2 : start= 208845, size=976543155, Id=8e
/dev/sdb3 : start=      0, size=        0, Id= 0
/dev/sdb4 : start=      0, size=        0, Id= 0
Awful--I expected to see just one device :P
There might be a disk from an old RAID1 set in there.
Don't think so, this machine was assembled here from new parts.
Oops... system-config-lvm shows under 'Uninitialized entities':
/dev/sda -> part 1 -> part 2 -> unpartitioned space
/dev/sdb -> part 1 -> unpartitioned space
These shouldn't be appearing as two discs in the first place -- but anaconda said I only had one unit... Anyway, why the asymmetry? Did I screw up the RAID volume somehow? Or did I install plain onto sda and this RAID never worked as such? :P
The machine BIOS correctly describes the RAID volume at start. Doesn't it smell like fake RAID? Should I declare sdb invalid to the firmware program so as to force a resync?
Thanks again
Eduardo Grosclaude wrote:
Ross, Nate, Tony, thanks for your promptly response
Toby
On Mon, Jul 28, 2008 at 2:51 PM, nate <centos@linuxpowered.net> wrote:
Eduardo Grosclaude wrote:
snip
Oops... system-config-lvm shows under 'Uninitialized entities':
/dev/sda -> part 1 -> part 2 -> unpartitioned space
/dev/sdb -> part 1 -> unpartitioned space
These shouldn't be appearing as two discs in the first place-- but anaconda said I only had one unit... Anyway, why the asymmetry? Did I screw the RAID volume somehow? Or did I install plain on sda and this RAID never worked as such? :P The machine BIOS correctly describes the RAID volume at start. Doesn't It smell like fake RAID? Should I declare sdb invalid to the firmware program so as to force resync? Thanks again
If it were me & I was just starting out on a new setup, I'd blow it all away and start from scratch. I hate that nagging feeling something's gonna bite me later down the road.
On Mon, Jul 28, 2008 at 3:36 PM, Toby Bluhm tkb@midwestinstruments.com wrote:
Eduardo Grosclaude wrote:
Ross, Nate, Tony, thanks for your promptly response
Toby
Ouch! Excuse me plz
If it were me & I was just starting out on a new setup, I'd blow it all away
and start from scratch. I hate that nagging feeling something's gonna bite me later down the road.
Agreed, I just expected to get a bit more knowledge from this crappy situation. Cheers
Eduardo Grosclaude wrote:
On Mon, Jul 28, 2008 at 3:36 PM, Toby Bluhm tkb@midwestinstruments.com wrote:
Eduardo Grosclaude wrote:
Ross, Nate, Tony, thanks for your promptly response
Toby
Ouch! Excuse me plz
If it were me & I was just starting out on a new setup, I'd blow it all away and start from scratch. I hate that nagging feeling something's gonna bite me later down the road.
Agreed, I just expected to get a bit more knowledge from this crappy situation
Re-install with software RAID1.
RAID1 is cheap as far as CPU/IO time is concerned, so it works well software-wise, and you get email alerts if it gets degraded!
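For anyone who hasn't set that part up, a rough sketch of the usual CentOS pieces (the mail address is an example; the ARRAY lines come from your own system):

    mdadm --detail --scan >> /etc/mdadm.conf              # record the arrays
    echo "MAILADDR root@example.com" >> /etc/mdadm.conf   # where alerts go
    chkconfig mdmonitor on                                # start the monitor daemon at boot
    service mdmonitor start

You can check the mail path with 'mdadm --monitor --scan --test --oneshot', which sends a test alert for each array it finds.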
-Ross
D Steward wrote:
Re-install with software RAID1.
RAID1 is cheap as far as CPU/IO time is concerned so it works well software wise, and you get email alerts if it gets degraded!
I agree with you re. CPU load, but what about hot-swap and auto rebuilding of arrays? Does software RAID give you this?
Hot swap: yes, if the hardware supports hot swap. Auto rebuild: yes, if you defined a hot spare beforehand; otherwise you have to add the replacement to the array with the --add option.
That is how the hardware controllers do it. They don't automatically assume that a new drive inserted is a spare for the one removed, but if you defined a hot spare then they will auto-rebuild off that, just like software RAID will.
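A rough example of the manual path, assuming /dev/md0 is the mirror and /dev/sdb1 was the failed member (names are illustrative only):

    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # if the kernel hasn't already kicked it out
    # ...replace the physical disk and partition it like the surviving one...
    mdadm /dev/md0 --add /dev/sdb1                        # rebuild starts automatically
    cat /proc/mdstat                                      # watch the resync progress

A standing hot spare is just an extra --add done ahead of time; md then pulls it in on its own when a member fails.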
-Ross
on 7-28-2008 2:30 PM D Steward spake the following:
Re-install with software RAID1.
RAID1 is cheap as far as CPU/IO time is concerned so it works well software wise, and you get email alerts if it gets degraded!
I agree with you re. CPU load, but what about hot-swap and auto rebuilding of arrays? Does software RAID give you this?
Maybe not hot-swap yet -- I think it is in the works, but you can have hot-spares that function very well. But fakeraid doesn't do most of that either, and an ICH9 controller is fakeraid.
Scott Silva wrote:
on 7-28-2008 2:30 PM D Steward spake the following:
Re-install with software RAID1.
RAID1 is cheap as far as CPU/IO time is concerned so it works well software wise, and you get email alerts if it gets degraded!
I agree with you re. CPU load, but what about hot-swap and auto rebuilding of arrays? Does software RAID give you this?
Maybe not hot-swap yet -- I think it is in the works, but you can have hot-spares that function very well. But fakeraid doesn't do most of that either, and an ICH9 controller is fakeraid.
Scott,
I've tested hot swap and it works if the hardware says it can hot swap.
I don't know about unplugging a SATA cable while it's running and sticking in another, but you can try it on a desktop system, I don't see why it wouldn't work.
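If the controller doesn't announce the change on its own, the kernel can usually be nudged through sysfs -- a sketch, assuming sdb is the disk being swapped and host1 is its SATA port (both names vary per machine):

    echo 1 > /sys/block/sdb/device/delete            # tell the kernel the old disk is gone
    echo "- - -" > /sys/class/scsi_host/host1/scan   # rescan the port for the new disk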
-Ross
Eduardo Grosclaude wrote:
Ross, Nate, Tony, thanks for your promptly response
On Mon, Jul 28, 2008 at 2:51 PM, nate centos@linuxpowered.net wrote:
Eduardo Grosclaude wrote:
- Rebooted the installed system. Now "Duplicate PV"
shows at boot. Honestly
To me it sounds likely that the raid controller is shitty and is presenting two sets of devices to the OS, one likely being the "RAID" device and the other a more generic device(s).
What does 'dmesg' say? Do you see more devices than you think you should have on the system?
dmesg says nothing about this, the message only appears at console when booting or otherwise using the PVs:
[root@myserver ~]# pvs
  Found duplicate PV 8D7K2wg15HqD0l9HxZCz7QlDfpqJOhXT: using /dev/sdb2 not /dev/sda2
  PV         VG         Fmt  Attr PSize   PFree
  /dev/sdb2  VolGroup00 lvm2 a-   465,62G    0
[root@myserver ~]# lvs
  Found duplicate PV 8D7K2wg15HqD0l9HxZCz7QlDfpqJOhXT: using /dev/sdb2 not /dev/sda2
  LV       VG         Attr   LSize   Origin Snap% Move Log Copy%
  LogVol00 VolGroup00 -wi-ao 150,00G
  LogVol01 VolGroup00 -wi-ao   1,94G
  LogVol02 VolGroup00 -wi-ao 313,69G
[root@myserver ~]# sfdisk -d
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=     63, size=   208782, Id=83, bootable
/dev/sda2 : start= 208845, size=976543155, Id=8e
/dev/sda3 : start=      0, size=        0, Id= 0
/dev/sda4 : start=      0, size=        0, Id= 0
# partition table of /dev/sdb
unit: sectors

/dev/sdb1 : start=     63, size=   208782, Id=83, bootable
/dev/sdb2 : start= 208845, size=976543155, Id=8e
/dev/sdb3 : start=      0, size=        0, Id= 0
/dev/sdb4 : start=      0, size=        0, Id= 0
Awful--I expected to see just one device :P
There might be a disk from an old RAID1 set in there.
Don't think so, this machine was integrated here with new materials.
Oops... system-config-lvm shows under 'Uninitialized entities':
/dev/sda -> part 1 -> part 2 -> unpartitioned space
/dev/sdb -> part 1 -> unpartitioned space
The sfdisk output looks OK, I think it's just an issue with system-config-lvm getting confused with the "leaky" sdb.
These shouldn't be appearing as two discs in the first place-- but anaconda said I only had one unit... Anyway, why the asymmetry? Did I screw the RAID volume somehow? Or did I install plain on sda and this RAID never worked as such? :P
I think it's the on board RAID not abstracting the disks as it should.
The machine BIOS correctly describes the RAID volume at start. Doesn't It smell like fake RAID? Should I declare sdb invalid to the firmware program so as to force resync?
You could re-try the installation, or hide /dev/sdb from lvm using filtering in lvm.conf.
You can reboot with a live cd and run a checksum comparison on the volumes on each disk to verify if the RAID is working correctly. Maybe there is a BIOS option to hide drive 2?
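Something along these lines from the rescue environment, with nothing mounted and the array idle (illustrative only -- any write landing between the two reads will make the sums differ even on a healthy mirror):

    dd if=/dev/sda2 bs=1M | md5sum
    dd if=/dev/sdb2 bs=1M | md5sum

If the two checksums match, the halves are in sync; if they don't, the "RAID" is not actually mirroring.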
If you do a re-install and get the same result then you know it wasn't a mistake on your part though (unless you make it again!).
-Ross
<snip>
-> unpartitioned space
These shouldn't be appearing as two discs in the first place-- but anaconda said I only had one unit... Anyway, why the asymmetry? Did I screw the RAID volume somehow? Or did I install plain on sda and this RAID never worked as such? :P The machine BIOS correctly describes the RAID volume at start. Doesn't It smell like fake RAID? Should I declare sdb invalid to the firmware program so as to force resync? Thanks again --
It sure looks as if it was originally a mirrored set, but broke later, maybe a kernel update no longer supports that fakeraid controller.
On Mon, Jul 28, 2008 at 6:15 PM, Scott Silva ssilva@sgvwater.com wrote:
The machine BIOS correctly describes the RAID volume at start. Doesn't It
smell like fake RAID? Should I declare sdb invalid to the firmware program so as to force resync? Thanks again --
It sure looks as if it was originally a mirrored set, but broke later, maybe a kernel update no longer supports that fakeraid controller.
Indeed. A reboot later, everything was a mess. I rebuilt the RAID and repeated the install.
I found that Disk Druid correctly sees the single device (referred to as /dev/mapper/isw_[10 seemingly hex digits]_Volume0), and everything goes completely as expected.
However, at the next boot the installed kernel no longer believes there's a single device there, and goes like this:
No RAID sets and with names 'isw_[same digits]_Volume0'
failed to stat() /dev/mapper/isw_[same digits]_Volume0
...EXT3-fs errors...
...mounts failed...
Kernel panic
My fault was not installing the proper Intel RAID driver for RHEL... the regular kernel does not provide it. Thanks very much for your help
On Tue, Jul 29, 2008 at 9:21 AM, Eduardo Grosclaude eduardo.grosclaude@gmail.com wrote:
On Mon, Jul 28, 2008 at 6:15 PM, Scott Silva ssilva@sgvwater.com wrote:
The machine BIOS correctly describes the RAID volume at start. Doesn't It smell like fake RAID? Should I declare sdb invalid to the firmware program so as to force resync? Thanks again --
It sure looks as if it was originally a mirrored set, but broke later, maybe a kernel update no longer supports that fakeraid controller.
Indeed. A reboot later, everything was a mess. I rebuilt the RAID and repeated the install.
Found that Disk Druid correctly sees the only device (referred to as /mapper/isw_[10 seemingly hex digits]_Volume0, everything goes completely as expected.
However, at the next boot the installed kernel no longer believes there's a single device there, and goes like this:
No RAID sets and with names 'isw_[same digits]_Volume0'
failed to stat() /dev/mapper/isw_[same digits]_Volume0
...EXT3-fs errors...
...mounts failed...
Kernel panic
My fault was not installing the proper Intel RAID driver for RHEL... the regular kernel does not provide it. Thanks very much for your help
Eduardo: To give you something else to consider, as an alternative: I believe there was a long thread here, a while back, about using Software RAID instead of fake RAID controllers. Software RAID works very well, as I recall from reading that thread. Possibly look into changing to Software RAID; it depends on the HW RAID controller. (Far OT: Years ago, I met a woman from Neuquen, in Mexico). Lanny in Colombia
On Tue, Jul 29, 2008 at 1:23 PM, Lanny Marcus lmmailinglists@gmail.com wrote:
Eduardo: To give you something else to consider, as an alternative: I believe there was a long thread here, awhile back, about using Software RAID, instead of fake RAID controllers. Software RAID works very well, as I recall from reading that thread. Possibly look into changing to Software RAID. Depends on the HW RAID controller.
Yes, I finally ended up installing software RAID because:
1) I have read that, even if I installed the proper driver, Linux only uses it to configure its own dm software RAID device according to the BIOS configuration -- is this completely true? If so, nothing is really offloaded to hardware anyway -- even under Windows; does anybody know about this for sure?
2) I am very scared of non-kernel-tree-blessed modules which have their own install procedures and/or updating schedules; I have been bitten by this in the past.
I finally did set up two identical partitions joined in RAID 1 and installed the system on the rest of both disks... Now my system won't boot if one disk is broken, but I hope I can boot rescue media and get to the data. I was formerly hoping to rely on RAID to protect the full install and simplify my life, but I was discouraged by 1) and 2).
I have yet to see a real RAID controller... At what price do they start off?
on 7-29-2008 2:04 PM Eduardo Grosclaude spake the following:
On Tue, Jul 29, 2008 at 1:23 PM, Lanny Marcus <lmmailinglists@gmail.com> wrote:
Eduardo: To give you something else to consider, as an alternative: I believe there was a long thread here, awhile back, about using Software RAID, instead of fake RAID controllers. Software RAID works very well, as I recall from reading that thread. Possibly look into changing to Software RAID. Depends on the HW RAID controller.
Yes, I finally ended up installing software RAID because
- I have read that, even if I installed the proper driver, Linux only
uses it to configure its own dm software RAID device according to the BIOS conf-- is this completely true? If yes, no real offloading anything to hardware anyway-- even under Windows; does anybody know about this for sure?
Fakeraid doesn't really offload anything to hardware. Most of the drivers are just software RAID hidden from the OS by programming "black magic". The BIOS just sets metadata or stores registers that the driver reads to know what to do. I believe that the Linux MD RAID drivers are much more robust and way more tested.
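On CentOS the piece that reads that BIOS metadata is dmraid; a quick, hedged way to see what the firmware has (or hasn't) stamped on the disks:

    dmraid -r      # list disks carrying fakeraid metadata and its format (isw = Intel)
    dmraid -s      # show the RAID sets that metadata describes
    dmraid -ay     # activate them as /dev/mapper/isw_..._VolumeN devices

If dmraid -r reports nothing, the disks are plain and the BIOS "RAID" never touched them.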
- I am very scared by non-kernel-tree-blessed modules which have their
own install procedures and/or updating schedule, I have been bitten by this in the past.
Join the club.
I finally did setup two 1-RAIDed identical partitions and installed the system on the rest of both disks... Now my system won't boot if one disk is broken, but I hope I can go rescue into the data. I was formerly hoping to rely on RAID to protect the full install and simplify my life, but I was discouraged away by 1) and 2).
Follow the software raid howto and you should be able to boot from the second drive if the first fails.
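The usual missing step is putting GRUB on the second disk as well; a sketch with the legacy GRUB shell shipped in CentOS 5 (device names assumed -- adjust to your layout):

    grub
    grub> device (hd0) /dev/sdb    # pretend sdb is the first BIOS disk
    grub> root (hd0,0)             # the partition holding /boot on sdb
    grub> setup (hd0)              # write the boot sector there
    grub> quit

With /boot itself on a RAID1 md device and GRUB on both disks, the box should come up from either drive.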
I have yet to see a real RAID controller... At what price do they start off?
I would guess at about $150 US (450 Argentine Pesos) for a 3ware 2 channel sata PCI raid card plus shipping and any local duties. Double that for 4 channels.
On 29.07.2008, at 23:04, Eduardo Grosclaude wrote:
On Tue, Jul 29, 2008 at 1:23 PM, Lanny Marcus <lmmailinglists@gmail.com> wrote:
Eduardo: To give you something else to consider, as an alternative: I believe there was a long thread here, awhile back, about using Software RAID, instead of fake RAID controllers. Software RAID works very well, as I recall from reading that thread. Possibly look into changing to Software RAID. Depends on the HW RAID controller.
Yes, I finally ended up installing software RAID because
- I have read that, even if I installed the proper driver, Linux
only uses it to configure its own dm software RAID device according to the BIOS conf-- is this completely true? If yes, no real offloading anything to hardware anyway-- even under Windows; does anybody know about this for sure? 2) I am very scared by non-kernel-tree-blessed modules which have their own install procedures and/or updating schedule, I have been bitten by this in the past.
I finally did setup two 1-RAIDed identical partitions and installed the system on the rest of both disks... Now my system won't boot if one disk is broken, but I hope I can go rescue into the data. I was formerly hoping to rely on RAID to protect the full install and simplify my life, but I was discouraged away by 1) and 2).
I have yet to see a real RAID controller... At what price do they start off?
For two channels, there's little that is worth paying for, IMNSHO. The 3Ware 8006 is only SATA-I (one), not SATA-II (two), which brings its own set of problems. For four channels: see www.areca.com.tw for models and use your local search engine to find a good offer. It's not cheap, but you get top performance. I see that they now also offer a two-channel SATA-II RAID controller -- newegg lists it for 180 USD. Software RAID gets more attractive every day... I can also recommend their 8 and 12 port controllers -- but as discussed last time, the more disks you add, the less flexible hardware RAID (and Linux) get, and Solaris/ZFS makes more sense then (16 disks upwards, IMO). If your storage needs are not constantly growing, Linux is OK.
cheers, Rainer
On Tue, Jul 29, 2008 at 4:04 PM, Eduardo Grosclaude eduardo.grosclaude@gmail.com wrote:
On Tue, Jul 29, 2008 at 1:23 PM, Lanny Marcus lmmailinglists@gmail.com wrote:
Eduardo: To give you something else to consider, as an alternative: I believe there was a long thread here, awhile back, about using Software RAID, instead of fake RAID controllers. Software RAID works very well, as I recall from reading that thread. Possibly look into changing to Software RAID. Depends on the HW RAID controller.
Yes, I finally ended up installing software RAID because
COOL. There are a bunch of experts (not a word I use frequently) on this list who can answer the questions you have below about RAID. And you can search the list for past threads about RAID.
- I have read that, even if I installed the proper driver, Linux only uses
it to configure its own dm software RAID device according to the BIOS conf-- is this completely true? If yes, no real offloading anything to hardware anyway-- even under Windows; does anybody know about this for sure? 2) I am very scared by non-kernel-tree-blessed modules which have their own install procedures and/or updating schedule, I have been bitten by this in the past.
I finally did setup two 1-RAIDed identical partitions and installed the system on the rest of both disks... Now my system won't boot if one disk is broken, but I hope I can go rescue into the data. I was formerly hoping to rely on RAID to protect the full install and simplify my life, but I was discouraged away by 1) and 2).
I have yet to see a real RAID controller... At what price do they start off?
<snip sig>